| anchor | positive | source |
|---|---|---|
TweetResolver class, to be used in a graphql project | Question: This is my TweetResolver module (tweet-resolver.js):
import Tweet from '../../models/Tweet';
import { requireAuth } from '../../services/auth';
export default {
getTweet: async (_, { _id }, { user }) => {
try {
await requireAuth(user);
return Tweet.findById(_id);
} catch (error) {
throw error;
}
},
getTweets: async (_, args, { user }) => {
try {
await requireAuth(user);
return Tweet.find({}).sort({ createdAt: -1 });
} catch (error) {
throw error;
}
},
createTweet: async (_, args, { user }) => {
try {
await requireAuth(user);
return Tweet.create(args);
} catch (error) {
throw error;
}
},
updateTweet: async (_, { _id, ...rest }, { user }) => {
try {
await requireAuth(user);
Tweet.findByIdAndUpdate(_id, rest, { new: true });
} catch (error) {
throw error;
}
},
deleteTweet: async (_, { _id }, { user }) => {
try {
await requireAuth(user);
await Tweet.findByIdAndRemove(_id);
return {
message: 'Tweet has been deleted'
}
} catch (error) {
throw error;
}
}
}
It is used in my graphql project like this:
import GraphQLDate from 'graphql-date';
import TweetResolvers from './tweet-resolvers';
import UserResolvers from './user-resolvers';
export default {
Date: GraphQLDate,
Query: {
getTweet: TweetResolvers.getTweet,
getTweets: TweetResolvers.getTweets
},
Mutation: {
createTweet: TweetResolvers.createTweet,
updateTweet: TweetResolvers.updateTweet,
deleteTweet: TweetResolvers.deleteTweet,
signup: UserResolvers.signup,
login: UserResolvers.login
}
}
Correct me if I am wrong, but I feel there is a lot of code duplication in my tweet-resolver.js module.
Almost every method signature in this module is identical (as I only want authenticated users to be able to do anything via GraphQL).
Every execution is inside a try/catch block.
Is there a way to remove the code duplication here, or is this style of coding acceptable?
Answer:
If all your catch is doing is re-throwing the error, there is no reason to have the try/catch in the first place.
If you find yourself using export default {}; to export an object, it is a huge sign that you should be using named exports instead, since you're essentially emulating them. This means you'll want to do import * as Foo from instead of import Foo from on these "bag of functions" modules.
You can make a wrapper function to remove the repetitive requireAuth call.
A wrapper function also allows you to remove the params you don't care about in the implementation, like the first and third parameters to these functions.
To update your code with these, I'd do
import Tweet from '../../models/Tweet';
import { requireAuth } from '../../services/auth';
let withAuth = callback => async (_, args, data) => {
await requireAuth(data.user);
return callback(args);
};
export const getTweet = withAuth(({ _id }) => Tweet.findById(_id));
export const getTweets = withAuth(() => Tweet.find({}).sort({ createdAt: -1 }));
export const createTweet = withAuth((args) => Tweet.create(args));
export const updateTweet = withAuth(({ _id, ...rest }) => Tweet.findByIdAndUpdate(_id, rest, { new: true }));
export const deleteTweet = withAuth(async ({_id}) => {
await Tweet.findByIdAndRemove(_id);
return {
message: 'Tweet has been deleted'
};
});
with
import GraphQLDate from 'graphql-date';
import * as TweetResolvers from './tweet-resolvers';
import UserResolvers from './user-resolvers';
export const Date = GraphQLDate;
export const Query = {
getTweet: TweetResolvers.getTweet,
getTweets: TweetResolvers.getTweets,
};
export const Mutation = {
createTweet: TweetResolvers.createTweet,
updateTweet: TweetResolvers.updateTweet,
deleteTweet: TweetResolvers.deleteTweet,
signup: UserResolvers.signup,
login: UserResolvers.login,
}; | {
"domain": "codereview.stackexchange",
"id": 27041,
"tags": "javascript, node.js, ecmascript-6, wrapper, twitter"
} |
Zoom out in rqt/plot | Question:
Dear all,
I am using Hydro under Ubuntu 12.04 LTS, and it turns out I can find no way to zoom out the time in the rqt/plot window. You can zoom in using the Zoom to rectangle tool, but I can find no way to zoom out. Moreover, the zoom is reset when enabling autoscroll, hence I could find no way to change the zoom on the time axis (x axis) when autoscrolling.
Any idea if zooming out in time is possible at all in the rqt/plot view?
Thanks,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2014-07-18
Post score: 2
Answer:
It looks like it is possible if you disable auto-scroll. Click on the cross-shaped icon with arrows labelled "pan axes with left mouse, zoom with right". Then go to the graph itself, hold down the right mouse button, and move the mouse to the left. That should zoom out your time axis, if I understand correctly what you are trying to do.
Originally posted by vkulikauskas with karma: 126 on 2014-07-21
This answer was ACCEPTED on the original site
Post score: 11
Original comments
Comment by Cyril Jourdan on 2017-03-13:
With the "Zoom to rectangle tool" in Kinetic you can also right click and draw a rectangle to zoom out, even with autoscroll on. | {
"domain": "robotics.stackexchange",
"id": 18672,
"tags": "rqt-plot"
} |
Nicer way to work with params | Question: if params[:package]
city_ids = []
city = params[:package].delete 'city'
city_ids << city if city.to_i > 0
cities = params[:package].delete 'cities'
city_ids += cities if cities.class == Array && cities.length > 0
params[:package][:cities] = city_ids.map{ |c| City.get(c) }.compact
end
Answer: This code looks pretty clean to me. Here are a few thoughts:
I'm not sure what your ORM is (or even that City is database backed), but most have a way to fetch more than one record at a time. Making a single call will save time and database resources.
Another thought is that you are introducing a lot of local vars: city_ids, city, cities. What about trying to build up the array of ids in params[:package][:cities]? e.g.
# If cities is not an array, it is bad data; get rid of it.
params[:package].delete 'cities' unless params[:package][:cities].kind_of? Array
params[:package][:cities] ||= []
params[:package][:cities] << (params[:package].delete 'city') if params[:package]['city'].to_i > 0
params[:package][:cities] = params[:package][:cities].map{ |c| City.get(c) }.compact
The code above doesn't seem very DRY because it has all this params[:package] stuff. If you do things like this a lot (your question makes it sound like you do) factor the operation out into a separate method:
def extract_nested_reference(hash, values) # change name as appropriate
if hash
hash.delete values unless hash[values].kind_of? Array
hash[values] ||= []
hash[values] << hash.delete(values.singularize) if hash[values.singularize].to_i > 0
hash[values] = hash[values].map do |x|
Object.const_get(values.singularize.capitalize).get(x)
end.compact
end
end
This method could then be called with extract_nested_reference(params[:package], 'cities'). Refactoring like this will also encourage you to be consistent in how you build your params hash, and allow you to reuse code.
Finally, your question asked about better ways of dealing with hashes. I don't think there are any. You will find code very similar to this living in most popular gems. As you say, it is kind of clunky, and therefore an excellent candidate for a refactor. | {
"domain": "codereview.stackexchange",
"id": 282,
"tags": "ruby"
} |
Stage Wifi Model | Question:
Hi,
Is the stage wifi model [http://playerstage.sourceforge.net/doc/stage-cvs/group__model__wifi.html] available in the stageros node? I always get "Unknown model type wifi" in the world file.
Am I doing something wrong, or do I need to apply a patch? Or is it not supported by the ROS node?
Stage Version is 4.1.1
Originally posted by pkohout on ROS Answers with karma: 336 on 2013-11-14
Post score: 0
Answer:
Hi, as far as I know the stageros node only supports the laser, and recently camera support was added. Maybe you have to use a patch to use this model.
Originally posted by gustavo.velascoh with karma: 756 on 2013-11-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by pkohout on 2013-11-14:
Hi, thanks for the fast reply. I have searched on Google, but I cannot find where to download Stage patches; do you know where I can get a list? | {
"domain": "robotics.stackexchange",
"id": 16157,
"tags": "ros, simulation, wifi, simulator-stage, stage"
} |
Understanding basic operation of TM and FA | Question: A Turing machine always halts on a dead configuration, so consider the case where it halts in a final state: suppose that from the second-to-last state, reading an input symbol takes it to the final state. In the final state it will then try to find a transition for the next symbol (i.e. blank), and since no transition is defined on the final state, the machine halts (dies) in the final state and we say the string is accepted.
If I consider the case of a finite automaton: the FA reaches the final state, the next symbol is blank, and we say the machine stops and the string is accepted. But this is not a dead configuration.
In both cases the input comes from a tape. The FA halts when a blank appears on the tape, and there is no dead configuration; yet the same behaviour in a TM counts as dead-configuration behaviour.
Can someone please explain the difference between these two cases from the point of view of dead configurations?
P.S.: By dead configuration, I mean that no transition is defined from the state for the given symbol.
Answer: Turing machines (and RAM machines) and (most) finite automata operate in two different input models. Finite automata usually accept their input in a streaming fashion: they read one input symbol, do something, read another input symbol, do something, and so on. When the input is exhausted, they report their output. Turing machines, in contrast, get all their input on the input tape, and then do whatever processing they wish. They don't have to stop after they have scanned the entire input.
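The streaming semantics can be sketched in a few lines (plain Python; the state names and transition table here are invented for illustration): a DFA consumes one symbol at a time, and the accept/reject verdict is read off only once the input is exhausted, so there is no further symbol on which the run could become "dead".

```python
def run_dfa(delta, start, accepting, word):
    """Simulate a DFA in the streaming input model: read one symbol at a
    time, then report the verdict once the input is exhausted."""
    state = start
    for symbol in word:
        if (state, symbol) not in delta:
            return False  # undefined transition: reject immediately
        state = delta[(state, symbol)]
    # Input exhausted: accept iff the run stopped in an accepting state.
    return state in accepting

# Example DFA over {0, 1} accepting words that end in '1'.
delta = {("a", "0"): "a", ("a", "1"): "b",
         ("b", "0"): "a", ("b", "1"): "b"}
print(run_dfa(delta, "a", {"b"}, "0101"))  # True
print(run_dfa(delta, "a", {"b"}, "10"))    # False
```

Contrast this with a Turing machine, which receives the whole input on its tape up front and is free to keep computing after scanning it.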
This difference in the input model explains the different semantics you have indicated. There are, however, finite automata which have input semantics similar to a Turing machine, namely two-way finite automata. Although behaving superficially like Turing machines, they are equivalent in power to deterministic finite automata (DFAs). | {
"domain": "cs.stackexchange",
"id": 9770,
"tags": "turing-machines, finite-automata"
} |
Why do we count reverse complements when counting kmers in a single DNA strand? | Question: When looking for repeat patterns in a DNA sequence, we can look for the pattern and its reverse complement with up to d mismatches. However, why do we look for reverse complements if we're analyzing a single DNA strand?
I thought about it, and my guess is that this lets us analyze both strands of DNA. For example, if we find the reverse complement at position x, that means that the other strand of DNA has the pattern at position x - pattern.length.
Is that correct?
Answer: When we sequence genomes, the individual reads come from both strands, and that also applies to assembled sequences* (contigs / scaffolds). Therefore, for most of the genome assemblies out there, the strandedness of individual scaffolds is actually an arbitrary property, and reverse complementing any sequence will generate an equally valid representation (with reversed coordinates).
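Since strandedness is arbitrary, scanning one strand for a k-mer together with its reverse complement covers both strands of the duplex at once. A minimal sketch in plain Python (function names are my own, not from any particular tool):

```python
def reverse_complement(seq):
    """Return the reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[base] for base in reversed(seq))

def count_with_revcomp(text, kmer):
    """Count occurrences of kmer or its reverse complement in text,
    i.e. occurrences of kmer on either strand of the duplex."""
    targets = {kmer, reverse_complement(kmer)}
    k = len(kmer)
    return sum(text[i:i + k] in targets for i in range(len(text) - k + 1))

print(reverse_complement("ATGC"))           # GCAT
print(count_with_revcomp("AACGTT", "AAC"))  # 2
```

Here AAC occurs on the given strand at position 0, while its reverse complement GTT at position 3 marks an AAC occurrence on the opposite strand.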
The coordinates of reverse-complementary k-mer matches depend on the notation. Usually the notation is still in the coordinates of the reference; for example, blast reports reverse-complementary subject matches with the from coordinate greater than the to coordinate. Other formats denote "the leftmost" position, with the strand encoded in a different column (e.g. sam).
*perhaps with the exception of chromosome-level assemblies | {
"domain": "bioinformatics.stackexchange",
"id": 1959,
"tags": "k-mer"
} |
rosdep update error | Question:
Hello,
I'm installing ROS Hydro on Ubuntu 12.04 on a BeagleBone Black,
the processor is ARMv7,
the OS type is 32-bit.
After running "sudo rosdep init" successfully, when I run "rosdep update" it shows errors as follows:
ubuntu@ubuntu-armhf:~$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml]:
<urlopen error [Errno -2] Name or service not known>
(https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml]:
<urlopen error [Errno -2] Name or service not known>
(https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml]:
<urlopen error [Errno -2] Name or service not known>
(https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml)
Hit https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml
Query rosdistro index https://raw.github.com/ros/rosdistro/master/index.yaml
Add distro "groovy"
Add distro "hydro"
Add distro "indigo"
ERROR: error loading sources list:
<urlopen error <urlopen error [Errno -2] Name or service not known>
(https://raw.github.com/ros/rosdistro/master/indigo/distribution.yaml)>
And when I run "which rosdep", "rosdep --version", and "apt-cache policy python-rosdep", the result is:
ubuntu@ubuntu-armhf:~$ which rosdep
/usr/bin/rosdep
ubuntu@ubuntu-armhf:~$ rosdep --version
0.10.30
ubuntu@ubuntu-armhf:~$ apt-cache policy python-rosdep
python-rosdep:
Installed: 0.10.30-1
Candidate: 0.10.30-1
Version table:
*** 0.10.30-1 0
500 http://packages.namniart.com/repos/ros/ precise/main armhf Packages
100 /var/lib/dpkg/status
Please help me resolve these errors.
Originally posted by lanxuedonghe on ROS Answers with karma: 11 on 2015-08-17
Post score: 1
Original comments
Comment by marcobecerrap on 2016-05-16:
I also have this error; can anyone help?
Answer:
The error here is Name or service not known, which suggests that you don't have a network connection, or your DNS settings are incorrect.
Are you sure your network connection is working? Try ping www.google.com
Originally posted by ahendrix with karma: 47576 on 2015-08-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by artemiialessandrini on 2019-11-01:
He never gets a ping to Google; it's blocked in China :D | {
"domain": "robotics.stackexchange",
"id": 22467,
"tags": "ros, ros-hydro, ubuntu, ubuntu-precise, beagleboneblack"
} |
A Java subclass of ArrayList that supports rotation in constant time - follow-up | Question: (See the previous (and initial) iteration.)
Compared to the previous iteration, I have added more methods that maintain the invariant required for constant time rotations, such as bulk add/remove, iterators, dumping contents to array.
I have made more progress on my java.util.ArrayList subclass that supports rotations in constant time. Here is what I have now:
RotableArrayList.java
package net.coderodde.util;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Comparator;
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.ListIterator;
import java.util.NoSuchElementException;
import java.util.Objects;
import java.util.Set;
import java.util.Spliterator;
/**
* This class implements a rotable list. Pushing to the front or the end of this
* list runs in constant amortized time. Rotation runs in constant time.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Mar 24, 2016)
*/
public class RotableArrayList<E> extends ArrayList<E> {
private int finger;
@Override
public E get(int index) {
checkAccessIndex(index);
return super.get((index + finger) % size());
}
@Override
public E set(int index, E element) {
checkAccessIndex(index);
E ret = get(index);
super.set((index + finger) % size(), element);
return ret;
}
@Override
public void add(int index, E element) {
checkAdditionIndex(index);
super.add((index + finger) % (size() + 1), element);
}
@Override
public boolean add(E element) {
add(size(), element);
return true;
}
@Override
public boolean addAll(Collection<? extends E> coll) {
if (coll.isEmpty()) {
return false;
}
super.addAll(finger, coll);
finger += coll.size();
return true;
}
@Override
public boolean addAll(int index, Collection<? extends E> coll) {
if (coll.isEmpty()) {
return false;
}
int actualIndex = finger + index;
if (actualIndex >= size()) {
actualIndex %= size();
finger += coll.size();
}
super.addAll(actualIndex, coll);
return true;
}
@Override
public E remove(int index) {
checkRemovalIndex(index);
E ret = this.get(index);
super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
@Override
public void clear() {
super.clear();
finger = 0;
}
@Override
public int indexOf(Object o) {
int size = size();
for (int index = 0; index < size; ++index) {
if (Objects.equals(o, get(index))) {
return index;
}
}
return -1;
}
@Override
public int lastIndexOf(Object o) {
for (int index = size() - 1; index >= 0; --index) {
if (Objects.equals(o, get(index))) {
return index;
}
}
return -1;
}
@Override
public void sort(Comparator<? super E> c) {
super.sort(c);
finger = 0;
}
@Override
public Iterator<E> iterator() {
return new Iterator<E>() {
private final ListIterator<E> listIterator = listIterator(0);
@Override
public boolean hasNext() {
return listIterator.hasNext();
}
@Override
public E next() {
return listIterator.next();
}
@Override
public void remove() {
listIterator.remove();
}
};
}
@Override
public ListIterator<E> listIterator() {
return listIterator(0);
}
@Override
public ListIterator<E> listIterator(int index) {
return new RotableListIterator(index);
}
@Override
public Spliterator<E> spliterator() {
throw new UnsupportedOperationException();
}
@Override
public List<E> subList(int fromIndex, int toIndex) {
throw new UnsupportedOperationException();
}
@Override
public Object[] toArray() {
Object[] array = new Object[size()];
int index = 0;
for (E element : this) {
array[index++] = element;
}
return array;
}
@Override
public <E> E[] toArray(E[] a) {
if (a.length < size()) {
a = Arrays.copyOf(a, size());
}
int index = 0;
for (Object element : this) {
a[index++] = (E) element;
}
if (a.length > size()) {
a[size()] = null;
}
return a;
}
@Override
public boolean remove(Object o) {
int size = size();
for (int index = 0; index < size; ++index) {
if (Objects.equals(o, get(index))) {
remove(index);
// size = 10, finger = 7, index = 4
if (index + finger >= size()) {
--finger;
}
return true;
}
}
return false;
}
@Override
public boolean removeAll(Collection<?> coll) {
if (coll.isEmpty()) {
return false;
}
Set<?> set = (coll instanceof HashSet) ?
(Set<?>) coll :
new HashSet<>(coll);
Iterator<E> iterator = this.iterator();
while (iterator.hasNext()) {
E current = iterator.next();
if (set.contains(current)) {
iterator.remove();
}
}
return true;
}
@Override
public boolean retainAll(Collection<?> coll) {
if (coll.isEmpty()) {
return false;
}
Set<?> set = (coll instanceof HashSet) ?
(Set<?>) coll :
new HashSet<>(coll);
Iterator<E> iterator = iterator();
while (iterator.hasNext()) {
E current = iterator.next();
if (!set.contains(current)) {
iterator.remove();
}
}
return true;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder("[");
int size = size();
for (int index = 0; index < size; ++index) {
sb.append(get(index));
if (index < size - 1) {
sb.append(", ");
}
}
return sb.append("]").toString();
}
public void rotate(int offset) {
finger -= offset;
finger %= size();
if (finger < 0) {
finger += size();
}
}
private void checkAccessIndex(int index) {
if (index < 0) {
throw new IndexOutOfBoundsException(
"The access index is negative: " + index + ".");
}
if (index >= size()) {
throw new IndexOutOfBoundsException(
"The access index is too large: " + index + "." +
"The size of the list is " + size() + ".");
}
}
private void checkAdditionIndex(int index) {
if (index < 0) {
throw new IndexOutOfBoundsException(
"The access index is negative: " + index + ".");
}
if (index > size()) {
throw new IndexOutOfBoundsException(
"The addition index is too large: " + index + "." +
"The size of the list is " + size() + ".");
}
}
private void checkRemovalIndex(int index) {
if (index < 0) {
throw new IndexOutOfBoundsException(
"The removal index is negative: " + index + ".");
}
if (index >= size()) {
throw new IndexOutOfBoundsException(
"The removal index is too large: " + index + "." +
"The size of the list is " + size() + ".");
}
}
private final class RotableListIterator implements ListIterator<E> {
// Index is an arrow that points between two array elements:
// array[index - 1] and array[index].
private int expectedModCount = RotableArrayList.super.modCount;
private int index;
private int indexOfIteratedElement = -1;
private boolean lastMoveWasNext;
public RotableListIterator(int index) {
this.index = index;
}
@Override
public boolean hasNext() {
return index < RotableArrayList.this.size();
}
@Override
public E next() {
checkConcurrentModification();
if (!hasNext()) {
throw new NoSuchElementException(
"No next element in this iterator.");
}
indexOfIteratedElement = index;
lastMoveWasNext = true;
return (E) RotableArrayList.this.get(index++);
}
@Override
public boolean hasPrevious() {
return index > 0;
}
@Override
public E previous() {
checkConcurrentModification();
if (!hasPrevious()) {
throw new NoSuchElementException(
"No previous element in this iterator.");
}
indexOfIteratedElement = --index;
lastMoveWasNext = false;
return (E) RotableArrayList.this.get(index);
}
@Override
public int nextIndex() {
return index;
}
@Override
public int previousIndex() {
return index - 1;
}
@Override
public void remove() {
if (indexOfIteratedElement == -1) {
throw new IllegalStateException(
"There is no element to remove.");
}
checkConcurrentModification();
E ret = RotableArrayList.this.remove(indexOfIteratedElement);
indexOfIteratedElement = -1;
expectedModCount = RotableArrayList.super.modCount;
if (lastMoveWasNext) {
index--;
}
}
@Override
public void set(E e) {
if (indexOfIteratedElement == -1) {
throw new IllegalStateException("There is no current element.");
}
checkConcurrentModification();
RotableArrayList.this.set(indexOfIteratedElement, e);
expectedModCount = RotableArrayList.super.modCount;
}
@Override
public void add(E e) {
checkConcurrentModification();
RotableArrayList.this.add(nextIndex(), e);
index++;
indexOfIteratedElement = -1;
expectedModCount = RotableArrayList.super.modCount;
}
private void checkConcurrentModification() {
if (expectedModCount != RotableArrayList.super.modCount) {
throw new ConcurrentModificationException(
"Expected mod count: " + expectedModCount + ", " +
"actual mod count: " + RotableArrayList.super.modCount);
}
}
}
public static void main(String[] args) {
RotableArrayList<Integer> list = new RotableArrayList<>();
for (int i = 0; i < 5; ++i) {
list.add(i);
}
System.out.println("Rotating to the right:");
for (int i = 0; i < list.size(); ++i) {
System.out.println(list);
list.rotate(1);
}
System.out.println("Rotating to the left:");
for (int i = 0; i < list.size(); ++i) {
System.out.println(list);
list.rotate(-1);
}
}
}
RotableArrayListTest.java
package net.coderodde.util;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.ListIterator;
import java.util.NoSuchElementException;
import java.util.Set;
import org.junit.Test;
import static org.junit.Assert.*;
import org.junit.Before;
public class RotableArrayListTest {
private final RotableArrayList<Integer> list = new RotableArrayList<>();
@Before
public void before() {
list.clear();
}
private void load(int n) {
for (int i = 0; i < n; ++i) {
list.add(i);
}
}
@Test
public void testGet() {
load(5);
for (int i = 0; i < 5; ++i) {
assertEquals(Integer.valueOf(i), list.get(i));
}
list.rotate(2);
assertEquals(Integer.valueOf(3), list.get(0));
assertEquals(Integer.valueOf(4), list.get(1));
assertEquals(Integer.valueOf(0), list.get(2));
assertEquals(Integer.valueOf(1), list.get(3));
assertEquals(Integer.valueOf(2), list.get(4));
list.rotate(-4);
assertEquals(Integer.valueOf(2), list.get(0));
assertEquals(Integer.valueOf(3), list.get(1));
assertEquals(Integer.valueOf(4), list.get(2));
assertEquals(Integer.valueOf(0), list.get(3));
assertEquals(Integer.valueOf(1), list.get(4));
list.rotate(-8);
for (int i = 0; i < 5; ++i) {
assertEquals(Integer.valueOf(i), list.get(i));
}
}
@Test
public void testSet() {
load(4);
list.set(0, 3);
list.set(1, 2);
list.set(2, 1);
list.set(3, 0);
assertEquals(Integer.valueOf(3), list.get(0));
assertEquals(Integer.valueOf(2), list.get(1));
assertEquals(Integer.valueOf(1), list.get(2));
assertEquals(Integer.valueOf(0), list.get(3));
}
@Test
public void testAdd_int_GenericType() {
load(2);
list.add(1, 10);
list.add(0, 11);
list.add(4, 12);
assertEquals(Integer.valueOf(11), list.get(0));
assertEquals(Integer.valueOf(0), list.get(1));
assertEquals(Integer.valueOf(10), list.get(2));
assertEquals(Integer.valueOf(1), list.get(3));
assertEquals(Integer.valueOf(12), list.get(4));
}
@Test
public void testAdd_GenericType() {
load(2);
list.add(10);
assertEquals(Integer.valueOf(0), list.get(0));
assertEquals(Integer.valueOf(1), list.get(1));
assertEquals(Integer.valueOf(10), list.get(2));
}
@Test
public void testRemove_int() {
load(5);
list.remove(4);
list.remove(2);
list.remove(0);
assertEquals(Integer.valueOf(1), list.get(0));
assertEquals(Integer.valueOf(3), list.get(1));
list.clear();
load(5);
list.rotate(2);
list.remove(0);
assertEquals(Integer.valueOf(4), list.get(0));
assertEquals(Integer.valueOf(0), list.get(1));
assertEquals(Integer.valueOf(1), list.get(2));
assertEquals(Integer.valueOf(2), list.get(3));
list.clear();
load(10);
list.rotate(3);
list.remove(Integer.valueOf(3));
}
@Test
public void testIndexOf() {
load(10);
list.rotate(-3);
assertEquals(1, list.indexOf(4));
}
@Test
public void testLastIndexOf() {
load(10);
list.rotate(-3);
assertEquals(8, list.lastIndexOf(1));
}
@Test
public void testSort() {
list.add(5);
list.add(2);
list.add(1);
list.add(4);
list.rotate(-1);
list.sort(Integer::compare);
assertEquals(Integer.valueOf(1), list.get(0));
}
@Test
public void testRemove_Object() {
load(5);
list.rotate(-1);
list.remove(Integer.valueOf(3));
assertEquals(4, list.size());
assertEquals(Integer.valueOf(1), list.get(0));
assertEquals(Integer.valueOf(2), list.get(1));
assertEquals(Integer.valueOf(4), list.get(2));
assertEquals(Integer.valueOf(0), list.get(3));
}
@Test
public void testRotate() {
load(10);
list.rotate(2);
assertEquals(Integer.valueOf(8), list.get(0));
assertEquals(Integer.valueOf(9), list.get(1));
assertEquals(Integer.valueOf(0), list.get(2));
list.rotate(-5);
assertEquals(Integer.valueOf(3), list.get(0));
assertEquals(Integer.valueOf(4), list.get(1));
assertEquals(Integer.valueOf(5), list.get(2));
}
@Test
public void testListIteratorNextAndHasNext() {
load(5);
ListIterator<Integer> iterator = list.listIterator(2);
assertTrue(iterator.hasNext());
assertEquals(Integer.valueOf(2), iterator.next());
assertTrue(iterator.hasNext());
assertEquals(Integer.valueOf(3), iterator.next());
assertTrue(iterator.hasNext());
assertEquals(Integer.valueOf(4), iterator.next());
assertFalse(iterator.hasNext());
assertTrue(iterator.hasPrevious());
try {
iterator.next();
fail();
} catch (NoSuchElementException ex) {
}
}
@Test
public void testListIteratorPreviousAndHasPrevious() {
load(5);
ListIterator<Integer> iterator = list.listIterator(2);
assertTrue(iterator.hasPrevious());
assertEquals(Integer.valueOf(1), iterator.previous());
assertTrue(iterator.hasPrevious());
assertEquals(Integer.valueOf(0), iterator.previous());
assertTrue(iterator.hasNext());
assertFalse(iterator.hasPrevious());
try {
iterator.previous();
fail();
} catch (NoSuchElementException ex) {
}
}
@Test
public void testListIteratorAdd() {
List<Integer> list2 = new ArrayList<>();
load(5);
for (int i = 0; i < 5; ++i) {
list2.add(i);
}
ListIterator<Integer> iter = list.listIterator();
ListIterator<Integer> iter2 = list2.listIterator();
iter.add(10);
iter.add(11);
iter2.add(10);
iter2.add(11);
assertTrue(listsEqual(list, list2));
try {
iter.remove();
fail("List should have thrown IllegalStateException.");
} catch (IllegalStateException ex) {
}
try {
iter2.remove();
fail("List should have thrown IllegalStateException.");
} catch (IllegalStateException ex) {
}
}
@Test
public void testListIteratorSet() {
List<Integer> list2 = new ArrayList<>(Arrays.asList(0, 1, 2, 3, 4));
load(5);
ListIterator<Integer> iter = list.listIterator(2);
ListIterator<Integer> iter2 = list2.listIterator(2);
try {
iter2.set(10);
fail("ListIterator.set should have thrown IllegalStateException.");
} catch (IllegalStateException ex) {
}
try {
iter.set(10);
fail("ListIterator.set should have thrown IllegalStateException.");
} catch (IllegalStateException ex) {
}
iter.previous();
iter2.previous();
iter.set(10);
iter2.set(10);
assertTrue(listsEqual(list, list2));
}
@Test
public void testListIteratorRemove() {
List<Integer> list2 = new ArrayList<>(Arrays.asList(0, 1, 2, 3, 4));
load(5);
ListIterator<Integer> iter = list.listIterator(2);
ListIterator<Integer> iter2 = list2.listIterator(2);
try {
iter2.remove();
fail("ListIterator.remove should have thrown " +
"IllegalStateException.");
} catch (IllegalStateException ex) {
}
try {
iter.remove();
fail("ListIterator.remove should have thrown " +
"IllegalStateException.");
} catch (IllegalStateException ex) {
}
iter.next();
iter2.next();
iter.remove();
iter2.remove();
assertTrue(listsEqual(list, list2));
try {
iter2.remove();
fail("ListIterator.remove should have thrown " +
"IllegalStateException.");
} catch (IllegalStateException ex) {
}
try {
iter.remove();
fail("ListIterator.remove should have thrown " +
"IllegalStateException.");
} catch (IllegalStateException ex) {
}
iter.next();
iter2.next();
iter.remove();
iter2.remove();
assertTrue(listsEqual(list, list2));
iter.previous();
iter2.previous();
iter.remove();
iter2.remove();
assertTrue(listsEqual(list, list2));
}
@Test
public void testAddAll() {
load(4);
list.rotate(-2); // 2, 3, 0, 1
list.addAll(Arrays.asList(4, 5, 6));
assertEquals(Integer.valueOf(2), list.get(0));
assertEquals(Integer.valueOf(3), list.get(1));
assertEquals(Integer.valueOf(0), list.get(2));
assertEquals(Integer.valueOf(1), list.get(3));
assertEquals(Integer.valueOf(4), list.get(4));
assertEquals(Integer.valueOf(5), list.get(5));
assertEquals(Integer.valueOf(6), list.get(6));
List<Integer> list2 = new ArrayList<>();
assertFalse(list.addAll(Arrays.asList()));
assertFalse(list2.addAll(Arrays.asList()));
assertTrue(list.addAll(Arrays.asList(1)));
assertTrue(list2.addAll(Arrays.asList(1)));
}
@Test
public void testAddAllInt() {
load(4);
list.rotate(-2); // 2, 3, 0, 1
list.addAll(1, Arrays.asList(4, 5, 6));
assertEquals(Integer.valueOf(2), list.get(0));
assertEquals(Integer.valueOf(4), list.get(1));
assertEquals(Integer.valueOf(5), list.get(2));
assertEquals(Integer.valueOf(6), list.get(3));
assertEquals(Integer.valueOf(3), list.get(4));
assertEquals(Integer.valueOf(0), list.get(5));
assertEquals(Integer.valueOf(1), list.get(6));
List<Integer> list2 = new ArrayList<>(list);
assertFalse(list.addAll(1, Arrays.asList()));
assertFalse(list2.addAll(1, Arrays.asList()));
assertTrue(list.addAll(1, Arrays.asList(1)));
assertTrue(list2.addAll(1, Arrays.asList(1)));
list.clear();
load(5);
list.rotate(2); // 3, 4, 0, 1, 2
list.addAll(3, Arrays.asList(10, 11));
assertEquals(Integer.valueOf(3), list.get(0));
assertEquals(Integer.valueOf(4), list.get(1));
assertEquals(Integer.valueOf(0), list.get(2));
assertEquals(Integer.valueOf(10), list.get(3));
assertEquals(Integer.valueOf(11), list.get(4));
assertEquals(Integer.valueOf(1), list.get(5));
assertEquals(Integer.valueOf(2), list.get(6));
list.clear();
load(5);
list.rotate(2); // 3, 4, 0, 1, 2
list.addAll(1, Arrays.asList(10, 11)); // 3, 10, 11, 4, 0, 1, 2
assertEquals(Integer.valueOf(3), list.get(0));
assertEquals(Integer.valueOf(10), list.get(1));
assertEquals(Integer.valueOf(11), list.get(2));
assertEquals(Integer.valueOf(4), list.get(3));
assertEquals(Integer.valueOf(0), list.get(4));
assertEquals(Integer.valueOf(1), list.get(5));
assertEquals(Integer.valueOf(2), list.get(6));
}
@Test
public void testToArray() {
load(5);
list.rotate(3); // 2, 3, 4, 0, 1
Object[] array = list.toArray();
assertEquals(2, array[0]);
assertEquals(3, array[1]);
assertEquals(4, array[2]);
assertEquals(0, array[3]);
assertEquals(1, array[4]);
}
@Test
public void testGenericToArray() {
load(5);
list.rotate(-2); // 2, 3, 4, 0, 1
Integer[] array = new Integer[4];
Integer[] result = list.toArray(array);
assertTrue(array != result);
assertEquals(5, result.length);
assertEquals(Integer.valueOf(2), result[0]);
assertEquals(Integer.valueOf(3), result[1]);
assertEquals(Integer.valueOf(4), result[2]);
assertEquals(Integer.valueOf(0), result[3]);
assertEquals(Integer.valueOf(1), result[4]);
array = new Integer[5];
result = list.toArray(array);
assertTrue(array == result);
assertEquals(5, result.length);
assertEquals(Integer.valueOf(2), result[0]);
assertEquals(Integer.valueOf(3), result[1]);
assertEquals(Integer.valueOf(4), result[2]);
assertEquals(Integer.valueOf(0), result[3]);
assertEquals(Integer.valueOf(1), result[4]);
array = new Integer[6];
result = list.toArray(array);
assertTrue(array == result);
assertEquals(6, array.length);
assertEquals(Integer.valueOf(2), result[0]);
assertEquals(Integer.valueOf(3), result[1]);
assertEquals(Integer.valueOf(4), result[2]);
assertEquals(Integer.valueOf(0), result[3]);
assertEquals(Integer.valueOf(1), result[4]);
assertNull(result[5]); // Cut off value.
}
@Test
public void testRemoveAll() {
load(5);
list.rotate(3);
Set<Integer> set = new HashSet<>(Arrays.asList(2, 3, 1));
list.removeAll(set);
assertEquals(Integer.valueOf(4), list.get(0));
assertEquals(Integer.valueOf(0), list.get(1));
list.clear();
load(10);
list.rotate(3); // 7, 8, 9, 0, 1, 2, 3, 4, 5, 6
set.clear();
set.addAll(Arrays.asList(0, 2, 3, 6, 9));
list.removeAll(set);
assertEquals(Integer.valueOf(7), list.get(0));
assertEquals(Integer.valueOf(8), list.get(1));
assertEquals(Integer.valueOf(1), list.get(2));
assertEquals(Integer.valueOf(4), list.get(3));
assertEquals(Integer.valueOf(5), list.get(4));
}
@Test
public void testRetainAll() {
load(5);
list.rotate(3); // 2, 3, 4, 0, 1
Set<Integer> set = new HashSet<>(Arrays.asList(2, 3, 1));
list.retainAll(set); // 2, 3, 1
assertEquals(Integer.valueOf(2), list.get(0));
assertEquals(Integer.valueOf(3), list.get(1));
assertEquals(Integer.valueOf(1), list.get(2));
list.clear();
load(10);
list.rotate(3); // 7, 8, 9, 0, 1, 2, 3, 4, 5, 6
set.clear();
set.addAll(Arrays.asList(0, 2, 3, 6, 9));
list.retainAll(set);
assertEquals(Integer.valueOf(9), list.get(0));
assertEquals(Integer.valueOf(0), list.get(1));
assertEquals(Integer.valueOf(2), list.get(2));
assertEquals(Integer.valueOf(3), list.get(3));
assertEquals(Integer.valueOf(6), list.get(4));
}
private boolean listsEqual(List<Integer> list, List<Integer> list2) {
if (list.size() != list2.size()) {
return false;
}
for (int i = 0; i < list.size(); ++i) {
if (!list.get(i).equals(list2.get(i))) {
return false;
}
}
return true;
}
}
Please, tell me anything that comes to mind.
Answer: remove checks the index twice:
@Override
public E remove(int index) {
checkRemovalIndex(index);
E ret = this.get(index);
super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
Here's what get looks like:
@Override
public E get(int index) {
checkAccessIndex(index);
return super.get((index + finger) % size());
}
Here's remove with the get call inlined:
@Override
public E remove(int index) {
checkRemovalIndex(index);
checkAccessIndex(index);
E ret = super.get((index + finger) % size());
super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
It checks the index twice!
I think you could do better by directly using the return value of super.remove:
@Override
public E remove(int index) {
checkRemovalIndex(index);
E ret = super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
addAll with index greater than list size corrupts finger:
@Override
public boolean addAll(int index, Collection<? extends E> coll) {
if (coll.isEmpty()) {
return false;
}
int actualIndex = finger + index;
if (actualIndex >= size()) {
actualIndex %= size();
finger += coll.size();
}
super.addAll(actualIndex, coll);
return true;
}
It will throw an exception, but after that you'll be in a dirty state. If you then call addAll again, but without an index (addAll(Collection)), you'll get more exceptions for going out of bounds.
"domain": "codereview.stackexchange",
"id": 19201,
"tags": "java, array, unit-testing, collections"
} |
Is the finite inverse semigroup isomorphism problem GI-complete? | Question: Is the finite inverse semigroup isomorphism problem GI-complete? Here the finite inverse semigroups are assumed to be given by their multiplication tables.
Answer: Yes, the finite inverse semigroup isomorphism problem is GI-complete! This is a corollary of
Theorem: Lattice isomorphism is isomorphism complete
from section 7.2 Lattices and Posets in
Booth, Kellogg S.; Colbourn, C. J. (1977), Problems polynomially equivalent to graph isomorphism, Technical Report CS-77-04, Computer Science Department, University of Waterloo.
because a (semi-)lattice is also an (idempotent commutative) inverse semigroup.
Proof of theorem from technical report:
It suffices to represent a graph uniquely as a lattice. Given a graph $G$ with $n$ vertices and $m$ edges, we define a lattice with an element for each vertex, an element for each edge, and two additional elements $0$ and $1$. Element $1$ dominates all others (the supremum), and element $0$ is dominated by all other elements (the infimum). An edge dominates exactly those vertices which are its endpoints. The result is a lattice which uniquely represents $G$.
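The construction can be sketched programmatically (the element labels and the helper name are my own, not from the report): given a simple graph, emit the strict dominance pairs of the corresponding lattice.

```python
def graph_lattice_order(vertices, edges):
    """Strict dominance pairs (x, y) meaning x < y in the lattice built
    from a simple graph, following the construction in the proof:
    '0' below everything, '1' above everything, and each edge element
    above exactly its two endpoint vertices."""
    elems = [('v', v) for v in vertices] + [('e', u, v) for (u, v) in edges]
    order = set()
    for x in elems:
        order.add(('0', x))   # '0' is the infimum
        order.add((x, '1'))   # '1' is the supremum
    order.add(('0', '1'))
    for (u, v) in edges:
        order.add((('v', u), ('e', u, v)))
        order.add((('v', v), ('e', u, v)))
    return order
```

Two graphs are then isomorphic exactly when the resulting lattices are, which is what makes lattice isomorphism (and hence inverse semigroup isomorphism) GI-hard.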
The idea for this answer came from a discussion with vzn about sufficiently focused questions. The motivation to spend time on graph isomorphism at all also came from vzn's repeated prodding. J.-E. Pin asked in the comment whether there are any specific reasons to consider inverse semigroups. The idea was to have a structure slightly generalizing groups, which is GI complete. I wanted to better understand the relation between group isomorphism and graph isomorphism, but I fear this answer doesn't provide any insight of this sort. | {
"domain": "cstheory.stackexchange",
"id": 3336,
"tags": "cc.complexity-theory, graph-isomorphism"
} |
Nyquist–Shannon sampling theorem for Quantum Evolution | Question: In classical digital signal processing one can try to identify the dynamics of a system by sampling its evolution from an initial time $t_0$ to a final time $t_1$. Sampling $N$ times results in a discrete recreation of its evolution and one can try to reconstruct with various methods the actual continuous time evolution later.
By the Nyquist–Shannon sampling theorem if $N$ is sufficiently large then the discrete dynamics error should not be conceivable (e.g. if we are sampling the sound waves from a piano). Thus, to the observer non-observable.
Question
Is there some analog theorem or application of the Nyquist–Shannon sampling theorem when one wants to sample the evolution of a quantum state evolving under some Hamiltonian $\hat H$? (e.g. a qubit within some superconducting processor.)
One would record a time-series $\{|\psi_0\rangle, \ldots,|\psi_N\rangle \}$ where,
$$
|\psi_i\rangle = e^{-i\hat H (t_i-t_0)}|\psi_0\rangle.
$$
Could one prepare the state at least as many times as it is supposed to be sampled, say $N$, and allow evolution each time, measuring with frequency $1/N$? I think the answer here is yes.
What can be said then about the state evolution and where could "quantum noise" ruin the reconstruction of the dynamics compared to a classical system?
Some errors would originate from the state reconstruction, but what other sources of error would one have to tackle?
Answer: In order to apply Nyquist-Shannon sampling theory, we need to know the maximum frequency that will be present in the signal we intend to measure. We will do this by rewriting a time-dependent observable in terms of the frequencies present in the Hamiltonian $H$.
Consider an arbitrary observable $O$ which will time evolve under application of $H$ in the Heisenberg picture (with $\hbar = 1$) according to
\begin{align}
O(t) &= e^{i H t} O e^{-i H t} \tag{1}
\\&= \sum_{j, k} e^{i(\omega_j - \omega_k) t} \langle \omega_j|O |\omega_k\rangle |\omega_j \rangle \langle \omega_k | \tag{2}
\end{align}
where for simplicity I consider the finite-dimensional case such that $H$ admits a spectral decomposition
\begin{equation}
H = \sum_{k} \omega_k |\omega_k\rangle\langle\omega_k |\tag{3}
\end{equation}
Take any initial state $|\psi_0\rangle$ and define $c_{jk} = \langle \omega_j|O |\omega_k\rangle \langle \psi_0|\omega_j \rangle \langle \omega_k | \psi_0\rangle$ for housekeeping, and we find that the expected value of $O(t)$ is
\begin{align}
\langle O(t) \rangle &= \sum_{j,k} c_{jk} e^{i(\omega_j - \omega_k) t}\tag{4}
\end{align}
Then defining the discrete set of frequency differences
\begin{align}
\Delta = \{\omega_j - \omega_k: \omega_k,\omega_j \in \text{spec}(H)\}\tag{5}
\end{align}
we can gather terms of the same frequency difference to rewrite (4) as
\begin{align}
\langle O(t) \rangle &= \sum_{\delta \in \Delta} \left(\sum_{\substack{j,k: \\\omega_j - \omega_k = \delta}}c_{jk}\right) e^{i\delta t} \tag{6}
\\&:= \sum_{\delta \in \Delta} a(\delta)e^{i\delta t}\tag{7}
\end{align}
This provides the expected value as a Fourier decomposition, and we can identify the highest frequency present as $\max (\Delta) = \omega_{max}(H) - \omega_{min}(H)$ (no need to worry about the sign since by construction $\delta \in \Delta$ implies $-\delta \in \Delta$). Then applying the Nyquist-Shannon sampling theorem we find that in order to reconstruct $\langle O(t) \rangle$ it is sufficient to measure $O$ at a frequency $\omega_s$ satisfying
\begin{equation}
\omega_s \geq 2\max(\Delta) \tag{8}
\end{equation}
which of course will need to be performed over many different trials of preparing $|\psi_0\rangle$ and measuring $O$.
Applying this to your question, if you are trying to reconstruct the complete wavefunction $|\psi (t)\rangle$ then yes, you should be able to do so by constructing each element in the set
\begin{equation}
\{|\psi (0)\rangle, |\psi(\Delta t)\rangle, |\psi(2 \Delta t)\rangle, \dots |\psi(N \Delta t)\rangle\}\tag{9}
\end{equation}
at a time interval $\Delta t \leq 1 / f_s = 2 \pi / \omega_s$. This follows because determining the elements of $|\psi (t)\rangle$ (using, e.g. quantum state tomography) involves a series of experiments measuring different observables. None of these observables can have time dependence with a frequency greater than $\max (\Delta)$ and therefore no composition of these observables needed for the state tomography can have a frequency greater than $\max (\Delta)$ either. Of course, tomography experiments are generally expensive as they require an experimental overhead scaling like $O(\text{dim}(H))$. Though maybe there might be savings for that scaling in the context of this experiment where each state in the sequence is somehow related to the previous.
On the other hand I'm not sure if a sampling rate $1 / f_s$ is necessary for every set of observables; it may be that some sets of observables exhibit different maximum frequencies in their time dependence (all upperbounded by $\max(\Delta)$ of course) and so this might be interesting to look into further.
Finally, yes you will need to be concerned with shot noise for the estimator $\tilde{O}(t)$ of $\langle O(t)\rangle$, as the deviations of $\tilde{O}(t)$ around the true mean can possibly introduce much higher frequencies to the spectrum of $\tilde{O}(t)$ compared to $\langle O(t)\rangle$. | {
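As a small numerical sketch of condition (8) — the 2×2 Hamiltonian below is an arbitrary example with made-up entries, and $\hbar = 1$:

```python
import numpy as np

# Example Hermitian Hamiltonian (arbitrary numbers for illustration).
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])

omegas = np.linalg.eigvalsh(H)           # spectrum of H, as in eq. (3)
delta_max = omegas.max() - omegas.min()  # max(Delta), eq. (5)
omega_s = 2 * delta_max                  # minimum sampling rate, eq. (8)
dt_max = 2 * np.pi / omega_s             # largest allowed sampling interval
```

For larger systems only the extreme eigenvalues matter, so the bound can be estimated without full diagonalization.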
"domain": "quantumcomputing.stackexchange",
"id": 3381,
"tags": "quantum-state, error-correction, hamiltonian-simulation, error-mitigation"
} |
Identify binary stars in nbody simulation | Question: I'm doing a nbody simulation, and I'm interested in the formation of binary systems in the temporal evolution. I can identify them by eye, but I don't know an algoritmic criteria that can say me if two stars are a binary system or not.
Is there a deterministic criteria that can say me if two arbitrary stars in the simulation are a binary system?
Answer: This is really a difficult problem, but possibly not for the reason you imagine. The following naif criterion seems at first highly appropriate:
take any pair of stars, subtract the center-of-mass motion, compute the kinetic ($T$) and gravitational ($W$) energies, and check whether
$T + W < 0$.
If so, the pair is bound, otherwise it is not.
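In a simulation this criterion is only a few lines (a sketch; the function name is my own, and $G$ is passed in explicitly so any unit system works):

```python
import numpy as np

def is_bound(m1, m2, r1, r2, v1, v2, G=1.0):
    """Naive two-body bound test: kinetic plus gravitational energy
    of the pair, with the centre-of-mass motion removed."""
    r1, r2, v1, v2 = map(np.asarray, (r1, r2, v1, v2))
    mu = m1 * m2 / (m1 + m2)          # reduced mass
    v_rel = v1 - v2
    T = 0.5 * mu * v_rel @ v_rel      # kinetic energy in the COM frame
    W = -G * m1 * m2 / np.linalg.norm(r1 - r2)
    return T + W < 0
```

At each snapshot one would sweep this over candidate pairs (in practice only near neighbours, to avoid the quadratic cost).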
How can this criterion fail? I can think of three ways:
Even in an uncrowded field, there may be three or more stars bound. It is well-known that multiple systems are unstable, so establishing which of the stars will remain bounded requires more analysis. Normally, the two heaviest stars are those that remain bounded, because the lighter ones gain energy at the expense of the heavy double, to escape the bound system.
A supposedly unbound pair may later turn out to be bound after all, because of loss of angular momentum. It is well-known that the centrifugal barrier, under angular momentum conservation from the original cloud in which the stars form, is so high that nearly no stars should form. Since gravity conserves the overall amount of angular momentum, there must be non-gravitational forces at work that remove it from the collapsing gas. No one knows for sure what is responsible for these torques, even though everyone claims that they must be magnetic in origin.
Thus it can happen that a given pair has, initially, so much angular momentum as to violate the above condition, but it can also happen that angular momentum removal is still occurring (at the time your simulations end) so that more binary pairs will form than is apparent from your simulations. Since this involves processes which are, as of now, largely speculative, no one can claim to know the exact time-evolution of these torques. This is a fundamental element of ignorance, on our part.
Lastly, and least likely, in crowded fields, interactions between the (transient) pair and passers-by may unbind the binary, at the expense of the kinetic energy of the passer-by. In the limit of fully-developed stars, this has been studied especially in connection with the evolution of binaries in globular clusters (GCs, see Lyman Spitzer's Dynamical Evolution of Globular Clusters, especially Ch. 6). But you should keep in mind that in GCs two circumstances enhance the relevance of these interactions: the high stellar densities (higher than in Open Clusters), and the long time scales available, of the order of a Hubble time as compared to the few million years available before newly-formed stars walk away from their formation sites.
Still, one circumstance makes many-body encounters more effective in Open Clusters, the large size of forming stars: when a binary encounters a passer-by, energy may be dumped from the interloper's kinetic motion into tidal heating of either star in the binary, to be later dissipated through radiative mechanisms. This tidal heating slows down the interloper, hence it makes the contact last longer, and the impulse transferred to the stars in the binary larger. In other words, it facilitates unbinding the binary.
Of the above, objections 1 and 3 are easily kept under control if your simulations do not involve unrealistically crowded fields (after all, Open Clusters are barely bound, and do not last long!). But objection 2 is more fundamental, and harder to keep under control. Keep in mind that, in Nature, one out of two stellar systems observed from Earth is a binary, so just about any simulation resulting in many fewer binaries than this is likely to be tainted by an improper accounting of electromagnetic torques.
"domain": "physics.stackexchange",
"id": 30909,
"tags": "gravity, simulations, stars"
} |
A Spin up particle in QFT | Question: This appears like a question that is rarely addressed in field theory pedagogy (perhaps because the answer is obvious): how does one describe a particle of definite spin in quantum field theory?
For example, given some state in a theory of spinors (say a single-particle state for simplicity): $|\psi\rangle = \int \mathrm{d}p\, \psi(p)\, a^\dagger(p)|0\rangle$, where $a$ is the ladder operator you obtain by canonically quantizing, say, a Dirac field. Since this is a Dirac field, it describes a particle of spin 1/2. How does one extract information about the spin of this particle? For example, what would the creation operator for a single particle of definite spin up look like?
Answer: Fields in QFT are promoted to operators.
The Dirac field operator describes both a particle (electron) and an anti-particle (positron), and there exist different creation and annihilation operators for the particle ($a^+_s(p)$, $a_s(p)$) and the anti-particle ($b^+_s(p)$, $b_s(p)$). The subscript $s$ indicates that you are creating or destroying a particle of spin $s$ ($\frac{1}{2}$ or $-\frac{1}{2}$).
For more details, see the Wikipedia article on the Dirac field.
A particle is described by a state; for instance, an electron with definite momentum and spin is described by the state $|p,s\rangle = a^+_s(p)|0\rangle$, that is, you apply the creation operator $a^+_s(p)$ to the vacuum state $|0\rangle$.
"domain": "physics.stackexchange",
"id": 15165,
"tags": "quantum-field-theory, mathematical-physics, field-theory, foundations, hilbert-space"
} |
Exact meaning of constant volume heat capacity | Question: From Wikipedia:
$$
\left(\frac{\partial U}{\partial T}\right)_V = \left(\frac{\partial Q}{\partial T}\right)_V = C_V,
$$
$C_V$ is what is known as the constant-volume heat capacity. I don't really get the exact meaning of 'constant volume'. Does it mean that (a) the whole process takes place at constant volume, i.e., it can be any volume as long as it does not change during the process (the volume is maintained at its initial value throughout), or (b) $C$ is measured at one particular fixed volume $V$?
If (a) is correct, are the rates of change of $Q$ with respect to $T$ same at, for example, $V = \pu{1 dm3}$ and $V = \pu{2 dm3}$? We are not considering only perfect gases.
If (b) is correct, what is the value of $V$ for standard $C_V$?
Answer: (a) is correct. (b) is wrong. $U$ depends on $V$ only in the combination $v=V/N$. $v$ is the specific volume, an intensive quantity that does not depend on the size of the system.
We can derive this from first principles. Take $U = U(T,V,N)$. Because $U$ is extensive, $$U(T,\lambda V,\lambda N) = \lambda U(T,V,N) \quad \Longleftrightarrow \quad U(T,V,N) = \lambda^{-1}U(T,\lambda V,\lambda N),$$ by taking $\lambda = N^{-1}$, we can write this expression as $$U(T,V,N) = NU(T,V/N,1) =: Nu(T,v),$$ which produces the desired functional relationship. | {
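The same scaling carries over to the heat capacity itself: differentiating at fixed $V$ and $N$ (hence fixed $v$),

```latex
C_V = \left(\frac{\partial U}{\partial T}\right)_{V,N}
    = N\left(\frac{\partial u}{\partial T}\right)_{v}
    =: N\, c_v(T, v),
```

so $C_V$ grows with system size only through the factor $N$, while the specific heat $c_v$ depends on the volume only through $v = V/N$. Doubling both $V$ and $N$ leaves $c_v$ unchanged, which is why no "standard value of $V$" is needed.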
"domain": "chemistry.stackexchange",
"id": 11179,
"tags": "physical-chemistry, thermodynamics"
} |
[rosjava] Error using rosrun | Question:
I am trying to learn how to use rosjava. I am following the official tutorial, however in the second part Creating and Running Talker-Listener I am having some trouble.
When I run:
rosrun rosjava_catkin_package_a my_pub_sub_tutorial com.github.rosjava.rosjava_catkin_package_a.my_pub_sub_tutorial.Talker
I get the following error:
Loading node class: com.github.rosjava.rosjava_catkin_package_a.my_pub_sub_tutorial.Talker
Exception in thread "main" org.ros.exception.RosRuntimeException: Unable to locate node: com.github.rosjava.rosjava_catkin_package_a.my_pub_sub_tutorial.Talker
at org.ros.RosRun.main(RosRun.java:56)
Caused by: java.lang.ClassNotFoundException: com.github.rosjava.rosjava_catkin_package_a.my_pub_sub_tutorial.Talker
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.ros.internal.loader.CommandLineLoader.loadClass(CommandLineLoader.java:239)
at org.ros.RosRun.main(RosRun.java:54)
Also, is it necessary to use catkin_create_rosjava_project my_pub_sub_tutorial ? Is it possible to run programs the same way it is done with python? Like just creating a file.py and adding permissions to run?
Ps: I have very little knowledge about java.
Originally posted by rezenders on ROS Answers with karma: 122 on 2018-10-10
Post score: 0
Answer:
Hi @rezenders,
The error is because the class you are trying to execute is not in the classpath.
Also, is it necessary to use catkin_create_rosjava_project my_pub_sub_tutorial ? Is it possible to run programs the same way it is done with python? Like just creating a file.py and adding permissions to run?
The short answer is no.
Let me explain a bit: every Java program has a fixed entrypoint (aka main method) that is the first thing executed. In rosjava you regularly write a ROS node overriding AbstractNodeMain, so you need some main method to actually run the node for you. That is provided in RosRun class, which is the standard entrypoint for any rosjava application (look at the project level build.gradle file; you should see mainClassName declared there). When you execute the command rosrun [package] [project] [class name] you are actually calling an executable script generated by Gradle, which runs RosRun class (which is part of rosjava), which in turn loads your node to the classpath and executes it. There are some more details around all of this, but as you can guess it's not the same as executing a python script.
Did you follow the steps of the tutorial one by one without skipping anything? You shouldn't encounter any errors there.
Originally posted by jubeira with karma: 1054 on 2018-10-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jubeira on 2018-11-21:
@rezenders did that work for you?
Comment by rezenders on 2018-12-12:
It worked, I was making a mistake. Thank you for you explanation! | {
"domain": "robotics.stackexchange",
"id": 31893,
"tags": "ros-kinetic"
} |
DNA extraction from plants and algae don't use phenol. Why? | Question: I don't know if here is the correct place to make this question, but I've been noticing that protocols for the extraction of photosynthetic organisms DNA don't use phenol. Only CTAB / Chloroform.
Why?
Because bacterial or animal extraction are phenol:chloroform based.
Thank you
Answer: CTAB is for removing polyphenols and polysaccharides which are found in plant tissues. Animal cells/bacteria don't have these. You can do an additional phenol-chloroform extraction if you want.
See this protocol by Monsanto.
1. Weigh out 6g of processed tissue into a 50ml conical tube appropriate for centrifugation. Note: For unprocessed tissue, weighing may occur prior to processing as long as the entire processed sample is transferred to the conical tube.
2. For each 6g sample add 25ml of a solution consisting of 24.25ml pre-warmed (55°C) CTAB extraction buffer, 0.5ml 2-mercaptoethanol (2-ME), and 0.25ml of 10mg/ml proteinase K, for a final concentration of 2% (2-ME) and 100µg/ml (proteinase K).
3. Incubate the tube for 60 minutes at 55°C. Cool the tube briefly on the bench (10 minutes).
4. Add 20ml of phenol:chloroform:isoamyl alcohol (PCI, 25:24:1). Cap the tube and mix vigorously by vortex or inversion.
5. Centrifuge for 10 minutes at 13,000×g and 20-25°C to separate the aqueous and organic phases. Transfer the upper aqueous phase to a clean 50ml conical tube.
6. Repeat the extraction two times for a total of three extractions (steps 4-5).
7. Transfer the upper aqueous phase to a new tube, add approximately 2/3 volume of -20°C isopropanol, and gently invert the tube several times to mix.
8. To precipitate the DNA place the tubes at -20°C for at least 30 minutes and up to three days.
9. To pellet the DNA centrifuge the tubes at approximately 13,000×g for 20 minutes at 4°C.
10. Redissolve the pellet in 4ml of TE pH 8.0. Transfer to a 13ml Sarstedt tube and add approximately 40µl of 10mg/ml RNase, then incubate at 37°C for 30 minutes.
11. To extract the DNA add 4ml of chloroform:isoamyl alcohol (CIA, 24:1). Centrifuge for 10 minutes at approximately 13,000×g at room temperature. Transfer the upper aqueous phase to a clean Sarstedt tube.
12. Repeat step 11, then add a half volume of 7.5M ammonium acetate, gently mix by inversion/pipette, and add 2 volumes of 100% ethanol. Mix by inversion/pipette and place at -20°C for 30 minutes to overnight.
13. Centrifuge at 13,000×g for 20 minutes at 4°C to pellet the DNA.
"domain": "biology.stackexchange",
"id": 6146,
"tags": "molecular-biology, dna-isolation"
} |
Difference between Fock space and Hilbert Space | Question: I am beginner in QFT. I would like to know why there is a need of constructing Fock space for a $N$-particle system? Why can't we represent this many body system just as the tensor product of Hilbert space itself? In short what is the difference between a Fock space and a tensor product of Hilbert space $H_n$?
Answer: A Fock space is just one special construction of a Hilbert space. The basic idea is that the Fock space allows you to superpose tensor products of distinct degree. In other words, it allows you to make sense of expressions of the form $$|a\rangle+|b\rangle\otimes |c\rangle.$$
where $|a\rangle,|b\rangle,|c\rangle$ are one-particle states. From the quantum mechanical point of view, if ${\cal H}_0$ is the "one-particle Hilbert space", representing the states of a single particle, then the states of a collection of $N$ particles of this kind, with this $N$ fixed, form a subspace of the tensor product of ${\cal H}_0$ with itself $N$ times: ${\cal H}_0\otimes\cdots \otimes {\cal{H}}_0$.
The Fock space allows you to superpose such states and hence allows you to have a state on which for every $N\in \mathbb{N}$ you have probabilities for the number of particles being $N$. Put differently, you are allowed to describe states in which the very number of particles is uncertain and becomes an observable, with probabilities and mean values like any other observable.
A very transparent example where this would be necessary is in relativity theory. The relation $E = mc^2$ implies that given enough energy particles can be created and that particles can be destroyed. This makes relativistic quantum mechanics with fixed number of particles problematic and the Fock space picture helps quite a lot.
So the construction is to simply form the direct sum of all symmetric or skew-symmetric tensor powers of ${\cal H}_0$. This yields either the bosonic or fermionic Fock space: $${\cal F}_\pm ({\cal H}_0)= \bigoplus_{n=0}^\infty (\cal H_0)^{\pm \otimes n}$$
where $({\cal H}_0)^{\pm \otimes n}$ means in my notation to take the tensor product of ${\cal H}_0$ with itself $n$ times and symmetrize for $+$ or anti-symmetrize for $-$.
To answer your question the distinction between the Fock space and a tensor product of Hilbert spaces is simply that the Fock space is a direct sum of infinitely many tensor products of one Hilbert space with itself. | {
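A concrete finite-dimensional illustration of the direct sum (a sketch, assuming a one-particle space of dimension $d$): the antisymmetric $n$-particle sector of the fermionic Fock space has dimension $\binom{d}{n}$, so summing over all sectors gives $2^d$.

```python
from math import comb

def fermionic_fock_dim(d):
    # Direct sum over particle-number sectors: the dimension of the
    # antisymmetric n-fold tensor power of a d-dimensional space is C(d, n).
    return sum(comb(d, n) for n in range(d + 1))
```

By contrast, a single tensor power $({\cal H}_0)^{\otimes N}$ only ever describes exactly $N$ particles.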
"domain": "physics.stackexchange",
"id": 64114,
"tags": "quantum-field-theory, hilbert-space, definition"
} |
Where does the pH come from in the equation of Standard Hydrogen Electrode? | Question: $$\mathrm{E=-{2.303RT \over F}pH - {RT \over 2F}\ln {p_{H_2}/p^0}}$$
This is a equation on Wikipedia page of Standard Hydrogen Electrode, but I have no idea where the $pH$ come from.
Since $\mathrm{a_{H^+}=f_{H^+} C_{H^+} /C_0}$, how does the $\mathrm{f_{H^+}}$ disappear?
Answer: There are a few logarithm rules that you need to apply here.
\begin{align}
E &= \mathrm{RT \over F}\ln {a_{H^+} \over (p_{H_2}/p^0)^{1/2}}\\
&= \mathrm{RT \over F}\ln a_{H^+} - \mathrm{RT \over F} \ln \left({p_{H_2} \over p^0}\right)^{1/2}\\
&= \mathrm{\ln(10) RT \over F} \lg a_{H^+} - {1 \over 2}\mathrm{RT \over F}\ln \left({p_{H_2} \over p^0}\right)\\
&= -\mathrm{2.303 RT \over F} \mathrm{pH} - {1 \over 2}\mathrm{RT \over F}\ln \left({p_{H_2} \over p^0}\right)
\end{align}
You need a few logarithm rules:
$\log\left({a \over b}\right) = \log(a) - \log(b)$
$\log_b(x) = \frac{\log_a(x)}{\log_a(b)}$
$\log_b(x^r) = r \log_b(x)$ | {
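A quick numerical check of the rearrangement (the activity and pressure values below are hypothetical, chosen just to exercise both terms):

```python
import math

R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K
a_H = 1e-3                         # assumed proton activity, so pH = 3
p_ratio = 0.5                      # assumed p_H2 / p0

# First line of the derivation
lhs = (R * T / F) * math.log(a_H / math.sqrt(p_ratio))

# Last line of the derivation (2.303 is ln(10))
pH = -math.log10(a_H)
rhs = -math.log(10) * R * T / F * pH - 0.5 * (R * T / F) * math.log(p_ratio)

assert math.isclose(lhs, rhs)
```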
"domain": "chemistry.stackexchange",
"id": 5172,
"tags": "thermodynamics, electrochemistry"
} |
Why is the ketone not attacked by the Grignard reagent/cuprate in this case but a double bond is? | Question:
According to the book the answer is (a)
Here I assume transmetallation occurs and then the nucleophilic carbon of the methyl group attacks the ring, but what I don't understand is why the ketone remains unaffected.
Answer: The book is correct. This is a classic example of a 1,4-addition (also called a conjugate addition) of an alkyl nucleophile to an α,β-unsaturated ketone. Normally, a Grignard reagent such as MeMgBr would indeed attack the carbonyl carbon in what is referred to as a 1,2-addition (also called a direct addition). However, treatment of the Grignard with a cuprous halide (CuI, in this case) changes the reaction entirely.
This can be explained well by hard-soft acid-base theory, or HSAB theory.
Methylmagnesium bromide is a relatively "hard" nucleophile and therefore preferentially reacts with the hard electrophile (the carbonyl carbon) in a 1,2-addition. After treatment with a proton source, product D is afforded.
But, in the presence of CuI, MeMgBr reacts to form an organocuprate with the stoichiometry of Me2CuMgBr. The details of the cuprate formation — and of the true structure of the cuprate itself — are complicated and still somewhat unclear. Nevertheless, this resulting organocuprate species is a much softer nucleophile than is the Grignard. It therefore preferentially attacks the softer carbon electrophilic site. A lone pair on the oxygen of the resulting enolate anion then reforms the carbonyl double-bond as the π-electrons of the enolate C=C double-bond attack nPrI to form a new C-C single bond, affording product A. | {
"domain": "chemistry.stackexchange",
"id": 17961,
"tags": "organic-chemistry, grignard-reagent"
} |
how to perform simple collision detection between models in gazebo | Question:
I am applying force to a rectangular model(created with spawn_model) that sends it colliding with another much denser model. What is the simplest way to access collision information? I was able to find the package collision_map for ros, but is there something else existing that I could use?
Originally posted by mugetsu on Gazebo Answers with karma: 3 on 2013-03-02
Post score: 0
Answer:
You can add a contact sensor to your model (here's a tutorial), which will tell you the number of contact points, their location on the collision body, penetration depth, and reaction force/torque.
With Gazebo 1.4, you can also visualize contact in the gazebo client under the View menu, select Contacts. This will display each contact point as a blue sphere. You may also need to View->Transparent in case the contacts are blocked by visuals.
Originally posted by scpeters with karma: 2861 on 2013-03-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3086,
"tags": "gazebo, collision"
} |
Alphabet Rangoli Challenge | Question: The question is about printing alphabets in a rangoli pattern of size/dimension input by the user.
Example:
size 3
----c----
--c-b-c--
c-b-a-b-c
--c-b-c--
----c----
Full details are in the challenge description: https://www.hackerrank.com/challenges/alphabet-rangoli/problem
Sample Code:
n = int(input('Enter a size: '))
alpha = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
for j in range(0, n-1):
ls_1 = str(''.join(alpha[n-1:abs(n-(j+2)):-1]))
ls_1 = ls_1 + ls_1[-2::-1]
ls = str('-'.join(ls_1))
print('-' * (((n - j) * 2) - 2) + ls + '-' * (((n - j) * 2) - 2))
ls_1 = str(''.join(alpha[n-1::-1]))
ls_1 = ls_1 + ls_1[-2::-1]
ls = str('-'.join(ls_1))
print(ls)
for j in range(n-2, -1, -1):
ls_2 = str(''.join(alpha[n-1:abs(n-(j+2)):-1]))
ls_2 = ls_2 + ls_2[-2::-1]
ls_s = str('-'.join(ls_2))
print('-' * (((n - j) * 2) - 2) + ls_s + '-' * (((n - j) * 2) - 2))
Sample Output:
Enter a size: 5
--------e--------
------e-d-e------
----e-d-c-d-e----
--e-d-c-b-c-d-e--
e-d-c-b-a-b-c-d-e
--e-d-c-b-c-d-e--
----e-d-c-d-e----
------e-d-e------
--------e--------
Is the code I have written above good code?
Also, I couldn't find any other way to solve the problem.
(I am a beginner in Python)
Answer: Use built-ins
alpha = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
This is a very verbose and error-prone way of getting all of the ASCII lowercase letters.
from string import ascii_lowercase as alpha
will give approximately the same result. It is a string, instead of a list, but Python strings are effectively just lists of characters. In particular, alpha[0] will be 'a' and alpha[25] will be 'z'.
Unnecessary str()
The result of a ''.join(...) call will be a string. Wrapping this in a str(...) call is pointless.
Unnecessary abs()
for j in range(0, n-1) means that j will always be less than n-1. In other words:
$$ j \le n - 2 $$
Consider, abs(n-(j+2)):
$$ j \le n - 2$$
$$ j + 2 \le n$$
$$ n - (j+2) \ge 0$$
In other words, the abs(...) is unnecessary, and just adds confusion.
General Comments
Follow PEP-8 guidelines (spaces around operators, etc). Use functions. Use a main-guard.
Reworked & Simplified Code
from string import ascii_lowercase
def print_rangoli(n: int) -> None:
alpha = ascii_lowercase[:n]
for row in range(- n + 1, n):
row = abs(row)
dashes = "-" * (2 * row)
print(dashes + "-".join(alpha[:row:-1] + alpha[row:]) + dashes)
if __name__ == '__main__':
n = int(input("Enter a size: "))
print_rangoli(n)
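To sanity-check the reworked function, here is a variant (an illustrative sketch, not part of the original review) that collects the rows in a list instead of printing them, so the output is easy to compare against the size-3 example from the question:

```python
from string import ascii_lowercase

def rangoli_lines(n: int) -> list:
    # Same construction as the reworked print_rangoli above,
    # but returning the rows instead of printing them.
    alpha = ascii_lowercase[:n]
    lines = []
    for row in range(-n + 1, n):
        row = abs(row)
        dashes = "-" * (2 * row)
        lines.append(dashes + "-".join(alpha[:row:-1] + alpha[row:]) + dashes)
    return lines

print("\n".join(rangoli_lines(3)))
```

For n = 3 this reproduces the five rows shown in the question, with c-b-a-b-c as the middle row.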
Format Specification Mini-Language
Python's format statement (and f-strings) use a format specification mini-language. This allows you to substitute values into a larger string in fixed-width fields, with your choice of alignment and fill characters. Here, you'd want centre alignment, with '-' for the fill character.
For example, f"{'Hello':-^11}" is an f-string which places the string 'Hello' into a field 11 characters wide, centre justified (^), with '-' used as a fill character, producing '---Hello---'. Instead of a hard-coded width (11 in the above example), we can use a computed {width} argument.
Using this, we can further "simplify" (for some definition of simplify) the print_rangoli function:
def print_rangoli(n: int) -> None:
width = n * 4 - 3
alpha = ascii_lowercase[:n]
for row in range(- n + 1, n):
row = abs(row)
print(f"{'-'.join(alpha[:row:-1] + alpha[row:]):-^{width}}") | {
"domain": "codereview.stackexchange",
"id": 38616,
"tags": "python, beginner, python-3.x, programming-challenge"
} |
Default value for a state created by QuantumRegister | Question: What's the default value for a state created by QuantumRegister(1,'name_of_the_register')? Is it a $|0\rangle$ or a $|1\rangle$?
Answer: Here's the source code for quantumregister.py and quantumcircuit.py.
The default is $|0\rangle$. The code goes like:
from qiskit import QuantumCircuit, QuantumRegister
qr = QuantumRegister(1)
circuit = QuantumCircuit(qr)
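To make concrete what the $|0\rangle$ default means, here is a plain-Python sketch (no Qiskit required; the statevector model below is an illustration, not taken from the original answer):

```python
# A fresh qubit as the statevector [amplitude(|0>), amplitude(|1>)].
state = [1 + 0j, 0 + 0j]

# Born rule: measurement probabilities are squared amplitude magnitudes.
probabilities = [abs(amp) ** 2 for amp in state]
print(probabilities)  # [1.0, 0.0] -- an immediate measurement always yields 0
```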
By the way, if you're just beginning with Qiskit, you could check out Dr. Moran's textbook (this specific example is covered in chapter 5, ~p. 83). | {
"domain": "quantumcomputing.stackexchange",
"id": 729,
"tags": "quantum-state, programming, qiskit, initialization"
} |
Bash Backup Script | Question: The following script closes a virtual machine (which is run in a screen session), waits for the session to close, creates a backup of the VM, and restarts the VM. The shutdown and bootup scripts speak for themselves, but I can post them if necessary. Is there any way to clean up the sockets_found function? It seems like there should be an easier way to detect whether or not screen has any open sessions.
#!/bin/bash
now=`date '+%Y%m%d'`
# No Sockets found
# There is a screen on
function sockets_found {
screen -ls | grep "There is a screen on"
if [ $? -eq 1 ]; then
return 1
else
return 0
fi
}
function wait_for_sockets_to_close {
while sockets_found; do
echo "Waiting for screen to close..."
sleep 1
done;
}
echo "Shutdown VM..."
/bin/bash ~/shutdown.sh
wait_for_sockets_to_close
# ensure that the backup directory exists
mkdir -p ~/backup
echo "Copying VM to backup directory..."
cp -Rf ~/VirtualBox\ VMs/ ~/backup/VirtualBox\ VMs${now}/
echo "Booting VM..."
/bin/bash ~/bootup.sh
Answer: This can be improved and simplified:
function sockets_found {
screen -ls | grep "There is a screen on"
if [ $? -eq 1 ]; then
return 1
else
return 0
fi
}
Like this:
function sockets_found {
screen -ls | grep -q "There is a screen on"
}
That is:
You can remove the if statement, as the exit code of grep naturally becomes the exit code of the function
If the exit code of grep is greater than 1, the original function exits with 0. That's inappropriate. Such exit codes indicate errors in grep, and do not imply that screen sessions exist
I added the -q flag to suppress the output of grep in case of success
Other minor things:
No need for the ; in done;
It would be better to double-quote ${now} in the backup target directory to prevent accidental globbing or word splitting, in case you might make a mistake in the syntax of the date command
It would be good to add some blank lines in the last chunk of commands for better readability
Like this:
echo "Shutdown VM..."
/bin/bash ~/shutdown.sh
wait_for_sockets_to_close
# ensure that the backup directory exists
mkdir -p ~/backup
echo "Copying VM to backup directory..."
now=$(date '+%Y%m%d')
cp -Rf ~/VirtualBox\ VMs/ ~/backup/VirtualBox\ VMs"${now}"/
echo "Booting VM..."
/bin/bash ~/bootup.sh | {
"domain": "codereview.stackexchange",
"id": 19985,
"tags": "bash"
} |
Acetylation reaction | Question: In the Friedel - Crafts acetylation of bromobenzene, how is it possible to obtain a black-coloured product?
When the acetic anhydride was slowly added to the reactants - $\ce{AlCl3}$, DCM, bromobenzene - a small, hard, yellow lump formed.
Would this be $\ce{AlCl3}$?
When the acetic anhydride was added, the reflux was initiated. The lump appeared to disappear, leaving behind a dark red solution.
However, when the reflux was turned off, the solution appeared black. I'm trying to think of potential products, such as ortho and para bromobenzene or acetic acid or $\ce{Al(OH)3}$ but why so black?
Does anyone have any ideas?
Answer: I assume "DCM" represents dichloromethane, if so...
There is an old paper that describes the formation of a low molecular weight polymer from the double Friedel-Crafts alkylation of benzene by dichloromethane. That's probably what's happening in your reaction as a minor pathway. The dissolved polymer and tar is what's producing your black solution.
I actually ran that reaction once, I wanted to prepare that polymer. It didn't work for me, but at the end, after the aluminum chloride had been removed, I did have a dark solution. | {
"domain": "chemistry.stackexchange",
"id": 2701,
"tags": "organic-chemistry, synthesis, aromatic-compounds"
} |
What makes the (non-abelian) strong interaction so special that it leads to confinement? | Question: The strong interaction has a coupling constant of $\alpha_s(91GeV)\approx 0.1$ whereas the weak interaction has a much lower coupling constant $\alpha_w \approx 10^{-6}$. Both theories are non-abelian gauge theories, the strong interaction is based on SU(3) gauge symmetry, whereas the electroweak interaction is based on $U(1)\times SU(2)$ gauge symmetry.
What makes the strong interaction so special that it leads to confinement, whereas for the electroweak interaction it is not the case? It is certainly related with the $\beta$-function of the corresponding interaction, but why is the $\beta$-function of the electroweak interaction positive and the $\beta$-function of the strong interaction negative? Actually, I am not very familiar with use of renormalisation group arguments, so I would prefer a not too formal answer based on essentially on physics arguments.
Answer: Just to state the result for the beta-function associated to QCD
\begin{equation}
\beta = -\frac{g^2}{32 \pi^2} \left(\frac{11}{3}N_c - \frac{2}{3}N_f \right)
\end{equation}
in which $N_c$ is the amount of colours and $N_f$ is the amount of flavours.
Essentially, both terms boil down to antiscreening and screening respectively.
A single quark can be surrounded by quark-antiquark pairs which tend to screen it from effects of the environment (much like in QED with electron-positron pairs).
However, the important piece is now the antiscreening effect due to the self-interaction of the gluons (because of the non-Abelian gauge symmetry). This tends to make the quark more susceptible to its environment.
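Plugging in the Standard Model values $N_c = 3$ and $N_f = 6$ makes the sign explicit (a quick numeric check; the snippet is illustrative and not part of the original answer):

```python
n_c, n_f = 3, 6  # number of colours and quark flavours

# The bracket from the one-loop beta-function quoted above.
bracket = (11.0 / 3.0) * n_c - (2.0 / 3.0) * n_f
print(bracket)  # 7.0 > 0, so the overall beta-function is negative
```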
Filling in the constants gives a negative beta-function. In other words, if we probe quarks at higher energies, the coupling constant decreases sufficiently such that we may regard them as being free (e.g. a quark-gluon plasma). At small energies, the coupling constant becomes enormous and quarks are effectively bound together into hadrons - confinement. | {
"domain": "physics.stackexchange",
"id": 83122,
"tags": "gauge-theory, renormalization, quantum-chromodynamics, yang-mills, confinement"
} |
How is the wave function Lebesgue integrable? | Question: Let's assume we have a plane wave $\psi(x,t)= A_{0}e^{i(kx-wt)}$ in position space. To find the momentum representation of this wave we'd apply the Fourier transform. However, I don't see how this is mathematically allowed, since the function we want to transform must be Lebesgue-integrable for the Fourier integral to exist, right? As far as I can see this isn't the case here, i.e. $ \int^{+\infty}_{-\infty} \left|\psi(x,t)\right|dx = \left|A_{0}\right| \int^{+\infty}_{-\infty} dx = \infty$. Where am I going wrong?
Answer: You are not going wrong anywhere. The plane wave is not Lebesgue-integrable.
However, it is Fourier-transformable as a distribution. | {
"domain": "physics.stackexchange",
"id": 54715,
"tags": "hilbert-space, wavefunction, fourier-transform, integration, mathematics"
} |
First-name, Last-name and Password logging in functionality using C# | Question: Summary
I have finished creating secure login functionality. I used one resource to help me with the security. The security section was copied and pasted, but implemented in a class to suit my own approach; all the logic, properties, commands, etc. are my own work. https://medium.com/@mehanix/lets-talk-security-salted-password-hashing-in-c-5460be5c3aae
Would appreciate any helpful review to my current code I have, from the community in areas where I can improve.
Current Code
SecurePasswordHasher.cs
At the moment I do not use public static string Hash(string password) anywhere, the focus is at the verification.
public static class SecurePasswordHasher
{
public static string Hash(string password)
{
// Create salt
byte[] salt;
new RNGCryptoServiceProvider().GetBytes(salt = new byte[16]);
// Create hash
var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000);
byte[] hash = pbkdf2.GetBytes(20);
byte[] hashBytes = new byte[36];
Array.Copy(salt, 0, hashBytes, 0, 16);
Array.Copy(hash, 0, hashBytes, 16, 20);
return Convert.ToBase64String(hashBytes);
}
public static bool Verify(string savedPassword, string givenPassword)
{
byte[] hashBytes = Convert.FromBase64String(savedPassword);
byte[] salt = new byte[16];
Array.Copy(hashBytes, 0, salt, 0, 16);
var pbkdf2 = new Rfc2898DeriveBytes(givenPassword, salt, 10000);
byte[] hash = pbkdf2.GetBytes(20);
int ok = 1;
for (int i = 0; i < 20; i++)
{
if (hashBytes[i + 16] != hash[i])
{
ok = 0;
}
}
if (ok == 1)
{
return true;
}
else
{
return false;
}
}
}
LoginViewModel
public class LoginViewModel : BaseViewModel
{
// User Properties
private UserModel _User;
public UserModel User
{
get { return _User; }
set
{
_User = value;
OnPropertyChanged(nameof(User));
}
}
private string _FirstName;
public string FirstName
{
get { return _FirstName; }
set
{
_FirstName = value;
User = new UserModel
{
FirstName = _FirstName,
LastName = this.LastName,
Password = this.Password
};
OnPropertyChanged(nameof(FirstName));
}
}
private string _LastName;
public string LastName
{
get { return _LastName; }
set
{
_LastName = value;
User = new UserModel
{
LastName = _LastName,
FirstName = this.FirstName,
Password = this.Password
};
OnPropertyChanged(nameof(LastName));
}
}
private string _Password;
public string Password
{
get { return _Password; }
set
{
_Password = value;
User = new UserModel
{
FirstName = this.FirstName,
LastName = this.LastName,
Password = _Password
};
OnPropertyChanged(nameof(Password));
}
}
// Login Command
public ICommand LoginCommand { get; set; }
// Function
private bool LoginFunction(object param)
{
// Get user credentials
UserModel user = param as UserModel;
// Verify credentials have data to work with
// Also, the reason this method returns a bool is that it prevents a NullReferenceException.
if (user == null)
{
MessageBox.ShowMessageBox("Credentials not specified");
return false;
}
else if (user.FirstName == null || string.IsNullOrEmpty(user.FirstName))
{
MessageBox.ShowMessageBox("Firstname cannot be empty");
return false;
}
else if (user.LastName == null || string.IsNullOrEmpty(user.LastName))
{
MessageBox.ShowMessageBox("Surname field cannot be empty");
return false;
}
else if (user.Password == null || string.IsNullOrEmpty(user.Password))
{
MessageBox.ShowMessageBox("Password field cannot be empty");
return false;
}
// Find Firstname and Surname in the database
else
{
using (var conn = new MySqlConnection(ConnectionString.ConnString))
{
conn.Open();
string query = @"SELECT
*
FROM USERS u
JOIN DEPARTMENT d
on d.id = u.departmentid
JOIN usergroup ug
ON ug.id = u.UserGroupID
WHERE FirstName = @Firstname AND Lastname = @LastName";
var userDetails = conn.Query<UserModel>(query, new { FirstName = user.FirstName, LastName = user.LastName}).ToList();
// If user is found, proceed to verification
if(userDetails.Count() == 1)
{
// Get password from the user
string savedPasswordHash = userDetails.First().Password;
bool verification = SecurePasswordHasher.Verify(savedPasswordHash, Password);
if (verification == true)
{
// Store Username
UserData.FullName = $"{user.FirstName} {user.LastName}";
ShowDashboard.ShowDashboard();
return true;
}
// Password is incorrect
else
{
MessageBox.ShowMessageBox("User not found");
return false;
}
}
// User with the given Firstname and Surname is not found
else
{
MessageBox.ShowMessageBox("User not found");
return false;
}
}
}
}
// Messagebox Interface
public IMessageBoxService MessageBox { get; set; }
// Open Dashboard Interface
public IShowDashboardService ShowDashboard { get; set; }
public LoginViewModel(IMessageBoxService messageBox, IShowDashboardService showDashboard)
{
this.MessageBox = messageBox;
this.ShowDashboard = showDashboard;
LoginCommand = new RelayCommand(param => LoginFunction(param));
}
}
Answer: Storing a password in a string is totally insecure.
Because a string is an immutable object, it can be kept in memory for an undefined period of time. This makes it easy to get the clean password through an uncomplicated reverse-engineering operation.
By the way
The focus is at the verification.
Ok
I renamed savedPassword because it's a hash, not a clean password.
public static bool Verify(string savedPasswordHash, string givenPassword)
{
byte[] hashBytes = Convert.FromBase64String(savedPasswordHash);
byte[] salt = hashBytes.Take(16).ToArray();
byte[] hash = new Rfc2898DeriveBytes(givenPassword, salt, 10000).GetBytes(20);
return hashBytes.Skip(16).SequenceEqual(hash);
}
That's it. Easy Linq query. | {
"domain": "codereview.stackexchange",
"id": 40868,
"tags": "c#, mysql"
} |
How to build 4 codewords with a code distance of 5? | Question: I wonder how I can construct 4 (distinct) codewords given the fact that the code distance is 5. As far as I know, the code distance is the number of distinct bits between any 2 codewords. How can I achieve this code distance for the 6 possible pairs from the 4 codewords available (of any bit length)? I named, for example, codewords u, v, w, & x so d(u,v) = d(u,w) = d(u,x) = d(v,w) = d(v,x) = d(w,x) = 5. The codewords must be in binary format, i.e. 00100 10011. I considered a code length of 10 to achieve this but am still struggling to find a solution.
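A brute-force helper (an illustrative sketch, not part of the original post) can check the pairwise Hamming distances of any candidate set, including the four codewords given in the answer below:

```python
from itertools import combinations

# The four length-10 codewords from the answer below.
codewords = ["0000000000", "1111100000", "0000011111", "1111111111"]

def hamming(a: str, b: str) -> int:
    # Number of bit positions in which the two words differ.
    return sum(x != y for x, y in zip(a, b))

distances = [hamming(a, b) for a, b in combinations(codewords, 2)]
print(min(distances))  # 5 -- the code's minimum distance
```

Note that two of the six pairs are at distance 10; the code distance in the usual sense is the minimum pairwise distance, which is 5 here.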
Answer: 0000000000
1111100000
0000011111
1111111111 | {
"domain": "cs.stackexchange",
"id": 17366,
"tags": "information-theory, coding-theory, code-generation"
} |
Follow-up: Timer utilizing std::future | Question: The previous code (as displayed in Timer utilizing std::future) is now:
#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <list>
#include <mutex>
#include <thread>
// PeriodicFunction
// =============================================================================
namespace Detail {
/// A functor to invoke a function periodically.
template <typename Value, typename Derived>
class PeriodicFunction
{
// Types
// =====
public:
typedef std::chrono::milliseconds period_type;
typedef Value value_type;
protected:
typedef std::function<Value ()> function_type;
typedef std::future<Value> future_type;
// Construction
// ============
protected:
/// Initialize with a function where the arguments are bounded to that function.
/// \SEE std::bind
template <typename Callable, typename... Arguments>
PeriodicFunction(Callable&& callable, Arguments&&... arguments)
: m_function(std::bind(
std::move(callable),
std::forward<Arguments>(arguments)...)),
m_period(period_type::zero())
{}
public:
PeriodicFunction(const PeriodicFunction&) = delete;
PeriodicFunction& operator = (const PeriodicFunction&) = delete;
/// Calls stop.
~PeriodicFunction() { stop(); }
// Start/Stop
// ==========
/// True, if an invocation thread is active.
bool active() const noexcept { return m_thread.joinable(); }
/// Start an invocation thread and repeatedly invoke the function (in the given period)
/// after the given delay.
/// - If a previous invocation thread is active, no invocation of the function
/// takes place.
/// - After the first invocation of the function (at least one is done):
/// - If the period is or has become zero (due to a call to stop) the
/// invocation thread stops without any further invocation of the function.
/// - As long as an invocation of the function has not finished, the next
/// possible invocation is delayed by the given period.
/// \EXCEPTION All exceptions stop the invocation thread and the exception of the
/// last invocation is available through a call to rethrow_exception.
/// \RETURN True if no previous call to start is active, otherwise false.
/// \NOTE The period is not an exact value (due to interleaving calls between
/// each invocation of the function).
public:
template <
typename Period, typename PeriodRatio,
typename Delay, typename DelayRatio>
bool start(
const std::chrono::duration<Period, PeriodRatio>& period,
const std::chrono::duration<Delay, DelayRatio>& delay) noexcept;
/// Start an invocation thread and repeatedly invoke the function without delay.
template <typename Period, typename PeriodRatio>
bool start(const std::chrono::duration<Period, PeriodRatio>& period) noexcept {
return start(period, std::chrono::duration<Period, PeriodRatio>::zero());
}
/// Set the period of invocations to zero.
/// If an invocation thread is active, stop invocations of the function
/// and wait until the thread has finished.
/// \SEE std::thread::join
void stop() {
m_period = period_type::zero();
if(active()) m_thread.join();
}
// Period
// ======
public:
const period_type& period() const noexcept { return m_period; }
// Exception
// =========
public:
/// True if an exception occurred in the last invocation thread.
bool exception() const { return bool(m_exception); }
/// Throw the exception of the last invocation thread, if available.
void rethrow_exception() const {
if(exception())
std::rethrow_exception(m_exception);
}
// Utility [invoke_synchron]
// =========================
private:
void invoke_synchron() {
invoke_synchron(std::is_same<value_type, void>());
}
// not void
void invoke_synchron(std::false_type) {
static_cast<Derived*>(this)->transfer(m_function());
}
// void
void invoke_synchron(std::true_type) {
m_function();
}
// Utility [invoke_asynchron]
// ==========================
private:
void invoke_asynchron() {
m_future = std::async(std::launch::async, m_function);
}
// Utility [transfer_asynchron]
// ============================
private:
void transfer_asynchron() {
transfer_asynchron(std::is_same<value_type, void>());
}
// not void
void transfer_asynchron(std::false_type) {
static_cast<Derived*>(this)->transfer(m_future.get());
}
// void
void transfer_asynchron(std::true_type) {
m_future.get();
}
private:
function_type m_function;
period_type m_period;
std::thread m_thread;
future_type m_future;
std::exception_ptr m_exception;
};
// Start/Stop
// ==========
template <typename Value, typename Derived>
template <
typename Period, typename PeriodRatio,
typename Delay, typename DelayRatio>
bool PeriodicFunction<Value, Derived>::start(
const std::chrono::duration<Period, PeriodRatio>& period,
const std::chrono::duration<Delay, DelayRatio>& delay) noexcept
{
if(active()) return false;
try {
m_exception = std::exception_ptr();
m_period = std::chrono::duration_cast<period_type>(period);
m_thread = std::thread([this, delay]() {
try {
std::this_thread::sleep_for(delay);
if(this->m_period == period_type::zero()) this->invoke_synchron();
else {
this->invoke_asynchron();
while(true) {
std::this_thread::sleep_for(this->m_period);
if(this->m_period != period_type::zero()) {
if(this->m_future.wait_for(period_type::zero()) == std::future_status::ready) {
this->transfer_asynchron();
this->invoke_asynchron();
}
}
else {
this->m_future.wait();
this->transfer_asynchron();
break;
}
}
}
}
catch(...) {
this->m_exception = std::current_exception();
}
});
}
catch(...) {
this->m_exception = std::current_exception();
}
return true;
}
} // namespace Detail
// PeriodicFunction
// =============================================================================
template <typename Value>
class PeriodicFunction : public Detail::PeriodicFunction<Value, PeriodicFunction<Value>>
{
// Types
// =====
private:
typedef Detail::PeriodicFunction<Value, PeriodicFunction<Value>> Base;
friend Base;
public:
typedef typename Base::period_type period_type;
typedef typename Base::value_type value_type;
typedef std::list<value_type> result_type;
private:
typedef std::mutex mutex;
// Construction
// ============
public:
template <typename Callable, typename... Arguments>
PeriodicFunction(Callable&& callable, Arguments&&... arguments)
: Base(callable, std::forward<Arguments>(arguments)...)
{}
// Result
// ======
/// True, if the internal result buffer is empty.
bool empty() const { return m_result.empty(); }
/// Return the current result of invocations of the function and clear
/// the result buffer.
/// \NOTE If the invoking thread is still running, new results
/// might become available.
result_type result() {
std::lock_guard<mutex> guard(m_result_mutex);
return std::move(m_result);
}
// Utility
// =======
private:
void transfer(value_type&& result) {
std::lock_guard<mutex> guard(m_result_mutex);
m_result.push_back(result);
}
private:
mutex m_result_mutex;
result_type m_result;
};
// PeriodicFunction<void>
// ======================
template <>
class PeriodicFunction<void> : public Detail::PeriodicFunction<void, PeriodicFunction<void>>
{
// Types
// =====
private:
typedef Detail::PeriodicFunction<void, PeriodicFunction<void>> Base;
friend Base;
public:
typedef typename Base::period_type period_type;
typedef typename Base::value_type value_type;
// Construction
// ============
public:
template <typename Callable, typename... Arguments>
PeriodicFunction(Callable&& callable, Arguments&&... arguments)
: Base(callable, std::forward<Arguments>(arguments)...)
{}
};
// Test
// ====
#define TEST_OUTPUT 0
class Exception : public std::runtime_error
{
public:
Exception() noexcept
: std::runtime_error("Test")
{}
};
const unsigned f_limit = 10;
std::atomic<unsigned> f_count;
char f() {
++f_count;
return '.';
}
const unsigned g_limit = 3;
std::atomic<unsigned> g_count;
void g() {
if(++g_count == g_limit) throw Exception();
}
int main()
{
try {
using std::chrono::milliseconds;
// With Return Type
{
PeriodicFunction<char> invoke(f);
#if(TEST_OUTPUT)
std::ostream& stream = std::cout;
#else
std::ostream null(0);
std::ostream& stream = null;
#endif
invoke.start(milliseconds(10));
for(unsigned i = 0; i < f_limit; ++i) {
std::this_thread::sleep_for(milliseconds(100));
if(i == f_count - 1) invoke.stop();
auto result = invoke.result();
for(const auto& r : result)
stream << r;
stream << '\n';
}
}
// Void
{
PeriodicFunction<void> invoke(g);
invoke.start(milliseconds(10), milliseconds(100));
// A thread shall throw an exception before stopping
std::this_thread::sleep_for(milliseconds(200));
invoke.stop();
try { invoke.rethrow_exception(); }
catch(Exception) { return 0; }
}
}
catch(...) {}
return -1;
}
Fixes:
The stop function will fail if called from the same thread, hence I changed it to:
void stop() {
m_period = period_type::zero();
if(is_active() && std::this_thread::get_id() != m_thread.get_id())
m_thread.join();
}
The PeriodicFunction::result() does not clear the internal state of the result container after the move:
result_type result() {
std::lock_guard<std::mutex> guard(m_result_mutex);
result_type result(std::move(m_result));
m_result.clear();
return result;
}
Answer: Generally speaking, the code already seems really good. Therefore, I have nothing more than a few notes:
It is more of a matter of taste, but I would use some type aliases instead of the old typedef. The type aliases can be templated (contrary to typedef) and provide a syntax that is close to variable assignements (auto a = 89;) and namespace aliases (namespace foobar = foo::bar;). That means that you can write code which looks more consistent:
using period_type = std::chrono::milliseconds;
using value_type = Value;
using function_type = std::function<Value ()>;
using future_type = std::future<Value>;
C++ has special syntax to import type names from a base class without having to use a full typedef or type alias, you can use it to reduce the amount of code and make explicit your intent in PeriodicFunction:
using typename Base::period_type;
using typename Base::value_type;
typedef std::mutex mutex; seems useless and potentially confusing. I would drop the typedef and use std::mutex everywhere instead. It shouldn't hinder readability.
In their answer to your previous question, @ruds said that Timer::m_results should not be mutable, which is right, but I see no harm in having the corresponding std::mutex be mutable. It does not seem to be currently useful though.
Really no more than a tidbit, but namespaces are generally lowercase. Consider replacing Detail by detail.
Functions that return a bool tend to be more understandable when their name is more than just a name. For example, bool exception() {} makes me think that the function returns an std::exception or a derived class (which would be odd). Consider renaming it exception_occured instead. Also, consider renaming active to is_active.
Unfortunately, the standard library containers have a function empty while is_empty would have been a better name ("empty" somehow means "empty that container" to me); I understand that you keep this particular name for consistency.
Discussing what can be improved is good, but I also want to put a note about what you did right: tag dispatching with std::true_type and std::false_type is great. It helps to write simple and readable code and to avoid some std::enable_if and template boilerplate. You also correctly implemented perfect forwarding and used proper scoped locks. Kudos for that :) | {
"domain": "codereview.stackexchange",
"id": 7306,
"tags": "c++, multithreading, c++11, timer"
} |
What is the highest speed time dilation has been tested? | Question: What is the highest speed time dilation has been tested?
How close to the Special Relativity prediction did it get?
Answer: Special relativistic time dilation correctly predicts the decay rates of high-energy particles. For instance, atmospheric muons survive much longer than you would predict based on their lifetime unless you take into account a $\gamma$ factor of about 40, for a velocity of $v\approx 0.9997 c$.
The Large Hadron Collider routinely (until the maintenance shutdown started this year!) accelerated protons to an energy of 3.5 TeV (a total collision energy of 7 TeV), which is $\gamma\approx3500$, a velocity $v\approx0.99999995c$.
At even higher energies there is the GZK limit, being tested but likely true, which involves protons moving at a $\gamma$ factor of the order of $10^{10}$! This corresponds to a velocity of $0.999999999999999999995 c$!
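The quoted $\gamma$ factors follow directly from $\gamma = 1/\sqrt{1 - v^2/c^2}$, which is easy to check numerically (an illustrative sketch, not part of the original answer):

```python
import math

def lorentz_gamma(beta: float) -> float:
    # Lorentz factor for a speed v = beta * c.
    return 1.0 / math.sqrt(1.0 - beta * beta)

print(lorentz_gamma(0.9997))      # ~41, the muon case above
print(lorentz_gamma(0.99999995))  # ~3200, the same order as the LHC figure
```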
EDIT: The opposite limit gives you an idea of how good clocks are these days. It turns out that time dilation has been measured in agreement with special relativity at speeds of 35 km/hr (and also gravitational time dilation at height differences of 33 cm)!! | {
"domain": "physics.stackexchange",
"id": 6557,
"tags": "special-relativity, experimental-physics, inertial-frames, time-dilation, observers"
} |
Temperature invariant color space | Question: I'd like to build a color difference metric that will be invariant to small changes in lighting conditions. In other words, I should be able to tell that a red shirt is not the same as a blue shirt, even if one image is lit differently (brighter / darker, outdoors / incandescent).
Moving to YUV space and ignoring the luminance component I can deal with changing brightness conditions by considering the metric:
$$d = \sqrt{\Delta U^2 + \Delta V^2}$$
But this metric still gives large "distances" when different light sources are used, due to the change in chrominance along the Planckian locus.
My question is:
Is there a color difference metric that is invariant to temperature differences? In a sense, I'm looking for a color space in which one of the three axes is temperature. Does such a space exist?
Answer: Other than searching for a color space, why don't you just first apply a color correction and then get the hue space?
If illumination is an issue, you might consider an illumination invariant color correction:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108.6859&rep=rep1&type=pdf
Then simple convert your image to HSV and get H channel.
Additionally, if you have some clue about what you are looking at (such as roads, football fields etc) then you might use that too. Such an algorithm is mentioned here:
https://stackoverflow.com/questions/10922467/illumination-invariant-image | {
"domain": "dsp.stackexchange",
"id": 1429,
"tags": "image-processing, color"
} |
How can, e.g., Dijkstra find all shortest paths in linear time? | Question: As I understand it, one can modify BFS for an unweighted graph or Dijkstra for a weighted graph to find all possible shortest paths from $s$ to $t$ in linear time. But how can this be, when there are $O(2^n)$ such paths?
What am I missing here?
Answer: As you (almost) observe, there can be $2^{\Theta(n)}$ shortest paths between a pair of vertices:
* * *
/ \ / \ / \ / \
s* * * ... * *t
\ / \ / / \ /
* * *
With $k$ cycles, this graph has $3k+1$ vertices and there are $2^k = 2^{(n-1)/3}$ shortest $s$–$t$ paths: you can go clockwise or anticlockwise around each cycle, independently of the others.
The simple answer to your question is that you are mistaken. It is impossible for any algorithm to output all shortest paths in linear time (time $O(n)$), just because it is impossible to output an exponential amount of data in a polynomial number of steps: a Turing machine can only write one character of output per step.
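As an aside that makes the gap concrete: *counting* all shortest paths (as opposed to listing them) is cheap, even on the chained-cycle graph above where the number of paths is exponential. A sketch, not from the original answer, using BFS since the example graph is unweighted (the same counting works inside Dijkstra for weighted graphs):

```python
from collections import deque, defaultdict

def count_shortest_paths(adj, s, t):
    """BFS that also accumulates the number of shortest s-t paths."""
    dist, count = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:                 # first time v is reached
                dist[v] = dist[u] + 1
                count[v] = count[u]
                q.append(v)
            elif dist[v] == dist[u] + 1:      # another shortest way into v
                count[v] += count[u]
    return count.get(t, 0)

def chained_cycles(k):
    """Build the k-cycle chain from the ASCII picture: 3k+1 vertices."""
    adj = defaultdict(list)
    def edge(a, b):
        adj[a].append(b); adj[b].append(a)
    for i in range(k):
        j, jn, top, bot = ('J', i), ('J', i + 1), ('T', i), ('B', i)
        edge(j, top); edge(top, jn)   # clockwise half of cycle i
        edge(j, bot); edge(bot, jn)   # anticlockwise half of cycle i
    return adj
```

The count comes out to $2^k$ in time linear in the graph size, while writing out the paths themselves is necessarily exponential.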
Outputting all shortest paths is an example of an enumeration problem. Typically, we use different measures for these problems because it seems "unfair" to say that an algorithm is slow just because it has a lot of work to do. The usual standard for enumeration algorithms is to look at the number of steps between each successive output. Using modified Dijkstra will (I assume) have polynomial delay: there's a constant $c$ such that, for any large enough graph on $n$ vertices, the algorithm will output some shortest path after at most $n^c$ steps and, after each output, there will be at most $n^c$ more steps before the algorithm outputs another shortest path or terminates. | {
"domain": "cs.stackexchange",
"id": 9097,
"tags": "algorithms, complexity-theory, graphs, time-complexity"
} |
Classes of quantum circuits that can be "efficiently" simulated | Question: Which kind of quantum circuits can be simulated with classical algorithms in a reasonable amount of time (for a large number of qubits)?
For example the ones with only Clifford gates. Or the ones with few non-local gates between the two halves of the circuit (via quasiprobability decomposition).
Are there any other examples of "constraints" that we can apply to a quantum circuit so that it can be simulated faster than a standard classical simulation (i.e., with cost that doesn't grow exponentially with the number of qubits)?
Answer: As mentioned, Clifford-gate circuits can be efficiently simulated, but, in general, the most promising simulation methods are based on tensor networks. Through compact tensor representations and efficient operations, tensor-network-based quantum simulation can scale to hundreds of qubits on a single GPU and thousands of qubits on multiple GPUs (today, the largest full statevector simulation can't go beyond 50 qubits even using the most powerful supercomputer). Tensor networks work fine as long as the level of entanglement in your quantum circuit is not too high.
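To see why full statevector simulation hits a wall around 50 qubits: the state is a vector of 2^n complex amplitudes, so at 16 bytes each n = 50 already needs roughly 16 PiB. A minimal dense simulator is easy to sketch (an illustration only, unrelated to TensorLy-Quantum or any package mentioned above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_1q(psi, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(psi.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(psi, c, t, n):
    """CNOT: flip target qubit t wherever control qubit c is |1>."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[c] = 1                       # slice where the control is |1>
    sub = psi[tuple(idx)]            # axis c is dropped by the integer index
    psi[tuple(idx)] = np.flip(sub, axis=t - 1 if t > c else t)
    return psi.reshape(-1)

def ghz(n):
    """Build (|00...0> + |11...1>)/sqrt(2); memory cost is 2**n amplitudes."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    psi = apply_1q(psi, H, 0, n)
    for q in range(1, n):
        psi = apply_cnot(psi, q - 1, q, n)
    return psi
```

ghz(3) puts weight 1/sqrt(2) on |000> and |111>; the identical code at n = 50 would need petabytes for the state alone, which is the scaling tensor networks avoid when entanglement is low.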
For more details, take a look to TensorLy-Quantum, open-source Python library for tensor methods applied to quantum machine learning (GitHub repo). | {
"domain": "quantumcomputing.stackexchange",
"id": 4500,
"tags": "quantum-circuit, clifford-group"
} |
Time is out of dual 32-bit range | Question:
I am using ROS Kinetic on Ubuntu 16.04. After updating my system this morning (which included a lot of ROS packages), rosout and other nodes are crashing with the following exception:
terminate called after throwing an instance of 'std::runtime_error'
what(): Time is out of dual 32-bit range
Minimal example to reproduce the issue:
Run roscore in a terminal
In another terminal run the following Python script:
import rospy
rospy.init_node("foo")
rospy.sleep(1)
When the script terminates, rosout crashes with the exception given above (in the roscore terminal)
If running more stuff, several other nodes crash with the same error.
Since everything was fine yesterday, I suspect that this is caused by the update. Is anybody else having the same issue and/or does anybody know how to fix it (apart from trying to roll back the update)?
Originally posted by Felix Widmaier on ROS Answers with karma: 382 on 2018-09-07
Post score: 2
Original comments
Comment by Felix Widmaier on 2018-09-07:
I tried to rollback the update but it seems that ROS packages of older versions are not kept in the apt repository, so apt-get is not able to install the old version.
Comment by Felix Widmaier on 2018-09-07:
I figured out it is not related to gazebo (apparently not even sim time). I updated the question with more minimal reproduction steps.
Answer:
It turned out I had some outdated clone of ros_comm in my workspace that apparently is incompatible with the latest update. Removing it from the workspace fixed the issue.
Originally posted by Felix Widmaier with karma: 382 on 2018-09-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-09-09:
This was most likely an ABI compatibility issue.
Did you rebuild your workspace after installing updates?
Technically, you should. Most of the time it works without doing it, but if the updates changed the ABI, strange things like this may happen.
Comment by Felix Widmaier on 2018-09-10:
I did a complete clean & rebuild which did not help. The state of ros_comm I had in my workspace was pretty old, I think it just became incompatible with some other core package. I anyway don't need it anymore, though, so I am good with simply removing it from the workspace.
Comment by gvdhoorn on 2018-09-10:
My "rebuild the workspace" statement was a bit ambiguous. What I meant to say is that after upgrading, rebuilding the workspace is essentially required.
Orthogonal to that: when ABI incompatibilities are introduced (either upstream or downstream), these sort of things can happen. | {
"domain": "robotics.stackexchange",
"id": 31729,
"tags": "ros, rosout, ros-kinetic, roscore"
} |
Determining the microstructure of steel after welding in modern computer software | Question: How can the microstructure of steel after welding be determined in modern computer software?
I am aware that there are 'classic' methods of predicting microstructures like Schaeffler, DeLong or WRC constitutional diagrams. What I am wondering about is: which algorithms are used in programs based on finite element method (FEM) computing (e.g. ESI Group software, particularly SYSWELD)?
What I want to find out in particular:
1. How does a FEM model made of austenite differ from the one that is made of ferrite?
2. Is a FEM model enough to determine the microstructure of a material, that is represented by that model? If not, what else do you need to examine transformations of microstructure during the welding process?
Answer: Summary:
1) The answer to this question is difficult. You would need to know how austenite and ferrite behave in relation to what you are doing to them. You would also need to know their compositions, temperature field, etc. The results here could vary significantly depending on the specific parameters and how they change with time and with each other.
2) Yes and no. You can get a statistical model of each FEM element at a macro scale to determine the statistical nature of the microstructure such as grain size, particle density, etc. Or you can model what happens at a microscale when welding to determine what the microstructure might look like under a microscope. To do either requires a lot of detailed thermodynamic data about all of the components present and their phases, as discussed below.
More Specifically...
My approach to the problem would be to use statistical modeling to determine grain size, particle size and distribution, etc. The inputs would come from phase-diagram data based on known compositions, together with assumed kinetic governing equations. The actual thermal and compositional kinetics are governed at a bulk scale by well-known PDE models, but certain features such as size and distribution are determined by microscale kinetics. What phases appear and the order in which they appear are governed by the phase diagrams. We can generally assume that bulk kinetics provides an input into the microscale kinetic and thermodynamic relationships, but that the reverse is largely irrelevant. The microstructural data in turn provides an input into structure-property relationships.
A complete model would tie all of these together more-or-less in that order, and would look something like:
Use the bulk scale kinetics to get the temperature and composition profiles for each element.
Use the temperature and composition profiles to generate statistical microstructural data for each element.
Use the statistical microstructural data to determine properties.
The algorithms for common software are generally proprietary, so I can't say for sure, but I believe any package such as SYSWELD or MAGMA uses something along these lines. SYSWELD may even use the Schaeffler, DeLong, and WRC diagrams as part of its thermodynamic modeling. It depends to what degree they are making assumptions about the thermodynamic data and how much effort they've put into that part of their model.
Microstructural FEM
FEM may be used to model microstructural behavior at a microscale, such as mechanical and thermal behavior. Generally this is done by capturing a microstructural image representation, either by microscope (optical or SEM) for 2D or by computed x-ray tomography (CT) for 3D, and converting the representation into an FEM by identifying or segmenting different phases and their interfaces, and assigning appropriate (often anisotropic) material properties to each phase and each phase-pair interface.
To do all this, you need to be able to accurately segment phases, interfaces, and crystallographic orientation, which may take some expensive equipment and characterization work. Alternately, a phase field model may be used to attempt prediction of the microstructural morphology, and then relevant data captured from the phase field model. There are limitations in using a phase field model this way, which are discussed in the next section.
There is a tool on nanoHUB.org which does microstructural FEM for 2D images called OOF2. To use the tool you would need to create an account, and it is generally intended for educational purposes, but it should give a general idea of how a microstructural FEA might work in the 2D case. You might need to upload your own microstructure image; it's been a while since I've used it and I've forgotten the details.
The results of microstructural FEM are useful for determining how textured microstructures might behave. They are also useful for determining how microstructure can be linked to fatigue properties by identifying stress concentrations in the microstructure and how the microstructure might play a role in crack and void initiation.
Phase Field Models
To model phase transformation kinetics at a microscale, generally phase field models (Wikipedia) are used. The models involve a number of moving parts, so to speak, but are sometimes faster and usually more robust for capturing moving interfaces than with traditional FEM models.
The primary concept is that, for a field of elements containing two phases, the phase discrimination of the entire field may be modeled with a scalar value varying from 0 to 1. If the value of an element is (very close to) 0 it is one phase, and (very close to) 1 the other phase. If it has an intermediate value it is part of an interface between the phases. Thus rather than a sharp interface as would be required with an FEM, the interface is modeled by assuming it is diffuse.
Phase field models typically also track composition, temperature and free energy, and have a collection of governing equations which must be solved at each time step to determine the evolution of the next step. To use a phase field model thus requires knowledge of temperature dependent free energy curves and temperature dependent diffusion rates, both thermal and compositional, among other things.
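As a toy illustration of the moving parts just described, here is a minimal 1-D Allen-Cahn phase field evolution in explicit Euler form (an assumed textbook form of the model, not the scheme used by any package mentioned here): the order parameter phi relaxes under dphi/dt = eps^2 * d2phi/dx2 - (phi^3 - phi), where the double-well reaction term drives phi toward the two phases at +1 and -1.

```python
import numpy as np

def allen_cahn_1d(n=128, eps=0.02, dx=0.01, dt=0.01, steps=1500, seed=0):
    """Evolve a 1-D Allen-Cahn order parameter with periodic boundaries.

    phi ~ -1 and phi ~ +1 mark the two phases; intermediate values are
    the diffuse interface discussed above.
    """
    rng = np.random.default_rng(seed)
    phi = 0.05 * rng.standard_normal(n)    # small random fluctuations
    for _ in range(steps):
        # second-order central difference Laplacian, periodic wrap
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx ** 2
        phi = phi + dt * (eps ** 2 * lap - (phi ** 3 - phi))
    return phi
```

Starting from near-zero noise, the field separates into domains near +1 and -1 with diffuse walls between them, a 1-D analogue of the microstructure pictures that phase field packages produce.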
It is possible to model diverse microstructural evolution phenomena with phase field models, including:
planar to dendritic solidification (nanoHub.org tool and solidification.org video)
unstable cellular eutectic solidification (solidification.org video)
spinodal decomposition (nanoHub.org tool)
grain growth (nanoHUB.org tool).
and certainly much more is possible, if challenging.
I am unaware of any professional phase field modeling packages, as there are drawbacks limiting their usefulness outside academia and early-stage research. One limitation is that, depending on the model, the specific variables may not have a clear relationship with physical values that can be experimentally determined. Thus, sometimes the model parameters need to be adjusted systematically until the results "look right." Additionally, getting useful information out of the model is another issue due to the same discrepancies between model parameter values and physical values. It is also possible to produce non-physical results quite easily without careful tailoring of governing equations to the specific model. Validation is another issue as performing the experiments is at best an expensive and time consuming process, and at worst virtually impossible depending on the specific parameters involved. Research is of course focused on reducing these issues, but because phase field models are relatively new (~10 years old at the earliest) there is much work yet to be done.
Generally phase field models are mostly useful for drawing a pretty picture of what a microstructure might look like without performing expensive experimentation and microscopic examination. They are also useful for creating animations of microstructural evolution. In the future their use may expand to predicting features such as statistical modeling and capturing data for FEM, but the limitations above restrict those uses.
Statistical Thermodynamic Models
Generally, at a useful level, most engineers deal with bulk properties. After all, most engineers are designing products at a visible, macroscopic scale. As a result, the specifics of a tiny fraction of the microstructure isn't particularly useful directly to how the product might behave at a macroscopic level. Instead, we want to determine how the product behaves as a bulk.
To model the bulk microstructural evolution and final properties of a material is usually done with a CALPHAD (Calculation of Phase Diagrams, Wikipedia) program. Microstructural CALPHAD models don't generate pretty pictures like in the previous two sections, but instead generate a statistical representation of certain classes of microstructure by generating a grain size or particle size and density distribution based on thermodynamic, kinetic, compositional, and temperature data.
Such a model can be used in conjunction with a process FEM to determine local microstructural distributions for each finite element. Thermo-Calc does thermodynamic statistical modeling and is a CALPHAD program. MAGMA casting process simulation software combines statistical thermodynamic modeling with FEM in some of their alloy packages. A MAGMA user might then be able to predict some statistical data throughout the bulk of their product, and then generate scalar fields representing mechanical properties which vary over the product. It appears SYSWELD does the same thing for heat treatment and the welding process, probably by the general method described here.
References
nanoHub.org - A site with many computational tools and educational resources focused on nano-scale. Some information and tools are related to larger scale modeling, especially OOF2 (Object Oriented Finite Element Analysis, 2D) and VKML (Virtual Kinetics of Materials Laboratory) tools.
solidification.org - A site with a number of neat movies of solidification processes, both experimental and phase field simulation.
I do not endorse any linked sites or softwares, the links are only intended for referential and educational purposes. | {
"domain": "engineering.stackexchange",
"id": 753,
"tags": "metallurgy, finite-element-method, structural-analysis, software, welds"
} |
What's the next big goal of the Large Hadron Collider? | Question: Obviously, physics lovers everywhere were excited at the announcement that the LHC had verified the existence of the Higgs Boson. As a non-physicist, I like to keep up with what I can regarding the latest developments (that laymen can appreciate). Now, I'm seeing all over news pages about the LHC starting back up again after a 2-year hiatus, and many of them report on the LHC "searching for evidence of parallel worlds". Now, scientific reporting is often sensationalized in the media, and quite often, it doesn't exactly match up with what's really being done.
So my question is: what experiments are currently planned at the LHC, and what specifically are they looking for now?
Answer: One has to be aware that the data from the LHC experiments are studied by about 3000 physicists in each collaboration, which include students working toward a PhD. Thus, even though many people study the same favorite theory, all the possible predictions from theoretical models are evaluated by some study group and a publication will come out giving either direct measurements or limits on predicted cross sections and distributions. One can look at the publications page for the LHC to see the enormous variety of subjects that have already been explored in addition to the Higgs. These will be explored further with the higher energy data.
If the question is "what do physicists consider crucial to discover " (instead of just limits) with the new data, the answer is "the existence of Supersymmetry" and of course any more indications of "string theory phenomenological predictions" ( supersymmetry is one of them).
. | {
"domain": "physics.stackexchange",
"id": 20710,
"tags": "large-hadron-collider"
} |
Which queue does the long-term scheduler maintain? | Question: There are different queues of processes (in an operating system):
Job Queue: Each new process goes into the job queue. Processes in the job queue reside on mass storage and await the allocation of main memory.
Ready Queue: The set of all processes that are in main memory and are waiting for CPU time is kept in the ready queue.
Waiting (Device) Queues: The set of processes waiting for allocation of certain I/O devices is kept in the waiting (device) queue.
The short-term scheduler (also known as CPU scheduling) selects a process from the ready queue and yields control of the CPU to the process.
In my lecture notes the long-term scheduler is partly described as maintaining a queue of new processes waiting to be admitted into the system.
What is the name of the queue the long-term scheduler maintains? When it admits a process to the system is the process placed in the ready queue?
Answer: I found an appropriate answer. I was asking about the job queue (which I already described). The diagram included in this answer comes from a PowerPoint that uses concise language to explain processes and schedulers and relates the topic to the diagram.
It may be of interest to other users learning this topic that time-sharing systems (such as UNIX) sometimes have no long-term scheduler, or only a minimal implementation of one.
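The relationship between the queues described above can be sketched as follows (a toy illustration, not any real OS's implementation): the long-term scheduler admits processes from the job queue into main memory, i.e. moves them into the ready queue, from which the short-term scheduler dispatches.

```python
from collections import deque

class Kernel:
    """Toy model of the job queue / ready queue hand-off."""

    def __init__(self, memory_slots):
        self.job_queue = deque()     # new processes, still on mass storage
        self.ready_queue = deque()   # in main memory, waiting for CPU
        self.memory_slots = memory_slots

    def submit(self, pid):
        self.job_queue.append(pid)   # every new process starts here

    def long_term_schedule(self):
        # Admit jobs only while main memory has room (this bounds the
        # degree of multiprogramming); the job queue is the queue the
        # long-term scheduler maintains.
        while self.job_queue and len(self.ready_queue) < self.memory_slots:
            self.ready_queue.append(self.job_queue.popleft())

    def short_term_schedule(self):
        # Dispatch the next ready process to the CPU (FCFS here).
        return self.ready_queue.popleft() if self.ready_queue else None
```

With 3 submitted jobs and 2 memory slots, admission moves two processes to the ready queue and the third waits in the job queue, exactly the flow described in the question.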
Check these sources for more information:
1. Wikipedia Article
2. Operating System Concepts (Pages 88-89)
©Bernard Chen 2007 | {
"domain": "cs.stackexchange",
"id": 12119,
"tags": "operating-systems, terminology, process-scheduling"
} |
What are good action outputs for reinforcement learning agents acting in a trading environment? | Question: I am trying to build an agent that trades commodities in an exchange setting. What are good ways to map the action output to real-world actions? If the last layer is a tanh activation function, outputs range between [-1, +1]. How do I map these values to real actions? Or should I change the output activation to linear and then directly apply the output as an action?
So let's say the output is tanh activated and it's -0.4, 5. I could map this to:
- -0.4 --> sell 40% of my holdings for 5$ per unit
- -0.4 --> sell 40% for 5$ in total
If it were linear, I could expect larger outputs (e.g. -100, 5). Then the action would be mapped to:
- sell 100 units for 5$ each
- sell 100 units for 5$ total
Answer: Working Backward
Working backward from the trading interface available to you Note 1, you will need two things for each Exchange-Traded Fund (ETF) or other tradable commodity of another class.
An operation to perform
An associated monetary amount
The system can have an output structure Note 2 for each ETF like this (depending of course on the trading interface available to you and your bank's primary monetary system).
Ternary operation indicator (buy, sell, hold)
Trade amount in USD
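A minimal sketch of decoding such an output structure into a concrete trade (an illustration under assumed conventions, not part of the original answer; the sigmoid scaling and the clipping against available funds are one way to respect the "don't break the bank" constraint discussed later):

```python
import numpy as np

OPS = ("buy", "sell", "hold")

def decode_action(op_logits, amount_raw, liquid_usd, holdings_usd):
    """Map raw network outputs to a trade instruction for one ETF.

    op_logits  : length-3 vector scored by the network (buy/sell/hold)
    amount_raw : unbounded scalar output for trade size
    """
    op = OPS[int(np.argmax(op_logits))]
    # squash the raw amount into (0, 1), then scale by what is available
    frac = 1.0 / (1.0 + np.exp(-amount_raw))   # sigmoid
    if op == "buy":
        amount = frac * liquid_usd             # cannot exceed liquid assets
    elif op == "sell":
        amount = frac * holdings_usd           # cannot sell more than held
    else:
        amount = 0.0
    return op, amount
```

For example, a buy head with a raw amount of 0.0 (sigmoid gives 0.5) commits half of the liquid account, and a hold decision always maps to a zero amount.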
Why Not Just One Number per ETF?
A few corroborating reasons exist for why a single positive, zero, or negative trade amount is not likely the optimal architectural choice.
The difference between holding and trading an amount of 1.0 monetary unit is not equivalent to the difference between trading 1.0 and 2.0 monetary units. Stated mathematically, the function of profitability to trade amount is not smooth and probably not even continuous.
When you implement the ternary output as two binary outputs {buy, sell}, training against the Boolean expression (buy AND sell) is likely to improve your initial performance and possibly your ongoing performance Note 3.
Limitations on Real Trades to Consider
Because you have the limit of the assets in the liquid account from which you can buy, you will need to train against breaking the bank or using a stage after the NN outputs imposing rules or a formula based on gain and loss probabilities. This financial constraint muddies your question because there are several ways to ensure you do not break the bank (get an insufficient funds response from your trade operation).
Optimizing a Deeper Architecture
Let's first assume you use probabilistic calculus to produce a closed form (formula) for how to trade based on predictions from the NN architecture you design. Then the NN outputs might be continuous values representing the distribution of outcomes for each ETF. Projections will almost always be dependent on investment duration.
In such an architecture the NN output activation scheme would be a continuous function (not necessarily linear) producing something like this Note 4.
Mean expected delta value in one day
Std deviation in one day expectation
Mean expected delta four weeks
Std deviation in four week expectation
Mean expected delta two years
Std deviation in two years expectation
Any NN optimization of an investment portfolio that does not inherently deal with probability is nonsense. Optimizing for maximum gain will introduce great risk. Optimizing for minimum risk could result in losses. The goal of optimization must be some representation of the balance between the desire to win and the fear of losing, to put it in anthropological terms.
Mean and standard deviation are obvious starting choices, and the traditional categories of short, medium, and longer term investment is also reasonable to begin with in the temporal domain Note 5.
Pure NN
Now let's assume you replace the calculus with another NN scheme trained to maximize portfolio total assets Note 6. Such a replacement NN scheme must also take as an input your available liquid asset amount along with the above probabilistic projections. You must also train to ensure the aggregation of buys and sells do not break your bank, that your liquid asset account never drops below zero.
The trade amount activation should be a continuous function, but not necessarily linear and probably not tanh either because the asymptotes of that function would be counterproductive unless it is made proportional to your available liquid assets to train by aggregating your options in your architecture. However, that's not optimal because you may find a better deal to use those assets a minute later.
The odd roots (third root or fifth root or both with coefficients), when used along with training to not break the bank and to maximize the rate of portfolio growth will produce a better environment for learning in the earlier layers because of the probabilistic and aggregation realities of liquid asset limitations.
NOTES
Note 1 — Preferably a secure RESTful API; however, experimentation can employ a web browser whose HTTPS transactions you control using https://github.com/SeleniumHQ/selenium or https://github.com/watir/watir.
Note 2 — Input would be things like number of companies, exposure ratings provided by investment firm(s), short options, inception date, and a sequence of events, each event containing fixed point numbers and flags like these.
Price per share
Closing price flag
Expense ratio
Dividend per share
Note 3 — The binary output vector {buy, sell} value of {1, 0} causes a buy. The value of {0, 1} causes a sell. The value {0, 0} is ignored, and {1, 1} may trigger another training operation to correct this illegal output, which is likely to be indicative of the staleness of the last round of training. If the NN is re-entrant (reinforced), the feedback vector could include this anomalous flag or weight it heavily in an aggregation of feedback sources. In summary, the ternary scheme can be expected to augment the training speed and resulting accuracy. More importantly, it opens additional options for continuous optimization.
Note 4 — Delta is an aggregation of price increase or loss, dividends, and holding and trading expenses because that is the proper metric related to profitability of the portfolio.
Note 5 — Four weeks and two years have been chosen for the middle and longer range projections so that the time ratios are 28 and 26.09 between the three durations. Common choices are temporally skewed. If 1 day, 1 week, and 1 year had been chosen, the ratios would have been 7.0 and 52.18. If 1 day, 1 month, and 1 year had been chosen, they would have been 30.44 and 12.
Note 6 — Do not assume that even a well trained NN will ever outperform formulae properly derived from probabilistic calculus for the last stage in a trading profitability architecture. | {
"domain": "ai.stackexchange",
"id": 551,
"tags": "neural-networks, reinforcement-learning"
} |
Converting Object to Map using Generics | Question: Problem
I am loading things from localStorage, and this has to be saved as JSON, so it needs to have a simple Object structure that JSON.parse() can produce.
However, some methods do not accept <any> as a parameter, because they want a concrete class or interface. I want to send my object as a parameter, so I have to convert it to a Map; it keeps the same structure, but since it now has a type, it is accepted as a parameter.
My problem lies within the conversion from Object to Map
Solution
public static convertObjectToMap<V>(obj: any, classOfV): Map<string, V> {
let objectMap = new Map<string, V>();
if (obj !== undefined && obj !== null) {
for (let key in obj) {
if (obj.hasOwnProperty(key)) {
const initObject = new classOfV(obj[key]);
objectMap.set(key, initObject);
}
}
}
return objectMap;
}
I take an obj and the class that all of the values will be instances of, since they share the same type.
Example of usage
//This is purely for the example
const fibonacciObject:any = {"0": 1, "1": 1, "2": 2, "3": 3, "4": 5};
const fibonacciMap:Map<string, Number> = convertObjectToMap<Number>(fibonacciObject, Number);
fibonacciMap.get("0"); //1
fibonacciMap.get("4"); //5
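A self-contained, runnable version of the example above, with one possible typing for the classOfV parameter (a constructor signature; this typing is a suggestion, not the original code):

```typescript
function convertObjectToMap<V>(
  obj: { [key: string]: unknown } | null | undefined,
  classOfV: new (v: any) => V
): Map<string, V> {
  const objectMap = new Map<string, V>();
  if (obj !== undefined && obj !== null) {
    for (const key in obj) {
      if (Object.prototype.hasOwnProperty.call(obj, key)) {
        objectMap.set(key, new classOfV(obj[key]));
      }
    }
  }
  return objectMap;
}

const fibonacciObject = { "0": 1, "1": 1, "2": 2, "3": 3, "4": 5 };
const fibonacciMap = convertObjectToMap<Number>(fibonacciObject, Number);
console.log(fibonacciMap.get("4")?.valueOf()); // 5
```

Note that `new Number(v)` produces Number wrapper objects, so `.valueOf()` is needed to compare against primitive numbers.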
Question
Is there a better way to do this conversion? I know about new () => V, but since I need it for each key, it is not really feasible.
Also, what type would classOfV be? I keep getting type errors when I try to give it a type.
Answer: A few points first.
Avoid any like the plague. You can nearly always figure out a better type. When dealing with JSON-serialized data, I like to have a function similar to this to get rid of any as soon as possible:
function verify<T>(obj: any, fallback: T, isT: (obj: any) => obj is T): T {
return isT(obj) ? obj : fallback;
}
Object.keys and Object.entries are a better fit for looping through an object if you are going to check hasOwnProperty. I prefer Object.entries when possible, if you have the browser support.
Choose const or let, don't mix them without good reason. const can result in better type inference so I prefer to use it when possible.
Here is how I would implement this function.
function convertObjectToMap<In, Out>(
obj: { [K: string]: In } | undefined | null,
classOfIn: new (v: In) => Out
): Map<string, Out> {
const result = new Map<string, Out>();
for (const [key, val] of Object.entries(obj || {})) {
result.set(key, new classOfIn(val));
}
return result;
} | {
"domain": "codereview.stackexchange",
"id": 29333,
"tags": "javascript, typescript"
} |
Is q=0 for irreversible adiabatic process? | Question: Well, I am a little bit confused about this question. I learned that reversible adiabatic processes are isentropic, so $\Delta S=0$. From $\Delta S=\frac{q}{T}$, we can say that $q=0$. But if you take an irreversible adiabatic process, due to friction, $\Delta S\gt0$, which makes:
$$\begin{align}
\Delta S=\frac qT&\gt0\\
q&\gt0
\end{align}$$
So is it safe to say $q\gt0$ for the irreversible adiabatic process? If not, where am I wrong?
Answer: The definition of an isentropic process is $\mathrm{d}S=0$.
The definition of an adiabatic process is $\delta q=0$.
Be aware that $\mathrm{d}S \ge \frac{\delta q}{T}$. So even if $\delta q=0$, as the adiabatic process requires, it is still possible that $\mathrm{d}S \gt 0$.
Imagine an isolated system as a water reservoir. Mechanical non-PV work is done on this system by stirring the water with a blender screw - like the classical Joule experiment on heat/work equivalence.
The process is adiabatic with irreversible friction - but no heat exchange, so $\delta q = 0$.
The process is not isentropic, as it is irreversible and entropy increases, with $\mathrm{d}S \gt \frac{\delta q}{T} = 0$
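Another standard illustration of the same point (not in the original answer) is the free (Joule) expansion of an ideal gas into vacuum: adiabatic and workless, yet entropy rises, because the entropy change must be computed along a reversible path between the same end states:

```latex
% Free expansion of an ideal gas into vacuum:
% \delta q = 0 (adiabatic) and w = 0 (no external work),
% so \Delta U = 0 and, for an ideal gas, T_2 = T_1.
% Along a reversible isothermal path between the same states:
\Delta S_\mathrm{sys} = nR \ln\frac{V_2}{V_1} > 0
\quad\text{even though}\quad
\int \frac{\delta q}{T} = 0 .
```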
If the irreversible action, like friction, occurs outside the system, e.g. at a piston with friction or in mechanical machinery behind it, the system may do non-PV work on the surroundings via friction, which converts work to heat.
$$\mathrm{d}S_\mathrm{sys} = \frac{\delta q_\mathrm{sys}}{T} = 0$$
$$\mathrm{d}S_\mathrm{surr} \ge \frac{\delta q_\mathrm{surr}}{T} \gt 0$$
Formally, a process can be isentropic even if $\delta q \neq 0$, provided $\mathrm{d}S_\mathrm{sys}= \mathrm{d}S_\mathrm{sys,irrev} + \frac{\delta q}{T} = 0$. So $\mathrm{d}S=\frac{\delta q}{T}=0$ is not a correct definition of an isentropic process.
E.g., assume a system consisting of 2 reservoirs with temperatures $T_1$ and $T_2$, with the system's surroundings at temperature $T_3$, where $T_1 \gt T_2 \gt T_3$. The system's entropy-increase term will be caused by the spontaneous balancing of temperature between the two reservoirs, while the entropy-decrease term will be caused by passing heat to the surroundings.
With properly chosen temperatures, there would be an ongoing isentropic process for the system, with $q \ne 0$. | {
"domain": "chemistry.stackexchange",
"id": 17353,
"tags": "thermodynamics, heat, entropy"
} |
DNA length and annealing kinetics | Question: I have a mixture of plasmids and undesired short linear fragments that share the same sequences. During denaturation and annealing, I would like the plasmids to 'find each other' before annealing to the shorter linear fragments. Assuming the concentration of shorter fragments is significant, is there a temperature profile that biases towards re-annealing of the longer DNA?
More specifically, this is for a variation of the Surveyor Mutation detection assay, where re-annealed DNA with mismatches are digested, leaving non-mutant DNA intact. I would like to keep non-mutant plasmids for E. coli transformation. However, some linear fragments ~10-50% of the length of the plasmid have escaped exonuclease treatment and would compete with the plasmids during annealing.
Answer: The short answer is that higher temperature favors annealing of longer sequences. There are a number of ways to calculate melting temperature, but all of them produce similar results: longer polymers require more thermal energy to melt. Hence, quick cooling from a higher temperature (say, from 95 °C; a thermocycler can cool in 10-12 s) to RT/4 °C will favor re-annealing of circular strands. Slower cooling should allow more ssDNAs to bind.
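The length dependence shows up even in the crudest empirical melting-temperature formula, the basic GC/length correction Tm ≈ 64.9 + 41·(n_GC − 16.4)/N in °C (a rough sketch for illustration only; real predictions use nearest-neighbor thermodynamics and depend on salt and primer concentrations):

```python
def melting_temp_c(seq):
    """Crude length- and GC-dependent duplex melting temperature (deg C)."""
    seq = seq.upper()
    n_gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (n_gc - 16.4) / len(seq)

# Same 50% GC content, different lengths: the longer duplex melts higher,
# so it can re-anneal first as the mix cools from denaturation temperature.
short = "ATGC" * 5      # 20 bp
long_ = "ATGC" * 50     # 200 bp
```

At fixed GC fraction the formula increases monotonically with length, which is the effect the temperature profile above exploits.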
But again, if the end goal is to select for circular dsDNA, a simple transformation should take care of that. | {
"domain": "biology.stackexchange",
"id": 3808,
"tags": "molecular-biology, dna, pcr, experimental-design, enzyme-kinetics"
} |
What is the synoptic/atmospheric setup that is causing the U.S. severe weather outbreak in May 2019? | Question: I asked this in the chat and it was suggested I ask on the main site.
There's been a seemingly perpetual severe weather outbreak over the central and eastern U.S. over the past two weeks. Killer EF3s in Jefferson City, MO, Dayton, OH, and El Reno; flash flooding in Oklahoma; severe storms in Illinois/the Ohio Valley/Mid Atlantic; the tornadic supercells in northern NJ/Staten Island, the list goes on, and on, and on.
On top of this, the southeastern U.S. is seeing record high temperatures (earliest-ever triple digit temps for Alabama) and a heat wave is currently (as of today, Wednesday 5/29) smothering the east coast. What has been the synoptic and mesoscale (fronts, upper-air) setup that can explain this unusual pattern?
Answer: Summarizing the comments I made above in this question and in this one - Does this weather pattern have a name? - I believe significant parts of the US have been experiencing a stationary front for the past month. There has been plenty of media coverage of this event, and if you google the term "stationary front" under news you will find a lot of links to TV news media with meteorologists giving detailed analyses of this event.
A few summarized below -
1) Week's bleak weather not an unusual spring occurrence
2) A stationary front is stuck across the middle parts of Illinois and Indiana
I will attempt to provide a synoptic scale explanation of the event from reanalysis data.
1) From JMA's site -
An upper-level look at the 200 hPa surface reveals an equatorward trajectory of the wave activity flux vector (seen as small arrows) from the Upper North Pacific. Climatologically, Rossby wave breaking can occur from boreal fall to boreal spring. The signal is seen at 30-day, 10-day, 7-day and 5-day periods, and is present in the raw data as well as in the anomaly. Most importantly, this wave activity flux is directed towards the continental USA all the way from New Mexico to the Great Lakes region.
A popular-science version of the above information has been discussed over at this link - What's up with all this wet weird weather?
2) At the surface level this "stationary front" presents itself as air masses of two different temperatures, as seen from this plot, and one can see that the frontal boundary more or less coincides with the rising contour of the stream function plot in (1). Here the south-easterlies are bringing in warm moist air from the Gulf coast and the Atlantic, while the colder air masses are coming from Canada and the Northeast Pacific. So the key question is whether the upper-level influence is directing the surface-level weather, and that question can be answered by calculating PV anomalies once ERA5 reanalysis data becomes available.
Source - JMA reanalysis data
3) The moisture flux being pumped into the south-eastern part of the USA can be seen from this plot over a seven-day period.
Again, the signal is present over 30-day, 10-day, 3-day and 1-day periods. I would like to point out that the moisture flux is seen at different levels of the atmosphere, from the surface up to the 300 hPa surface.
Source - NCAR NOAA reanalysis derived variables
4) Last year (2018) a wavenumber-7 pattern was apparently found to be causing weather stalls, as shown in this link - Weather stalls becoming longer. This year in April it is a wavenumber-5 pattern, as seen on this site - Current Weather reports - and summarized here
From a global point of view, the weather pattern in the beginning parts of the 6-10 day (around day 7) will enter into a wavenumber 5 pattern. That's where you have 5 distinct upper-level troughs (areas where the weather is cool and stormy) around the globe. It's like a major traffic jam in the atmosphere and could lead to weather stagnation with little to no variation for days and/or weeks. This could also lead to extreme weather events such as droughts and severe weather/flooding across some parts of the world.
and I am able to pick up the wavenumber 5 or 6, as pointed out by user Deditos (as well as in this twitter feed - Wave number 6 planetary wave), from NCAR NOAA reanalysis data for the 500 hPa geopotential height anomaly for the month of May. | {
"domain": "earthscience.stackexchange",
"id": 1775,
"tags": "meteorology, atmosphere-modelling, tornado, severe-weather"
} |
Could speed of light be variable and time be absolute? | Question: I get my "demonstration" of time dilation from the textbook thought experiment.
A laser is mounted on a cart with a reflective ceiling. At $t=0$ the cart starts moving and the laser is fired. When the laser is reflected back at the starting point the (thought) experiment stops.
Now, two different observers, one sharing the frame of the cart and another standing on the ground perpendicular to the cart, will observe two different things. For the first one, the light goes straight up and bounces straight back down along the same line. For the second one, the light travels a triangular path, which is longer than the path observed by the first observer.
Given that the speed of light is constant, the time has to dilate/contract.
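The triangle geometry can be worked through numerically (carriage height and speed below are illustrative choices, not from the textbook):

```python
import math

c = 3.0e8      # m/s, held fixed in both frames -- the postulate in question
h = 1.0        # m, carriage height (illustrative)
v = 0.6 * c    # carriage speed (illustrative)

t_train = 2 * h / c   # straight up and down, carriage frame

# Ground frame: the light runs along the slanted sides of a triangle;
# solving (c*t/2)**2 = h**2 + (v*t/2)**2 for t gives:
t_ground = 2 * h / math.sqrt(c**2 - v**2)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
print(t_ground / t_train)   # equals gamma (1.25 here): holding c fixed
                            # forces the two clocks to disagree
```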
Why is the speed of light held constant here? Could we work out a physics where time is absolute but the maximum speed of light variable?
Answer: Keep in mind that several other Einsteinian effects are hard to explain in an absolute-time scenario and are tested. For these purposes I would concentrate on:
The existence of a speed limit for massive particles
Looping accelerator systems based on RF cavities only work because once the particles have enough energy their speed is effectively constant (and that speed is within a hair's breadth of $c$). But this constant speed doesn't mean constant kinetic energy or momentum: the accelerator continues to add energy and momentum, with measurable consequences in the bends and the experimental halls.
Conversion of other energies to mass and vice versa
We can measure the kinetic energy and mass of reaction products in particle physics experiments and when we produce heavier products than we started with the extra mass is related to lost kinetic energy in keeping with $E = mc^2$. Likewise when particles decay to lighter products the products have extra kinetic energy in keeping with the mass difference.
The twin paradox
It is not just a fanciful notion you find in books, but something that we do with unstable particles in looping accelerators (for example, the muon g-2 experiment).
To fit all of those into an absolute-time framework will require more (and to my mind very arbitrary) assumptions. By contrast, the Lorentz symmetry of special relativity explains them all in one go and is motivated by the structure of Maxwell's equations (that is, it has an experimental basis and is not at all arbitrary). | {
"domain": "physics.stackexchange",
"id": 96496,
"tags": "special-relativity, speed-of-light, inertial-frames, time-dilation, observers"
} |
Scikit-learn SelectKBest is picking up obviously unwanted Features | Question: Dataset
Dataset Summary: Bank Loan (classification) problem
Problem Summary:
I am exploring ways to simplify the EDA (Exploratory Data Analysis) process of finding the best-fit variables
I came across SelectKBest from the scikit-learn package
The implementation went fine, except that some variables it returned are obviously not going to be good factors (like primary keys in the dataset)
Is there a problem in the implementation? Or is the package supposed to behave in that manner?
import numpy
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.preprocessing import LabelEncoder
# My internal code to read the data file
from src.api.data import LoadDataStore
# Preping Data
raw = LoadDataStore.get_raw()
x_raw = raw.drop(["default_ind", "issue_d"], axis=1)
y_raw = raw[["default_ind"]].values.ravel()
# NA and Encoding
for num_var in x_raw.select_dtypes(include=[numpy.float64]).columns.values:
x_raw[num_var] = x_raw[num_var].fillna(-1)
encoder = LabelEncoder()
for cat_var in x_raw.select_dtypes(include=[numpy.object]).columns.values:
x_raw[cat_var] = x_raw[cat_var].fillna("NA")
x_raw[cat_var] = encoder.fit_transform(x_raw[cat_var])
# Main Part of this problem
test = SelectKBest(score_func=f_classif, k=15)
fit = test.fit(x_raw, y_raw)
ok_var = []
not_var = []
for flag, var in zip(fit.get_support(), x_raw.columns.values):
if flag:
ok_var.append(var)
else:
not_var.append(var)
ok_var
['id', 'member_id', 'int_rate', 'grade', 'sub_grade', 'desc', 'title', 'initial_list_status', 'out_prncp', 'out_prncp_inv', 'total_rec_late_fee', 'recoveries', 'collection_recovery_fee', 'last_pymnt_d', 'next_pymnt_d']
not_var
['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'term', 'installment', 'emp_title', 'emp_length', 'home_ownership', 'annual_inc', 'verification_status', 'pymnt_plan', 'purpose', 'zip_code', 'addr_state', 'dti', 'delinq_2yrs', 'earliest_cr_line', 'inq_last_6mths', 'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util', 'total_acc', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int', 'last_pymnt_amnt', 'last_credit_pull_d', 'collections_12_mths_ex_med', 'mths_since_last_major_derog', 'policy_code', 'application_type', 'annual_inc_joint', 'dti_joint', 'verification_status_joint', 'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m', 'open_il_6m', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m']
It's clear id and member_id should NOT belong to the best-features list! Any idea what I am doing wrong?
Edit: Did more digging, and the reply by @Icrmorin is right. (It's a Kaggle dataset, so I will not know why.) But here is the box plot for id
Answer: There seems to be two possible approaches to your problem :
If they are just identification features that you know aren't informative, you should remove them yourself. SelectKBest - like almost any other EDA tool - works on all the features you provide it; there is no way it knows which features are supposedly uninformative identification features and which are not.
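To see how a meaningless primary key can earn a top score, here is a pure-stdlib toy example. f_classif scores each column with a one-way ANOVA F-statistic, which is reimplemented below; the data are synthetic and only meant to mimic a table where row order correlates with the label.

```python
import random
random.seed(0)

def f_oneway(x, y):
    """One-way ANOVA F-statistic for a single feature -- the per-column
    score that sklearn's f_classif computes."""
    groups = {}
    for xi, yi in zip(x, y):
        groups.setdefault(yi, []).append(xi)
    grand = sum(x) / len(x)
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    ss_between = sum(len(v) * (means[k] - grand) ** 2 for k, v in groups.items())
    ss_within = sum((xi - means[k]) ** 2 for k, v in groups.items() for xi in v)
    df_b, df_w = len(groups) - 1, len(x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

n = 1000
y = [0] * (n // 2) + [1] * (n // 2)             # e.g. defaults appended last
ids = [float(i) for i in range(n)]              # primary key: row-entry order
signal = [yi + random.gauss(0, 2) for yi in y]  # genuinely informative feature

# The meaningless id column massively out-scores the real signal, purely
# because row order happens to correlate with the label:
print(f_oneway(ids, y) > f_oneway(signal, y))   # True
```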
It is possible that somehow the identification feature is informative. I can think of at least two reasons: correlation with time, as the instances are entered in order and what you want to observe changes with time; or, if your identification feature is not unique (instances observed at multiple different times), correlation between your observations. Depending on how your identification feature is built and what you want to achieve, you might want to keep this information or not. | {
"domain": "datascience.stackexchange",
"id": 8452,
"tags": "python, scikit-learn, feature-selection"
} |
How can I improve this code to display contact numbers? | Question: I am making an Address Book-like application.
I have a contactType enum.
public enum ContactType
{
//just a cut down version
MOBILE,PHONE,FAX,EMAIL;
}
Then I have a Contact class
public class Contact
{
private ContactType contactType;
private String value;//e.g phone number or email
private String areaCode;//does not apply to email or mobile
//....getters and setters
}
Then I have a Person class
public class Person
{
private List<Contact>contacts;
//----Other attributes and getters and setters
}
I thought I had nailed it very well :) but as it turns out, probably not :( Because
I now need to display a list of people (a list of Person), and in the table, along with some other things (name etc.), I have these columns:
Name|......|Phone|Mobile|Fax |Email
I now realised that I cannot just loop through the list of people and display contact numbers, since the contacts are in a list inside every object in the list of people.
In simple words as you might know by now already if I have a list of people e.g
List<Person>people ;
Then this is not a valid option
for(Person person: people)
{
//can not get phone number for example by going
person.getPhone();
}
but this is
for(Person person: people)
{
//but will have to get it from the list of contacts e.g
List<Contact>contacts = person.getContacts();
for(Contact contact : contacts)
{
if (contact.getContactType().isPhone())
{
contact.getValue();
}
}
}
So I have now introduced a Rowitem Class
public class Rowitem {
private String phone;
private String fax;
private String mobile;
private String email;
....
}
and populating is like this
private void populateForDisplay(Rowitem item,List<Contact> contacts)
{
for (Contact contact : contacts)
{
if (contact.getType().isEmail()) {
item.setEmail(contact.getValue());
}
if (contact.getType().isFax()) {
item.setFax(contact.getValue());
}
if (contact.getType().isMobile()) {
item.setMobile(contact.getValue());
}
if (contact.getType().isPhone()) {
item.setPhone(contact.getValue());
}
}
}
While this approach works, I just do not think it is right. Firstly, I do not like the fact that I have to introduce this RowItem class, and then all these if statements do not look right. How can I improve this?
The front end framework that I am using is JSF2 but probably not relevant here.
I need someone to review my approach rather than line-by-line code. I have read https://codereview.stackexchange.com/faq and I think my question is within the rules; if not, then please guide me and I will move this. Because it is not a code problem, I did not think it should go to Stack Overflow.
Answer: First, I would suggest getting rid of the contact-type enumeration and instead deriving concrete contacts from your contact class. This will increase cohesion dramatically, as this way you do not need fields like area code in your base contact class.
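A quick sketch of that first suggestion, in Python for brevity (the class and field names mirror the question; the same shape works in Java): each concrete contact carries only the fields that apply to it, and type checks replace the enum.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    value: str                 # the number or address itself

@dataclass
class PhoneContact(Contact):
    area_code: str = ""        # only phone-like contacts need this field

@dataclass
class EmailContact(Contact):
    pass                       # no area-code field at all

contacts = [PhoneContact("5551234", area_code="02"), EmailContact("a@b.example")]
phones = [c.value for c in contacts if isinstance(c, PhoneContact)]
print(phones)                  # ['5551234']
```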
Regarding your problem: You could use something like a visitor pattern to navigate the "tree" of data (Persons -> Person -> Contacts) with your Person and Contact acting as elements accepting the abstract visitor. This will give you the flexibility to walk the tree for different purposes. For your scenario you could write something like a TableVisitor that creates the table structure you want to generate. | {
"domain": "codereview.stackexchange",
"id": 687,
"tags": "java, object-oriented"
} |
What happens when you put water under intense pressure? | Question: Pretend you have an indestructible tube that cannot leak, inside which is water. Imagine that on each side of the tube, you have very powerful pistons
What would happen if you compress the water inside?
Would it turn into heat and escape the tube?
Would the water turn into solid because the water molecules are so close to each other?
Would the water turn into a black hole?
What would happen?
Answer: What you're asking about is usually shown in a phase diagram. The diagram shows how the "phase", i.e. liquid, gas, or one of various solid phases, exists at different temperatures and pressures:
If your cylinder starts at say $20{}^{\circ}\mathrm{C}$ and atmospheric pressure, it'll be in $\color{green}{\textbf{Liquid}}$ right near the center of the diagram. If you raise the pressure keeping the temperature constant, it'll switch to $\color{blue}{\textbf{Ice VI}}$ at about 1GPa, or about 10,000 atmospheres of pressure: it's hard to turn water to ice by compressing it; the water at the bottom of the ocean is still water.
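A quick back-of-envelope check on that last claim (the depth and density below are rough figures for the deepest point of the ocean):

```python
rho = 1025.0       # kg/m^3, seawater (rough)
g = 9.81           # m/s^2
depth = 10_994.0   # m, roughly the Challenger Deep

# Hydrostatic pressure p = rho * g * h, expressed in GPa
p_gpa = rho * g * depth / 1e9
print(round(p_gpa, 2))   # ~0.11 GPa: an order of magnitude below the
                         # ~1 GPa needed to reach ice VI at room temperature
```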
As you keep raising the pressure further, keeping the temperature constant, it'll go through more and more compact forms of solid ice (the diagram doesn't show "black hole", as that would be many, many orders of magnitude off the top, and can't be physically reached).
I stress "keeping the temperature constant" because (a) that's something your experiment will have to choose to do or not do and (b) it makes it much easier to read the diagram. The compression is adding energy to the water, from the work done by the pistons. If you go slow, and the cylinder isn't insulated, etc., that energy will conduct away as the cylinder naturally stays at the temperature of its environment. If you go fast, or the cylinder is insulated, the temperature will rise and the water will tend to go up-and-right in the diagram: you'll hit the transitions at different points. | {
"domain": "physics.stackexchange",
"id": 47751,
"tags": "thermodynamics, pressure, water, phase-transition, phase-diagram"
} |
An equation that describes a massless spin-1 particle | Question: The Proca action/equation describes a massive spin-1 particle, but I was unable to find an equation that describes a massless spin-1 particle.
Can anyone tell me what the name of this equation is?
Answer: It's called Maxwell's equations.
A spin-1 relativistic particle has to have a 4-vector $A_\mu$ and the equation may essentially be written as $\Box A_\mu=0$, like always. However, when we quantize it, we find out that the squared norm of the states created by the time-like component has the opposite sign to that of the spacelike components. This would make the Hilbert space indefinite - probabilities could be negative.
So the single timelike mode has to be made unphysical. The only way to do so is to impose a gauge symmetry. So configurations related by $A'_\mu = A_\mu +\partial_\mu \lambda$ – and that's essentially the only way a gauge invariance with one scalar parameter may act – must correspond to the same physical situations. Consequently, we may rewrite the equations as $\partial_\mu F^{\mu\nu}=0$, the usual equations of electromagnetism, which differ from the previous box-equation by a term that can be set to zero by a gauge choice. There can't be consistent massless spin-1 equations without a gauge invariance.
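As a quick check that the shifted configuration gives the same field strength (only the commuting of partial derivatives is used):

```latex
F'_{\mu\nu} = \partial_\mu (A_\nu + \partial_\nu \lambda) - \partial_\nu (A_\mu + \partial_\mu \lambda)
            = F_{\mu\nu} + (\partial_\mu \partial_\nu - \partial_\nu \partial_\mu)\lambda
            = F_{\mu\nu} .
```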
We never learn Maxwell's equations as a single-particle quantum mechanical equation because single-particle equations assume that the number of particles is approximately conserved. That's true in the non-relativistic limit. However, when speeds approach the speed of light, we're deep in the relativistic realm, and particle production and annihilation are important; quantum field theory is paramount. That's obviously the case for photons, which move at the speed of light. After all, we know that the number of photons is changing all the time. | {
"domain": "physics.stackexchange",
"id": 4948,
"tags": "quantum-mechanics, quantum-field-theory, quantum-spin"
} |
Particle wavefunction and gravity | Question: Suppose a particle has 50% probability of being at location $A$, and 50% probability being at location $B$ (see double slit experiment). According to QM the particle is at both $A$ and $B$ at the same time, so is there a force of gravity between the two particle superpositions? Is there self-gravity when a wave-function reaches over a finite distance?
I cannot seem to wrap my head around this. Is the gravity a proportional fraction of the entire mass, based on the probabilities? How do you combine a wavefunction with Gauss' law of gravity? I have been trying to think about self-gravity for a long time now.
Answer: There is some work by Roger Penrose on the subject. The paper's title is "On Gravity's Role in Quantum State Reduction", and it discusses how the interaction with spacetime of two states that have different mass distributions can cause the wavefunction to collapse into the one state or the other. There is also a following paper that discusses the same thing in Newtonian gravity, "Spherically-symmetric solutions of the Schrödinger-Newton equations" (and there is also this that you could have a look at).
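The quantitative core of that proposal is a lifetime estimate $\tau \sim \hbar/E_G$, where $E_G$ is the gravitational self-energy of the difference between the two superposed mass distributions. A rough order-of-magnitude sketch (the mass and separation below are my own illustrative choices):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
hbar = 1.055e-34   # J s, reduced Planck constant

m, d = 1e-17, 1e-7   # kg, m: a large nanoparticle superposed over ~100 nm
E_G = G * m**2 / d   # crude gravitational self-energy of the mass difference
tau = hbar / E_G     # Penrose-style collapse timescale
print(tau)           # ~1.6e3 s for this mass; far smaller masses give
                     # astronomically long lifetimes, i.e. ordinary QM
```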
There is one thing that I should point out, which is also pointed out by David. In a situation such as the one described in the question (the double-slit experiment), the particle is not at two different places at the same time interacting with itself. It is the two states (wavefunctions) that interact to give you the interference. | {
"domain": "physics.stackexchange",
"id": 81,
"tags": "quantum-mechanics, gravity, double-slit-experiment"
} |
Why does a servomotor have such low torque compared with stepper motors? | Question: It is easy to find stepper motors with torque over 2 Nm for a few coins; however, only expensive industrial servomotors achieve this torque.
Examples:
Stepper motor: 10 USD for 1.3 Nm with 2.8 V at 3.5 A, 0.6 kg
Servo AC motor: 278 USD for 2.39 Nm with 220 V AC at 3 A, 3000 rpm, 6 kg
Servo DC motor: 67 USD for about 0.4 Nm, 24 V at 7.8 A, 3000 rpm, 1 kg
Torque is not power, and closed-loop control has many benefits, but I am still confused: I believe many industrial purposes require quite high torque (CNCs, robotics, etc.) and (to my understanding) a lower rpm with higher torque would be preferable. Additionally, it seems to me that almost any use of a servomotor requires more than 0.4 Nm.
Why is this torque that different?
What is the benefit / purpose of those non-hybrid servo-motors?
What am I missing?
Answer: The operating principle is different. In essence, if you buy just a motor it will be just that. You can now add gearing that will reduce speed and increase torque, since torque is directly proportional to the gear ratio, i.e. the mechanical advantage:
$$
MA = \frac{v_{input}}{v_{output}}.
$$
You get a hundred times more torque if you gear 3000 rpm down to 30 rpm. So now the DC motor you listed would be a 40 Nm motor and that AC servo a 239 Nm monster. So indeed those are way more powerful.
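The arithmetic above as a tiny sketch (the efficiency parameter is my addition; real gearboxes lose a few percent per stage):

```python
def gear(torque_nm, rpm, ratio, efficiency=1.0):
    """Ideal speed-for-torque trade of a reduction gearbox:
    torque scales up by the ratio, speed scales down by it."""
    return torque_nm * ratio * efficiency, rpm / ratio

# The DC servo from the question, behind a 100:1 reduction
torque, speed = gear(0.4, 3000, 100)
print(torque, speed)   # ~40 Nm at 30 rpm
```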
Further, there may be gearing inside your stepper motor; it's quite common. You can get geared servos too; it is just less common, as designing the needed drive train is kind of the point of using normal motors: the drive train is the part where you optimize things for your application. That, or manufacture/design the motor yourself. | {
"domain": "engineering.stackexchange",
"id": 1763,
"tags": "motors, stepper-motor, servo"
} |
F# Simple message reader using TCP | Question: I have a review request about this part of the code:
let private startReading (client:TcpClient) (bus:IBus) =
//TODO check if that can be changed to rec parameter
let frameNumber = ref 0
let rec StartReading (client:TcpClient) =
async{
match client with
| null -> async { return 0 } |> Async.StartAsTask |> ignore
| _ ->
match (client.IsConnectionEstablished()) with
| true ->
incr frameNumber
DeployService.DeployCabninetService
let! message = MessagingHandler.ReadMessage client
match message with
| Some data ->
let frameNumber = uint32 frameNumber.Value
_logger.Info(sprintf "Received message with Frame Number: %i" frameNumber)
| None ->
_logger.Info(sprintf "Client disconnected : %O" client.Client.RemoteEndPoint)
return! DisconnectClient client None
return! StartReading client
| false ->
_logger.Info(sprintf "Client disconnected : %O" client.Client.RemoteEndPoint)
return! DisconnectClient client None
}
StartReading client |> Async.StartAsTask
It is a simple message-reading procedure from a TCP client in F#, but I feel this can be written more gracefully; in particular the TODO part, incrementing the frame number through a ref cell, feels wrong in F#.
Any suggestions?
Answer: Why do you call Async.StartAsTask instead of just returning the Async<T>, letting the caller handle that as needed?
let rec StartReading (client:TcpClient) =
I would call it reader and then give it the signature of:
let rec reader frameNo (client:TcpClient) =
in order to get rid of ref for frameNumber.
You can then initially call it:
reader 0 client |> Async.StartAsTask
and recursively:
return! reader (frameNo + 1) client | {
"domain": "codereview.stackexchange",
"id": 35129,
"tags": "f#, tcp"
} |
Why do people have skin color between fair and dark? | Question: According to Mendel's laws, traits do not show any blending, so according to his laws only people with fair or dark skin should exist. Do the alleles in this case blend? And why?
Answer: Mendel's laws were very basic, to aid in the comprehension of genetics. You think of Mendel's genes as a +- pair, where + is dominant and - is recessive, making our +- gene produce offspring with the + trait while still being a carrier of the - trait.
However, when it comes to skin colour, eye colour, hair colour, height, tendency to muscle growth, and a thousand other things, the gene is more like a series of ++-++-+++-; + is more recessive than -, but some of + will still show up over -.
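A toy simulation of this additive, many-locus picture (the number of genes and the 0/1 allele coding are illustrative, not real pigmentation genetics):

```python
import random
from collections import Counter
random.seed(1)

N_GENES = 6   # illustrative; real pigmentation involves many loci

def child(parent_a, parent_b):
    # each parent passes one randomly chosen allele per gene
    return [(random.choice(pa), random.choice(pb))
            for pa, pb in zip(parent_a, parent_b)]

dark = [(1, 1)] * N_GENES   # homozygous for pigment alleles everywhere
fair = [(0, 0)] * N_GENES   # homozygous for non-pigment alleles

f1 = child(dark, fair)
print(sum(a + b for a, b in f1))   # 6 of 12: every F1 child is intermediate

# F1 x F1: a near-continuous spectrum of tones, not two discrete classes
tones = Counter(sum(a + b for a, b in child(f1, f1)) for _ in range(10_000))
print(len(tones) > 2)              # True: many intermediate phenotypes
```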
In short, the colour of a person's skin is not a simple +- gene of dark and fair; everyone carries a certain amount of pigment genes and certain number of non-pigment genes. The amount that one carries of each of the genes will determine his skin colour and his/her ability to pass on the dark/light characteristic. | {
"domain": "biology.stackexchange",
"id": 5723,
"tags": "genetics"
} |
Laemmli-SDS-PAGE problems | Question: I did Laemmli SDS-PAGE on my ammonium sulphate precipitate, but I got a very weak band and a very weird part at the end of the gel. Please help me solve this problem. Thanks
Answer: Your gel looks just fine to me. It simply indicates that your ammonium sulfate precipitate contains massive amounts of this small protein at the bottom (whatever this is). The upper bands look fine, the amount loaded is appropriate: they are not overloaded like the bottom one, but not too faint either. The fact that your first and last lanes look a bit smeary is because of the massive amount of the small protein (lanes do look smeary when overloaded this much). The bottom band is definitely really some protein, and not a staining artifact; I have seen this shape before with purified proteins overloaded on SDS-PAGE.
One thing you could do, assuming you still have enough of these samples, is to run a complementary gel with 100 times less material. Dilute by a factor of 1/100 in 1X Laemmli denaturation buffer, and load a new gel with the same volume as for this one. You won't see the upper bands anymore, but you will have a much cleaner bottom band that you will hopefully be able to assign an apparent molecular weight to (or even cut out for mass spec identification; wear gloves and use clean tools if you do that, otherwise they will mostly detect keratin from your skin).
Assuming the point of this gel was to check the outcome of ammonium sulfate precipitation used as a purification step, well obviously it didn't work well to separate all these bands, and you might need to repeat and try different concentrations. But if the bottom band is the one you are trying to purify, then you were successful (the top bands represent less than 10% contamination, I would say by eyeball estimation). If you're interested in one of the top bands, then it clearly didn't work.
Is one of these lanes the supernatant? If so, you also have massive amounts of the small protein there (it's in every lane). Such a high enrichment looks suspicious: could that be purified lysozyme that you added during the purification procedure to help break down bacterial cell wall? (assuming you're trying to purify a protein over-expressed in E. coli). | {
"domain": "biology.stackexchange",
"id": 8403,
"tags": "biochemistry, proteins, gel-electrophoresis, purification, sds-page"
} |
Lorentz contraction from the book by Electrodynamics by Griffiths | Question: In the book Electrodynamics by Griffiths, derivation of Lorentz contraction is done by taking a round trip of light in a moving train.
My question is: why don't I get the expected result by considering only a one-way journey of the light, i.e. from the back end to the front end of the train?
Answer: You are misapplying the time dilation formula, $\Delta \overline t =\sqrt{1-v^2/c^2}\ \Delta t$.
$\Delta \overline t$ is the proper time, that is the time between two events as measured in a frame of reference in which the events occur at the same place. $\Delta t$ is an improper time, the time between the same two events as measured in a frame (moving with velocity ±$v$ wrt the other frame) in which the events occur in different places.
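To see the mismatch concretely, here is the one-way transit worked in both frames (illustrative numbers, in units with $c=1$; $L_0$ is the carriage's rest length):

```python
import math

c, L0, beta = 1.0, 1.0, 0.6          # units with c = 1; values illustrative
gamma = 1 / math.sqrt(1 - beta**2)

t_train = L0 / c                      # carriage frame: light crosses length L0
L = L0 / gamma                        # ground frame sees a contracted carriage
t_ground = L / (c - beta * c)         # light must chase the receding front wall

print(t_ground / t_train)             # 2.0 = sqrt((1+beta)/(1-beta))
print(gamma)                          # 1.25: NOT the ratio of the two times
```

The ratio of the two intervals is the Doppler-type factor, not $\gamma$, because neither interval is a proper time between these two events.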
In your one-way light transit thought experiment, the emission and arrival of the light do not occur at the same place – even in your railway carriage! | {
"domain": "physics.stackexchange",
"id": 85126,
"tags": "special-relativity"
} |
Using MFCC and MFCC Delta features with a CNN | Question: A lot of studies feed MFCCs as well as MFCC delta and double deltas directly to a CNN for audio classification. My question is, are the MFCC Deltas concatenated with the MFCC matrix? Most papers simply state they used MFCC + MFCC Delta + MFCC Double Delta and the plus sign is left to interpretation!
Answer: Yes, the delta and delta-delta variants are concatenated. However the details may vary a bit based on model type:
If the model takes a 1d (features,) input (such as a multi-layer-perceptron, logistic regression, random forest etc), then the delta coefficients are concatenated. So features is [mfcc1,mfcc2...,dmfcc1,dmfcc2... ].
For a model that takes 2d (time, features) input like an RNN, the deltas are concatenated on the features axis.
For a model that takes 3d (time, features, channels) inputs like a CNN, the delta coefficients usually get their own plane in the channels dimension. This ensures that the delta MFCC coefficient is in the same time x feature position as the corresponding MFCC coefficient, which is easiest for the convolutional kernel to exploit. | {
"domain": "datascience.stackexchange",
"id": 9869,
"tags": "cnn, audio-recognition"
} |
Asynchronous network callback code | Question: I did not get the job after submitting this piece of work in an interview, but I have no feedback to know what "BAD" things are inside this block of code.
The requirements are:
Connect to the server on a known port and IP
Asynchronously send a message to the server in your choice of format
Calculate and display the round trip time for each message and the average round trip time for all messages sent
The solution should not be so hard. But I just don't know what's wrong. Bad design? Bad naming? Bad practice?
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.UnknownHostException;
public class EchoClient {
private String hostname;
private int port;
private Socket clientSocket;
private BufferedReader inFromUser, inFromServer;
private DataOutputStream outToServer;
private double averageTime = 0;
private int count = 0;
public EchoClient(String hostname, int port){
this.hostname = hostname;
this.port = port;
try {
this.clientSocket = new Socket(this.hostname, this.port);
} catch (UnknownHostException e) {
System.out.println("Connection Error: unknown host");
System.exit(1);
} catch (IOException e) {
System.out.println("Connection Error: connection refused");
System.exit(1);
}
try{
this.inFromUser = new BufferedReader( new InputStreamReader(System.in));
this.outToServer = new DataOutputStream(this.clientSocket.getOutputStream());
this.inFromServer = new BufferedReader(
new InputStreamReader(this.clientSocket.getInputStream()));
} catch (IOException e) {
System.out.println("Error on Initializing echoclient");
System.exit(1);
}
}
public void start(){
System.out.println("Connecting to " + hostname + " with port No " + port);
String msgSend;
try {
while ((msgSend = inFromUser.readLine()) != null){
// sendMessage asynchronous
sendMessage(msgSend, new Callback(){
// callback function and calculate the average time
public void callback(long timeUsed, String msgReceived){
averageTime = (count * averageTime + (timeUsed)) / (count + 1);
++count;
System.out.println(msgReceived +
" rtt=" + (double)Math.round(timeUsed * 100)/100 + " ms" +
" artt=" + (double)Math.round(averageTime * 100)/100 + " ms");
}
});
}
} catch (IOException e) {
System.out.println("Error on reading message from user");
}
}
private void sendMessage(String message, Callback cb){
Thread sendMessageThread = new Thread(new SendMessageRequest(message, cb));
sendMessageThread.start();
}
interface Callback {
public void callback(long time, String msg);
}
class SendMessageRequest implements Runnable{
private String message;
private Callback cb;
SendMessageRequest(String message, Callback cb){
this.message = message;
this.cb = cb;
}
@Override
public void run() {
String msgReceived;
long timeStart, timeEnd, timeUsed;
try {
timeStart = System.nanoTime();
outToServer.writeBytes(this.message + '\n');
msgReceived = inFromServer.readLine();
timeEnd = System.nanoTime();
// Calculate the time and get the output
timeUsed = (timeEnd - timeStart) / 1000000;
cb.callback(timeUsed, msgReceived);
} catch (IOException e) {
System.out.println("Error on sending message to server");
}
}
}
public static void showUsage(){
System.out.println("Usage: java EchoClient [hostname] [portNo]");
}
/**
* Entry of the program
*/
public static void main(String[] args) {
String hostname = "";
int port = 0;
if (args.length < 2){
showUsage();
System.exit(0);
}
else{
hostname = args[0];
port = Integer.parseInt(args[1]);
}
EchoClient client = new EchoClient(hostname, port);
client.start();
}
}
Answer: I think the biggest problem is the lack of synchronization. You modify the averageTime and count variables in the callback, which runs concurrently. You should synchronize access to these variables. There is a good book on this topic: Java Concurrency in Practice. If you have time, read it; it's very useful.
Some other things:
I don't like inner classes. Reference: Effective Java Second Edition, Item 22: Favor static member classes over nonstatic.
I would also create an EchoClientMain class which contains the main method and parses the command line parameters. Furthermore, I'd move the Callback anonymous inner class out to a new file, and I'd create a Statistics class which would be responsible for calculating and maintaining the stats. (Check the Single responsibility principle on Wikipedia.)
Instead of System.exit() you should rethrow the exceptions. This class is not reusable since a simple error stops the whole application. Just create a custom exception class and throw it:
try {
this.clientSocket = new Socket(this.hostname, this.port);
} catch (final UnknownHostException uhe) {
throw new EchoClientException("Connection Error: unknown host", uhe);
}
Let the caller handle them. In this case the main method should catch the EchoClientException and print its message to the console.
try {
EchoClient client = new EchoClient(hostname, port);
client.start();
} catch (final EchoClientException ece) {
System.err.println(ece.getMessage());
}
You should NOT connect to the server in the constructor. I'd do it in the start() method.
Close the resources. Create a stop() method which closes the opened streams.
Check at least for null input parameters: Effective Java, Item 38: Check parameters for validity
Clean Code by Robert C. Martin is also worth reading. | {
"domain": "codereview.stackexchange",
"id": 899,
"tags": "java, multithreading, interview-questions, asynchronous, callback"
} |
Do we fix divergence of the vector potential $A$, because $\nabla \cdot \nabla \psi \ne 0$? | Question: Because $\nabla \times \nabla \psi = 0$, we can transform the vector potential $A \longmapsto A + \nabla \psi$, without changing the magnetic field.
Is the reason we specify $\nabla \cdot A$ in the gauge theory that $\nabla \cdot \nabla\psi\ne0$? Thus, we can always find a function $\psi$, such that $\nabla \cdot A$ equals whatever we want.
A bit more elaborated:
Let's say I found $A$ and $\phi$, and $\nabla \cdot A \ne 0$. In principle, nothing stops me from finding a function $\psi$, such that $\nabla \cdot (A + \nabla \psi)=0$ (Coulomb gauge). But now my scalar potential will be modified. Then from Gauss' law:
$\nabla \cdot( \nabla\phi+\frac{\partial \nabla\psi}{\partial t}- \frac{\partial }{\partial t}(A + \nabla \psi))=\nabla \cdot( \nabla\phi+\frac{\partial \nabla\psi}{\partial t})$. Thus, the new 4-potential in the Coulomb gauge $(\phi,A)\longmapsto (\phi+\frac{\partial \nabla\psi}{\partial t},A + \nabla\psi)$. So, the freedom of choosing the divergence stems from the freedom of choice of $\nabla \phi$.
Answer: Gauge freedom is the freedom to transform $\mathbf{A} \to \mathbf{A}' = \mathbf{A} + \nabla \psi$ and $\phi \to \phi' = \phi - \partial \psi/\partial t$ for any scalar function $\psi$. (Note the slight difference from your expressions.) If we want to demand that $\nabla \cdot \mathbf{A}' = 0$ (Coulomb), this boils down to being able to find a function $\psi$ such that $\nabla^2 \psi = -\nabla \cdot \mathbf{A}$. Since we know that there is always a function $\psi$ that satisfies this equation, the condition $\nabla \cdot \mathbf{A}' = 0$ is always realizable via a gauge transformation.
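As a concrete illustration, for the toy potential $\mathbf{A}=(xy,0,0)$ (so $\nabla\cdot\mathbf{A}=y$), the choice $\psi=-y^3/6$ solves $\nabla^2\psi=-\nabla\cdot\mathbf{A}$, and a finite-difference check confirms the transformed potential is divergence-free (fields and sample point here are chosen purely for illustration):

```python
# Check numerically that the gauge function psi = -y**3/6 brings the toy
# potential A = (x*y, 0, 0), whose divergence is y, into Coulomb gauge:
# div(A + grad psi) = 0.  Fields and sample point are illustrative.
def A(x, y, z):
    return (x * y, 0.0, 0.0)          # div A = y (nonzero in general)

def psi(x, y, z):
    return -y**3 / 6.0                # solves laplacian(psi) = -div A

h = 1e-5                              # finite-difference step

def grad(f, p):
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def A_prime(x, y, z):                 # gauge-transformed vector potential
    ax, ay, az = A(x, y, z)
    gx, gy, gz = grad(psi, (x, y, z))
    return (ax + gx, ay + gy, az + gz)

def div(F, p):
    x, y, z = p
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h) +
            (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h) +
            (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h))

p = (0.7, 1.3, -0.4)
print(div(A, p), div(A_prime, p))     # ~1.3 (= y) and ~0
```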
That said, there are plenty of other gauge conditions out there that we could impose that do not lead to $\nabla \cdot \mathbf{A}' = 0$. Other common choices include $\nabla \cdot \mathbf{A}' + \partial \phi'/\partial t= 0$ (Lorenz gauge), $\phi' = 0$ (temporal gauge), and $\mathbf{A}' \cdot \hat{n} = 0$ for some unit vector $\hat{n}$ (axial gauge). All of these are realizable, in the sense that given an arbitrary $\mathbf{A}$ and $\phi$, there always exists a function $\psi$ such that the transformed potentials satisfy the desired condition.
So it's not accurate to say that we must impose Coulomb gauge because we have gauge freedom. It's more accurate to say that we are allowed this choice because of gauge freedom, and that gauge freedom allows us many other choices as well. | {
"domain": "physics.stackexchange",
"id": 47465,
"tags": "electromagnetism, maxwell-equations, gauge-invariance, gauge"
} |
PDA with N-Stacks comparison with Turing Machines | Question: Is it possible to compare a PDA having N stacks with Turing Machines? Are they equally powerful in this situation?
It's been said that a PDA with 2 stacks is equally powerful to a Turing Machine. But what if we add more stacks, i.e. 3, 4, 5, ..., N, to the PDA; will it become more powerful, or will it serve the same purpose?
Answer: 2 stacks is enough for a PDA to be as powerful as a Turing Machine. Basically, you can pop from one stack and push onto the other to simulate moving the tape head across the tape, writing, etc.
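A minimal sketch of this tape-as-two-stacks idea (class and symbol names here are illustrative, not from any standard library):

```python
# A Turing-machine tape simulated with two stacks: `left` holds the
# cells to the left of the head (nearest cell on top), `right` holds
# the cell under the head plus everything to its right.
class TwoStackTape:
    def __init__(self, contents, blank="_"):
        self.blank = blank
        self.left = []
        self.right = list(reversed(contents))  # head cell ends up on top

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        # pop the head cell onto the left stack; grow with blanks as needed
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape("abc")
tape.write("X")          # overwrite 'a' with 'X'
tape.move_right()
print(tape.read())       # b
```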
In fact, a 2-counter machine is as powerful as a Turing Machine, though the proof is much more involved. There's a sketch of it on Wikipedia | {
"domain": "cs.stackexchange",
"id": 4199,
"tags": "turing-machines, pushdown-automata"
} |
Can a linear programming method be used to solve systems of inequalities with OR (disparate) compound inequalities? | Question: I recently discovered linear programming and it seemed perfect for a CS problem I wanted to solve a few months ago. This task involved solving a large quantity of inequalities at once.
For example, one such system could be
X1,X2,X3,X4 >= 0
X1 >= X3 + 3.0 or X1 < X3 - 7.2
X2 >= X4 + 5
X2 >= X3 + 3.0 or X2 < X3 - 5.3
X2 >= X1 + 7.2 or X2 < X1 - 5.3
And then some reasonably complex objective function in terms of X1, X2, X3, X4. In fact, the function mostly just needs to aim to reduce the spread of the variables, whilst finding a solution.
(Optimally, the program would aim to find a solution that meets as many inequalities as possible, to allow it to find a "close enough" solution when there is no actual solution)
I can't find anyone else talking about the limitations of linear programming, but I understand these inequalities are only "kinda" linear. If anyone out there, perhaps with a formal higher CS education, could clarify, I'd much appreciate it.
If this is not possible with normal linear programming algorithms, any leads into possible solutions would be much appreciated.
Thanks all in advance!
Answer: First off, a couple of observations.
If you allow three inequalities in each "compound inequality", then the problem of finding a feasible solution is NP-complete, since any 3-SAT problem can be recast as this problem.
The problem in parentheses, where you try to fit the maximum number of constraints, is NP-hard, for essentially the same reason: MAX-2-SAT is NP-hard.
OK, with that out of the way, this is called disjunctive programming, and the usual way to solve problems of this form is to introduce boolean variables to handle the disjunction, which turns it into a mixed-integer linear programming problem which is, again, NP-complete in general.
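For instance, the first disjunction from the question, $x_1 \ge x_3 + 3.0$ or $x_1 < x_3 - 7.2$, can be encoded with one binary variable $z$ and a sufficiently large constant $M$; the strict inequality is relaxed to $\le$ with a small tolerance $\epsilon$ (this is a standard "big-M" sketch, not the only possible encoding):

```latex
x_1 \ge x_3 + 3.0 - M(1 - z), \qquad
x_1 \le x_3 - 7.2 - \epsilon + M z, \qquad
z \in \{0, 1\}
```

With $z = 1$ the first constraint is active and the second is vacuous; with $z = 0$ the roles reverse.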
The feasibility problem (as opposed to the optimisation problem) can be directly implemented as a SMT problem and also has a natural expression in CLP(R). | {
"domain": "cs.stackexchange",
"id": 21348,
"tags": "algorithms, linear-programming, linear-algebra"
} |
Differences between 1+1D & 3+1D (background) electric field? | Question: in Sidney Coleman's paper More about massive Schwinger model (PDF). Section 2 "The origin of $\theta$" P.242 (beneath the eq. 2.5). Discuss about some differences between 1+1D & 3+1D electric field with background field $F$, Coleman's paper said that:
In 3+1D, "it is always energetically favorable for the vacuum to emit pairs until $F$ is brought down to zero". This means a background field $F$ in 3+1D is trivial, so it wouldn't matter.
In 1+1D, "it is not energetically favorable for the vacuum to produce a pair if $F\leq \frac{1}{2}e$". This is non-trivial, and thus gives rise to the $\theta$ term in the Schwinger model.
My question is, why in 3+1D is pair production always possible, so that $F$ can always be brought strictly to $0$? Shouldn't we consider that energy conservation will prohibit pair production when $F$ is less than a threshold value? Or am I missing something important in 3+1D?
Answer:
This essentially boils down to the discrete and constant nature of the electric field$^1$ $$E~=~\sum_{i=1}^n\frac{q_i}{2\epsilon_0}{\rm sgn}(x-x_i)\tag{1}$$ of $n$ point charges $q_1, \ldots, q_n$ at positions $x_1, \ldots, x_n$ in 1+1D.
While in more than 1+1D, the attractive Coulomb energy between a particle and an anti-particle gets dwarfed in comparison to the repulsive separation energy in the background electric field $F$, this is no longer true in 1+1D.
To derive the upper limit $|F|\leq\frac{e}{2\epsilon_0}$ for the background electric field $F$ in 1+1D mentioned by Coleman, here is one approach: Consider the energy density
$${\cal E}~=~\frac{\epsilon_0}{2}\left(F + \frac{e}{2\epsilon_0}{\rm sgn}(x-b) - \frac{e}{2\epsilon_0}{\rm sgn}(x-a)\right)^2\tag{2}$$
of the electric field after a pair separation. Let us assume $F,e>0$. Then $0<a<b<L$. (The other cases are similar.) The relevant pieces are the cross-terms (ct)
$$\begin{align} {\cal E}_{\rm ct}~\stackrel{(2)}{=}~& \frac{Fe}{2}\left[{\rm sgn}(x-b)- {\rm sgn}(x-a)\right]\cr &-\frac{e^2}{4\epsilon_0}\underbrace{{\rm sgn}(x-b) {\rm sgn}(x-a)}_{=1+{\rm sgn}(x-b)- {\rm sgn}(x-a)}\cr
~=~&\frac{e}{2}(F-\frac{e}{2\epsilon_0})\left[{\rm sgn}(x-b)- {\rm sgn}(x-a)\right] -\frac{e^2}{4\epsilon_0}.
\end{align}\tag{3}$$
The corresponding energy is the integral
$$ \begin{align} \int_0^L\! dx~{\cal E}_{\rm ct}~\stackrel{(3)}{=}~&\frac{e}{2}(F-\frac{e}{2\epsilon_0})\left[|x-b|- |x-a|\right]_0^L-\frac{e^2}{4\epsilon_0}\left[ x \right]_0^L\cr
~=~& -\frac{e}{2}(F-\frac{e}{2\epsilon_0}) 2(b-a)-\frac{e^2}{4\epsilon_0}L
.\end{align}\tag{4}$$
In particular, pair separation is energetically favorable iff $F>\frac{e}{2\epsilon_0}$.
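Eq. (4) can be sanity-checked numerically by integrating the cross-term density of eq. (3) over $[0,L]$ with a midpoint rule (units with $\epsilon_0=1$; the values of $e$, $F$, $a$, $b$, $L$ below are illustrative):

```python
import math

# Sanity-check eq. (4): integrate the cross-term energy density of
# eq. (3) over [0, L] with a midpoint rule and compare with the closed
# form.  Units with epsilon_0 = 1; e, F, a, b, L are illustrative.
e, eps0 = 1.0, 1.0
F, a, b, L = 0.8, 1.0, 3.0, 10.0

def sgn(u):
    return math.copysign(1.0, u)

def E_ct(x):                           # cross-term density, eq. (3)
    return (e / 2) * (F - e / (2 * eps0)) * (sgn(x - b) - sgn(x - a)) \
           - e**2 / (4 * eps0)

N = 200000
dx = L / N
numeric = sum(E_ct((k + 0.5) * dx) for k in range(N)) * dx

closed_form = -(e / 2) * (F - e / (2 * eps0)) * 2 * (b - a) \
              - e**2 / (4 * eps0) * L  # eq. (4)
print(numeric, closed_form)            # both ~ -3.1
```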
$^1$Eq. (1) follows from Gauss' law
$$\frac{dE}{dx}~=~\frac{\rho}{\epsilon_0}~=~\sum_{i=1}^n\frac{q_i}{\epsilon_0}\delta(x-x_i)\tag{5}$$
in 1+1D. | {
"domain": "physics.stackexchange",
"id": 99243,
"tags": "electromagnetism, quantum-field-theory, electric-fields, quantum-electrodynamics, pair-production"
} |
Golden and red colored light even after sunset | Question: I heard about the golden hour, but yesterday I saw golden and red colored patches in the sky even after sunset. Why does it happen? And I would like to know more about the science behind the golden hour.
Answer: This is a result of Rayleigh scattering, the same reason the sky is blue. As the sun sets the optical path for a photon passing through the atmosphere to your eye increases. Because of this increased distance through the atmosphere, more scattering occurs and the shorter wavelengths are scattered away, leaving the longer wavelengths. This results in light that is reaching your eye tending toward red and the yellow to orange and gold hues are produced on the way to red.
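The $\lambda^{-4}$ dependence of Rayleigh scattering makes the effect easy to quantify (the wavelengths below are just representative values for blue and red light):

```python
# Relative Rayleigh scattering strength scales as wavelength**-4,
# so shorter (blue) wavelengths scatter much more than longer (red) ones.
blue, red = 450e-9, 650e-9        # representative wavelengths in metres

ratio = (red / blue) ** 4         # how much more strongly blue scatters
print(round(ratio, 2))            # ~4.35
```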
The reason you can still see this persist after sunset is because the sun below the horizon can still reach you through scattering though clouds can enhance this effect (because clouds are good at scattering all visible wavelengths and clouds you can see can be illuminated by a sun you cannot see). This is the same principle as above, with the addition of the scattering by the cloud. You'll have Rayleigh scattering on the way to the cloud through the atmosphere, uniform scattering off the cloud and then more Rayleigh scattering between the cloud and your eye. The longer these distances become, the more red the light reaching your eye has become, and thus the sky will take on the reddening hue (with the yellow/orange/gold colors in the transition to red). | {
"domain": "earthscience.stackexchange",
"id": 133,
"tags": "atmosphere"
} |
Spherical symmetry and mean of angular momentum | Question: I have the following problem.
Consider a 3 dimensional system with spherical symmetry. Consider a state $|\psi \rangle$ such that the possible results of a measure of operator $L^2$ (square of angular momentum) are $0$, $2\hbar^2$ and $6\hbar^2$. Let all possible results for simultaneous measurements of $L^2$ and $L_z$ (third component of angular momentum) be equiprobable. Evaluate the average of $L_z$ and give a lower bound for the product $\Delta L_x \Delta L_y$.
Here's my attempt. $|\psi\rangle = \sum_{l,m} c_{l,m} |l,m\rangle$ for some coefficients. Due to the equiprobability request, $|c_{l,m}|^2=|c|^2$ for some $c$. Therefore, $c_{l,m} = c e^{i \theta_{l,m}}$. So,
$\langle L_z\rangle=\hbar |c|^2 \sum_{l,m} \sum_{l',m'}e^{-i \theta_{l',m'}}e^{i \theta_{l,m}} \langle l',m'| m |l,m\rangle$, since $L_z |l,m\rangle = \hbar m |l,m\rangle$.
Now, $m=-l,\dots,l$ and $l=0,1,2$, so $\sum_{m=-l}^{l}m=0$ and $\langle L_z\rangle=0$.
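This cancellation can be checked directly by enumerating the nine equiprobable $(l,m)$ states:

```python
# Enumerate the (l, m) eigenstates for l = 0, 1, 2 with equal
# probabilities and compute <m>, i.e. <L_z> in units of hbar.
states = [(l, m) for l in (0, 1, 2) for m in range(-l, l + 1)]
p = 1.0 / len(states)                  # equiprobable measurement outcomes
avg_m = sum(p * m for _, m in states)
print(len(states), avg_m)              # 9 states, average 0.0
```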
But, from Heisenberg's principle, $\Delta L_x \Delta L_y \geq \frac{1}{2} |\langle[ L_x, L_y]\rangle| $, so this would be $0$ as well... which is incorrect, from my understanding. What am I doing wrong?
Answer: Heisenberg's inequality says that $\Delta L_x \Delta L_y \geq 0$; this is not false! It tells you that if $\langle L_z \rangle = 0$, uncertainty doesn't forbid a state of definite $L_x$ or $L_y$. | {
"domain": "physics.stackexchange",
"id": 50554,
"tags": "quantum-mechanics, homework-and-exercises, angular-momentum, hilbert-space"
} |
Is there an energy lower than -13.6 eV in a given atom/element? | Question: The Hydrogen atom ground-state energy is -13.6 eV.
Is there an atom that has an energy level lower than -13.6 eV?
If not, then why, in semiconductor physics, does the integral over energy start at $-\infty$ instead of $-13.6\ eV$?
Answer: Yes. Neglecting effects of other electrons, the ground-state energy scales like $Z^2$. So probably all other elements have more negative ground-state energies than hydrogen does.
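A quick sketch of this $Z^2$ scaling for hydrogen-like (single-electron) ions, using the Bohr-model formula $E_1 = -13.6\ \mathrm{eV}\cdot Z^2$:

```python
# Ground-state energy of a hydrogen-like (single-electron) ion,
# from the Bohr-model scaling E_1 = -13.6 eV * Z**2.
def ground_state_energy_eV(Z):
    return -13.6 * Z ** 2

for name, Z in [("H", 1), ("He+", 2), ("Li2+", 3)]:
    print(name, round(ground_state_energy_eV(Z), 1))   # -13.6, -54.4, -122.4
```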
I recommend reviewing either the Bohr or Schrodinger models for a hydrogen-like atom that has a nucleus that has $Z$ protons. | {
"domain": "physics.stackexchange",
"id": 65622,
"tags": "condensed-matter, solid-state-physics, semiconductor-physics"
} |
Does the uncertainty principle say that conservation of momentum is violated in quantum mechanics? | Question: The uncertainty principle of Heisenberg says that the uncertainty in the position of a particle multiplied by the uncertainty of the momentum of a particle is always more than or equal to $\frac{\hbar}{2}$:
$$\Delta x \Delta p \ge \frac{\hbar}{2}$$
Rearranging, you get this:
$$\Delta p \ge \frac{\hbar}{2\Delta x},$$
which means that the uncertainty in momentum will never be $0$ unless the uncertainty in position is $\infty$.
But that can never happen... I mean at maximum the uncertainty in position can't be more than a certain value because the universe doesn't have infinite space, right? Besides, if you think about electrons around an atom, that’s finite space.
That would mean that there will always be an uncertainty in momentum... Meaning that momentum is not conserved...
This question has been bothering me lately. It would be great if someone could give me an explanation.
Answer: The uncertainty principle says that you cannot know position and momentum simultaneously. The momentum of a particle on its own can be known to arbitrary precision. The momentum of any object/interaction is always conserved. | {
"domain": "physics.stackexchange",
"id": 72395,
"tags": "quantum-mechanics, momentum, conservation-laws, heisenberg-uncertainty-principle"
} |
Cross Correlation Between Input-Output Sine Waves | Question: I am writing an algorithm to estimate the frequency transfer function of a system. For this, I want to use the Cross-correlation Between Input-Output Sine Waves method. There are a few things I don't understand about the method:
I - What does capital N mean?
II - Are Is and Ic scalars or vectors (depending on N)?
These are my thoughts :
I - N represents the number of outputs I collected for a specific w value with a sampling period.
II - Scalar.
Could you help?
Answer: These formulas are a discrete Fourier transform.
It's a very standard procedure when you analyze wave-like processes. It transforms the signal from the time domain to the frequency domain. The idea is that you take a sine wave with a specific frequency and multiply it by the signal; if the value is big, then the signal contains something similar to a wave of this frequency.
The $\frac{1}{NT}$ term is just a normalisation by the overall time of the sequence (N samples of length T).
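As a numeric sketch of this correlation method (the $2/N$ normalisation used here is one common convention; the book's formulas may scale $I_s$ and $I_c$ differently):

```python
import math

# Correlate a sampled sine wave with sin/cos at the test frequency w to
# recover its amplitude G and phase phi (signal values are illustrative).
G_true, phi_true = 1.8, 0.6        # "unknown" output amplitude and phase
w = 2 * math.pi * 5.0              # test frequency: 5 Hz
T = 0.001                          # sampling period
N = 2000                           # exactly 10 full periods of the 5 Hz wave

Is = Ic = 0.0
for k in range(N):
    y = G_true * math.sin(w * k * T + phi_true)   # measured output sample
    Is += y * math.sin(w * k * T)
    Ic += y * math.cos(w * k * T)
Is *= 2.0 / N                      # with this normalisation Is = G*cos(phi)
Ic *= 2.0 / N                      # and Ic = G*sin(phi)

G = math.hypot(Is, Ic)             # amplitude of the response
phi = math.atan2(Ic, Is)           # phase of the response
print(round(G, 3), round(phi, 3))  # recovers ~1.8 and ~0.6
```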
$I_{s}$ and $I_{c}$ are the amplitudes (intensities) for the sin and cos, respectively. You get them for the specific frequency $\omega$, and each is a single number. From these you can get the full amplitude (G) and phase $\phi$. Why do you need phase? Because it's important: if you have two waves with a similar frequency and $\phi = \pi$, then they will cancel each other, and if they have the same phase, they will double. | {
"domain": "datascience.stackexchange",
"id": 7410,
"tags": "correlation"
} |
The Church-Turing thesis and Hyper-computation | Question: I am not a computer scientist and this is my first question.
This question is a question in layman terms and I also want the answer in layman terms.
When I searched hyper-computation. There was a list of models of hyper-computers.
Now, my question is: what are the problems with those models? If it is possible to create models of computers which can compute non-Turing-computable things, doesn't that already prove the Church-Turing thesis wrong? And if it does prove the Church-Turing thesis wrong, then why do people still consider it to be a thing?
Answer: The Church–Turing thesis is about physically realizable machines. To the best of our knowledge, hypercomputation models cannot be realized in the physical world. They are a figment of our imagination.
If someone were to find a new law of physics which enables solving the halting problem, that would make a big fuss in the world of science. But I am not holding my breath.
"domain": "cs.stackexchange",
"id": 17589,
"tags": "church-turing-thesis, hypercomputation"
} |
How to identify component vectors correctly? | Question: Firstly I'm sorry for asking this very basic (& seemingly repeated) question but I want some more insight on it.
The main question is this only: Velocity of a ring tied to an inextensible rope
Now I HAVE read the accepted answer and yes, MATHEMATICALLY it's CORRECT.
But............
It doesn't answer why the questioner's following logic is incorrect:
The rope is pulling the ring at an angle, so only the component of the velocity of the rope in the direction of movement of the ring should act to make it move, which ultimately makes the velocity of the ring ${v}_{ring}={v}_{rope}\cos\left(\theta \right)$
Also, it seems counterintuitive to say that the velocity of the ring has a component along the rope, whereas it sounds more reasonable to say that the velocity of the rope has a component along the ring, because ORIGINALLY the force is applied on the rope only. So at an intuitive level, how do I make sense of this?
Even if ${v}_{rope}={v}_{ring}\cos\left(\theta \right)$ is correct, is there any shorter way to find out the CORRECT components of a vector in such confusing situations? I don't think that while solving mechanics problems one would go through such a long process just to identify the component vectors.
Answer: The reason that
$$v_{ring}=v_{rope}\cos (\theta)$$
is incorrect is that this assumes $\theta$ is constant, which it is not. Once you realise this, the correct derivation is quite straightforward. If the distance of the ring from the pulley is $y$ and the horizontal distance (along the bar) is $x$ then:
$y^2=x^2+\text{ constant}
\\ \displaystyle \Rightarrow 2y \frac {dy}{dt}=2x\frac {dx}{dt}
\\ \displaystyle \Rightarrow v_{rope}=\frac x y v_{ring} = v_{ring}\cos (\theta)$
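The relation can also be verified numerically by differentiating $y=\sqrt{x^2+h^2}$ with a finite difference (here $h$ is the height of the pulley above the bar, and the numbers are illustrative):

```python
import math

# Check v_rope = v_ring * cos(theta) numerically: the ring slides along
# the bar with speed v_ring, and the rope length is y = sqrt(x**2 + h**2)
# where h is the pulley height above the bar (numbers are illustrative).
h = 2.0
v_ring = 1.5
x = 3.0                                     # current position of the ring

y = lambda pos: math.sqrt(pos**2 + h**2)    # rope length at ring position pos

dt = 1e-6                                   # central finite difference in time
v_rope = (y(x + v_ring * dt) - y(x - v_ring * dt)) / (2 * dt)

cos_theta = x / y(x)
print(round(v_rope, 6), round(v_ring * cos_theta, 6))   # both ~1.248
```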
Although this result looks superficially like “taking components” it is better not to think of it like this. | {
"domain": "physics.stackexchange",
"id": 82196,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, vectors"
} |
Deep learning - aesthetics data modelling | Question: I want to train neural network on aesthetics. I am getting confused on how to go about for training data.
Assume I have a large data set of landscapes, portraits, wildlife, etc. which are aesthetic according to humans. But I want to train for quality: the kind of colours involved, contrast levels, background blur, etc.
How do I train images for these criteria? Is there a way of doing this with unsupervised learning?
Answer: Modeling aesthetics in media is an example of ordinal classification.
One of the most actively maintained datasets for this is Jen Aesthetics.
A relatively recent paper using deep learning towards aesthetics modeling is
this
Prior to deep learning era, research groups were trying to translate methods/guidelines used in the photography community to create/capture good quality pictures. There are several guidelines that you can explore with a bit of search online. One popular example is the 'rule of thirds'. Here the primary subject should not be centered in the image but offset and ideally centered at the intersection of 1/3 and 2/3 horizontal and vertical lines.
This is easy to translate into an algorithm: use salient object recognition or visual attention detection and measure the distance of the center of the salient/attention patch from the 4 rule-of-thirds points. Use this offset as a feature. The closer the salient patch is to any one of the rule-of-thirds points, the higher the aesthetic ordinal score for that image.
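A sketch of such a rule-of-thirds feature (the function and its normalisation are illustrative choices, not taken from a particular paper):

```python
import math

# Rule-of-thirds feature: normalised distance from the centre of a
# (hypothetical) salient patch to the nearest of the four one-third
# intersection points of the frame.  Function name is illustrative.
def rule_of_thirds_offset(cx, cy, width, height):
    """cx, cy: salient-patch centre in pixels; smaller means better placed."""
    points = [(width * fx, height * fy)
              for fx in (1 / 3, 2 / 3) for fy in (1 / 3, 2 / 3)]
    d = min(math.hypot(cx - px, cy - py) for px, py in points)
    return d / math.hypot(width, height)   # normalise by the frame diagonal

print(rule_of_thirds_offset(200, 100, 600, 300))            # ~0: on a third point
print(round(rule_of_thirds_offset(300, 150, 600, 300), 3))  # dead centre: ~0.167
```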
This is another good paper that explores what makes images popular.
Some researchers have also used the tags or descriptions of photos as features. The objective here is to learn an association between lexical features and image aesthetics. They have sourced their data from online repositories like Digital Photography Challenge.
This subjective task is needless to say very complex. If you plan to address it, I'd recommend beginning with a clear definition of the context within which you aim to address aesthetics.
Ideally, you'd like to map any given image (media) to some value in $[0, 1] \subset \mathbb{R}$.
However, this is very difficult unless you have access to a lot of training data. I suggest instead trying to simplify the problem: see if you can reliably map images to just two classes, good aesthetics and bad aesthetics.
You can successively generalize from binary classification to full fledged multi-class ordinal classification, for which you'll very likely have to keep increasing the depth of your CNN.
Good luck! After all, there is more to aesthetics than meets the eye! :-) | {
"domain": "datascience.stackexchange",
"id": 2041,
"tags": "neural-network, deep-learning, dataset, keras, data-cleaning"
} |
Testing uptime of my personal server | Question: I made some code to test my personal servers uptime. It logs in to my router page and sees if it returns status 200. A cron job runs it every 5 minutes. Please let me know if I can make the code more efficient.
<?php
//Date
date_default_timezone_set('America/Los_Angeles');
$date = date('m/d h:i:s a', time());
//Output
$myFile = dirname(__FILE__) . "/output.txt";
$fh = fopen($myFile, 'a+') or die("can't open file");
//Count #
$count_file = dirname(__FILE__) . "/OK.txt";
$fil = fopen($count_file, 'r');
$dat = fread($fil, filesize($count_file));
fclose($fil);
$fil = fopen($count_file, 'w');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "MYURL");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERPWD, "USER:PASSWORD");
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_UNRESTRICTED_AUTH, 1);
curl_exec($ch);
$retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if($retcode==200){
//fwrite($fh, "$date\tOK\n");
$OK = $dat + 1;
fwrite($fil, $OK);
echo "$date\tOK\n";
fclose($fh);
} else {
fwrite($fh, "$date\t\tDOWN\n");
echo "$date\tDOWN\n";
fclose($fh);
}
//Count OK/DOWN
$read = file_get_contents($myFile);
$DOWN = preg_match_all("/DOWN/", $read, $matches) - 1;
$UP = 100*bcdiv($OK,$OK+$DOWN,4);
//Remove first line
$read = file($myFile);
array_shift($read);
$server_up = "Server Uptime: " . $UP . "%\nOK: " . $OK . ", DOWN: " . $DOWN . "\n";
$read[0] = $server_up;
file_put_contents($myFile, $read);
?>
One thing I wanted to do was to somehow assign $OK to a variable and retrieve it the next time the page is loaded, but I do not think PHP can do this, so I saved it to an external file (OK.txt); I read the value every time and increment it every time my site returns 200.
Here is a sample output.txt file:
Server Uptime: 92.5%
OK: 37, DOWN: 3
09/30 06:48:28 pm DOWN
09/30 06:48:28 pm DOWN
09/30 06:48:28 pm DOWN
Answer: Naming
You should choose longer variable names. For example, with names like $fil and $fh it's hard to remember which one was which. You could name them $output and $count.
But all other names should be expressive as well. $dat might be $uptime_count, and $myFile -> $output_file_name, $fh -> $output_file, $count_file -> $count_file_name (it's just a string, not a file), $ch -> $curl, etc.
For some names it's obviously more important than for others, but good naming is always an improvement.
Variable Scoping
$fh is only used once, far away from where it was defined, inside the else statement. I would move the opening code here, so you don't have to think about $fh outside the else.
It's closed in the if as well, but that is unnecessary. Don't open up the file if you don't plan on writing to it.
Functions
The code isn't all that long, but I would still extract some functionality to its own function. Especially the counting of previous up and down times, and maybe the connecting to the server. | {
"domain": "codereview.stackexchange",
"id": 9838,
"tags": "php, http, curl"
} |
Turtlebot Restarting Continuously | Question:
I have a turtlebot that was working properly in Diamondback, but when I upgraded it to electric (and upgraded the operating system on the workstation and netbook to Ubuntu 11.10), I am having several errors with the turtlebot.
The turtlebot restarts its service continuously when I launch minimal.launch or turtlebot.launch, with the following output:
turtlebot@turtlebot-laptop:/opt/ros/electric/stacks/turtlebot/turtlebot_bringup/upstart$ roslaunch turtlebot.launch
... logging to /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/roslaunch-turtlebot-laptop-30106.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://192.168.11.59:44682/
SUMMARY
PARAMETERS
/use_sim_time
/robot_pose_ekf/sensor_timeout
/diagnostic_aggregator/analyzers/sensors/path
/robot_pose_ekf/imu_used
/turtlebot_node/odom_angular_scale_correction
/robot/name
/diagnostic_aggregator/analyzers/mode/timeout
/diagnostic_aggregator/analyzers/sensors/timeout
/diagnostic_aggregator/analyzers/power/type
/turtlebot_node/update_rate
/diagnostic_aggregator/analyzers/power/timeout
/diagnostic_aggregator/analyzers/mode/type
/turtlebot_node/gyro_scale_correction
/diagnostic_aggregator/analyzers/digital_io/path
/diagnostic_aggregator/analyzers/digital_io/timeout
/rosdistro
/robot_pose_ekf/odom_used
/robot_description
/diagnostic_aggregator/base_path
/robot_pose_ekf/freq
/app_manager/interface_master
/robot_pose_ekf/vo_used
/diagnostic_aggregator/analyzers/sensors/type
/diagnostic_aggregator/analyzers/digital_io/startswith
/diagnostic_aggregator/analyzers/power/path
/robot_pose_ekf/output_frame
/diagnostic_aggregator/analyzers/mode/path
/diagnostic_aggregator/analyzers/digital_io/type
/diagnostic_aggregator/analyzers/mode/startswith
/rosversion
/diagnostic_aggregator/pub_rate
/robot_state_publisher/publish_frequency
/diagnostic_aggregator/analyzers/sensors/startswith
/diagnostic_aggregator/analyzers/power/startswith
/turtlebot_node/bonus
/robot/type
/robot_pose_ekf/publish_tf
NODES
/
appmaster (app_manager/appmaster)
app_manager (app_manager/app_manager)
turtlebot_node (turtlebot_node/turtlebot_node.py)
turtlebot_laptop_battery (turtlebot_node/laptop_battery.py)
robot_state_publisher (robot_state_publisher/state_publisher)
diagnostic_aggregator (diagnostic_aggregator/aggregator_node)
robot_pose_ekf (robot_pose_ekf/robot_pose_ekf)
ROS_MASTER_URI=http://192.168.11.59:11311
core service [/rosout] found
process[appmaster-1]: started with pid [30127]
process[app_manager-2]: started with pid [30128]
process[turtlebot_node-3]: started with pid [30129]
process[turtlebot_laptop_battery-4]: started with pid [30130]
process[robot_state_publisher-5]: started with pid [30131]
process[diagnostic_aggregator-6]: started with pid [30132]
process[robot_pose_ekf-7]: started with pid [30133]
Unhandled exception in thread started by <bound method XmlRpcNode.run of <roslib.xmlrpc.XmlRpcNode object at 0x261fbd0>>
Traceback (most recent call last):
File "/opt/ros/electric/ros/core/roslib/src/roslib/xmlrpc.py", line 195, in run
self._run()
File "/opt/ros/electric/ros/core/roslib/src/roslib/xmlrpc.py", line 218, in _run
self.server = ThreadingXMLRPCServer((bind_address, port), log_requests)
File "/opt/ros/electric/ros/core/roslib/src/roslib/xmlrpc.py", line 98, in __init__
SimpleXMLRPCServer.__init__(self, addr, SilenceableXMLRPCRequestHandler, log_requests)
File "/usr/lib/python2.7/SimpleXMLRPCServer.py", line 590, in __init__
SocketServer.TCPServer.__init__(self, addr, requestHandler, bind_and_activate)
File "/usr/lib/python2.7/SocketServer.py", line 408, in __init__
self.server_bind()
File "/usr/lib/python2.7/SocketServer.py", line 419, in server_bind
self.socket.bind(self.server_address)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
[ WARN] [1327603137.252565670]: The root link base_footprint has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF.
turtlebot_apps.installed
loading installation data for [turtlebot_apps.installed]
[INFO] [WallTime: 1327603138.372287] Starting app manager for turtlebot
[INFO] [WallTime: 1327603138.412161] Waiting for foreign master [http://localhost:11312] to come up...
[INFO] [WallTime: 1327603138.419327] Foreign master is available
[INFO] [WallTime: 1327603138.457863] Registering (/turtlebot/app_list,http://192.168.11.59:50069/) on master http://localhost:11312
[INFO] [WallTime: 1327603138.470535] Registering (/turtlebot/application/app_status,http://192.168.11.59:50069/) on master http://localhost:11312
turtlebot_apps.installed
[INFO] [WallTime: 1327603138.485023] Registering service (/turtlebot/list_apps,rosrpc://192.168.11.59:46104) on master http://localhost:11312
[INFO] [WallTime: 1327603138.496940] Registering service (/turtlebot/start_app,rosrpc://192.168.11.59:46104) on master http://localhost:11312
[INFO] [WallTime: 1327603138.511035] Registering service (/turtlebot/stop_app,rosrpc://192.168.11.59:46104) on master http://localhost:11312
[INFO] [WallTime: 1327603141.503025] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603141.503945] update_rate: 30.0
[INFO] [WallTime: 1327603141.504655] drive mode: twist
[INFO] [WallTime: 1327603141.505353] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [30488]
[INFO] [WallTime: 1327603151.363323] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603151.364292] update_rate: 30.0
[INFO] [WallTime: 1327603151.365230] drive mode: twist
[INFO] [WallTime: 1327603151.366290] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [30684]
[INFO] [WallTime: 1327603161.019963] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603161.020760] update_rate: 30.0
[INFO] [WallTime: 1327603161.021372] drive mode: twist
[INFO] [WallTime: 1327603161.022025] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [30881]
[INFO] [WallTime: 1327603170.690763] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603170.691764] update_rate: 30.0
[INFO] [WallTime: 1327603170.692763] drive mode: twist
[INFO] [WallTime: 1327603170.693762] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [31074]
[INFO] [WallTime: 1327603180.273073] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603180.274148] update_rate: 30.0
[INFO] [WallTime: 1327603180.275294] drive mode: twist
[INFO] [WallTime: 1327603180.276331] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [31269]
[INFO] [WallTime: 1327603190.048126] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603190.049202] update_rate: 30.0
[INFO] [WallTime: 1327603190.050300] drive mode: twist
[INFO] [WallTime: 1327603190.051241] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [31465]
[INFO] [WallTime: 1327603199.952413] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603199.953996] update_rate: 30.0
[INFO] [WallTime: 1327603199.955070] drive mode: twist
[INFO] [WallTime: 1327603199.956104] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [31659]
[INFO] [WallTime: 1327603209.652445] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603209.653340] update_rate: 30.0
[INFO] [WallTime: 1327603209.654194] drive mode: twist
[INFO] [WallTime: 1327603209.655096] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [31851]
[INFO] [WallTime: 1327603219.401412] serial port: /dev/ttyUSB0
[INFO] [WallTime: 1327603219.403077] update_rate: 30.0
[INFO] [WallTime: 1327603219.404601] drive mode: twist
[INFO] [WallTime: 1327603219.406203] has gyro: True
shutdown request: new node registered with same name
[turtlebot_node-3] process has finished cleanly.
log file: /home/turtlebot/.ros/log/7dcff916-484a-11e1-b127-485d607f80f5/turtlebot_node-3*.log
respawning...
[turtlebot_node-3] restarting process
process[turtlebot_node-3]: started with pid [32046]
When I try to turn on breaker 0, I get the following error:
Service call failed with error: service [/turtlebot_node/set_digital_outputs] unavailable
On the turtlebot dashboard, the message log contains the following warning:
Node: /diagnostic_aggregator
Time: 1327603137.674960700
Severity: Warn
Location: /tmp/buildd/ros-electric-diagnostics-1.6.1/debian/ros-electric-diagnostics/opt/ros/electric/stacks/diagnostics/diagnostic_aggregator/src/analyzer_group.cpp:AnalyzerGroup::init:109
Published Topics: /rosout
Analyzer specification should now include the package name. You are using a deprecated API. Please switch from GenericAnalyzer to diagnostic_aggregator/GenericAnalyzer in your Analyzer specification.
The practical upshot of this is that I cannot keep breaker 0 on in order to test the turtlebot arm which I am trying to run off of it. I am, however, able to run programs like the teleoperation successfully. Does anyone have insight as to why this might be happening, and what I can do to fix it? Thank you very much!
Originally posted by Aroarus on ROS Answers with karma: 122 on 2012-01-26
Post score: 0
Answer:
The turtlebot launch files run as a daemon in electric, which would cause everything to continuously restart because the two launch files are killing each other and respawning. You don't need to launch minimal.launch in electric; see this tutorial: http://ros.org/wiki/turtlebot_bringup/Tutorials/TurtleBot%20Bringup
Originally posted by mmwise with karma: 8372 on 2012-01-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Aroarus on 2012-02-08:
This is no longer working, and I have generated a new post about it. It worked once, but has not done so since. Starting the turtlebot service still leaves the dashboard stale forever. The only way I can get the dashboard to light up is to kill the turtlebot service and run minimal.launch.
Comment by Aroarus on 2012-01-26:
That fixed everything! The dashboard was appearing stale when I started the turtlebot service, so I assumed that I also had to run minimal.launch (based on my diamondback experience), but it turns out that it just needed a minute to get going. Thanks! | {
"domain": "robotics.stackexchange",
"id": 8010,
"tags": "ros, turtlebot, turtlebot-arm, turtlebot-bringup"
} |
Why am I getting different prediction result after every run? | Question: I have a simple lstm model
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()
on which I train and test on the same data, but each time I get different predictions. Why?
If the model parameters and train and test are same why the prediction is changing on every run?
If the results are not reproducible what is the point of training and testing lstm model?
the rmse value is almost similar but the predicted values are way off on each run.
Answer: Do you specify the random seed anywhere in your code? If you don't, that might be the explanation why your RMSE value differs on each run for your train/test datasets.
You could use the set_random_seed() function to set the random seed and have your training be more deterministic. You can also use enable_op_determinism() to make it even more deterministic, but the training speed will suffer as a result.
import tensorflow as tf
tf.keras.utils.set_random_seed(1)
tf.config.experimental.enable_op_determinism()
utils.set_random_seed() will automatically set the seed of random.seed() and numpy.random.seed() as you can see in the linked documentation, so you don't need to import and set those.
On a second note, did you split the train/test set ahead of training or do you use the built-in functions of Tensorflow/Keras to do this? If so, the data split will vary based on the random seed as well. | {
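That last point can be demonstrated without TensorFlow. The sketch below (a NumPy stand-in, not the asker's model or data) shows that a shuffled train/test split is reproducible only when the seed is fixed:

```python
import numpy as np

def split(data, test_fraction=0.2, seed=None):
    """Shuffle indices and split into train/test; a fixed seed makes it reproducible."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_test = int(len(data) * test_fraction)
    return idx[n_test:], idx[:n_test]  # train indices, test indices

data = np.arange(100)
train_a, test_a = split(data, seed=1)
train_b, test_b = split(data, seed=1)
# Same seed, same split -- so the model would see identical data on every run.
assert (train_a == train_b).all() and (test_a == test_b).all()
```

Without the seed argument, two calls would almost certainly disagree, which is already enough to change the fitted weights and hence the predictions.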
"domain": "datascience.stackexchange",
"id": 10661,
"tags": "lstm, training, prediction, validation"
} |
IIR - How the gain is calculated? | Question: I try to extend my understanding of digital filter design, so I play around with some YouTube videos and the MicroModeler DSP with the following IIR design (based on a video):
I use the following parameters:
Zeros: 0.57 ± 0.78i
Poles: 0.51 ± 0.7i
How does the calculator generate the additional gain value of 0.90134? As I understand it so far, the gain value is generated from the equation of the zeros, but I can't calculate it myself.
Answer: Apparently it's normalized so that the gain at Nyquist is 0 dB. Potentially they just normalize to the maximum being 0 dB. Gain at Nyquist is given by
$$H(\omega = \pi) = \frac{b_0-b_1+b_2}{a_0-a_1+a_2}$$
whereas gain at DC is
$$H(\omega = 0) = \frac{b_0+b_1+b_2}{a_0+a_1+a_2}$$
Both values are real. | {
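As a numeric cross-check (a NumPy sketch; it assumes the tool normalizes the gain at Nyquist, as suggested above), forming the second-order polynomials from the conjugate zero/pole pairs and evaluating at $z=-1$ reproduces the 0.90134 value:

```python
import numpy as np

# z^2 - 2*Re(r)*z + |r|^2 for a conjugate root pair r = x +/- yi.
b = [1.0, -2 * 0.57, 0.57**2 + 0.78**2]  # numerator, from the zeros
a = [1.0, -2 * 0.51, 0.51**2 + 0.70**2]  # denominator, from the poles

# Normalization constant k such that k * H(z=-1) = 1 (0 dB at Nyquist):
gain = np.polyval(a, -1.0) / np.polyval(b, -1.0)
print(round(gain, 5))  # 0.90134
```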
"domain": "dsp.stackexchange",
"id": 8559,
"tags": "infinite-impulse-response"
} |
Can the fabric of space-time be contoured into hills instead of just wells? | Question: Einstein's general theory of relativity states that gravity is the distortion of space-time into gravity wells. In order to illustrate this, a flat plane is used to represent undistorted space-time grid. Around mass the flat space-time fabric is shown sinking down into a parabolic well which encloses the mass.
According to my primitive understanding of this representation:
The depth of the well is proportional to the total mass of the object.
The diameter of the well at the top is proportional to the volume of the mass.
Thus the space-time distortion of a black hole left after a single star's death would be represented by a narrow but very deep parabolic well depression in the otherwise flat plane grid fabric.
A star like our sun's gravity well would be illustrated by a wider diameter at the top of the well but not as deep as the above mentioned black hole's well.
On the other hand, an entire galaxy gravity well would be illustrated by an extremely wide but shallower parabolic gravity well depression.
Please indicate if my understanding of the space-time fabric distortion above is in error.
I was wondering if there was more to this illustration that can help explain dark matter and dark energy.
Dark matter was theorized to explain the intra galactic phenomenon of orbital speed of stars in the outer arms of the galaxy being greater than the escape velocity for the galaxy at the said distance from galaxy center. The galaxy mass must be much greater than the observed mass to keep these orbiting stars from flying off due to centripetal force.
Dark energy was theorized in the attempt to explain the observed deep space red shift data that the expansion of the universe is accelerating. Sort of like an anti-gravitation force pushing the distant galaxies apart.
My intuition tells me that this illustration is over-simplified and may be omitting a more comprehensive overall effect on how mass distorts space-time. Indeed, completely flat planes are unnatural as are parabolic depressions without rims along the perimeter.
Consider that just as mass causes wells and depressions in the space-time fabric, why can't lack of mass in intergalactic space cause a hill or bulge in the other direction? Such a bulge or hill would have the opposite effect of mutual attraction of masses but rather mutual repulsion of them. The bulge or hill in space-time would be significant only at extremely long distances between galaxies where the inter-galactic space mass density would be extremely low to almost perfect vacuum. This mutual repulsion of distant galaxies due to the inflation of the space-time grid would explain the accelerating expansion of the universe without resorting to dark energy.
Accordingly, if we can imagine a space-time inflationary hill along the perimeter of a galactic gravity well, the extra relative depth from the top of the hill to the space time grid plane would explain the greater overall mutual attraction throughout the galaxy. The gravity well would be deeper due to the hill on the perimeter, thus raising the escape velocity for masses in the galaxy, eliminating the need to explain with dark matter.
Extending this train of thought using nature as example:
Just as a mass dropped in a pond of still water causes concentric ripples across the planar water surface, a galaxy would also cause a multitude of concentric hills (inflation) and valleys (compression) of the space-time grid. This would start with the largest inflationary bulge at the rim of the galaxy's gravity well, followed by concentric alternating valleys and hills of decreasing size with increasing distance from the galaxy.
The distance between the peaks and valleys of space time would be astronomical, thousands if not millions or billions of light years.
We can also imagine at some point in inter-galactic space the ripples merging into huge hills in space time grid causing the accelerating expansion of universe noted above.
The large valleys are where we would find galaxy clusters.
Is there any reason why the Einstein model of gravity is only the case of the compression of space-time in our realm of experience?
Answer: If you take an embedded spacetime slice and turn it upside down, it has the same intrinsic geometry, and therefore the same gravitational effect. A "hill" is the same as a "well."
The important property of the well/hill shape is that any circle around the center has a circumference smaller than π times its diameter. If the mass in the center was negative, then (theoretically, plugging in a negative value to the Schwarzschild solution) the circumference would be larger than π times the diameter. I don't know whether you'd be able to embed this exactly, but I imagine it would look something like a crocheted hyperbolic plane. It wouldn't look like a hill or well.
A locally higher value of the Newtonian gravitational potential does have an antigravity effect, a graph of it does look like a hill, and if you imagine the graph as a rubber sheet you get a surprisingly accurate model of Newtonian gravity. But spacetime in general relativity doesn't work that way. | {
"domain": "physics.stackexchange",
"id": 95530,
"tags": "general-relativity, gravity, spacetime, space-expansion, curvature"
} |
DC gain from an impulse response | Question: I am studying for my exam in signal processing. For one of the old exam sets, a discrete impulse response of a filter is given as $h[n]$
\begin{bmatrix}
-3&0&-3&0
\end{bmatrix}
With a frequency response $H[k]$. What is the easiest way of calculating the DC gain from this information? I have tried using a DFT but did not see any usable conclusion.
Answer: The DC gain is simply the sum of filter taps or coefficients. This is the value of the frequency response at DC (i.e. $0\ \rm Hz$), or equivalently
$$
H(0) = \sum_{n = 0}^{N - 1}h[n]\tag{1}
$$
Because, for a digital FIR filter of length $N$ with impulse response given by Equation $(2)$
$$
\big\{h[n]\big\}, \quad\text{with}\quad 0\le n\le N -1\tag{2}
$$
the frequency response is the $z$-transform at the unit circle (i.e. $z = e^{j2\pi f}$), or in this case, equivalently
$$
H(e^{j\omega}) \equiv H(\omega) = \mathcal F \left\{h[n]\right\} \triangleq \sum_{n = 0}^{N - 1}h[n]e^{-j\omega n } = \sum_{n = 0}^{N - 1}h[n]e^{-j2\pi n f}\tag{3}
$$
Then you can think at DC (i.e. at $0\ \rm Hz$ or $f = 0$ or $\omega = 0$) to see the why of Equation $(1)$.
From DTFT to DFT
The DFT is the one practically computed in place of the DTFT. For finite-length signals (as is the case here), it provides the frequency-domain samples of the DTFT. Then $H[k]$, which is the DFT of the $h[n]$, is computed for $N$-point as shown in Equation $(4)$
$$
H[k] = H(e^{j\omega}) \bigg\vert_{\omega = 2\pi k/N}\quad\text{with}\quad 0\leq k \leq N - 1\tag{4}
$$
i.e. $H[k]$ consists of equally-spaced (by $2\pi/N$) samples of $H(e^{j\omega})$.
Noting that
$$\omega \equiv 2\pi f \quad\text{with}\quad -\pi \leq \omega \leq \pi\quad\text{and}\quad -\frac 12 \leq f \leq \frac 12$$
With $\omega$ in [radians/sample] and $f$ in [cycles/sample].
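The same numbers fall out of a quick NumPy check (a sketch using the four-tap $h[n]$ above):

```python
import numpy as np

h = np.array([-3.0, 0.0, -3.0, 0.0])

dc = h.sum()                       # Equation (1): DC gain is the tap sum, here -6
H = np.fft.fft(h)
assert np.isclose(H[0].real, dc)   # DFT bin k = 0 agrees (Equation (4))

# Equation (3) evaluated at w = pi/4:
w = np.pi / 4
H_w = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

print(round(20 * np.log10(abs(dc)), 4))    # 15.563  (dB magnitude at DC)
print(round(20 * np.log10(abs(H_w)), 4))   # 12.5527 (dB magnitude at pi/4)
```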
In MATLAB
This can be done using MATLAB's freqz function as follows:
>> [h, w] = freqz([-3 0 -3 0], 1, [0 , pi/4])
h =
-6.000000000000000 + 0.000000000000000i -3.000000000000000 + 3.000000000000000i
w =
0 0.785398163397448
>>
Giving you the flexibility to specify at whatever angular frequencies $\omega$ (in example above the evaluation is at two frequencies $0$ and $\pi/4$ rad/samples) you would want to evaluate the frequency response, (Here we're interested in the value at $\omega = 0$). The magnitude response with these two points is shown below
Note that the two points in magenta correspond exactly to
>> 20*log10(abs(h))
ans =
15.563025007672874 12.552725051033063
>>
The values at $\omega = 0$ and $\omega = \pi/4$ respectively. | {
"domain": "dsp.stackexchange",
"id": 9677,
"tags": "filters, frequency-response, impulse-response"
} |
An electric motor operates on a 50 V supply and a current of 12 A. If the efficiency of the motor is 30%, what is the resistance of the motor winding? | Question: This is not a textbook question; I want to understand why it generates two different values, and I cannot do that without invoking some form of numerical value in my question.
Ok, so I get that the power dissipated as heat is $$P = 0.7 \times 50 \times 12 = 420\ \mathrm{W}$$
To calculate the resistance, if I use the formula $$R=\frac{P}{I^2}$$
I get $R \approx 2.9\ \Omega$.
However, if I use $$R=\frac{V^2}{P}$$
I get $R \approx 5.95\ \Omega$.
Why is there a difference? I assume it might be because the potential difference across the resistor is different from 50 V (by conservation of current, $I$ must be the same), but why would an inefficient machine behave this way?
Answer: Energy losses due to current flowing in an element of wire are given by the formula
$$
RI^2
$$
where $R$ is resistance of the wire element and $I$ is current there.
The formula
$$
\frac{V^2}{R}
$$
is not applicable in an electric motor because of presence of "back-emf" (motional electromotive force due to motion of the winding in magnetic field) acting on the current, and this effect is not described by the applied voltage $V$.
The latter formula is applicable only when the only force pushing the current against the resistance is voltage (conservative electric field), like in a resistor in ordinary circuit (not moving in magnetic field). | {
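The numbers can be reconciled explicitly (a sketch assuming the standard DC-motor model $V = IR + \varepsilon$, where $\varepsilon$ is the back-emf; the model is implied, not stated, in the answer):

```python
V, I, eta = 50.0, 12.0, 0.30

P_in = V * I            # 600 W drawn from the supply
P_mech = eta * P_in     # 180 W of useful mechanical output
P_heat = P_in - P_mech  # 420 W dissipated in the winding

R = P_heat / I**2       # R*I^2 is genuinely the heat, so this is valid
V_R = I * R             # voltage actually dropped across the resistance
emf = V - V_R           # the rest of the 50 V is back-emf, not resistive drop

print(round(R, 3), round(V_R, 1), round(emf, 1))  # 2.917 35.0 15.0
assert abs(emf * I - P_mech) < 1e-9  # back-emf power equals the mechanical output
```

The $V^2/R$ formula implicitly assumes the full 50 V drops across the resistance, which the 15 V back-emf contradicts; hence the discrepancy.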
"domain": "physics.stackexchange",
"id": 89827,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage, power"
} |
Maximum distance that can be reached | Question: A stone is located at the point (0,0) of an infinite grid. The stone has exactly $n$ possible moves, not necessarily unique, each described by a vector of integer coordinates. The stone can make each move at most once, and the moves it makes may be arranged in any order. The goal is to reach a point as far (in Euclidean distance) from the initial position as possible. How far can the stone reach? (The required output is the square of the maximum distance.)
Example:
Consider $n = 5$ with the vectors
[2,-2], [-2,-2], [0,2], [3,1], [-3,1]
The answer is $26$.
An optimal way is to use the vectors [0,2], [3,1], and [2,-2], or alternatively [0,2], [-3,1], and [-2,-2].
Link for vector clarity: https://ibb.co/mJByc99
Answer: Addition of vectors is commutative, so the order of moves is irrelevant. For example:
$(0,2) + (3,1) + (2,-2) = (0,2) + (2,-2) + (3,1) = (3,1) + (0,2) + (2,-2) = (5,1)$
So a simple brute force method is to calculate the distance traveled for all subsets of the set of $n$ possible moves. There are $2^n$ such subsets, but you don't need to consider the empty set. So the size of your search space is actually $2^n-1$. | {
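A minimal implementation of that brute force (Python; the example's five vectors are used as a check):

```python
from itertools import combinations

def max_squared_distance(moves):
    """Enumerate every non-empty subset of moves; return the best squared distance."""
    best = 0
    for r in range(1, len(moves) + 1):
        for subset in combinations(moves, r):
            x = sum(v[0] for v in subset)
            y = sum(v[1] for v in subset)
            best = max(best, x * x + y * y)
    return best

moves = [(2, -2), (-2, -2), (0, 2), (3, 1), (-3, 1)]
print(max_squared_distance(moves))  # 26
```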
"domain": "cs.stackexchange",
"id": 15495,
"tags": "computational-geometry, dynamic-programming"
} |
Using reflection in a test to check if a private variable is really null after a function, is this okay? | Question: I am using reflection to check if a private variable is set to null after the logout function. This is needed because the getUser function will always attempt to set and return a user if no user is set. Am I doing something wrong, or can I do something better?
This is the bean that has the logout and getUser function:
@ManagedBean(name = "authBean")
@SessionScoped
public class AuthorizationBean implements Serializable{
//Data access object for the users
@Inject
UserDao userDao;
private User user; // The JPA entity.
public User getUser() {
if (user == null) {
user = (User) FacesContext.getCurrentInstance().getExternalContext().getSessionMap().get("user");
if (user == null) {
Principal principal = FacesContext.getCurrentInstance().getExternalContext().getUserPrincipal();
if (principal != null) {
user = userDao.findByEmail(principal.getName()); // Find User by j_username.
}
}
}
return user;
}
/**
* Function that handles the logout
* @return Redirect string that points to the login page
*/
public String doLogout() {
// invalidate the session, so that the session is removed
FacesContext.getCurrentInstance().getExternalContext().invalidateSession();
user = null;
// return redirect to login page
return "/login.xhtml?faces-redirect=true";
}
}
This is the test class, with the reflection:
@RunWith(PowerMockRunner.class)
@PrepareForTest(FacesContext.class)
public class AuthorizationBeanTest {
private AuthorizationBean authorizationBean;
@Mock
User user;
@Mock
FacesContext facesContext;
@Mock
ExternalContext externalContext;
@Before
public void setUp() {
authorizationBean = new AuthorizationBean();
Map<String,Object> sessionMap = new HashMap<>();
sessionMap.put("user", user);
//Mocking the static function getCurrentInstance from FacesContext,
//so a mocked user can be returned for the test
PowerMockito.mockStatic(FacesContext.class);
PowerMockito.when(FacesContext.getCurrentInstance()).thenReturn(facesContext);
when(facesContext.getExternalContext()).thenReturn(externalContext);
when(externalContext.getSessionMap()).thenReturn(sessionMap);
}
@Test
public void doLogoutTest() throws NoSuchFieldException, IllegalAccessException {
assertNotNull(authorizationBean.getUser());
assertEquals("/login.xhtml?faces-redirect=true", authorizationBean.doLogout());
//Check through reflection if the private field user is now null
Field userField = authorizationBean.getClass().getDeclaredField("user");
userField.setAccessible(true);
User testUser = (User) userField.get(authorizationBean);
assertNull(testUser);
}
}
Answer: Personally I wouldn't do this.
The reason is that when your field changes name, your test fails: the field name is hardcoded in the test, and refactoring does not propagate it there.
What should I do?
Normally your AuthorizationBeanTest is in the same package as your AuthorizationBean.
If that is the case, you can use a protected method.
Example :
protected boolean isUserNull() {
    return user == null;
}
This is completely safe in your POJO; nobody can even access the User object.
And outside the package no one will ever see this method.
The advantage is when you refactor user or the method name, your test is automatically updated. | {
"domain": "codereview.stackexchange",
"id": 17021,
"tags": "java, unit-testing, reflection"
} |
Odometry error and maximum and minimum speed | Question:
Hi.
First I apologize for my limited English proficiency.
I am programming a simple robot, using Stage to simulate the real world.
First question (several sub-questions about the same thing):
How can I define an error for the data returned by the odometer? (Specifically for the current angle in the Z axis, msg->pose.pose.orientation.z.)
Can it be defined in the .cfg file that is passed to Stage?
Where, and under what name?
Second question:
Where can I define the maximum and minimum speed of my robot (linear and angular)?
Thank you very much.
Originally posted by q2p0 on ROS Answers with karma: 11 on 2012-03-22
Post score: 1
Original comments
Comment by q2p0 on 2012-03-23:
Ok, question 1 is quite stupid, because I can make a function that inserts a random error with the ranges I want. XD
I am making a new (off-topic?) question similar to question 2:
Where can I define the error that my robot makes in the final rotation I get when I publish an angular velocity?
Comment by tfoote on 2012-03-26:
@q2p0 Please ask single questions at a time so that answers can be isolated and marked as accepted.
Answer:
Question number one has a few answers:
If your measurement error is constant, you can define it using the ROS parameter server in the launchfile for your robot. Then read the error parameter from the parameter server and insert it into the orientation covariance matrix in a geometry_msgs/PoseWithCovariance. You probably want to include that message as part of a nav_msgs/Odometry message.
In the following example, the parameter z_rotation_covariance is defined in the launchfile:
<launch>
<param name="z_rotation_covariance" value="0.05" />
</launch>
To use the parameter value in code, you'll want to use the rosparam API for your client.
If your error is a calculated or accumulated error, you will want to set up a service to calculate the changes as a function on some input and store the error in rosbag and another service to retrieve the error from rosbag whenever you need to use it in the orientation covariance matrix in a geometry_msgs/PoseWithCovariance. You probably want to include that message as part of a nav_msgs/Odometry message.
You will need to subscribe to whatever topics are publishing the parameters of your error estimation function, probably using a message_filter to combine the messages of several different topics into a single callback. ApproximateTimeSynchronizer is a good filter for doing this. Then when your filter callback is called, call your error calculation/accumulation service to calculate and store your error, then you can retrieve it from rosbag at any time using your error retrieval service.
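As the asker notes in the comments, for simulation purposes the error can also simply be injected in software. A minimal Python stand-in (not tied to any ROS or Stage API) might look like:

```python
import random

def noisy_yaw(true_yaw, sigma=0.05):
    """Return the true Z-axis angle plus zero-mean Gaussian noise (radians)."""
    return true_yaw + random.gauss(0.0, sigma)

random.seed(0)  # seeded here only so the sketch is repeatable
readings = [noisy_yaw(1.0) for _ in range(1000)]
mean_error = sum(r - 1.0 for r in readings) / len(readings)
print(round(mean_error, 3))  # zero-mean noise: the average error is near 0
```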
Question number two is simpler:
You will want to model your robot using urdf. URDF joint tags allow you to specify joint upper and lower limits, efforts, types, etc. You might look into using Gazebo instead of Stage. It allows you to add several additional tags to your urdf to model the physics properties of your robot during simulation.
Originally posted by jackcviers with karma: 207 on 2014-09-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8682,
"tags": "ros, simulation, simulator-stage, stage"
} |
What happens if I replace an electron in a $\rm Li$ atom by a muon? | Question: According to my knowledge the exclusion principle won't affect it, so it will jump to the muonic 1s orbit (strongly deformed by the electrons' repulsion).
The electrons fill the electron 1s orbits (also distorted).
So it would become like a He atom, but much heavier and easier to ionize.
Answer: It seems like my answer is correct. So repost:
So because the electron-nucleus and muon-nucleus bonds are on different energy scales, the problem can be separated into two parts: the muon would see a nucleus with 3+ charge inside a large negative charge-cloud, while the electrons would see a 2+ charged nucleus. After about 2.2 μs the muon decays and its energy blows away both electrons.
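The claimed separation of energy scales can be estimated with a hydrogen-like back-of-the-envelope calculation (a sketch; it ignores reduced-mass and electron-screening corrections):

```python
RYDBERG_EV = 13.6        # hydrogen 1s binding energy
MU_TO_E_MASS = 206.77    # muon-to-electron mass ratio
Z = 3                    # lithium nuclear charge

E_electron_1s = RYDBERG_EV * Z**2            # ~122 eV scale for the electrons
E_muon_1s = E_electron_1s * MU_TO_E_MASS     # the muonic 1s level is ~207x deeper

print(f"{E_electron_1s:.1f} eV vs {E_muon_1s / 1000:.1f} keV")  # 122.4 eV vs 25.3 keV
```

A ~25 keV muonic level versus a ~100 eV electronic one is why the two problems decouple so cleanly.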
"domain": "physics.stackexchange",
"id": 71627,
"tags": "particle-physics, atomic-physics, pauli-exclusion-principle, leptons"
} |
Performance of summing one value from join of three tables | Question: I have a table I want to get a sum of information on, but the value I need to query on is in two different tables.
The table whose values I need to sum is Kudos, which has an Action column indicating +1 (value 1) or -1 (value 2); each kudo is associated with either the discussion or the comment it was given on:
Kudos
+-----------+--------------+--------+
| CommentID | DiscussionID | Action |
+-----------+--------------+--------+
| 24 | NULL | 1 |
| NULL | 4 | 1 |
| 60 | NULL | 2 |
+-----------+--------------+--------+
The value I want to filter by is the author's ID, which is either in
the Comment table or the Discussion table:
Comments
+-----------+--------+
| CommentID | Author |
+-----------+--------+
| 29 | 5 |
| 24 | 1 |
| 22 | 1 |
| 21 | 1 |
+-----------+--------+
Discussions
+--------------+--------+
| DiscussionID | Author |
+--------------+--------+
| 4 | 1 |
| 5 | 2 |
| 6 | 2 |
| 7 | 7 |
+--------------+--------+
In this example, Author 1 would have a score of 2: row 1 of the Kudos table counts because CommentID 24 belongs to Author 1 in the Comments table, and row 2 of the Kudos table counts because DiscussionID 4 belongs to Author 1 in the Discussions table.
Additionally, I need to use the query that does this as a subquery, because I'm getting all the users in a select statement, and then getting the sum of this subquery as a field associated with the userID, like:
SELECT tblUser.Name AS Name, (subquery) AS rep FROM User_Table AS tblUser;
What I've come up with is the following:
SELECT tblU.Name AS Name, tblU.UserID AS ID,
(SELECT SUM(IF(Action=1, 1, -1)) FROM Kudos AS tblK
LEFT JOIN Discussions AS tblD ON tblK.DiscussionID = tblD.DiscussionID
LEFT JOIN Comments AS tblC ON tblC.CommentID = tblK.CommentID
WHERE tblD.InsertUserID = id OR tblC.InsertUserID = id) AS rep
FROM Users AS tblU;
which gives me:
+------+----+-----+
| Name | id | rep |
+------+----+-----+
| Jack |  1 | 357 |
| Joe  |  2 | 824 |
| Jim  | 12 |  48 |
+------+----+-----+
But it takes 0.23 seconds to run for a small set (like 13 users). I've heard from co-workers that the OR condition is bad for performance, but I'm not sure how this can be optimized at all.
Answer: Avoid LEFT JOIN clauses
I'm not sure that I'd pick the OR as the problem. The real problem is more likely to be the LEFT JOIN clauses. A LEFT JOIN is a slow operation and to be avoided if you aren't looking for NULL values.
Try using two subselects instead:
SELECT tblU.Name AS Name, tblU.UserID AS ID,
    -- COALESCE turns a NULL sum (user with no kudos) into 0 so the + stays defined
    COALESCE((SELECT SUM(IF(Action=1, 1, -1))
     FROM Kudos AS tblK
     INNER JOIN Discussions AS tblD ON tblK.DiscussionID = tblD.DiscussionID
     WHERE tblD.InsertUserID = tblU.UserID), 0) +
    COALESCE((SELECT SUM(IF(Action=1, 1, -1))
     FROM Kudos AS tblK
     INNER JOIN Comments AS tblC ON tblC.CommentID = tblK.CommentID
     WHERE tblC.InsertUserID = tblU.UserID), 0) AS rep
FROM Users AS tblU;
Note that this only works if Kudos never has a non-null CommentID in a row with a non-null DiscussionID. This also has the side effect of getting rid of the OR, but I'm not sure that's what matters.
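The two-subselect shape can be sanity-checked against the question's sample data. The sketch below uses Python's sqlite3, with SQLite's CASE standing in for MySQL's IF and with COALESCE guarding users who have no kudos at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (UserID INTEGER, Name TEXT);
    CREATE TABLE Comments (CommentID INTEGER, Author INTEGER);
    CREATE TABLE Discussions (DiscussionID INTEGER, Author INTEGER);
    CREATE TABLE Kudos (CommentID INTEGER, DiscussionID INTEGER, Action INTEGER);
    INSERT INTO Users VALUES (1, 'Jack'), (2, 'Joe');
    INSERT INTO Comments VALUES (29, 5), (24, 1), (22, 1), (21, 1);
    INSERT INTO Discussions VALUES (4, 1), (5, 2), (6, 2), (7, 7);
    INSERT INTO Kudos VALUES (24, NULL, 1), (NULL, 4, 1), (60, NULL, 2);
""")
rows = conn.execute("""
    SELECT u.Name,
           COALESCE((SELECT SUM(CASE WHEN k.Action = 1 THEN 1 ELSE -1 END)
                     FROM Kudos k
                     JOIN Discussions d ON k.DiscussionID = d.DiscussionID
                     WHERE d.Author = u.UserID), 0) +
           COALESCE((SELECT SUM(CASE WHEN k.Action = 1 THEN 1 ELSE -1 END)
                     FROM Kudos k
                     JOIN Comments c ON c.CommentID = k.CommentID
                     WHERE c.Author = u.UserID), 0) AS rep
    FROM Users u
""").fetchall()
print(dict(rows))  # {'Jack': 2, 'Joe': 0}
```

Note that the kudos row for CommentID 60 is silently dropped by the inner join, since no such comment exists.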
Use indexes
Another thing that you can do is add indexes on the relevant columns if you don't already have them. Hopefully you already have PRIMARY KEY indexes on the ID column for each table. If that's not enough, it is possible to add indexes on things like UserID, Name and DiscussionID, InsertUserID, Action. These can move critical columns into memory.
Consider merging Comments and Discussions
The Comments and Discussions tables use the same columns here. Consider putting both in the same table with a type column. Then Kudos would have to do only one join.
You don't include the various table definitions. If there's too much stuff in Comments and Discussions for a simple merge, you could also consider refactoring them into three tables. The ID column and author could be set in one table while the other two tables could hold the comment and discussion specific content respectively. I would consider a merge to be a better solution than this if possible. This is a backup suggestion. | {
"domain": "codereview.stackexchange",
"id": 13532,
"tags": "performance, sql, mysql"
} |
Correlation Performed by Convolution | Question: Background: The question here is related to images in particular and not signal/waveforms.
I have been reading a lot of answers about the difference between convolution and correlation, but I am stuck on how the two are related to each other when actually performing the operations. Most of the answers treat each term individually.
For example:
A statement I am trying to understand says
Here correlation operations are performed by replacing the exhausted
convolutions with element-wise multiplications using Discrete Fourier
Transform (DFT).
I am not able to relate the individual definitions here. Can anyone please explain this?
Answer: What that statement is saying is that
a. Correlation is performed the same way as one would perform convolution (you must implicitly know that one of the sequences is conjugated and time reversed to express a correlation as a convolution, as it was not stated there),
and
b. Mathematically the convolution is performed using this relationship
$$x[n]*h[n] = \mathscr{F}^{-1}\left\{\mathscr{F}\left\{x[n]\right\}\cdot\mathscr{F}\left\{h[n]\right\}\right\}$$
Where $*$ denotes convolution, $\mathscr{F}\left\{f[n]\right\}$ denotes the Discrete Fourier Transform of $f[n]$, and $\mathscr{F}^{-1}\left\{F[k]\right\}$ denotes the Inverse Discrete Fourier Transform of $F[k]$.
Note that these operations with Discrete Fourier Transforms actually yield the "circular" convolution of $x[n]$ and $h[n]$, which is equal to the linear convolution of $x[n]$ and $h[n]$ only under a certain condition: there must be enough zero padding to avoid aliasing. | {
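As a sanity check, both points can be demonstrated numerically. The sketch below (Python/NumPy, with made-up short sequences) shows that the raw DFT product gives the circular convolution, that zero padding to length $N_x + N_h - 1$ recovers the linear convolution, and that correlation is convolution with the conjugated, time-reversed sequence, as in point (a).

```python
import numpy as np

# Two short made-up sequences standing in for an image row and a kernel.
x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 2.0])

# The DFT product gives the *circular* convolution (length-3 transforms).
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Zero padding both sequences to length len(x) + len(h) - 1 avoids aliasing,
# so the same DFT product now yields the *linear* convolution.
N = len(x) + len(h) - 1
lin = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))
print(np.allclose(lin, np.convolve(x, h)))  # True

# Point (a): correlation is convolution with the conjugated, reversed sequence.
corr = np.convolve(x, np.conj(h)[::-1])
print(np.allclose(corr, np.correlate(x, h, 'full')))  # True
```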
"domain": "dsp.stackexchange",
"id": 6471,
"tags": "image-processing, computer-vision, convolution, correlation"
} |
Flood-clearing routine for a JavaFX Minesweeper | Question: I really hate copying and pasting my code from one method to another. It signals something in my brain: "This could be done either more efficiently or more readably". This is used in a Minesweeper program I am implementing with a JavaFX GUI (that's not very relevant, but it might help to know). Hopefully this is enough context!
@Contract(pure = true)
private int setMinX(int x){ return (x == 0 ? 0:(x-1)); }
@Contract(pure = true)
private int setMinY(int y){ return (y == 0 ? 0:(y-1)); }
@Contract(pure = true)
private int setMaxX(int x){ return (x == this.width-1 ? this.width-1 : x+1); }
@Contract(pure = true)
private int setMaxY(int y){ return (y == this.height-1 ? this.height-1 : y+1);}
private void adjacents(int x, int y){
//calculate adjacents for the just placed mine
int minX = setMinX(x);
int minY = setMinY(y);
int maxX = setMaxX(x);
int maxY = setMaxY(y);
if (this.mineField[minX][minY] != -1){ this.mineField[minX][minY] += 1; } //left top corner
if (this.mineField[maxX][minY] != -1){ this.mineField[maxX][minY] += 1; } //right top corner
if (this.mineField[maxX][maxY] != -1){ this.mineField[maxX][maxY] += 1; } // bottom right corner
if (this.mineField[minX][maxY] != -1){ this.mineField[minX][maxY] += 1; } // bottom left corner
if(x > 0 && x < this.width-1){
if (this.mineField[x][minY] != -1){ this.mineField[x][minY] += 1; } // top-middle
if (this.mineField[x][maxY] != -1){ this.mineField[x][maxY] += 1; } //bottom-middle
}
if(y > 0 && y < this.height-1){
if (this.mineField[minX][y] != -1){ this.mineField[minX][y] += 1; } // left-middle
if (this.mineField[maxX][y] != -1){ this.mineField[maxX][y] += 1; } //right-middle
}
}
public void floodClear(int x, int y) {
int minX = setMinX(x);
int minY = setMinY(y);
int maxX = setMaxX(x);
int maxY = setMaxY(y);
if (this.mineField[x][y] >= 0 && !this.liveGame[x][y]) {
//IMPLEMENT WITH GUI
//show(x,y);
if (this.mineField[minX][minY] == 0) {
floodClear(minX, minY);
}
if (this.mineField[maxX][minY] == 0) {
floodClear(maxX, minY);
}
if (this.mineField[maxX][maxY] == 0) {
floodClear(maxX, maxY);
}
if (this.mineField[minX][maxY] == 0) {
floodClear(minX, maxY);
}
if (x > 0 && x < this.width - 1) {
if (this.mineField[x][minY] == 0) {
floodClear(x, minY);
} // top-middle
if (this.mineField[x][maxY] == 0) {
floodClear(x, maxY);
} //bottom-middle
}
if (y > 0 && y < this.height - 1) {
if (this.mineField[minX][y] == 0) {
floodClear(minX, y);
} // left-middle
if (this.mineField[maxX][y] == 0) {
floodClear(maxX, y);
} //right-middle
}
}
}
Answer: Iterating through adjacents
There are two main approaches to take while iterating through the neighbors in Minesweeper.
The first and most common option is to use a nested for loop:
for (int y = -1; y <= 1; y++) {
for (int x = -1; x <= 1; x++) {
if (x == 0 && y == 0) {
continue; // We don't want to consider the field itself as a neighbor
}
increaseAdjacentMineCounter(mineX + x, mineY + y);
}
}
The second approach, and the one that I normally use and like better, is to use an array to define which fields should be used as your neighbors.
int[][] neighbors = {{ -1, -1 }, { -1, 0 }, { -1, 1 }, { 0, -1 },
{ 0, 1 }, { 1, -1 }, { 1, 0 }, { 1, 1 }};
for (int[] neighbor : neighbors) {
int x = neighbor[0];
int y = neighbor[1];
increaseAdjacentMineCounter(mineX + x, mineY + y);
}
The reason for why I like the second approach better is because it becomes more data-oriented. You can change the way you consider neighbors without changing your code. Changing the way your neighbors are considered can be a bit entertaining but very confusing :) | {
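For readers more comfortable with Python, here is the same data-oriented idea in a compact sketch (the names `neighbors_of`, `width`, and `height` are made up for illustration). It also folds the bounds check into the iteration, which removes the need for the min/max clamping helpers in the original code:

```python
# Data-oriented neighbor iteration: the offsets live in a list, not in the code.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def neighbors_of(x, y, width, height):
    """Yield in-bounds neighbor coordinates, so callers need no edge checks."""
    for dx, dy in NEIGHBORS:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            yield nx, ny

# A corner cell of a 3x3 board has exactly 3 neighbors; the center cell has 8.
print(sorted(neighbors_of(0, 0, 3, 3)))  # [(0, 1), (1, 0), (1, 1)]
print(len(list(neighbors_of(1, 1, 3, 3))))  # 8
```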
"domain": "codereview.stackexchange",
"id": 20709,
"tags": "java, minesweeper"
} |
Acids / Bases - Need some clarity for final exam | Question: I'm studying for my final exam, and these are the questions I got wrong on the midterm; could anyone help correct me? I know people are against giving answers for homework, so I must add that it is now exam period for universities, so this is only for my further understanding!
Thanks so much!
My attempts will be in this format
1) The pH of a 0.2 M unknown base XOH is 8.15
a)Calculate [OH-]
10^(-8.15) = 7E-9M
b) What is the % dissociation
(7E-9/0.2M) x 100 = 3.5E-6%
c)What is the Kb of the base?
This one I don't even know where to begin..
2)Calculate the pH of the following solutions
a) 0.10M KCN, Ka (HCN) = 6.2E-10 (Hint: KCN is the salt of a weak acid and a strong base)
x = sqrt(Ka*c)
x = sqrt(6.2E-10 * 0.1) = 7.8E-6
-log(7.8E-6) = pH = 5.1
Answer: 1a) First use $pOH = 14 - pH$
Then use: $pOH = -\log([OH^-])$
b) Simply use: $\dfrac{\text{dissociated}\ [OH^-]}{\text{original}\ [OH^-]}$
c) Use $K_b = \dfrac{[X^+][OH^-]}{[XOH]}$
2) Dissolving $KCN$ in water gives:
$\ce{KCN \rightarrow K^+ + CN^-}$
The following equilibrium reaction is thus formed
$\ce{CN^- + H2O \rightleftharpoons HCN + OH^-}$
Which is:
$K_b = \dfrac{[HCN][OH^-]}{[CN^-]}$
From the $K_a$, you can get the $K_b$ through $K_w\ =\ K_aK_b$ so that:
$\dfrac{K_w}{K_a} = \dfrac{[HCN][OH^-]}{[CN^-]}$
Let the amount of $CN^-$ that reacts be $x\ M$; thus $[CN^-]$ becomes $0.1-x$ and you get $x\ M$ of both $HCN$ and $OH^-$. Solve for $x$ to get $[OH^-]$, and from there $pH$ should be easy to calculate.
3) Use $Ksp = [Ba^{2+}][F^-]^2 = 1.7\times 10^{-6}$ and solve for $[Ba^{2+}]$. | {
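As a quick numerical check of the recipe in 2), here is a Python sketch (assuming $K_w = 10^{-14}$ and the usual approximation $x \ll c$, which holds here). The pH comes out well above 7, confirming that the original attempt's pH of 5.1 cannot be right for the salt of a weak acid and a strong base:

```python
import math

# Recipe from 2) above, for 0.10 M KCN. Assumes Kw = 1e-14 and x << c,
# so that [CN-] ~ 0.1 remains a good approximation.
Kw, Ka, c = 1e-14, 6.2e-10, 0.10
Kb = Kw / Ka                  # ~1.6e-5
x = math.sqrt(Kb * c)         # [OH-] in mol/L, ~1.3e-3
pOH = -math.log10(x)
pH = 14 - pOH
print(round(pH, 2))           # ~11.1, i.e. basic, as the hint requires
```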
"domain": "chemistry.stackexchange",
"id": 391,
"tags": "acid-base, ph"
} |
What exactly happens in a rigid body collision? | Question: Consider a situation in which a body of mass m moving with a velocity v is collided with a similar mass, applying momentum conservation,the initial mass will come to rest and the other mass will move with a velocity of v.
Velocity of first mass decreases while the other mass starts accelerating from rest.
Both of them reach a common velocity of v/2.
How could the mass in the front further accelerate to reach a velocity of v and the mass in back decelerate to zero velocity as there is no relative motion once they attain common velocity?
Answer: In elementary dynamics we make two simplifying assumptions, which are linked:
All bodies are completely rigid and do not deform.
Momentum can be transferred instantaneously from one body to another (which implies infinite accelerations).
Both assumptions are unrealistic. In reality, all bodies deform to some extent, and transfer of momentum between colliding bodies takes a finite amount of time. As you say, gradual transfer of momentum means that the average velocities of colliding bodies are only equal at one instant in time, but deformation means that the bodies can stay in contact even though they have different average velocities. | {
"domain": "physics.stackexchange",
"id": 75741,
"tags": "newtonian-mechanics, momentum, conservation-laws, inertial-frames, collision"
} |
For which $R$ is $\{0^a10^b10^c\mid R(a,b,c)\}$ context-free? | Question: Unless I'm mistaken, a language of the form $\{0^a10^b\mid R(a,b)\}$ is context-free if and only if $R$ is a finite union of linear (in)equalities involving integer constants and the variables $a$ and $b$ with some modulo conditions, e.g., $R(a,b)$ if and only if ($a<2b+1$ and $a\equiv 2 \bmod 6$) OR ($a-b=2$ and $a\equiv 2 \bmod 6$) OR ($2a-3b>5$ and $2a+b\equiv 3 \bmod 6$).
Is there some similar characterization known when the language can contain only some fixed, bounded number of non-zero characters?
For example, for which $R$ is $\{0^a10^b10^c\mid R(a,b,c)\}$ context-free?
Related: I've discovered that this topic has been studied a lot for primitive words, see Kaszonyi-Katsura: Some new results on the context-freeness of languages.
Answer: What you're looking for is an old result of Ginsburg and Spanier actually related to one of the oldest open questions of the field. See Ginsburg's book The Mathematical Theory of CFLs.
Defs. A linear set is a set of the form $\vec{v} + P^*$ where $\vec{v} \in \mathbb{N}^k$ and $P$ is a finite set of such vectors ($P^*$ denotes all linear combinations of the vectors in $P$). A linear set is stratified if :
All the vectors in $P$ have at most two nonzero components;
For all $\vec{x}, \vec{y} \in P$, the nonzero positions of $\vec{x}$ and $\vec{y}$ are not interleaved (this is related to the fact that $a^ib^ic^jd^j$ is CFL but not $a^ib^jc^id^j$).
Theorem. Let $w_1, \ldots, w_k$ be some words. Any CFL $L \subseteq w_1^*\cdots w_k^*$ can be written as $L = \{w_1^{i_1}\cdots w_k^{i_k} \mid (i_1, \ldots, i_k) \in R\}$ where $R$ is an union of stratified sets. Somewhat conversely, any language of that form with $R$ a union of stratified sets is a CFL.
Naturally, there are languages that are CFL that can be written in the previous way with $R$ being quite arbitrary—e.g., with $w_1=w_2=a$ and $R = \{(2^i, j)\}.$
However, if for instance all $w_i$'s are different letters, then $R$ has to be a union of stratified sets for $L$ to be a CFL.
As for the open question I mentioned before, the following problem is not known to be decidable:
Given: A finite union of linear sets
Question: Is this set presentable as a finite union of stratified sets? | {
"domain": "cstheory.stackexchange",
"id": 4613,
"tags": "reference-request, fl.formal-languages, context-free"
} |
Process a line in chunks via Regular Expressions with Ruby | Question: I'm writing a Ruby script to reverse-engineer message log files. They come from an external system with the following characteristics:
Each line of the log file has at least one message.
Each line of the log file can have multiple messages.
Each message consists of a set of numbers separated by spaces (e.g. 30 0 -1 1 2 1).
Each message can have one of many different templates (e.g. some contain five numbers, others contains six).
The approach I'm using is to process each line, one at a time, via a method that takes a string to work on as an argument. It saves a copy of the initial input (for later comparison) then tries to match known patterns. When a pattern is matched, the string that made it up is removed. If there is nothing left, or if no more matches are found, the method exits. Otherwise, it calls itself with the remainder of the string to process. Here's the code I came up with, along with an example.
#!/usr/bin/evn ruby
def parse_line remainder_of_line
puts "Processing: #{remainder_of_line}"
# Save a copy of the initial input for later comparison
initial_snapshot = remainder_of_line.dup
# Look for known pattern matches, removing them if found
if remainder_of_line.gsub!(/^(\d+) 0 -1 1 (\d+) \d+\s*/, '')
puts " - Matched format 1 - found: #{$1} - #{$2}\n\n"
elsif remainder_of_line.gsub!(/^\d+ 0 -1 2 (\d+) \d+\s*/, '')
puts " - Matched format 2 - found: #{$1}\n\n"
### More patterns here.
end
# If noting changed, then no matches were found.
if initial_snapshot.eql? remainder_of_line
puts " - Line still has data but no matches found. (Left with: #{remainder_of_line}\n\n"
# Keep going if there is anything left.
elsif !remainder_of_line.empty?
parse_line remainder_of_line
end
end
line = "11 0 -1 2 13560 2 11 0 -1 2 13564 2 11 0 -1 1 36880 106 91 0 -1 1 36881 106 36881 106 91 1 13556 2 36880 106 36880 106 11 1 734 11 0 -1 1 36884 106 91 0 -1 1 36885 106 36885 106 91 1 13556 2 36884 106 36884 106 11 1 735 13556 2 31 18 799 13556 2 31 25 799 "
parse_line line
This works but I'm wondering if there is a better way.
Answer:
Because you're using the "bang" version of gsub, parse_line modifies the string you pass to it, which is generally not a good idea. I wouldn't expect a parsing method to "eat" my input.
Since there's only one line and your regexes are anchored to the start of it, there's little point in using gsub (i.e. global substitution), since you'll only ever match 1 occurrence of the pattern.
Don't bother with all the newline literals. puts will automatically add one, and if you want an extra one, you should be able to just say puts with no argument in a strategic location (i.e. after having tried all the patterns).
This seems like a good fit for Ruby's case statement (aka switch) since you can match against regexes directly. And Ruby also sets other magic variables besides $1 and $2 whenever you match a regex. There's no reason to make the method recursive, though. A simple loop would do nicely too.
For instance:
def parse_line(line)
puts "Processing: #{line}"
# Loop until the string's empty (or we hit the return below)
until line.empty?
# Try matching the line
case line
when /^(\d+) 0 -1 1 (\d+) \d+\s*/
puts " - Matched format 1 - found: #{$1} - #{$2}"
when /^\d+ 0 -1 2 (\d+) \d+\s*/
puts " - Matched format 2 - found: #{$1}"
# more patterns...
else # no match
puts " - Line still has data but no matches found. (Left with: #{line})"
return # stop here
end
line = $' # set line to the *unmatched* part, i.e. the remainder
puts "" # output an extra blank line
end
puts "Entire line matched, yay"
end | {
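The same consume-the-matched-prefix pattern translates to other languages as well. Here is a sketch in Python (the two regexes are taken from the question; everything else, including the `PATTERNS` table, is illustrative), using `re.match` anchored at the start and slicing off the matched prefix instead of mutating the input:

```python
import re

# Two message formats from the question, standing in for the log templates.
PATTERNS = [
    ("format 1", re.compile(r"^(\d+) 0 -1 1 (\d+) \d+\s*")),
    ("format 2", re.compile(r"^\d+ 0 -1 2 (\d+) \d+\s*")),
]

def parse_line(line):
    """Consume known messages from the front of the line; return any leftover."""
    matched = []
    while line:
        for name, pattern in PATTERNS:
            m = pattern.match(line)
            if m:
                matched.append((name, m.groups()))
                line = line[m.end():]  # keep only the unmatched remainder
                break
        else:  # no pattern matched -> stop, report what's left
            break
    return matched, line

msgs, rest = parse_line("11 0 -1 2 13560 2 11 0 -1 1 36880 106 ")
print(msgs)  # [('format 2', ('13560',)), ('format 1', ('11', '36880'))]
print(repr(rest))  # ''
```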
"domain": "codereview.stackexchange",
"id": 9181,
"tags": "ruby, recursion, regex"
} |
Why is this force in the $\hat{z}$ direction - magnetic fields? | Question: So this picture is in my lecture notes, but I simply cannot see how this force is in the $\hat{z}$ direction?
The $d\vec{l}$ segment lies in the $xy$ plane, and when you cross it with the magnetic field along $\hat{x}$, you can't just end up with something in the $\hat{z}$ direction, can you?
Thanks!
Answer: $\newcommand{\vect}[1]{{\bf #1}}$
$\newcommand{\uvect}[1]{\hat{\bf #1}}$
As you point out, on the segment connecting the point 1 and 2, the line differential $dl$ lies on the plane $xy$,
$$
d\vect{l} = dl_x~\uvect{x} + dl_y~\uvect{y}
$$
therefore,
$$
d\vect{l}\times\vect{B} \sim (dl_x~\uvect{x} + dl_y~\uvect{y})\times (B~\uvect{x}) = Bdl_y(\uvect{y}\times\uvect{x}) \sim \uvect{z}
$$ | {
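The computation can also be checked numerically. Below is a small NumPy sketch with a made-up $d\vec{l}$ in the $xy$ plane and $\vec{B}$ along $\hat{x}$; only the $z$-component of the cross product survives, exactly as derived above:

```python
import numpy as np

# Made-up dl with components only in x and y (it lies in the xy plane),
# and B along the x axis, as in the derivation above.
dl = np.array([0.3, -0.4, 0.0])
B = np.array([2.0, 0.0, 0.0])

F_dir = np.cross(dl, B)  # direction of I dl x B, dropping the scalar current
print(F_dir)             # only the z-component is nonzero: B*dl_y*(y_hat x x_hat)
```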
"domain": "physics.stackexchange",
"id": 40980,
"tags": "electromagnetism, magnetic-fields, vectors"
} |
Space groups: Hermann-Mauguin notation, diagrams and Wyckoff positions | Question: I am currently taking an Introduction to Crystallography course. I have studied and understood the point groups (2mm, 4/m 2/m 2/m, 32, ...), the plane groups, and the various symmetry elements that exist: screw axes, glide planes, mirror planes, among others.
I have doubts when constructing the diagram of a given space group. For example, in an exercise I am asked to construct the diagram of the space group Pmab.
Pmab implies the existence of:
a mirror plane perpendicular to the crystallographic axis a,
an a-glide plane perpendicular to the crystallographic axis b,
a b-glide plane perpendicular to the crystallographic axis c.
Also, Pmab is a short (abbreviated) notation: in reality there are also two twofold screw axes parallel to the axes a and b, and a twofold axis parallel to the axis c.
On the other hand, Pmab belongs to the orthorhombic system and derives from the point group 2/m 2/m 2/m.
This is all the information I get from the Hermann-Mauguin notation. If I have misunderstood something, I hope you will tell me.
Once here I don't know how to start building the diagram, which would look like this:
As far as Wyckoff positions are concerned, I am often asked for them, but I am not quite sure what they are.
Answer: Well, knowing crystallography implies being able to deduce the picture above solely from the group's name $\rm Pmab$, and that's how it is done.
First you write down the symmetry elements mentioned explicitly in the group's name (that is, $\rm m,\;a$, and $\rm b$ in our case; in other examples there could have been axes among them, or even worse) and orient those accordingly, which you already know how to do. Then you combine these elements in all possible ways to find out what else is hidden beneath.
Essentially, any symmetry element is but a linear transformation that takes any vector $\bf\vec x$ to another vector $\bf A\vec x+\vec b$, where $\bf A$ is the $3\times3$ matrix of our transformation (rotation or reflection or otherwise), and $\bf\vec b$ is the shift, that is, a half-translation along the axis for a screw axis, or a half-translation in whatever direction for a glide plane, or $\bf\vec0$ in all other cases. The typical matrices are:
$$
\begin{array}{cc}
\begin{pmatrix}
-1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{pmatrix} &
\begin{pmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1 \\
\end{pmatrix}&
\begin{pmatrix}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1 \\
\end{pmatrix}\\
\text{Reflection $\perp$ X} & \text{Rotation about X}& \text{Center}
\end{array}
$$
Given two symmetry elements, you simply combine the two transformations by doing some matrix multiplication, which I sincerely recommend, if only to know how it feels. Alternatively, you may rely upon the wisdom of the predecessors, which says:
Two perpendicular planes give a twofold axis parallel to both.
A plane and a parallel twofold axis give another plane, perpendicular to the plane and parallel to the axis.
A plane and a perpendicular twofold axis give a center.
Two perpendicular twofold axes give a third axis, perpendicular to both.
All of the above remains true if you change
every "plane" to "plane or a glide plane" and
every "axis" to "axis or a screw axis".
When some of the combined elements have shift in them, the following happens:
If the shift is parallel to the resulting element, it becomes a part of the said element (that is, the result is going to be a screw axis or a glide plane).
If the shift is perpendicular to the resulting element, it moves the said element by half the value of the shift (that is, by $1\over4$ of a translation).
If you end up with a center of symmetry someplace other than at the origin, you relocate your origin there.
Now try to apply the rules 1-7 and see where this gets you. | {
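As a concrete instance of the matrix multiplication suggested above, here is a short Python/NumPy sketch (the matrices are the typical ones from the table) verifying two of the combination rules:

```python
import numpy as np

# Check two of the combination rules above by plain matrix multiplication.
mirror_perp_x = np.diag([-1, 1, 1])   # reflection in the plane perpendicular to X
mirror_perp_y = np.diag([1, -1, 1])   # reflection in the plane perpendicular to Y
twofold_x = np.diag([1, -1, -1])      # 180-degree rotation about X

# Rule 1: two perpendicular planes give a twofold axis along their intersection.
print(mirror_perp_x @ mirror_perp_y)  # diag(-1, -1, 1) = twofold rotation about Z

# Rule 3: a plane and a perpendicular twofold axis give the inversion center.
print(mirror_perp_x @ twofold_x)      # diag(-1, -1, -1): every point x -> -x
```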
"domain": "chemistry.stackexchange",
"id": 12266,
"tags": "crystallography"
} |
Text-based fighting game in Python 3.0 | Question: I have begun to practice Python again after a couple of years, and for my first project I decided to make a fighting/RPG-like game with different characters. So far I am only familiar with loops and functions, not classes or OOP. Please give me any feedback on how I could improve my code through debugging, optimization, or adding more content. (In case the game doesn't start due to the absence of highscore.txt, please create a text file called highscore, put an integer on the first line and any name on the second. This should not happen, but just in case :P)
import random as r
######Getting High Score#####
try:
hs = open("highscore.txt","r+")
except:
hs = open("highscore.txt","x")
hs = open("highscore.txt","r+")
try:
score = hs.readlines(1)
score = int(score[0])
leader = hs.readlines(2)
leader = str(leader[0])
except:
hs = open("highscore.txt","w")
hs.write("0\n")
hs.write("null")
hs = open("highscore.txt","r")
score = hs.readlines(1)
score = int(score[0])
leader = hs.readlines(2)
leader = str(leader[0])
#####Introduction, Initializing player#####
print("\nWELCOME TO WONDERLANDS RPG!")
print("The High Score is:",score,"by",leader)
points = 0
player_name = input("\nEnter your hero's name: ")
#####Classes [health, attack 1 (only does set damage), attack 2 min, attack 2 max, attack 3 min, attack 3 max, heal min, heal max], Getting player's Class#####
knight = [100,10,5,15,0,25,5,10] #health: 100, attack 1: 10, attack 2: 5-15, attack 3: 5-25, heal: 5-10
mage = [50,15,10,20,-5,25,10,15] #health: 50, attack 1: 15, attack 2: 10-20, attack 3: -5-25, heal: 10-15
healer = [150,5,5,10,5,15,10,20] #health: 150, attack 1: 5, attack 2: 5-10, attack 3: 5-15, heal: 10-20
while True:
print("\n1. Knight: Health: ",knight[0],"; Attack 1:",knight[1],"; Attack 2:",knight[2],"-",knight[3],"; Attack 3:",knight[4],"-",knight[5],"; Heal:",knight[6],"-",knight[7])
print("2. Mage: Health: ",mage[0],"; Attack 1:",mage[1],"; Attack 2:",mage[2],"-",mage[3],"; Attack 3:",mage[4],"-",mage[5],"; Heal:",mage[6],"-",mage[7])
print("3. Healer: Health: ",healer[0],"; Attack 1:",healer[1],"; Attack 2:",healer[2],"-",healer[3],"; Attack 3:",healer[4],"-",healer[5],"; Heal:",healer[6],"-",healer[7])
player_class = input("\nSelect your class: 1, 2, or 3: ")
if player_class == "1":
player_class = knight
print("You have selected the Knight")
break
if player_class == "2":
player_class = mage
print("You have selected the Mage")
break
if player_class == "3":
player_class = healer
print("You have selected the Healer")
break
else:
print("Please select a valid class.")
continue
player_heal_max = player_class[0]
#####Difficulty/Upgrade Functions#####
def level_up(player,health_max):
while True:
lv_choice = input("\nWould you like to:\n Increase max health by 20 (1)\n Increase Healing Factor by 5 (2)\n increase your damage by 5 (3) ")
if lv_choice == "1":
health_max += 20
player[0] = health_max
return player,health_max
elif lv_choice == "2":
player[6] +=5
player[7] +=5
player[0] = health_max
return player, health_max
elif lv_choice == "3":
player[1] +=5
player[2] +=5
player[3] +=5
player[4] +=5
player[5] +=5
player[0] = health_max
return player, health_max
else:
print("Please enter in a valid number")
continue
def difficulty(ai,health_max,level):
if level == 1:
return ai
else:
ai[0] = health_max+15*round(0.5*level+0.5)
ai[1] +=5*round(0.5*level+0.5)
ai[2] +=5*round(0.5*level+0.5)
ai[3] +=5*round(0.5*level+0.5)
ai[4] +=5*round(0.5*level+0.5)
ai[5] +=5*round(0.5*level+0.5)
ai[6] +=5*round(0.5*level+0.5)
ai[7] +=5*round(0.5*level+0.5)
return ai
def ai_stats(s):
s[0] += r.randint(-20,20)
s[1] += r.randint(-3,3)
s[2] += r.randint(-3,3)
s[3] += r.randint(-3,3)
s[4] += r.randint(-3,3)
s[5] += r.randint(-3,3)
s[6] += r.randint(-3,3)
s[7] += r.randint(-3,3)
return s
#####Game Loop#####
level = 1
print("\n----------------------- GAME START -----------------------")
while True:
#####Determining AI Class/Stats#####
#####(AI classes must be in the Game loop otherwise if an enemy chooses the same class twice, it would have <=0 HP, thus being an instant win)#####
ai_knight = [100,10,5,15,0,25,5,10] #health: 100, attack 1: 10, attack 2: 5-15, attack 3: 5-25, heal: 5-10
ai_mage = [50,15,10,20,-5,25,10,15] #health: 50, attack 1: 15, attack 2: 10-20, attack 3: -5-25, heal: 10-15
ai_healer = [150,5,5,10,5,15,10,20] #health: 150, attack 1: 5, attack 2: 5-10, attack 3: 5-15, heal: 10-20
ai = r.randint(1,3)
if ai == 1:
ai = ai_stats(ai_knight)
print("\nYou are fighiting a knight with",ai[0],"HP!")
if ai == 2:
ai = ai_stats(ai_mage)
print("\nYou are fighiting a mage with",ai[0],"HP!")
if ai == 3:
ai = ai_stats(ai_healer)
print("\nYou are fighiting a healer with",ai[0],"HP!")
ai_heal_max = ai[0]
ai = difficulty(ai,ai_heal_max,level)
#####Gameplay Loop#####
while True:
#####Player Attack#####
player_move = input("\nWould you like to use attack (1), attack (2), attack (3), or heal (4)? ")
print("")
if player_move == "1":
player_damage = player_class[1]
ai[0] = ai[0]- player_damage
print(player_name," did",player_damage,"damage!")
elif player_move == "2":
player_damage = r.randint(player_class[2],player_class[3])
ai[0] = ai[0]- player_damage
print(player_name," did",player_damage,"damage!")
elif player_move == "3":
player_damage = r.randint(player_class[4],player_class[5])
if player_damage<0:
player_class[0] = player_class[0]+player_damage
print(player_name," damaged themselves for",player_damage,"HP!")
else:
ai[0] = ai[0]- player_damage
print(player_name," did",player_damage,"damage!")
elif player_move == "4":
player_heal = r.randint(player_class[6],player_class[7])
if player_class[0] + player_heal > player_heal_max:
player_class[0] = player_heal_max
else:
player_class[0] = player_class[0] + player_heal
print(player_name," healed for",player_heal,"HP")
else:
print("Please enter in a valid move.")
continue
#####Detecting Death#####
if player_class[0]<=0:
break
elif ai[0]<=0:
points += player_class[0]*level
level +=1
print("You have bested your opponent! You Have",points,"points. \nNow starting level",level)
player_class,player_heal_max = level_up(player_class,player_heal_max)
break
#####AI Turn#####
if ai[0] <= (ai_heal_max/5):
ai_move = r.sample(set([1,2,3,4,4,4]), 1)[0]
elif ai[0] >= (ai_heal_max*.8):
ai_move = r.sample(set([1,2,3,1,2,3,4]), 1)[0]
elif ai[0] == ai_heal_max:
ai_move = r.randint(1,3)
else:
ai_move = r.randint(1,4)
if ai_move == 1:
ai_damage = ai[1]
player_class[0] = player_class[0]- ai_damage
print("Your opponent did",ai_damage,"damage!")
elif ai_move == 2:
ai_damage = r.randint(ai[2],ai[3])
player_class[0] = player_class[0]- ai_damage
print("Your opponent did",ai_damage,"damage!")
elif ai_move == 3:
ai_damage = r.randint(ai[4],ai[5])
if ai_damage<0:
ai[0] = ai[0]+ai_damage
print("Your opponent damaged themselves for",ai_damage,"HP!")
else:
player_class[0] = player_class[0]- ai_damage
print("Your opponent did",ai_damage,"damage!")
elif ai_move == 4:
ai_heal = r.randint(ai[6],ai[7])
if ai[0] + ai_heal > ai_heal_max:
ai[0] = ai_heal_max
else:
ai[0] = ai[0] + ai_heal
print("Your opponent healed for",ai_heal,"HP")
#####Displaying HP#####
print("\nYour health is:",player_class[0],"HP")
print("Your opponent's health is",ai[0],"HP")
#####Detecting Loss#####
if player_class[0]<=0:
break
elif ai[0]<=0:
points += player_class[0]*level
level +=1
print("You have bested your opponent! You Have",points,"points. \nNow starting level",level)
player_class,player_heal_max = level_up(player_class,player_heal_max)
break
else:
continue
#####Finishing Game, Checking/Updating High Score#####
if player_class[0]<=0:
print("\nYou Died! :(")
if points>score:
hs = open("highscore.txt","w")
hs.write(str(points))
hs.write("\n")
print("You have the new high score of",points,"!")
hs.write(player_name)
else:
print("\nYou finished with",points,"points.")
print("The high score is:",score,"by",leader)
input("")
hs.close()
break
Answer: In your game, which I've played, you have 3 main warriors: knight, mage, and healer. These warriors all have similar behaviors, health, attacks, and heals - they are essentially objects of a class, Warrior.
Tip 1: Let's create a Warrior class:
You will be able to create new Warriors (ie Archers, Brutes, Zombies) later with ease.
You can interface your AI player as a Warrior.
You can simply control all of your Warrior objects.
class Warrior:
def __init__(self, health, attack_1, attack_2, attack_3, heal):
self.health = health
self.attack_1 = attack_1
self.attack_2 = attack_2 # tuple ie (5,25) representing range for attack value
self.attack_3 = attack_3 # tuple ie (10,20) representing range for attack value
self.heal = heal # tuple ie (10,20) representing range for health value
def attributes(self):
# string containing the attributes of the character
string = "Health: "+ str(self.health) + " Attack 1: "+ str(self.attack_1) + " Attack 2: "+ str(self.attack_2[0]) + "-"+ str(self.attack_2[1])+ " Attack 3: "+ str(self.attack_3[0]) + "-"+ str(self.attack_3[1]) + " Heal:"+ str(self.heal[0]) + "-" + str(self.heal[0])
return string
def is_dead(self):
return self.health <= 0
You may want to add other functions later. For instance, def attack_3(self), which would return the value of an attack. We then initialize the knight, mage, healer, and ai as so:
knight = Warrior(100, 10, (5,15), (5,25), (5,10))
mage = Warrior(50, 15, (10,20), (-5,25), (10,15))
healer = Warrior(150, 5, (5,10), (5,15), (10,20))
while True:
# Determining AI Class/Stats
ai_knight = Warrior(100, 10, (5,15), (5,25), (5,10))
ai_mage = Warrior(50, 15, (10,20), (-5,25), (10,15))
ai_healer = Warrior(150, 5, (5,10), (5,15), (10,20))
ai_classes = [ai_knight, ai_mage, ai_healer]
ai = ai_classes[r.randint(0,2)]
randomize_ai(ai)
if ai == ai_knight:
print("\nYou are fighting a knight with ", ai.health,"HP!")
elif ai == ai_mage:
print("\nYou are fighting a mage with ", ai.health,"HP!")
elif ai == ai_healer:
print("\nYou are fighting a healer with ", ai.health,"HP!")
Tip 2: elif is your best friend. If your if statements are mutually exclusive, you can cut down on the complexity of your program, by using elif(which you used successfully in your program, just not always):
if ai == 1:
#code
if ai == 2:
#code
if ai == 3:
#code
# should instead be...
# because ai can't be three values at once
if ai == 1:
#code
elif ai == 2:
#code
elif ai == 3:
#code
Tip 3: The style in which you program is critical to your actual program. A few basic things you should know about coding styles:
Do not use too many # when you are commenting. Instead of ######Displaying HP####### try # Display HP. The latter is more readable for someone else or yourself reading/reviewing your code.
If you have a section header comment, you can try a special commenting style such as:
###########
# CLASSES #
###########
Do not add extra spaces to your code -- this makes your code longer than it needs to and should be. Make your code shorter if it doesn't reduce readability.
To improve the user experience, avoid typos when possible
And whatever you do, be consistent. It's easiest to read, review, and edit code that is consistent in style.
Style is one of those things that just takes experience and practice. You will get better over time.
With these three tips in mind, your final code should look more like:
import random as r
try:
hs = open("highscore.txt","r+")
except:
hs = open("highscore.txt","x")
hs = open("highscore.txt","r+")
try:
score = int(hs.readlines(1)[0])
leader = str(hs.readlines(2)[0])
except:
hs = open("highscore.txt","w")
hs.write("0\nnull")
hs = open("highscore.txt","r")
score = int(hs.readlines(1)[0])
leader = str(hs.readlines(2)[0])
# Introduce and name the player
print ("\nWELCOME TO WONDERLANDS RPG!")
print ("The High Score is:", score, "by", leader)
points = 0
player_name = input ("\nEnter your hero's name: ")
###########
# CLASSES #
###########
class Warrior:
def __init__(self, health, attack_1, attack_2, attack_3, heal):
self.health = health
self.attack_1 = attack_1
self.attack_2 = attack_2 # tuple ie (5,25) representing range for attack value
self.attack_3 = attack_3 # tuple ie (10,20) representing range for attack value
self.heal = heal # tuple ie (10,20) representing range for health value
def attributes(self):
# string containing the attributes of the character
string = "Health: " + str(self.health) + " Attack 1: " + str(self.attack_1) + " Attack 2: " + str(self.attack_2[0]) + "-" + str(self.attack_2[1]) + " Attack 3: " + str(self.attack_3[0]) + "-" + str(self.attack_3[1]) + " Heal: " + str(self.heal[0]) + "-" + str(self.heal[1])
return string
def is_dead(self):
return self.health <= 0
knight = Warrior(100, 10, (5,15), (5,25), (5,10))
mage = Warrior(50, 15, (10,20), (-5,25), (10,15))
healer = Warrior(150, 5, (5,10), (5,15), (10,20))
while True:
print("\n1. Knight: ", knight.attributes())
print("\n2. Mage: ", mage.attributes())
print("\n3. Healer: ", healer.attributes())
player_class = input("\nSelect your class: 1, 2, or 3: ")
if player_class == "1":
player_class = knight
print("You have selected the Knight class.")
break
elif player_class == "2":
player_class = mage
print("You have selected the Mage")
break
elif player_class == "3":
player_class = healer
print("You have selected the Healer")
break
else:
print("Please select a valid class.")
continue
player_heal_max = player_class.health
################################
# Difficulty/Upgrade Functions #
################################
def level_up(player,health_max):
while True:
lv_choice = input("\nWould you like to:\n 1. Increase max health by 20 \n 2. Increase Healing Factor by 5 \n 3. increase your damage by 5\n")
if lv_choice == "1":
health_max += 20
player.health = health_max
return player, health_max
elif lv_choice == "2":
player.heal += (5,5)
player.health = health_max
return player, health_max
elif lv_choice == "3":
player.attack_1 += 5
player.attack_2 += (5,5)
player.attack_3 += (5,5)
player.health = health_max
return player, health_max
else:
print("Please enter in a valid number")
continue
def difficulty(ai,health_max,level):
if level == 1:
return ai
else:
ai.health = health_max + 15 * round(0.5 * level + 0.5)
ai.attack_1 += 5 * round(0.5 * level + 0.5)
ai.attack_2 += (5 * round(0.5 * level + 0.5),5 * round(0.5 * level + 0.5))
ai.attack_3 += (5 * round(0.5 * level + 0.5),5 * round(0.5 * level + 0.5))
ai.heal += (5 * round(0.5 * level + 0.5),5 * round(0.5 * level + 0.5))
return ai
def randomize_ai(ai):
ai.health += r.randint(-20,20)
ai.attack_1 += r.randint(-3,3)
ai.attack_2 += (r.randint(-3,3),r.randint(-3,3))
ai.attack_3 += (r.randint(-3,3),r.randint(-3,3))
ai.heal += (r.randint(-3,3),r.randint(-3,3))
return ai
#############
# Game Loop #
#############
level = 1
print("\n----------------------- GAME START -----------------------")
while True:
# Determining AI Class/Stats
ai_knight = Warrior(100, 10, (5,15), (5,25), (5,10))
ai_mage = Warrior(50, 15, (10,20), (-5,25), (10,15))
ai_healer = Warrior(150, 5, (5,10), (5,15), (10,20))
ai_classes = [ai_knight, ai_mage, ai_healer]
ai = ai_classes[r.randint(0,2)]
randomize_ai(ai)
if ai == ai_knight:
print("\nYou are fighting a knight with ", ai.health,"HP!")
elif ai == ai_mage:
print("\nYou are fighting a mage with ", ai.health,"HP!")
elif ai == ai_healer:
print("\nYou are fighting a healer with ", ai.health,"HP!")
ai_heal_max = ai.health
ai = difficulty(ai, ai_heal_max, level)
# Gameplay Loop
while True:
# Player Attack
player_move = input("\nWould you like to use attack (1), attack (2), attack (3), or heal (4)? ")
print("")
if player_move == "1":
player_damage = player_class.attack_1
ai.health = ai.health - player_damage
print(player_name," did",player_damage,"damage!")
elif player_move == "2":
player_damage = r.randint(player_class.attack_2[0],player_class.attack_2[1])
ai.health = ai.health - player_damage
print(player_name," did",player_damage,"damage!")
elif player_move == "3":
player_damage = r.randint(player_class.attack_3[0],player_class.attack_3[1])
ai.health = ai.health - player_damage
print(player_name," did", player_damage, " damage!")
elif player_move == "4":
player_heal = r.randint(player_class.heal[0],player_class.heal[1])
if player_class.health + player_heal > player_heal_max:
player_class.health = player_heal_max
else:
player_class.health = player_class.health + player_heal
print(player_name," healed for",player_heal,"HP")
else:
print("Please enter in a valid move.")
continue
# Detecting Death
if player_class.is_dead():
break
elif ai.is_dead():
points += player_class.health * level
level += 1
print("You have bested your opponent! You Have",points,"points. \nNow starting level",level)
player_class, player_heal_max = level_up(player_class,player_heal_max)
break
# AI Turn
if ai.health <= (ai_heal_max/5):
ai_move = r.sample(set([1,2,3,4,4,4]), 1)[0]
elif ai.health >= (ai_heal_max*.8):
ai_move = r.sample(set([1,2,3,1,2,3,4]), 1)[0]
elif ai.health == ai_heal_max:
ai_move = r.randint(1,3)
else:
ai_move = r.randint(1,4)
if ai_move == 1:
ai_damage = ai.attack_1
player_class.health = player_class.health - ai_damage
print("Your opponent did",ai_damage,"damage!")
elif ai_move == 2:
ai_damage = r.randint(ai.attack_2[0],ai.attack_2[1])
player_class.health = player_class.health- ai_damage
print("Your opponent did ",ai_damage," damage!")
elif ai_move == 3:
ai_damage = r.randint(ai.attack_3[0],ai.attack_3[1])
player_class.health = player_class.health - ai_damage
print("Your opponent did ", ai_damage," damage!")
elif ai_move == 4:
ai_heal = r.randint(ai.heal[0],ai.heal[1])
if ai.health + ai_heal > ai_heal_max:
ai.health = ai_heal_max
else:
ai.health = ai.health + ai_heal
print("Your opponent healed for ", ai_heal," HP")
# Displaying HP
print("\nYour health is:", player_class.health,"HP")
print("Your opponent's health is ", ai.health," HP ")
# Detecting Death
if player_class.is_dead():
break
elif ai.health <= 0:
points += player_class.health * level
level += 1
print("You have bested your opponent! You Have",points,"points. \nNow starting level",level)
player_class, player_heal_max = level_up(player_class,player_heal_max)
break
# Finishing Game, Checking/Updating High Score
if player_class.health<=0:
print("\nYou Died! :(")
if points > score:
hs = open("highscore.txt","w")
hs.write(str(points))
hs.write("\n")
print("You have the new high score of",points,"!")
hs.write(player_name)
else:
print("\nYou finished with",points,"points.")
print("The high score is:",score,"by",leader)
input("")
hs.close()
break
``` | {
"domain": "codereview.stackexchange",
"id": 35248,
"tags": "python, beginner, python-3.x, role-playing-game, battle-simulation"
} |
System Characterization: multiply imaginary component by scalar | Question: I tried solving a test question that frankly stumped me. If you could explain to me the solution I’d be really grateful.
Given $a, b \in \mathcal{R}$ the system $R_v$ takes the complex input $a + bj$ and returns $a + bvj$. For which values of v the system is linear?
This system accepts both discrete and continuous inputs.
The answer is $v=1$, but I don’t understand why for any other $v$ linearity is not fulfilled. It even seemed trivial to me that it is.
Say I take two complex inputs that I can characterize as $x_1 = a(t) + b(t)j$ and $x_2 = c(t) + d(t)j$. Applying the system to a linear combination would result in $$\psi\{\alpha x_1+ \beta x_2\} = (\alpha a+\beta c)(t) + (\alpha b+ \beta d)(t)vj\,,$$ whereas applying the system to each one separately gives the same thing: $$ \psi\{\alpha x_1 \} + \psi \{\beta x_2 \} = (\alpha a + \alpha bvj) + (\beta c + \beta dvj) $$
so I think I must be missing something here
Answer: I think the problem here is that the system has complex inputs and outputs, so you have to extend the definition of linearity to complex scale factors. Assuming that $F(a+jb) = a+jvb$ and $y = F(x)$, you would have to prove that
$$F(\alpha x_1 + \beta x_2) = \alpha F(x_1) + \beta F(x_2)$$
You have successfully proven that this is the case if the scale coefficients are real ($\alpha,\beta \in \mathbb{R}$). However, for a complex system this must also hold for complex scale coefficients ($\alpha,\beta \in \mathbb{C}$) which is indeed not the case here.
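As a quick sanity check (the input $2+3j$ and the scale factor $\alpha = j$ below are arbitrary choices for the demonstration), a few lines of Python confirm that homogeneity under a complex scale factor only survives for $v=1$:

```python
# Check homogeneity of F(a + jb) = a + j*v*b under a *complex* scale
# factor. The input x and the factor alpha are arbitrary choices.
def F(z, v):
    return z.real + 1j * v * z.imag

x, alpha = 2 + 3j, 1j

for v in (1.0, 2.0):
    lhs = F(alpha * x, v)      # scale first, then apply the system
    rhs = alpha * F(x, v)      # apply the system, then scale
    print(v, lhs == rhs)       # equal only for v = 1
```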
The answer $v=1$ is trivial since the system just turns into an identity. | {
"domain": "dsp.stackexchange",
"id": 11789,
"tags": "discrete-signals, continuous-signals, linear-systems, complex"
} |
How can two charged black holes merge despite electrostatic repulsion? | Question: I have read this question:
Collision of charged black holes
And it made me curious.
I understand that the charged black holes do have negative EM charge, and they repel.
This EM interaction and repulsion between the electron fields around nuclei causes matter to have spatial extent. This EM repulsion is why atoms can't get closer to each other than a certain distance, and why atoms in molecules can't get closer to each other than a certain distance.
This is the reason why matter is 99% space.
Of course, the Heisenberg Uncertainty principle has an effect on this too.
Question:
How can two charged (negative EM charge) black holes merge? How can gravity overcome the EM repulsion and the Heisenberg uncertainty principle?
Do these electron fields pass through each other (merge too) when the holes merge? Does the Heisenberg uncertainty principle and the Pauli exclusion principle for fermions not apply anymore?
Answer: Two charged black holes are very much like two charged droplets: they can merge, as long as the charge is not strong enough to repel them. The result is a bigger hole with the sum of the charges. It is not like the EM repulsion goes to infinity as they approach each other: they have a spatial extent, and the surface electric fields will be roughly following Coulomb's law (with some corrections due to curved spacetime).
Approximately, two black holes of mass $M$ and charge $Q$ each will repel each other if $GM^2/r^2 < kQ^2/r^2$, or $Q/M > \sqrt{G/k}\approx 10^{-10}$ C/kg. For a solar mass black hole that is about $10^{20}$ C. This produces a field at the horizon that is way past the Schwinger limit where the electromagnetic field becomes nonlinear, so it is likely that long before this the whole system destabilizes in a shower of electrons and positrons.
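Plugging in the constants reproduces the orders of magnitude quoted above (a minimal sketch; the constants are standard values, and the solar mass is used as in the text):

```python
import math

# Order-of-magnitude check of the repulsion threshold quoted above.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9         # Coulomb constant, N m^2 C^-2
M_sun = 1.989e30    # kg

ratio = math.sqrt(G / k)          # Q/M above which EM repulsion wins
print(f"Q/M threshold ~ {ratio:.1e} C/kg")           # ~ 8.6e-11, i.e. ~1e-10
print(f"solar-mass hole: ~ {ratio * M_sun:.1e} C")   # ~ 1.7e20, i.e. ~1e20
```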
The Heisenberg uncertainty for macroscopic black holes is so small that it is negligible. Whether black holes can be treated as fermions looks uncertain. | {
"domain": "physics.stackexchange",
"id": 51870,
"tags": "electrostatics, gravity, black-holes, charge"
} |
Does $\int_C B \cdot dr$ have any physical meaning when $C$ is *not* a closed loop? | Question: Question. Does the line integral $\int_C \textbf{B} \cdot d\textbf{r}$ have any physical meaning when $C$ is not a closed loop? That is, I know that Ampere's law asserts that in the case of a static electric field, the line integral of the magnetic field around a closed loop is proportional to the electric current flowing through the loop. But I'm wondering if the line integral has any meaning when the path is just an arbitrary path through the magnetic field, and specifically not a closed loop?
Context. I am a mathematics grad student, TA'ing a vector calculus course. My professor has assigned a recitation activity where we are to calculate several such line integrals with several paths through the magnetic field generated by a wire. So I'm just wondering if there is any physical meaning to such things, as that will help illustrate the purpose of these integrals!
Thanks!
Answer: Not really.
I mean, you can force some artificial meaning onto it: for example, if you need to calculate an integral over a stick perpendicular to the current-carrying wire, $I_1$, you can write something like this: $I_1 + I_2 = \mu_0 J$, where $J$ is the current. $I_2$ is very easy to calculate, it is $\frac{2\pi-\theta}{2\pi}\mu_0 I$, therefore you find $I_1 = \frac{\theta}{2\pi} \mu_0 I$.
It can be generalized, by the way.
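A quick numerical check of the angular fraction (the current, radii, swept angle, and spiral path below are arbitrary choices): for the field of an infinite straight wire, only the swept angle matters, since $\mathbf{B}\cdot d\mathbf{l} = \frac{\mu_0 I}{2\pi r}\, r\, d\varphi$.

```python
import math

# Numerical check that the integral of B . dr along an open path around
# an infinite straight wire depends only on the swept angle theta.
mu0 = 4e-7 * math.pi
I, theta, N = 2.0, math.pi / 3, 200_000

total = 0.0
for n in range(N):
    phi = theta * n / N
    r = 0.05 + 0.02 * phi        # radius varies along the (spiral) path
    # B is azimuthal with |B| = mu0 I / (2 pi r); the azimuthal part of
    # dl is r dphi, and the radial part of dl is perpendicular to B.
    total += (mu0 * I / (2 * math.pi * r)) * (r * theta / N)

expected = theta / (2 * math.pi) * mu0 * I
print(math.isclose(total, expected, rel_tol=1e-6))   # True
```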
You can use such integrals as auxiliary ones. But still, it's quite artificial. It's like if you, for some reason, need to calculate one complex integral of the magnetic field, you can close the loop and calculate the easier one. But then the question arises of why you would calculate the integral in the first place. | {
"domain": "physics.stackexchange",
"id": 83637,
"tags": "electromagnetism, magnetic-fields"
} |
How can I remove a PEG shampoo from a natural horn-comb without damaging it? | Question: I washed by accident a horn-comb with a shampoo that contains PEGs, now how can I remove these types of chemicals from the horn-comb without damaging its surface?
Answer: Soap, water and a dash of elbow grease. PEGs are all surfactants which will wash away with water and if they don't then an alcohol-water mix (vodka) will certainly remove any PEG residue, probably along with some of the natural oils in the horn comb that has built up over time and usage.
This may leave the comb with whitened patches or streaks on it. These are not a PEG residue but fine cracks in the horn, interstices in the layers of keratin that the horn is made of, that used to be filled with oil which the PEG shampoo has removed. Any number of natural oils will improve the appearance of the horn, including, but not limited to, natural scalp oils (i.e. use the comb for its intended purpose), linseed oil (a favourite of cricketers for their bats), castor oil, beeswax, olive oil (a bit sticky) or a light application of any proprietary furniture polish. Choose something unscented if you don't want Mom to find out. And start with the least viscous options you have, these will achieve greater penetration of the fine cracks in the horn and "seal" them with a wax if possible.
I've seen antique tortoiseshell respond very badly to cleaning with detergents, but the damage is really illusory and can be restored with the gentle application of oils and waxes. | {
"domain": "chemistry.stackexchange",
"id": 6622,
"tags": "biochemistry, toxicity"
} |
Show how to do FFT by hand | Question: Say you have two polynomials: $3 + x$ and $2x^2 + 2$.
I'm trying to understand how FFT helps us multiply these two polynomials. However, I can't find any worked out examples. Can someone show me how FFT algorithm would multiply these two polynomials. (Note: there is nothing special about these polynomials, but I wanted to keep it simple to make it easier to follow.)
I've looked at the algorithms in pseudocode, but all of them seem to be have problems (don't specify what the input should be, undefined variables). And surprisingly, I can't find where anyone has actually walked through (by hand) an example of multiplying polynomials using FFT.
Answer: Suppose we use fourth roots of unity, which corresponds to substituting $1,i,-1,-i$ for $x$. We also use decimation-in-time rather than decimation-in-frequency in the FFT algorithm. (We also apply a bit-reversal operation seamlessly.)
In order to compute the transform of the first polynomial, we start by writing the coefficients:
$$ 3,1,0,0. $$
The Fourier transform of the even coefficients $3,0$ is $3,3$, and of the odd coefficients $1,0$ is $1,1$. (This transform is just $a,b \mapsto a+b,a-b$.) Therefore the transform of the first polynomial is
$$ 4,3+i,2,3-i. $$
This is obtained using $X_{0,2} = E_0 \pm O_0$, $X_{1,3} = E_1 \mp i O_1$ (from the twiddle-factor calculation).
Let's do the same for the second polynomial. The coefficients are
$$2,0,2,0.$$
The even coefficients $2,2$ transform to $4,0$, and the odd coefficients $0,0$ transform to $0,0$. Therefore the transform of the second polynomial is
$$ 4,0,4,0. $$
We obtain the Fourier transform of the product polynomial by multiplying the two Fourier transforms pointwise:
$$ 16, 0, 8, 0. $$
It remains to compute the inverse Fourier transform. The even coefficients $16,8$ inverse-transform to $12,4$, and the odd coefficients $0,0$ inverse-transform to $0,0$. (The inverse transform is $x,y \mapsto (x+y)/2,(x-y)/2$.) Therefore the coefficient sequence of the product polynomial is
$$6,2,6,2.$$
This is obtained using $X_{0,2} = (E_0 \pm O_0)/2$, $X_{1,3} = (E_1 \mp i O_1)/2$.
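The whole computation can be replayed in a few lines of Python. For clarity this sketch uses a naive $O(n^2)$ DFT over the $n$-th roots of unity rather than the recursive FFT butterflies, but it produces exactly the intermediate values worked out above:

```python
import cmath

# Naive O(n^2) DFT over the n-th roots of unity (an FFT computes the
# same values recursively via the even/odd decimation used above).
def dft(coeffs, inverse=False):
    n = len(coeffs)
    sign = -1 if inverse else 1
    w = cmath.exp(sign * 2j * cmath.pi / n)   # primitive n-th root of unity
    out = [sum(c * w ** (j * k) for j, c in enumerate(coeffs)) for k in range(n)]
    return [v / n for v in out] if inverse else out

A = [3, 1, 0, 0]                      # 3 + x    (padded to length 4)
B = [2, 0, 2, 0]                      # 2 + 2x^2

FA = dft(A)                           # ~ [4, 3+i, 2, 3-i]
FB = dft(B)                           # ~ [4, 0, 4, 0]
FC = [a * b for a, b in zip(FA, FB)]  # ~ [16, 0, 8, 0]

C = [round(v.real) for v in dft(FC, inverse=True)]
print(C)                              # [6, 2, 6, 2] -> 6 + 2x + 6x^2 + 2x^3
```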
We have obtained the desired answer
$$ (3 + x)(2 + 2x^2) = 6+2x+6x^2+2x^3. $$ | {
"domain": "cs.stackexchange",
"id": 13189,
"tags": "algorithms, fourier-transform, divide-and-conquer"
} |
Bonding in the primary structure of a protein | Question: My textbook says:
The amino group of an amino acid reacts with the carbonyl group of another amino acid at the end of a polypeptide chain. This condensation reaction forms a peptide bond. ... The precise sequence of amino acids in a polypeptide chain is the primary structure of the protein.
Later, it states that
The primary structure of a protein is established by covalent bonds ...
Is the primary structure of a protein governed by covalent bonds or peptide bonds? Are peptide bonds a type of covalent bond?
Any help is greatly appreciated.
Answer: Peptide bonds (also called amide bonds) are definitely covalent bonds.
This is also mentioned in the first line of wikipedia entry: https://en.wikipedia.org/wiki/Peptide_bond | {
"domain": "biology.stackexchange",
"id": 6995,
"tags": "protein-structure"
} |
How to generalize outer product into its integral form assuming continuous basis? | Question: (My current physics study is undergraduate quantum mechanics)
By definition, the inner product is $w^Tu= \sum_i w_iu_i$ and the outer product is $(wu^T)_{ij}=w_iu_j$. According to Griffiths, the inner product integral is defined as $\int_{-\infty}^{\infty} \Psi(x)^*\Phi(x) dx$ and is equivalent to $\langle \Psi |\Phi\rangle $ in terms of wavefunctions. How would I apply such a transformation so that the outer product $|\Psi\rangle\langle \Phi|$ can be written in integral form?
Something I have observed is that $\mathrm{tr}(wu^T) = w^Tu$. But I couldn't find a transformation from trace to an integral.
Answer: We can find the outer product of states in terms of wavefunctions by the usual approach of applying position kets to each state
\begin{equation}
|\Psi\rangle \langle \Phi| \rightarrow \langle x|\Psi\rangle \langle \Phi| y \rangle = \Psi(x)\Phi^*(y)\;.
\end{equation}
No integrals required. We can go back the other way using the transformation
\begin{equation}
\Psi(x)\Phi^*(y) \rightarrow \int\mathrm{d}x\, | x\rangle\Psi(x)\int\mathrm{d}y\,\Phi^*(y) \langle y| = |\Psi\rangle \langle \Phi|
\end{equation}
directly analogous to how it would be done for a simple wavefunction.
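Here is a small numerical illustration (the two Gaussians are arbitrary, unnormalized choices standing in for $\Psi$ and $\Phi$): on a grid, the outer-product kernel is $K(x,y)=\Psi(x)\Phi^*(y)$, and integrating its diagonal reproduces $\langle\Phi|\Psi\rangle$, matching the trace observation in the question.

```python
import numpy as np

# Arbitrary, unnormalized example wavefunctions on a grid (assumption:
# these Gaussians stand in for Psi and Phi).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0) ** 2) * np.exp(0.5j * x)
phi = np.exp(-(x + 1.0) ** 2)

# Kernel of |Psi><Phi| in the position representation: K(x, y) = Psi(x) Phi*(y)
K = np.outer(psi, phi.conj())

# tr(|Psi><Phi|) = integral of K(x, x) dx, which should equal
# <Phi|Psi> = integral of Phi*(x) Psi(x) dx
trace = np.diag(K).sum() * dx
inner = (phi.conj() * psi).sum() * dx
print(np.isclose(trace, inner))   # True
```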
The trace identity you write down can be derived as
\begin{align}
\mathrm{tr}\left(|\Psi\rangle \langle \Phi|\right) &= \sum_i \langle \phi_i |\Psi\rangle \langle \Phi|\phi_i\rangle \\
&= \sum_i \langle \Phi|\phi_i\rangle \langle \phi_i |\Psi\rangle \\
&= \langle \Phi|\Psi\rangle\;.
\end{align}
Where $|\phi_i\rangle$ is some orthonormal basis (it can be position or any other basis you like, due to the basis independence of the trace). | {
"domain": "physics.stackexchange",
"id": 75569,
"tags": "quantum-mechanics, hilbert-space, wavefunction"
} |
Spectral covers and a specific exact short sequence | Question: I have a question about the spectral cover construction of Friedman, Morgan, and Witten (typically used to map a description in heterotic string theory into F-theory). I realise this is a highly specialised topic; I am asking in the hope that there might be some specialists around. I also realise this may be more appropriate in the mathematics StackExchange, and may post a question there later.
Specifically I have a question about the more mathematical FMW paper, `Vector bundles over elliptic fibrations': https://arxiv.org/abs/alg-geom/9709029
Consider the second short exact sequence on page 72, of sheaves on the elliptic fibration $Z$ (the one involving $V_{A,a}$ sheaves). The sheaves appearing in the first two slots of the sequence are, I think, supposed to both be vector bundles (the $V_{A,a}$ sheaves should be vector bundles since they are supposed to correspond to the spectral cover description). But the sheaf in the third slot is, I think, zero (that is, not a trivial bundle, but actually just zero) away from the subspace $D$, and hence is not a vector bundle. How then could both the left and the middle sheaves in the sequence be vector bundles? Additionally, away from the subspace $D$, do we then have an isomorphism between the sheaves in the first two slots? And in the subspace $D$ (where the final sheaf is not just zero), do we just have the ordinary quotient relation between the three sheaves?
Edit:
The short exact sequence in question is, $$0\to V_{A,a}(n)\to V_{A,a}(n-1)\oplus \pi^*L^a\to(\pi^*L^a)|D\to0\,.$$ The space where these bundles (or sheaves) live is an elliptic fibration $\pi:Z\to B$ (and $L^a$, where it appears, is being pulled back by $\pi^*$ to the space $Z$). The $|D$ denotes restriction to a subspace $D$ of $Z$. Here $V_{A,a}(m)$ is a sheaf, and specifically is I believe supposed to be a vector bundle for both values of $m$ seen in the sequence. The final term in the sequence is I think not a vector bundle (it is 0 away from the subspace $D$), and this gives rise to my questions above. (Essentially this is a question about exact sequences of sheaves, but as it appeared in a physics context I thought I'd try my luck finding someone on here who is familiar with this spectral cover construction.)
Answer: The quotient of locally free sheaves is not necessarily locally free, i.e. the first two terms being locally free does not force the third to be locally free, cf. this math.SE post.
You have no guarantee that, given a map $f: E\to F$ of vector bundles, that the quotient $\mathrm{coker}(E\to F)$ (which we would also like to write as $F/\mathrm{im}(E)$) exists as a vector bundle. In other words, the category of vector bundles is not an Abelian category, it does not have all kernels and cokernels. To see this, simply choose any map of vector bundles whose rank on the fibers jumps somewhere. This shows that you should not expect to be able to lift an exact sequence of sheaves to a sequence of vector bundles unless you know all sheaves to be locally free, since the category of sheaves is Abelian so maps of two locally free sheaves will always have kernels and cokernels as sheaves that are not kernels and cokernels as locally free sheaves. | {
"domain": "physics.stackexchange",
"id": 32569,
"tags": "string-theory, mathematical-physics, compactification"
} |
Identify random repetitive patterns | Question: Forgive me if it’s too basic, I finish engineering a while ago.
Given any time series, not periodic, I would like to find any repetitive pattern that is distinct (by some given measurement) and is unknown.
For example given this :
I would like to find patterns 1 and 2 (get their indices in time).
Is FFT going to help anyway?
Without knowing or hard programming an algorithm, is it possible or do I need a serious machine learning?
(Are there practical books about finding patterns with code and not too much math?)
Thanks.
Answer: Finding repeating but not periodically repeating patterns of an unknown template which you expect the algorithm to identify is a hard problem. Nature typically doesn't shut off all the other patterns that may be present for our convenience either.
Signals have sources or generating mechanisms. It would help if you had some idea how your patterns are generated.
The concept of noise can be helpful because what isn't noise might very well be signal.
I would approach a problem as a time series that corresponds to an ODE and try to learn the ODE.
One way to do this would be to construct a phase space representation, that is something like $x(t)$ and $y(t)=dx(t)/dt$. If the noise is low using central differences can give a good approximation to the derivative of your time series. A phase space plot might identify orbits and limit cycles produced by your pattern.
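As a sketch of that idea (the test signal below is an arbitrary stand-in for real data), central differences give the $y(t)=dx/dt$ coordinate of the phase-space plot:

```python
import numpy as np

# Phase-space sketch: plot x(t) against dx/dt estimated with central
# differences. The test signal is an arbitrary stand-in for real data.
t = np.linspace(0.0, 20.0, 4001)
dt = t[1] - t[0]
x = np.sin(t) + 0.3 * np.sin(3.1 * t)

# Central difference: dx/dt ~ (x[n+1] - x[n-1]) / (2 dt)
dxdt = (x[2:] - x[:-2]) / (2 * dt)

phase = np.column_stack([x[1:-1], dxdt])   # points of the phase portrait
print(phase.shape)                          # (3999, 2)
```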
These ideas are common in chaos theory and I would search Google accordingly
As an example https://en.m.wikipedia.org/wiki/Van_der_Pol_oscillator | {
"domain": "dsp.stackexchange",
"id": 6837,
"tags": "fourier-transform, time-series, pattern"
} |
Forge-flattening | Question: Is there an easy way to estimate how much energy is needed to forge a cube of iron into a thin square sheet with a hammer? I suppose I should integrate the force $$F = \text{yield strength} \times \text{area}(h)$$ through the height, with $h_0$ the height of the initial cube and $h_1$ the resulting sheet's thickness. Am I right? Also, where can I look for the compressive yield strength of iron at high temperatures? I've read it's better to forge it at $1000-1100\ °\text{C}$
Answer: Your suggestion is reasonable as an estimate for pressing the metal; hammering would require significantly more energy (see below). The volume $V=Ah$ of metal is constant so as its thickness $h$ decreases its area $A$ increases. A graph of the temperature dependence of Yield Strength for several metals is given in Engineering ToolBox.
At room temperature the Yield Strength is about $p=50MPa$. The work required to compress a cube of side $h_0$ to a plate of thickness $h_1$ is $ph_0^3 \ln{\frac{h_0}{h_1}}$. For $h_0=100mm$ and $h_1=1mm$ this work is approx. $230kJ$.
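The $230\,kJ$ figure can be checked directly (a minimal sketch using the numbers quoted above):

```python
import math

# Check of the pressing-work estimate, using the numbers quoted above.
p = 50e6      # yield strength at room temperature, Pa
h0 = 0.100    # initial cube side, m
h1 = 0.001    # final sheet thickness, m

# Constant volume V = h0^3, so area A(h) = h0^3 / h and
# W = integral of p A(h) dh from h1 to h0 = p h0^3 ln(h0/h1)
work = p * h0 ** 3 * math.log(h0 / h1)
print(f"{work / 1e3:.0f} kJ")   # ~ 230 kJ
```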
An upper bound is the energy required to heat the metal to its melting point then change it from solid to liquid. Gravity would then re-shape the metal without further input of energy. For $h_0^3=0.001m^3$ of iron ($\approx 0.8kg$) this is about $375kJ$ when starting from $1100^{\circ}C$ and $760kJ$ when starting from room temperature $25^{\circ}C$. This is the same order of magnitude as the Yield Strength calculation.
For a lower bound estimate you could calculate the energy of the bonds which need to be broken in order to increase the surface area of the metal between its initial and final shapes. Bond energy for iron is about $120 kJ/mol$.
Hammering also dissipates energy through heating with a much smaller amount lost as sound. Whether this non-useful energy is significant depends on whether the hammer blows exceed the yield strength of $50MPa$ and by how much. The proportion of energy dissipated as sound increases as the surface area of the metal plate increases but remains small. Vibration energy is eventually lost mostly through heating of dense metal rather than radiation of sound in air.
For an interesting discussion of hammering see Can hammer blows increase workpiece temperature? which states that $90-95 \text{%}$ of the impact energy goes into raising temperature. It concludes that manual hammering does not cause any significant rise in temperature because the metal anvil quickly conducts away the heat generated. But power hammering can cause a significant increase in appropriate circumstances. | {
"domain": "physics.stackexchange",
"id": 50568,
"tags": "homework-and-exercises, newtonian-mechanics, energy, everyday-life"
} |
How to prove $L \cdot L^{*} = L^{+}$ | Question: How can one formally prove
$L \cdot L^{*} = L^{+}$
It looks obvious to me since with the concatenation you get rid of $\varepsilon$, but I cannot think of a formal proof through induction or something.
Answer: To summarize the comments.
A. Sometimes $L^+$ is defined to be $L\circ L^*$, where '$\circ$' is the concatenation operator.
B. Assume the following definitions,
$L^* \equiv \{\varepsilon\} \cup L \cup L^2 \cup \cdots$
$L^+\equiv L \cup L^2 \cup \cdots$
Then, by the properties of the concatenation operator, $L\circ L^i = L^{i+1}$. Explicitly, for the case of $i=0$ it also holds that $L\circ\{\varepsilon\} = L$.
Then,
$$\begin{align}
L\circ L^* &= L \circ \left( \{\varepsilon\} \cup L \cup L^2 \cup \cdots \right )
\\ &= (L\circ \{\varepsilon\}) \cup (L\circ L) \cup (L \circ L^2) \cup \cdots \\ &= L \cup L^2 \cup \cdots \\ &\equiv L^+
\end{align}$$
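The identity can also be sanity-checked by brute force on words up to a length bound (the sample language $L=\{a, bb\}$ and the bound are arbitrary choices for the demonstration):

```python
# Brute-force check of L . L* = L+ restricted to words of length <= MAX.
def concat(X, Y):
    return {x + y for x in X for y in Y}

def star(L, max_len):
    result, power = {""}, {""}
    while True:
        power = {w for w in concat(power, L) if len(w) <= max_len}
        if power <= result:          # no new words: powers of L exhausted
            return result
        result |= power

L = {"a", "bb"}
MAX = 6

lhs = {w for w in concat(L, star(L, MAX)) if len(w) <= MAX}
rhs = star(L, MAX) - {""}            # L+ truncated (valid since eps not in L)
print(lhs == rhs)                    # True
```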
The only missing part is to explain the second transition--The distributivity of the concatenation operator over unions. | {
"domain": "cs.stackexchange",
"id": 615,
"tags": "formal-languages, regular-languages"
} |
Android - Simple multi-activity app | Question: This is a task from lab classes at my university. Problem statement was to showcase following features:
Multiple activities in app.
Passing data from one activity to another.
Changing the behavior of the back button in one of the activities.
Getting picture from camera and displaying it in app.
Binding callback action in XML file instead of Java code.
It's based heavily on examples, but I wonder if I can get any general advice about code quality.
Whole project on GitHub:
https://github.com/rogalski/tmlab2
And Java and layout code:
MyActivity.java:
package lab.tm.rogalski.lab2;
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
public class MyActivity extends Activity {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
setListeners();
}
@Override
public void onResume() {
EditText editText = (EditText) findViewById(R.id.TextToSend);
editText.setText("");
super.onResume();
}
private void setListeners() {
Button sendButton = (Button) findViewById(R.id.SendBtn);
sendButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
EditText editText = (EditText) findViewById(R.id.TextToSend);
String data = editText.getText().toString();
sendIntent(data);
}
});
}
private void sendIntent(String data) {
Intent intent = new Intent(this, SecondActivity.class);
intent.putExtra("data", data);
startActivity(intent);
}
}
SecondActivity.java
package lab.tm.rogalski.lab2;
import android.app.Activity;
import android.content.Intent;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
public class SecondActivity extends Activity {
    static final int REQUEST_TAKE_PHOTO = 1;
    String mCurrentPhotoPath;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.second);
        retrieveIntentData();
    }

    private void retrieveIntentData() {
        Intent intent = getIntent();
        TextView target = (TextView) findViewById(R.id.TextDestination);
        target.setText(intent.getStringExtra("data"));
    }

    private void showPicture() {
        if (mCurrentPhotoPath == null)
            return;
        ImageView iv = (ImageView) findViewById(R.id.imageView);
        iv.setImageBitmap(BitmapFactory.decodeFile(mCurrentPhotoPath));
    }

    @Override
    public void onBackPressed() {
        Toast.makeText(this, "Back button disabled. Use a button above.", Toast.LENGTH_SHORT).show();
        // super.onBackPressed();
    }

    public void onBtnClick(View v) {
        Intent intent = new Intent(this, MyActivity.class);
        startActivity(intent);
    }

    public void onTakePhotoBtnClick(View v) {
        dispatchTakePictureIntent();
    }

    private void dispatchTakePictureIntent() {
        Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (takePictureIntent.resolveActivity(getPackageManager()) == null) {
            Toast.makeText(this, "Camera not detected.", Toast.LENGTH_SHORT).show();
            return; // no camera app available, so don't try to launch the intent
        }
        File photoFile = null;
        try {
            photoFile = createImageFile();
        } catch (IOException ex) {
            Log.d("TAG", Log.getStackTraceString(ex));
            Toast.makeText(this, "Failed to create image file.", Toast.LENGTH_SHORT).show();
        }
        if (photoFile != null) {
            takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile));
            startActivityForResult(takePictureIntent, REQUEST_TAKE_PHOTO);
        }
    }

    private File createImageFile() throws IOException {
        String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
        String imageFileName = "JPEG_" + timeStamp + "_";
        File storageDir = getApplicationContext().getExternalFilesDir(Environment.DIRECTORY_PICTURES);
        if (isExternalStorageWritable() && !storageDir.isDirectory()) {
            storageDir.mkdirs();
        }
        File image = File.createTempFile(imageFileName, ".jpg", storageDir);
        mCurrentPhotoPath = image.getAbsolutePath();
        return image;
    }

    public boolean isExternalStorageWritable() {
        String state = Environment.getExternalStorageState();
        return Environment.MEDIA_MOUNTED.equals(state);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_TAKE_PHOTO && resultCode == RESULT_OK) {
            showPicture();
        }
    }
}
main.xml:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
>
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/TextToSend" android:layout_gravity="center_horizontal"/>
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Send to another activity"
android:id="@+id/SendBtn" android:layout_gravity="center_horizontal"/>
</LinearLayout>
second.xml:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAppearance="?android:attr/textAppearanceLarge"
android:id="@+id/TextDestination" android:layout_gravity="center_horizontal"/>
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Go Back To Main Activity"
android:id="@+id/GoBackBtn" android:layout_gravity="center_horizontal" android:onClick="onBtnClick"/>
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Take Picture"
android:id="@+id/button" android:layout_gravity="center_horizontal" android:onClick="onTakePhotoBtnClick"/>
<ImageView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/imageView" android:layout_gravity="center_horizontal"/>
</LinearLayout>
Answer: There are just a couple of small things you can improve on, but most of them are personal preferences.
The first thing you should do is get used to keeping all the strings in the strings.xml file in the values folder. This is very useful for staying organized and for localization.
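For illustration, moving the hard-coded messages into string resources could look like the sketch below; the resource names are made up for the example:

```xml
<!-- res/values/strings.xml — hypothetical resource names -->
<resources>
    <string name="back_disabled">Back button disabled. Use a button above.</string>
    <string name="camera_not_detected">Camera not detected.</string>
    <string name="image_file_error">Failed to create image file.</string>
</resources>
```

In code you would then write something like `Toast.makeText(this, R.string.camera_not_detected, Toast.LENGTH_SHORT).show();`, and in layouts reference the resource with `android:text="@string/back_disabled"` instead of a literal.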
IDs of elements in the XML conventionally use camelCase, as in imageView. If you don't like that, I suggest at least sticking to one convention.
I suggest avoiding very short variable names (like v for the Views). In long methods this can get messy. The only case where I find it reasonable is the index in a loop.
In the XML, fill_parent has been deprecated since API 8; you should use match_parent everywhere.

{
  "domain": "codereview.stackexchange",
  "id": 13548,
  "tags": "java, android"
}
When and where is a black hole formed on the Penrose diagram

Question: In the standard Penrose diagram of a black hole forming from collapse (either a star or a matter shell), where can we locate the event at which the black hole forms? Is it just the point at which the matter world line crosses the event horizon?
Also, when is the first Hawking quantum emitted after the BH forms, and where can we locate it on the Penrose diagram?
Answer: This is actually two questions. I'll answer them in order.
The answer to the first depends strongly on exactly what you mean by a black hole. The most commonly used defining feature of a black hole is the existence of an event horizon. However, the definition of an event horizon is very non-local and in particular requires full knowledge of the future of the spacetime. This leads to some peculiar results when trying to define "when" the event horizon formed. For example, in the Penrose diagram of a collapsing shell, the event horizon appears in the interior (flat) part of the spacetime "before" the shell crosses its own Schwarzschild radius.
A more local definition of the formation of a black hole is the appearance of an apparent horizon, a boundary on a time slice beyond which all light rays "point inwards". Black hole singularity theorems link the formation of an apparent horizon to the formation of a singularity in the future, while the (weak) cosmic censorship conjecture implies that an apparent horizon must always lie inside an event horizon. These points combined make the appearance of an apparent horizon a good indicator of "when" a black hole is formed. This is also what most numerical relativity simulations use to detect the formation of black hole horizons.
However, the notion of an apparent horizon depends on the time slicing of the spacetime, making its position not fully invariant (and therefore its location in the Penrose diagram not well defined). Still, for somewhat "sane" slicings of a collapsing-shell spacetime, the formation of an apparent horizon coincides with the shell crossing its own Schwarzschild radius.
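For reference, the Schwarzschild radius referred to here is the standard

```latex
r_s = \frac{2GM}{c^2},
```

so "the shell crossing its own Schwarzschild radius" means the collapsing shell of total mass $M$ passing inside $r_s$.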
As for where in the Penrose diagram the first Hawking radiation is emitted: this is not well-defined within semi-classical gravity. Hawking radiation is derived through an essentially global argument, i.e. its existence is established only at future null infinity in the Penrose diagram. You can trace the Hawking modes back using geometric optics, in which case you will find that (in an eternal BH spacetime) the modes originated inside the white hole region in the past. In a more physically relevant spacetime describing collapse, they will always stay outside the apparent horizon.
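As a side note: while the emission point is not well-defined, the spectrum that does arrive at future null infinity is thermal, with the standard Hawking temperature

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
```

set only by the mass $M$ of the hole.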
To truly say where/when Hawking radiation originates, one would need a full quantum-gravity description of the process, which we currently don't have.

{
  "domain": "physics.stackexchange",
  "id": 63494,
  "tags": "general-relativity, black-holes"
}