Printing command line arguments in C++
Question: I'm learning C++ and here I've written a little toy program that is supposed to list all the arguments passed to it through the command line. #include <iostream>; using namespace std; int main (int argc, char *argv[]) { for (int number = 1; number < argc; ++number) { cout << argv[number] << endl; } return 0; } It is possible to use indexing instead and rewrite it in the following way: #include <iostream> using namespace std; int main (int argc, char *argv[]) { int i; for (int j = 1; j < argc; ++j) { i = 0; while (argv[j][i]) { cout << argv[j][i]; ++i; } cout << endl; } return 0; } I personally like the first version more. It doesn't need one more variable and it doesn't need one more cycle, so making use of pointers here seems like a good idea: it helps improve readability and as far as I know performance. Is this actually a good alternative to the second version that uses indexing? And if so, how could it be improved? Answer: Is this actually a good alternative to the second version that uses indexing? And if so, how could it be improved? I'm not sure what you're asking here. Obviously the first version is fine; obviously the second version is pretty silly (because it does the same thing as the first version but in a more confusing and verbose way). As for "using indexing", doesn't the first version also use indexing? What do you think argv[i] is, if not "indexing"? It sure seems like you've got the hang of "indexing" well enough. :) Your second program is also silly in that it uses a while loop instead of a for loop, even though you have all three pieces of a for-loop (initialization, condition, and increment) in close proximity. Why didn't you use a for loop? Just for practice with different kinds of loops? In that case, I'd dock you points for not using a do-while loop! ;) To nitpick as long as I'm here: Everyone will tell you (so you might as well start listening now) that using namespace std is a bad habit. Write std::cout and std::endl explicitly. 
You have a stray ; on line 1 — a transcription error, since the compiler wouldn't accept it if you actually tried to compile that. Your second program uses saner loop index names: i and j. Naming a loop control variable number is more likely to confuse the reader than help them. Explicit return 0 from main is unnecessary in C++ (and also in C, as of 18 years ago). Omitting it from simple programs like this can help the reader focus on the important stuff instead of the boilerplate. Putting it all together: #include <iostream> int main(int argc, char **argv) { for (int i = 1; i < argc; ++i) { std::cout << argv[i] << std::endl; } }
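As an aside (my sketch, not part of the original answer): the same skip-the-program-name idiom in Python, for comparison with the C++ loop that starts at argv[1].

```python
import sys

# argv[0] is the program name in Python too, so slice it off,
# just as the C++ loop starts its index at 1.
def collect_args(argv):
    """Return the command line arguments without the program name."""
    return list(argv[1:])

for arg in collect_args(sys.argv):
    print(arg)
```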
{ "domain": "codereview.stackexchange", "id": 24834, "tags": "c++" }
How do I know if my Random Forest Regressor Model is overfitted?
Question: I'm creating a Random Forest Regressor model with a small dataset (30 data points). I tried other models but RF was the best one; however, after applying GridSearchCV I got that the training set error is 0 and the test set error is 0.12, and I don't know if that means that my model is overfitted. Could anyone please help me with this? Thanks Answer: Hi and welcome to StackExchange! First of all, your dataset is truly, extremely small. Maybe someone can correct me, but I would say 30 points is so small that using RandomForest is not appropriate. That aside, overfitting is when your test set performance is worse than your training set performance, due to the model fitting itself to noise in the training set. In most cases, you will see SOME degree of this (test set performance worse than training set). However, the question is how much. In your case, you have basically a "perfect score" on your training set - this will basically ALWAYS signify some degree of overfitting (in the real world, there's pretty much no way a model can ever "truly" be getting a perfect score unless it's overfitting). This is confirmed by the considerably larger error in the test set. The question to you is: Yes, you have overfitting. But is this degree of overfitting acceptable? Is 0.12 error on the test set good enough for you? If it is, then you don't need to worry too much. If not, you need to make some changes! If that's too much error for you, I would HIGHLY recommend using a simpler method (some form of regression, linear or logistic). Your dataset is too small for such complex algorithms to do well.
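To illustrate the "perfect training score" symptom without any particular library, here is a hedged sketch on synthetic data (sizes and noise level are made up): a 1-nearest-neighbour regressor memorises its 30 training points, so its training error is exactly 0 while its test error is not, which is the same pattern the answer flags as overfitting.

```python
import numpy as np

# Synthetic data: 30 training points, as in the question (made-up features).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(30, 3))
y_train = X_train[:, 0] + 0.3 * rng.normal(size=30)
X_test = rng.normal(size=(10, 3))
y_test = X_test[:, 0] + 0.3 * rng.normal(size=10)

def predict_1nn(X):
    """Predict each point with the label of its closest training point."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

# The model memorises the training set: training error is exactly 0,
# test error is not. A large gap like this is the overfitting signature.
train_mse = float(np.mean((predict_1nn(X_train) - y_train) ** 2))
test_mse = float(np.mean((predict_1nn(X_test) - y_test) ** 2))
print(train_mse, test_mse)
```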
{ "domain": "ai.stackexchange", "id": 3388, "tags": "regression, overfitting, random-forests" }
Down with FizzBuzz... LOL
Question: This post is the result of reading through and following the LOLCODE Specification 1.2 ("smoking the manual", right?), and writing and executing my code on compileonline.com. My "hello world" was going to be a fizzbuzz. I like it because it nicely illustrates the basics of a language - variables, operators, looping, conditionals, and the likes. As far as naming goes, the specification allows for a wide variety, so I decided to go PascalCase all the way, so as to ease up the reading a bit - otherwise the language is a bit harsh on the eyes, to say the least. I'm curious about how idiomatic my code is - this is my very first and probably last lolcode program. Did I overlook language constructs that would help readability? How's naming? Indentation? Any issues? HAI 1.2 I HAS A Fizz ITZ 3 I HAS A Buzz ITZ 5 IM IN YR FrootLoopz UPPIN YR Frootz TIL BOTH SAEM Frootz AN 100 BTW Frootz runs 0 through 99 I HAS A Froot ITZ SUM OF Frootz AN 1 BTW Froot will run 1 through 100 BOTH SAEM 0 AN MOD OF Froot AN PRODUKT OF Fizz AN Buzz O RLY? YA RLY Froot R "FizzBuzz" NO WAI BOTH SAEM 0 AN MOD OF Froot AN Fizz O RLY? YA RLY Froot R "Fizz" NO WAI BOTH SAEM 0 AN MOD OF Froot AN Buzz O RLY? YA RLY Froot R "Buzz" OIC OIC OIC VISIBLE Froot IM OUTTA YR FrootLoopz VISIBLE "DOWN WITH FIZZBUZZ LOL" KTHXBYE Answer: I like how you have declared the Fizz and Buzz 'constants'. It makes the code more readable. The comments are useful too, though, for consistency, the second comment should be changed from: BTW Froot will run 1 through 100 to BTW Froot runs 1 through 100 The idea of generating Froot from Frootz is also good, because it reduces the computations later in the code. I only wish you used a better name than Frootz because it conflicts with Froot. The similarity is.... uncanny. I would suggest the name Loopz: I HAS A Loopz ITZ SUM OF Frootz AN 1 This will reduce the ambiguity, and improve the readability and maintainability. 
Finally, it is good that you put the Fizz modulo check before the Buzz check because that improves performance. Many people put buzz first, but that means the modulo-5 check happens, and that fails more often than the modulo-3 check, so people who do the Buzz-loop first tend to do about 20% more conditional checks. All in all, for a first go, I am very impressed with your result. I can't wait until you implement a fim++ interpreter in lolcode. Till then!
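The check-count claim can be put to a quick (hedged) test; the script below is mine, not the reviewer's. It counts modulo checks over 1..100 for both orderings, with the combined Fizz-times-Buzz check done first as in the LOLCODE above.

```python
def count_checks(order):
    """Count modulo checks for a FizzBuzz that tests n % 15 first,
    then tries each divisor in the given order until one matches."""
    checks = 0
    for n in range(1, 101):
        checks += 1               # the combined PRODUKT OF Fizz AN Buzz check
        if n % 15 == 0:
            continue
        for d in order:
            checks += 1
            if n % d == 0:        # short-circuit once a divisor matches
                break
    return checks

fizz_first = count_checks([3, 5])  # Fizz (mod 3) before Buzz (mod 5)
buzz_first = count_checks([5, 3])
print(fizz_first, buzz_first)      # Buzz-first performs more checks
```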
{ "domain": "codereview.stackexchange", "id": 9320, "tags": "beginner, fizzbuzz, lolcode" }
Simulate round-robin tournament draw
Question: I decided to implement the round robin algorithm in Python. My code takes a list of teams as input and prints the schedule. This is my first try to write something on my own after taking some online courses, so I am absolutely sure that this code must be significantly improved. Here it is: import random def simulate_draw(teams): if len(teams) % 2 == 0: simulate_even_draw(teams) else: simulate_odd_draw(teams) def simulate_even_draw(teams): dic = {} for i in range(len(teams)): dic[i] = teams[i] games = [] arr1 = [i+1 for i in range(int(len(teams)/2))] arr2 = [i+1 for i in range(int(len(teams)/2), len(teams))][::-1] for i in range(len(teams)-1): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) zipped = list(zip(arr1, arr2)) games.append(zipped) zipped = [] for game in games: for gm in list(game): r = random.sample(gm, len(gm)) print(dic[r[0]-1] + ' plays ' + dic[r[1]-1]) def simulate_odd_draw(teams): dic = {} for i in range(len(teams)): dic[i] = teams[i] dic[i+1] = '' games = [] arr1 = [i+1 for i in range(int((len(teams)+1)/2))] arr2 = [i+1 for i in range(int((len(teams)+1)/2), len(teams)+1)][::-1] for i in range(len(teams)): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) zipped = list(zip(arr1, arr2)) games.append(zipped) zipped = [] for game in games: for gm in list(game): r = random.sample(gm, len(gm)) if len(teams)+1 not in r: print(dic[r[0]-1] + ' plays ' + dic[r[1]-1]) I think that big blocks of code that largely repeat themselves inside 2 functions may be united in one function, but I am not sure how to implement it. Answer: Making the code testable and tested The first step to improve your code is to try to make it testable. By doing so, you usually have to deal with Separation of Concerns: in your case, you have to split the logic doing the output from the logic computing games. 
The easiest way to do so is to slightly rewrite the simulate_XXX functions to return values instead of printing them. Once it is done, you can easily write tests for the function computing the games (in order to make this easier to implement, I've extracted out the randomising part as well). At this stage, we have something like: import random def simulate_draw(teams): """Return the list of games.""" if len(teams) % 2 == 0: return simulate_even_draw(teams) else: return simulate_odd_draw(teams) def simulate_even_draw(teams): """Return the list of games.""" matches = [] dic = {} for i in range(len(teams)): dic[i] = teams[i] games = [] arr1 = [i+1 for i in range(int(len(teams)/2))] arr2 = [i+1 for i in range(int(len(teams)/2), len(teams))][::-1] for i in range(len(teams)-1): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) zipped = list(zip(arr1, arr2)) games.append(zipped) zipped = [] for game in games: for gm in list(game): r = gm # remove randomness for now - random.sample(gm, len(gm)) a, b = dic[r[0]-1], dic[r[1]-1] matches.append((a, b)) # print(a + ' plays ' + b) return matches def simulate_odd_draw(teams): """Return the list of games.""" matches = [] dic = {} for i in range(len(teams)): dic[i] = teams[i] dic[i+1] = '' games = [] arr1 = [i+1 for i in range(int((len(teams)+1)/2))] arr2 = [i+1 for i in range(int((len(teams)+1)/2), len(teams)+1)][::-1] for i in range(len(teams)): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) zipped = list(zip(arr1, arr2)) games.append(zipped) zipped = [] for game in games: for gm in list(game): r = gm # remove randomness for now - random.sample(gm, len(gm)) if len(teams)+1 not in r: a, b = dic[r[0]-1], dic[r[1]-1] matches.append((a, b)) # print(a + ' plays ' + b) return matches def displays_simulated_draws(teams): """Print the list of games.""" for gm in simulate_draw(teams): a, b = random.sample(gm, len(gm)) print(a + ' plays ' + b) def 
test_simulate_draw(): """Small tests for simulate_draw.""" # TODO: Use a proper testing framework TESTS = [ ([], []), (['A'], []), (['A', 'B', 'C', 'D'], [('A', 'C'), ('D', 'B'), ('A', 'B'), ('C', 'D'), ('A', 'D'), ('B', 'C')]), (['A', 'B', 'C', 'D', 'E'], [('A', 'E'), ('B', 'C'), ('A', 'D'), ('E', 'C'), ('A', 'C'), ('D', 'B'), ('A', 'B'), ('D', 'E'), ('B', 'E'), ('C', 'D')]), ] for teams, expected_out in TESTS: # print(teams) ret = simulate_draw(teams) assert ret == expected_out if __name__ == '__main__': test_simulate_draw() displays_simulated_draws(['A', 'B', 'C', 'D']) Now we can start improving the code in a safer way. Remove what's not required dic[i+1] = '' is not required, we can remove it. Also, resetting zipped to the empty list is not required, we can remove it. Maybe we could get rid of zipped altogether. Finally, we call for gm in list(game) when game is already a list. We can remove the call to list. Loop like a native I highly recommend Ned Batchelder's talk "Loop like a native" about iterators. One of the simplest takeaways is that whenever you're doing range(len(iterable)), you can probably do things in a better way: more concise, clearer and more efficient. In your case, we could have: for i in range(len(teams)): dic[i] = teams[i] replaced by for i, team in enumerate(teams): dic[i] = team And we could do: for _ in teams: instead of for i in range(len(teams)) (Unfortunately, this can hardly be adapted to the "even" situation) Note: "_" is a usual variable name for values one does not plan to use. Dict comprehension The dictionary initialisation you perform via dic[index] = value in a loop could be done using the dictionary comprehension syntactic sugar. 
Instead of: dic = {} for i, team in enumerate(teams): dic[i] = team you can write: dic = {i: team for i, team in enumerate(teams)} Now it is much more obvious; it also corresponds to: dic = dict(enumerate(teams)) Finally, we can ask ourselves how we use this dictionary: the answer is "to get the team at a given index". Do we really need a dictionary for this? I do not think so. We can get rid of the dic variable and use teams directly. At this stage, we have: import random def simulate_draw(teams): """Return the list of games.""" if len(teams) % 2 == 0: return simulate_even_draw(teams) else: return simulate_odd_draw(teams) def simulate_even_draw(teams): """Return the list of games.""" matches = [] games = [] half_len = int(len(teams)/2) arr1 = [i+1 for i in range(half_len)] arr2 = [i+1 for i in range(half_len, len(teams))][::-1] for i in range(len(teams)-1): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) games.append(list(zip(arr1, arr2))) for game in games: for gm in game: r = gm # remove randomness for now - random.sample(gm, len(gm)) a, b = teams[r[0]-1], teams[r[1]-1] matches.append((a, b)) # print(a + ' plays ' + b) return matches def simulate_odd_draw(teams): """Return the list of games.""" matches = [] games = [] half_len = int((len(teams)+1)/2) arr1 = [i+1 for i in range(half_len)] arr2 = [i+1 for i in range(half_len, len(teams)+1)][::-1] for i in range(len(teams)): arr1.insert(1, arr2[0]) arr2.append(arr1[-1]) arr2.remove(arr2[0]) arr1.remove(arr1[-1]) games.append(list(zip(arr1, arr2))) for game in games: for gm in game: r = gm # remove randomness for now - random.sample(gm, len(gm)) if len(teams)+1 not in r: a, b = teams[r[0]-1], teams[r[1]-1] matches.append((a, b)) # print(a + ' plays ' + b) return matches def displays_simulated_draws(teams): """Print the list of games.""" for gm in simulate_draw(teams): a, b = random.sample(gm, len(gm)) print(a + ' plays ' + b) def test_simulate_draw(): """Small tests for 
simulate_draw.""" # TODO: Use a proper testing framework TESTS = [ ([], []), (['A'], []), (['A', 'B', 'C', 'D'], [('A', 'C'), ('D', 'B'), ('A', 'B'), ('C', 'D'), ('A', 'D'), ('B', 'C')]), (['A', 'B', 'C', 'D', 'E'], [('A', 'E'), ('B', 'C'), ('A', 'D'), ('E', 'C'), ('A', 'C'), ('D', 'B'), ('A', 'B'), ('D', 'E'), ('B', 'E'), ('C', 'D')]), ] for teams, expected_out in TESTS: # print(teams) ret = simulate_draw(teams) assert ret == expected_out if __name__ == '__main__': test_simulate_draw() displays_simulated_draws(['A', 'B', 'C', 'D']) The right tool for the task The part: arr2.remove(arr2[0]) arr1.remove(arr1[-1]) could/should probably be written with pop: arr2.pop(0) arr1.pop() And now, these lines can be merged with arrXX.append(arrYYY[ZZ]): for i in range(len(teams)-1): arr1.insert(1, arr2.pop(0)) arr2.append(arr1.pop()) games.append(list(zip(arr1, arr2))) Removing useless steps A loop is used to fill an array. Another one is used to iterate over the array. We could try to use a single loop to do everything (disclaimer: this is not always a good idea as far as readability goes). This removes the need for a few calls to list. 
At this stage, we have: def simulate_even_draw(teams): """Return the list of games.""" matches = [] half_len = int(len(teams)/2) arr1 = [i+1 for i in range(half_len)] arr2 = [i+1 for i in range(half_len, len(teams))][::-1] for i in range(len(teams)-1): arr1.insert(1, arr2.pop(0)) arr2.append(arr1.pop()) for gm in zip(arr1, arr2): matches.append((teams[gm[0]-1], teams[gm[1]-1])) return matches def simulate_odd_draw(teams): """Return the list of games.""" matches = [] half_len = int((len(teams)+1)/2) arr1 = [i+1 for i in range(half_len)] arr2 = [i+1 for i in range(half_len, len(teams)+1)][::-1] for i in range(len(teams)): arr1.insert(1, arr2.pop(0)) arr2.append(arr1.pop()) for gm in zip(arr1, arr2): if len(teams)+1 not in gm: matches.append((teams[gm[0]-1], teams[gm[1]-1])) return matches Better indices You generate a list of indices using i+1 and then use val - 1 when you use them. You can make your life easier at both ends. Iterable unpacking Instead of using indices to get elements from an iterable with a known number of elements, you can use iterable unpacking. You'd get def simulate_even_draw(teams): """Return the list of games.""" half_len = int(len(teams)/2) arr1 = [i for i in range(half_len)] arr2 = [i for i in range(half_len, len(teams))][::-1] matches = [] for i in range(len(teams)-1): arr1.insert(1, arr2.pop(0)) arr2.append(arr1.pop()) for a, b in zip(arr1, arr2): matches.append((teams[a], teams[b])) return matches def simulate_odd_draw(teams): """Return the list of games.""" half_len = int((len(teams)+1)/2) arr1 = [i for i in range(half_len)] arr2 = [i for i in range(half_len, len(teams)+1)][::-1] matches = [] for i in range(len(teams)): arr1.insert(1, arr2.pop(0)) arr2.append(arr1.pop()) for a, b in zip(arr1, arr2): if len(teams) not in (a, b): matches.append((teams[a], teams[b])) return matches True divisions Instead of using "/" and converting the float result to int, you can use "//", which is an integer division. 
Another way to compute indices We could write something like: indices = list(range(len(teams))) half_len = len(indices)//2 arr1 = indices[:half_len] arr2 = indices[:half_len-1:-1] and indices = list(range(len(teams)+1)) half_len = len(indices)//2 arr1 = indices[:half_len] arr2 = indices[:half_len-1:-1] Although, if we don't care about order, we could use the more direct: arr1 = indices[:half_len] arr2 = indices[half_len:] Remove the duplicated logic Don't repeat yourself is a principle of software development that you could easily apply here. Indeed, we have 2 functions that look very similar. This is trickier than expected and I have to go. I may continue another day. Batteries included The Python standard library contains many useful things. Among them, we have the very interesting module itertools, which itself contains combinations, which is what you want.
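To sketch where removing the duplicated logic could end up, here is a compact "circle method" in a single function (the name and structure are my own, not from the review); a dummy None entry absorbs the bye when the team count is odd, mirroring simulate_odd_draw.

```python
from itertools import combinations

def round_robin(teams):
    """Yield one list of (home, away) pairings per round."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)                # bye marker for odd team counts
    half = len(teams) // 2
    for _ in range(len(teams) - 1):
        pairs = [(teams[i], teams[-1 - i]) for i in range(half)]
        yield [(a, b) for a, b in pairs if a is not None and b is not None]
        teams.insert(1, teams.pop())      # rotate everything but teams[0]

schedule = list(round_robin(['A', 'B', 'C', 'D']))
played = {frozenset(game) for rnd in schedule for game in rnd}
print(schedule)  # every pair meets exactly once across 3 rounds
```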
{ "domain": "codereview.stackexchange", "id": 34309, "tags": "python, simulation" }
What is a sensible number of genes/observations to explain PCA variance?
Question: I am working with an RNA-Seq dataset. I have about 4000 observations (genes) on 20 samples, and plotting a PCA I found the clustering doesn't vary much when I use different numbers of genes, but the % variance of PC1 and PC2 does. I have ~1300 DEG. But how do you select a sensible cutoff for the number of genes/observations to explain PCA variance? Would it be correct to select the number of genes that are DEG which account for >50% of the variance between PC1 and 2? A few examples: All genes: PC1(34%), PC2(14%) Top1000: PC1(45%), PC2(17%) Top500: PC1(49%), PC2(19%) Top50: PC1(55%), PC2(24%) Top 5: PC1(75%), PC2(24%) Answer: The general goal of PCA in RNA-seq can be stated as, "I'd like a low-dimension representation of my data to allow easy assessment of the gross structure of my samples, specifically for assessing missing batch effects, sample swaps, etc." Ideally, this low-dimension representation can fit into a plot or two. In other words, we'd like the first 2 (maybe 3) PCs to explain a reasonable amount of variation. A few things to note from this alone: The number of differentially expressed genes was not mentioned. They have no relevance here. There was no mention made of the treatment groups clustering together. Seeing this is NOT a goal of PCA. As you have noticed, the more input rows (genes, assuming you haven't already transposed it) the less variation explained by the first 2 principal components (PCs). On the flip side, the more rows you input the better PCA is describing the structure of your entire dataset. Question: Do I really care about the gross structure of the entire dataset? Answer: No, I just want to see problems that I might need to account for. So you really don't have to use all of the data in the PCA, just the bits that will allow you to see any issues. In general that'll be the few hundred most-variable genes (after some reasonable transformation so you're not just looking at the most highly expressed genes!). 
The exact number shouldn't really matter very much; something in the range of the 300-500 most variable genes should suffice. A common practice is to just randomly pick a nice round number somewhere in or around this range. This is not a number that needs optimization, unless perhaps you have a very large number of samples and therefore PC1 and PC2 only account for a small (say 10-20%) percentage of the variation, in which case you're better off either looking at a number of different principal components or using a different dimension reduction technique, such as UMAP.
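As a hedged sketch of that workflow (random data, made-up sizes; 500 stands in for the arbitrary "nice round number"): keep only the most-variable genes, then run PCA on the samples via an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(4000, 20))             # genes x samples, e.g. log-counts

top = 500                                      # the "nice round number"
keep = np.argsort(expr.var(axis=1))[-top:]     # indices of most-variable genes
sub = expr[keep]

# PCA of the samples: centre each gene, then SVD with samples as rows.
X = (sub - sub.mean(axis=1, keepdims=True)).T  # 20 samples x 500 genes
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)            # variance fraction per PC
print("PC1, PC2 fractions:", explained[0], explained[1])
```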
{ "domain": "bioinformatics.stackexchange", "id": 1657, "tags": "rna-seq, deseq2, pca" }
ROS buildfarm - Debian Buster Binary Job Failure (noetic)
Question: I recently made a release of image_transport_plugins in ROS noetic, but the noetic debian binary builds have started failing afterwards. Looking at the logs, the issue seems to come from apt, specifically an invocation of apt-get install ... catkin on the BuildFarm, but I can't quite figure out why catkin is getting installed rather than ros-noetic-catkin. All logs have the following problem: 01:20:43 Invoking 'apt-get install -q -y -o Debug::pkgProblemResolver=yes apt-src catkin debhelper image-transport-tools libcv-bridge-dev libdynamic-reconfigure-config-init-mutex-dev libgmock-dev libgtest-dev libimage-transport-dev libogg-dev librosbag-dev libstd-msgs-dev libtheora-dev pluginlib-dev python3-cv-bridge python3-dynamic-reconfigure python3-rosbag python3-std-msgs ros-message-generation ros-std-msgs' 01:20:43 Reading package lists... 01:20:44 Building dependency tree... 01:20:44 Reading state information... 01:20:44 Starting pkgProblemResolver with broken count: 5 01:20:44 Starting 2 pkgProblemResolver with broken count: 5 ... Does anyone know, or has anyone come across similar issues? Buildfarm failure logs: Nbin_dbv8_dBv8__theora_image_transport__debian_buster_arm64__binary Nbin_dbv8_dBv8__compressed_image_transport__debian_buster_arm64__binary Nbin_dbv8_dBv8__compressed_depth_image_transport__debian_buster_arm64__binary Nbin_db_dB64__theora_image_transport__debian_buster_amd64__binary Nbin_db_dB64__compressed_image_transport__debian_buster_amd64__binary Nbin_db_dB64__compressed_depth_image_transport__debian_buster_amd64__binary Answer: Looking at the ros/rosdistro PR which included this release, the bloom version used was 0.10.7, which is quite old compared to 0.11.2, the current version at the time of the release PR. I have not been able to determine what causes this with older bloom versions on current distributions, but using the current version of bloom to re-run bloom-release ... resolves it.
{ "domain": "robotics.stackexchange", "id": 2655, "tags": "ros, ros-noetic" }
Could IK solver called by arm_navigation auto-generated file work for 5 DOF manipulator
Question: Hi all, we have a 5 DOF manipulator; the ompl planner keeps throwing out "IK Solution not found, IK returned with error_code: -31" when I pass a pose goal to the navigation server. I suspect it is because the IK solver doesn't support a 5 DOF arm. What I have done so far: I used the planning description configuration wizard to generate the launch file and it works fine in the Planning Components Visualizer, except that I can't specify start and end positions using end effector control, which is expected. I changed the controller_action_name to be the action name of my real robot. I changed the auto-generated ompl_planning.yaml, in which I deleted roll in the state space, because we are only interested in the direction, not the orientation, of the end effector. I don't know if it is necessary, but I tried both and got the same result. I used move_arm_joint_goal and modified it according to my robot, and it works. But when I use move_arm_pose_goal and make the corresponding modification for my robot, the IK solver keeps throwing out "IK Solution not found, IK returned with error_code: -31". I have tried different values for the desired pose constraint and tolerance, and even didn't specify the constraint of the desired pose, but got the same result. Could anyone give me a hint what the possible cause of this problem might be? Thanks a lot. Originally posted by Fei Liu on ROS Answers with karma: 47 on 2012-02-01 Post score: 1 Answer: If you have a 5DOF arm and keep the target pose orientation within the reachable space of your arm, IK should work. If, however, you for example rotate the target pose around the axis of the missing sixth DOF (if it's rotational), IK will naturally fail. I can recommend the OpenRAVE ikfast module and its 'TranslationDirection5D' IK option for such a scenario. It is also briefly discussed here. 
I documented how I generated the IK for our 5DOF arm, so here goes: Generating an OpenRAVE-Collada robot model (assumes you have OpenRAVE installed): roscd hector_arm_urdf/urdf rosrun xacro xacro.py hector_arm_ax12_5dof_standalone.xacro > hector_arm_ax12_5dof_standalone.urdf rosrun collada_urdf urdf_to_collada hector_arm_ax12_5dof_standalone.urdf hector_arm_ax12_5dof_standalone.dae In the next step, an OpenRAVE scene description file has to be created in the same directory. For this example, this would be "hector_arm_ax12_5dof_standalone.xml", with the following contents: <Robot name="hector_arm_ax12_5dof_standalone" file="hector_arm_ax12_5dof_standalone.dae"> <Manipulator name="hector_arm_5dof"> <base>arm_base_link</base> <effector>endeffector_yaw_link</effector> <direction>1 0 0</direction> <translation>0 0 0</translation> </Manipulator> </Robot> You can do a visual inspection of the model by starting OpenRAVE and then loading the OpenRAVE scene description XML file created above. The endeffector should look as described here when using the red arrow tool to the top right and clicking on the robot. roscd openrave/bin export PYTHONPATH=$PYTHONPATH:`./openrave-config --python-dir` ./openrave.py Finally, generating 5DOF IK: roscd openrave/bin export PYTHONPATH=$PYTHONPATH:`./openrave-config --python-dir` ./openrave.py --database inversekinematics --robot=/home/stefan/rosext/hector/ros/stacks/hector_arm/hector_arm_urdf/urdf/hector_arm_ax12_5dof_standalone.xml --iktype=TranslationDirection5D The generated IK library or .cpp can then be used for calculating inverse kinematics in ROS. This of course is an option that needs a little more effort than relying on the KDL based iterative solver provided with the ROS tools. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-02-01 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by Rosen Diankov on 2012-02-02: check to see if your STL file is binary. 
if it is, the first 5 letters cannot be 'solid' Comment by Fei Liu on 2012-02-02: Hi, Stefan, Thank you for your detailed explanation. I have gone through almost everything you suggest. I have a problem using the mesh for my manipulator: I got "failed to load resource package://cob_description/ros/meshes/arm_v0/arm2.stl" when I try to convert the URDF to Collada. How do you run your mesh? Comment by Rosen Diankov on 2012-02-01: Hi Stefan, excellent answer! In openrave changes a couple of hours ago, we increased the katana arm ik success rate by 15%. Also, you can bring in all of openrave to your shell by adding the following (recommended) line: source `rospack find openrave`/openrave_svn/openrave.bash Comment by mirsking on 2014-10-02: Hi, Rosen, I've met the same problem of "failed to load resource package", and my STL file is binary with the first 5 letters being 'solid'. But after I delete the 5 letters 'solid', it still doesn't work. How can I solve this problem? Comment by BrettHemes on 2016-05-05: This is an old question but I thought I would post a reference to an answer for the "solid" issue from here: Just go to the folder with STLs with this problem and do: sed -i 's/^solid/robot/' *
{ "domain": "robotics.stackexchange", "id": 8075, "tags": "ros, ik, arm-navigation" }
Velocity of a point in a massed spring
Question: I have a doubt: what if the spring has mass? I found the following question: when one end of a spring (of mass m and length l) is pulled with velocity V1 and the other end with velocity V2, then the velocity of a point on the spring at a distance x from the first end is given by the formula V1 + (x/l)×V2. I tried to prove this but I didn't get any idea. I tried putting x=0 and x=l, and the equation was satisfied for those... My sir said that velocity varies linearly with distance x but I didn't understand why. Please help me with this. Answer: The elongation (or compression) of a spring distributes itself linearly over the spring's length. That can easily be explained. Let's assume for a moment that the right half of the spring elongates more than the left half. Then the right half pulls more strongly than the left half, thus pulling more strongly on the middle point, until the middle point has moved to a location where both forces become equal because of equal elongation. Generalizing that, every part of a spring gets the same relative elongation, spread equally over the length of the spring. And as velocity is nothing else than the change of location (1st derivative, if you know that concept), this linear distribution also holds true for the velocities. P.S. The given formula (and my explanation) deliberately ignores the mass m of the spring. If we start pulling on the spring's ends, some force is needed to accelerate the parts of the spring up to their final velocities, depending on the parts' masses, and this will create some quite complex dynamic behaviour with oscillations where the simple formula V1+(x/l)×V2 no longer holds.
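To make the linear profile explicit, here is a short derivation I would add (my sketch, assuming, as the answer does, a massless spring with uniform stretch and both end velocities measured in the same frame):

```latex
% A material point sitting at fraction s = x/l of the spring stays at
% that fraction while the stretch remains uniform:
X(s,t) = X_A(t) + s\,\bigl[X_B(t) - X_A(t)\bigr], \qquad s = \frac{x}{l}.
% Differentiating with respect to time gives the linear velocity profile
V(x) = V_1 + \frac{x}{l}\,\bigl(V_2 - V_1\bigr),
% which interpolates between V(0) = V_1 and V(l) = V_2; the quoted form
% V_1 + (x/l) V_2 follows if V_2 is read as the velocity of the second
% end relative to the first.
```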
{ "domain": "physics.stackexchange", "id": 71534, "tags": "homework-and-exercises, newtonian-mechanics, spring" }
Deserializing response to correct type
Question: I use this method to get either Customer or Account. The server will determine which type it is. The response will have a property "Type": "Customer" or "Type": "Account". I first deserialize to Client (supertype) to check the Type property. Then deserialize to either Customer or Account. public async Task<Models.Entities.Client> GetClient(int clientId) { var getClientRequest = new RestRequest("client/details", Method.GET); getClientRequest.AddQueryParameter("ClientId", clientId); var jsonResponse = await _requestService.DoRequest(getClientRequest); var client = JsonConvert.DeserializeObject<Models.Entities.Client>(jsonResponse); switch (client.Type) { case ClientType.Account: var account = JsonConvert.DeserializeObject<Account>(jsonResponse); return account; default: var customer = JsonConvert.DeserializeObject<Customer>(jsonResponse); return customer; } } Example responses: { "ClientId": 1, "Type": "Account", "Name": "Company Inc." } { "ClientId": 2, "Type": "Customer", "Name": { "First": "John", "Last": "Smith" }, "DateOfBirth": "1960-12-01" } DTOs: public class Client { public int ClientId { get; set; } public ClientType Type { get; set; } } public class Account : Client { public string Name { get; set; } } public class Customer : Client { public PersonalName Name { get; set; } public DateTime DateOfBirth { get; set; } } Answer: Your code could be rewritten: var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All, SerializationBinder = knownTypesBinder // <- see security risk below }; var client = JsonConvert.DeserializeObject<Models.Entities.Client>(jsonResponse, settings); return client; // <- strongly-typed Make sure both server and client use the settings. If you have declared public ClientType Type { get; set; } just to enable two-phase serialisation (base entity - concrete entity), you should remove it from the code. 
The two-phase serialisation hack can be replaced with strongly-typed serialisation using TypeNameHandling = TypeNameHandling.All. Example: As suggested in the comments, we also need to address the security aspect, so knownTypesBinder is used to mitigate the deserialization risk. // based on https://www.newtonsoft.com/json/help/html/SerializeSerializationBinder.htm var knownTypesBinder = new KnownTypesBinder { KnownTypes = new List<Type> { typeof(Customer), typeof(Account) } }; public class KnownTypesBinder : ISerializationBinder { public IList<Type> KnownTypes { get; set; } public Type BindToType(string assemblyName, string typeName) { return KnownTypes.SingleOrDefault(t => t.Name == typeName); } public void BindToName(Type serializedType, out string assemblyName, out string typeName) { assemblyName = null; typeName = serializedType.Name; } }
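The same allow-list dispatch can be sketched outside C# as well. Here is a minimal Python analogue of the pattern (the Account/Customer dataclasses and the REGISTRY dict are illustrative stand-ins, not the original DTOs): the "Type" property selects the concrete class, and anything outside the registry fails, mirroring the binder's behaviour.

```python
import json
from dataclasses import dataclass

@dataclass
class Account:
    ClientId: int
    Name: str

@dataclass
class Customer:
    ClientId: int
    Name: dict
    DateOfBirth: str

# Allow-list of deserializable types, keyed by the "Type" discriminator.
REGISTRY = {"Account": Account, "Customer": Customer}

def deserialize_client(payload: str):
    """Dispatch on the 'Type' field, then build the concrete class."""
    data = json.loads(payload)
    cls = REGISTRY[data.pop("Type")]  # unknown types raise KeyError, like a binder miss
    return cls(**data)

client = deserialize_client('{"ClientId": 1, "Type": "Account", "Name": "Company Inc."}')
```

A single parse plus one dictionary lookup replaces the two-phase deserialization.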
{ "domain": "codereview.stackexchange", "id": 34904, "tags": "c#, inheritance, serialization, xamarin" }
Fermionic solution of 2D Ising
Question: I'm trying to understand the discussion in this book on the fermionization of the 2D Ising model. The transfer matrix for this model becomes $T = \theta\tilde{\theta}$ where: $$\theta = e^{\beta \sum_{x}\sigma_{x}^{(1)}\sigma_{x+1}^{(1)}} \quad \mbox{and} \quad \tilde{\theta} = e^{\tilde{\beta}\sum_{x}\sigma_{x}^{(3)}}$$ where $\sigma^{(i)}_{x}$ are Pauli matrices at each $x$: $$\sigma^{(1)} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma^{(2)} = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \quad \mbox{and} \quad \sigma^{(3)} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$ Then, the following change of variables is made: $$\Gamma_{-\frac{1}{2},0} := \sigma_{0}^{(1)} \quad \Gamma_{x-\frac{1}{2},x} := \bigg{(}\prod_{0}^{x-1}\sigma_{x'}^{(3)}\bigg{)}\sigma_{x}^{(1)} \quad (x\ge 1)$$ and: $$\Gamma_{0,\frac{1}{2}} := \sigma_{0}^{(2)} \quad \mbox{and} \quad \Gamma_{x,x+\frac{1}{2}} :=\bigg{(}\prod_{0}^{x-1}\sigma_{x'}^{(3)}\bigg{)}\sigma_{x}^{(2)} \quad (x\ge 1)$$ After some manipulations, we get: $$\theta(\beta) = e^{-i\beta \sum_{x}\Gamma_{x,x+\frac{1}{2}}\Gamma_{x+\frac{1}{2},x+1}} \quad \mbox{and} \quad \tilde{\theta}(\tilde{\beta}) = e^{-i\tilde{\beta}\sum_{x}\Gamma_{x-\frac{1}{2},x}\Gamma_{x,x+\frac{1}{2}}}$$ Finally, one more change of variables $\Gamma_{x-\frac{1}{2},x} = a_{x}+a^{\dagger}_{x}$ and $\Gamma_{x,x+\frac{1}{2}} = i(a_{x}-a^{\dagger}_{x})$ leads to: \begin{eqnarray}\tilde{\theta}(\tilde{\beta}) = e^{\tilde{\beta}\sum_{x}(a^{\dagger}_{x}a_{x}-a_{x}a^{\dagger}_{x})} = \prod_{x}(e^{-\tilde{\beta}}+2\sinh\tilde{\beta}\,a^{\dagger}_{x}a_{x})\tag{1}\label{1}\end{eqnarray} Then, the author states: Accordingly, using the same symbol, the corresponding integral kernel is: \begin{eqnarray}\tilde{\theta}(\tilde{\beta},\tilde{\xi},\xi) = \prod_{x}(e^{-\tilde{\beta}}+2\sinh\tilde{\beta}\,\tilde{\xi}_{x}\xi_{x})e^{\tilde{\xi}_{x}\xi_{x}}\tag{2}\label{2}\end{eqnarray} Question: What is done to pass from (\ref{1}) to (\ref{2})? I don't follow the reasoning.
Answer: The correspondence between fermionic operators and their integral kernels is introduced earlier in this book. The relevant text begins at the end of page 53 and continues to the beginning of page 55 and even further. Equation (1) defines an operator in terms of fermionic creation-annihilation operators $a^\dagger_x$, $a_x$. At the same time, equation (2) expresses a function of Grassmannian variables $\overline{\xi}_x$, $\xi_x$. Grassmannian variables are numbers with special properties, not operators.
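The missing step is the standard normal-ordering/coherent-state rule, which most texts state in roughly this form (a sketch in the usual conventions, which may differ from the book's by signs or ordering): for a normal-ordered fermionic operator $A=\sum_{m,n}A_{mn}(a^\dagger)^m a^n$, the Grassmann integral kernel is

```latex
A(\tilde{\xi},\xi) \;=\; e^{\tilde{\xi}\xi}\sum_{m,n}A_{mn}\,\tilde{\xi}^{\,m}\xi^{\,n}.
```

The per-site factor in (1) is already normal ordered (a constant plus a coefficient times $a^{\dagger}_{x}a_{x}$), so one simply replaces $a^{\dagger}_{x}\to\tilde{\xi}_{x}$, $a_{x}\to\xi_{x}$ and multiplies by the factor $e^{\tilde{\xi}_{x}\xi_{x}}$, which reproduces (2).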
{ "domain": "physics.stackexchange", "id": 78680, "tags": "statistical-mechanics, field-theory, ising-model" }
Maximum imbalance in a graph?
Question: Let $G$ be a connected graph $G = (V,E)$ with nodes $V = 1 \dots n$ and edges $E$. Let $w_i$ denote the (integer) weight of node $i$, with $\sum_i w_i = m$ the total weight in the graph. The average weight per node then is $\bar w = m/n$. Let $e_i = w_i - \bar w$ denote the deviation of node $i$ from the mean. We call $|e_i|$ the imbalance of node $i$. Suppose that the weight between any two adjacent nodes can differ by at most $1$, i.e., $$ w_i - w_j \le 1\; \forall (i,j) \in E.$$ Question: What is the largest possible imbalance the network can have, in terms of $n$ and $m$? To be more precise, picture the vector $\vec{e} = (e_1, \dots, e_n)$. I'd be equally content with results concerning $||\vec{e}||_1$ or $||\vec{e}||_2$. For $||\vec{e}||_\infty$, a simple bound in terms of the graph diameter can be found: Since all $e_i$ must sum to zero, if there is a large positive $e_i$, there must somewhere be a negative $e_j$. Hence their difference $|e_i - e_j|$ is at least $|e_i|$, but this difference can be at most the shortest distance between nodes $i$ and $j$, which in turn can be at most the graph diameter. I'm interested in stronger bounds, preferably for the $1$- or $2$-norm. I suppose it should involve some spectral graph theory to reflect the connectivity of the graph. I tried expressing it as a max-flow problem, to no avail. EDIT: More explanation. I'm interested in the $1$- or $2$-norm as they more accurately reflect the total imbalance. A trivial relation would be obtained from $||\vec{e}||_1 \leq n||\vec{e}||_\infty$, and $||\vec{e}||_2 \leq \sqrt{n}||\vec{e}||_\infty$. I expect, however, that due to the connectedness of the graph and my constraint on the difference of loads between adjacent nodes, the $1$- and $2$-norms should be much smaller. Example: Hypercube of Dimension d, with $n = 2^d$. It has diameter $d = \log_2(n)$. The maximum imbalance is then at most $d$. This suggests as an upper bound for the $1$-norm $nd = n\log_2(n)$.
So far, I have been unable to construct a situation where this is actually obtained, the best I can do is something along the lines of $||\vec{e}||_1 = n/2$, where I embed a cycle into the Hypercube and have the nodes have imbalances $0$, $1$, $0$, $-1$ etc. So, here the bound is off by a factor of $\log(n)$, which I consider already too much, as I'm looking for (asymptotically) tight bounds. Answer: Since $|e_i|$ is bounded by the diameter $d$, the $\ell_1$ norm is going to be trivially bounded by $nd$, likewise for the $\ell_2$ norm, except by $\sqrt{n}d$ (in fact the $\ell_p$ norm is bounded by $n^{1/p}d$). The $\ell_1$ case turns out to be surprisingly easy to analyze. For a path, it's easy to see that $\|\vec e\|_1$ is $O(n^2)$, so you can't do any better than $O(nd)$. For a complete $k$-ary tree, you can divide it in half at the root, setting $w_{\text{root}} = 0$, ascending one side and descending the other until the leaves have $|e_i| = |w_i| = \log_k n$, producing $O(n\log_k n) = O(nd)$ again. For a clique it doesn't really matter how you distribute the weights, since they'll all be within $1$ of each other, and that will yield $O(n) = O(nd)$ again. When you realize that what we're talking about here is a function $e : \mathbb{Z} \to [-d/2,d/2] \subset \mathbb{R}$, and then we're taking its $\ell_1$ norm, as long as you can arbitrarily distribute weights $e_i \in [-d/2,d/2]$ evenly across the range, the bound will be $O(nd)$. The only way to change this is to play games with the mass. For instance, if you have several giant cliques at points that are necessarily balanced, like a giant clique with two paths of equal length jutting out of it, then you can count on a bound of only (for example) $O(d^2)$. This may be true for expanders to some degree as well, but I'm not sure. I could imagine a case where you set $w_1 = 0$ in a regular graph and then let the values increase subsequently from every hop. 
It seems plausible that most of the mass would sit near the mean, but I don't know whether that would be enough to affect the bound. I think you could reason similarly about $\ell_2$. EDIT: In the comments we figured out a (loose) $\ell_2$ bound of $O(|E|/\lambda_2(L))$ using the constraints of the problem and some basic spectral graph theory.
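The path-graph construction behind the $O(n^2)$ claim is easy to check numerically; here is a small sketch (the weight assignment $w_i = i$ is the obvious extremal one, satisfying the adjacent-difference constraint with equality):

```python
def imbalance_norms(weights):
    """Return (l1, linf) norms of the deviation vector e_i = w_i - mean."""
    n = len(weights)
    mean = sum(weights) / n
    deviations = [w - mean for w in weights]
    return sum(abs(e) for e in deviations), max(abs(e) for e in deviations)

n = 100
path_weights = list(range(n))   # on a path, adjacent weights differ by exactly 1
l1, linf = imbalance_norms(path_weights)
# For even n this gives exactly n^2/4, while the diameter is d = n - 1,
# so the l1 norm really is Theta(n*d) and the trivial bound is tight here.
```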
{ "domain": "cstheory.stackexchange", "id": 793, "tags": "graph-theory, co.combinatorics, spectral-graph-theory" }
Do high voltage transmission lines disturb other electrical devices?
Question: I've been wondering whether those high voltage transmission lines disturb other electrical devices. Let's say I have a high voltage line next to a mobile base station: does it affect the electrical components in the base station or the wireless communication of the antennas? Is there any way I could calculate that? Answer: Generally, no. Transmission lines can act like antennas, but they operate at around 50 Hz. The wavelength of a 50 Hz EM-wave is $6\times 10^6$ m. In order to effectively couple that energy, you'd have to have a receive antenna of $3\times 10^6$ m length. That's not feasible. Not only this, but of course the gain of the antenna scales with frequency and effective area. For low frequencies, even antennas of large effective area tend to be bad radiators. Even if it COULD feasibly couple with base station electronics, it just wouldn't happen. A base station's carrier frequency is more likely to interact with its own electronics than a power line's. And, because of this, base station electronics are very well shielded from most frequencies of radiation, especially RF/HF radiation. You can get calculations for all of this stuff in basic antenna design books, e.g. Modern Antenna Design by Thomas Milligan.
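The numbers in the answer follow directly from $\lambda = c/f$; a quick sketch of the arithmetic:

```python
C = 3e8        # speed of light in m/s (rounded, as in the answer)

f_mains = 50.0               # power-line frequency, Hz
wavelength = C / f_mains     # lambda = c / f
half_wave = wavelength / 2   # length of a resonant half-wave dipole

# For comparison, a mobile carrier around 900 MHz has a sub-metre wavelength,
# which is why base-station antennas are small and couple poorly to 50 Hz fields.
f_carrier = 900e6
carrier_wavelength = C / f_carrier
```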
{ "domain": "engineering.stackexchange", "id": 1810, "tags": "electrical-engineering, electrical-grid" }
Type inference + overloading
Question: I'm looking for a type inference algorithm for a language I'm developing, but I couldn't find one that suits my needs because they usually are either: à la Haskell, with polymorphism but no ad-hoc overloading à la C++ (auto) in which you have ad-hoc overloading but functions are monomorphic In particular my type system is (simplifying) (I'm using Haskellish syntax but this is language agnostic): data Type = Int | Double | Matrix Type | Function Type Type And I've got an operator * which has quite a few overloads: Int -> Int -> Int (Function Int Int) -> Int -> Int Int -> (Function Int Int) -> (Function Int Int) (Function Int Int) -> (Function Int Int) -> (Function Int Int) Int -> Matrix Int -> Matrix Int Matrix Int -> Matrix Int -> Matrix Int (Function (Matrix Int) (Matrix Int)) -> Matrix Int -> Matrix Int Etc... And I want to infer possible types for (2*(x => 2*x))*6 (2*(x => 2*x))*{{1,2},{3,4}} The first is Int, the second Matrix Int. Example (that doesn't work): {-# LANGUAGE OverlappingInstances, MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts, FlexibleInstances, UndecidableInstances #-} import Prelude hiding ((+), (*)) import qualified Prelude newtype WInt = WInt { unwrap :: Int } liftW f a b = WInt $ f (unwrap a) (unwrap b) class Times a b c | a b -> c where (*) :: a -> b -> c instance Times WInt WInt WInt where (*) = liftW (Prelude.*) instance (Times a b c) => Times a (r -> b) (r -> c) where x * g = \v -> x * g v instance Times (a -> b) a b where f * y = f y two = WInt 2 six = WInt 6 test :: WInt test = (two*(\x -> two*x))*six main = undefined Answer: I would suggest looking at Geoffrey Seward Smith's dissertation. As you probably already know, the way the common type inference algorithms work is that they traverse the syntax tree and for every subexpression they generate a type constraint.
Then, they take these constraints, assume conjunction between them, and solve them (typically looking for a most general solution). When you also have overloading, when analyzing an overloaded operator you generate several type constraints instead of one, and assume disjunction between them, if the overloading is bounded. This is because you are essentially saying that the operator can have "either this, or this, or that type." If it is unbounded, one needs to resort to universal quantification, just as with polymorphic types, but with additional constraints that constrain the actual overloading types. The paper I reference covers these topics in more depth.
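To make the conjunction/disjunction point concrete, here is a toy sketch in Python of bounded-overload resolution for the question's *. The tuple encoding of types and the extra Int -> (Matrix Int -> Matrix Int) -> (Matrix Int -> Matrix Int) overload (one of the question's elided "Etc..." entries) are assumptions for illustration:

```python
INT = ("Int",)
def FUN(a, b): return ("Function", a, b)
def MAT(t): return ("Matrix", t)

# Overload table for `*` from the question, plus one assumed "Etc..." entry.
MUL_OVERLOADS = [
    ((INT, INT), INT),
    ((FUN(INT, INT), INT), INT),
    ((INT, FUN(INT, INT)), FUN(INT, INT)),
    ((FUN(INT, INT), FUN(INT, INT)), FUN(INT, INT)),
    ((INT, MAT(INT)), MAT(INT)),
    ((MAT(INT), MAT(INT)), MAT(INT)),
    ((FUN(MAT(INT), MAT(INT)), MAT(INT)), MAT(INT)),
    # assumed: Int -> (Matrix Int -> Matrix Int) -> (Matrix Int -> Matrix Int)
    ((INT, FUN(MAT(INT), MAT(INT))), FUN(MAT(INT), MAT(INT))),
]

def infer_mul(lhs_types, rhs_types):
    """Disjunction over overloads: keep every result whose signature matches
    some (left, right) pair of candidate types."""
    return [ret
            for (args, ret) in MUL_OVERLOADS
            for left in lhs_types
            for right in rhs_types
            if args == (left, right)]

# (x => 2*x): try each candidate type for x, yielding a set of arrow types.
lam = [FUN(t, r) for t in (INT, MAT(INT)) for r in infer_mul([INT], [t])]
# 2 * (x => 2*x): still ambiguous, two candidates survive.
two_lam = infer_mul([INT], lam)
# Applying to 6 :: Int vs. a Matrix Int literal picks exactly one candidate each.
final_int = infer_mul(two_lam, [INT])
final_mat = infer_mul(two_lam, [MAT(INT)])
```

The disjunction over candidate types shrinks to a singleton only once the final argument's type is seen, which is exactly the behaviour the question wants.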
{ "domain": "cs.stackexchange", "id": 6881, "tags": "type-theory, type-inference" }
Evaluation of an arithmetic formula where the time depends on the length of the arguments of gates
Question: Let $(X,+,\cdot)$ be a commutative ring. Let $|\cdot|\colon X\to \mathbb{N}$ be a function that satisfies $|x+y|\leq |x|+|y|$ and $|xy|\leq |x|+|y|$. We call the function length, and length is always positive. We are given an arithmetic formula (an arithmetic circuit with outdegree 1) of size $n$ over $X$ with gates $+$ and $\cdot$. There are no constant terms. The time to evaluate a gate with inputs $x$ and $y$ is $O(|x|+|y|)$. Is it possible to evaluate the formula in $O(m\log n)$ time, where $m$ is the total length of the input? Answer: EDIT: I was wrong about the implication to arithmetic circuits. I think it is possible. It uses ideas in parallel evaluation of arithmetic expressions and tree contraction. Consider the arithmetic formula: it is an arithmetic circuit that forms a directed tree. Consider each gate as a function of the form $f(x,y) = a(x\square y)+b$ for constants $a,b$ and operation $\square$. So, a $\square$ gate is $1(x\square y)+0$. We call $a$ the linear part and $b$ the constant part. One can evaluate the formula by a "tree contraction" operation. It takes 3 vertices $u,v,w$ and returns one new vertex in the tree. Here $u,v$ are children of $w$, and $u$ is a leaf. That is, we can delete node $u$ and contract $w$ and $v$ into a new vertex $w'$, where the gate for $w'$ is a function of the form $a(x\square y)+b$ for some constants $a$ and $b$, so that the tree evaluates to the same value. Let $\ell(v)$ be the labels of $v$, defined as all the inputs used to compute the linear and constant parts of the gate $v$. Facts: One can apply tree contraction to a constant fraction of vertices in parallel. Let such a parallel operation be called a single iteration. At any moment of the computation, $\{\ell(v) \mid v\in V_t\}$ forms a partition of the input, where $V_t$ is the set of vertices of the formula after the $t$th contraction.
The coefficient of the linear term and the constant term on each node can be expressed as a single-use expression (i.e. each variable in the input is used at most once). It implies that if $y$ is the constant part or linear part of the vertex $v$, then $|y| \leq \sum_{x\in \ell(v)} |x|$. Note this fact is only used to bound the size of the coefficients; the computation of the coefficients itself is not related to the single-use expression. Consider an algorithm that does $O(\log n)$ iterations of tree contraction in parallel to get a constant-size tree. We evaluate the final tree directly. In each iteration, the running time is bounded by the sum of the sizes of the labels over the nodes, which is $O(m)$ due to the disjointness of the label sets over the vertices. Together the running time is $O(\log n)\cdot O(m)=O(m\log n)$.
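The contraction step is just composition of affine maps around an operation, which a few lines of Python can demonstrate (the Gate representation here is an illustrative sketch, not code from the answer):

```python
# A gate computes a*(x OP y) + b; leaves are plain integers.
class Gate:
    def __init__(self, op, left, right, a=1, b=0):
        self.op, self.left, self.right, self.a, self.b = op, left, right, a, b

def evaluate(node):
    if isinstance(node, int):
        return node
    x, y = evaluate(node.left), evaluate(node.right)
    v = x + y if node.op == '+' else x * y
    return node.a * v + node.b

def contract(w):
    """Contract gate w whose left child is a leaf: fold the leaf's value into
    the linear/constant parts of the other child v, returning the new gate w'."""
    val, v = w.left, w.right
    assert isinstance(val, int) and isinstance(v, Gate)
    if w.op == '+':
        # a*(val + (c*(x OP y)+d)) + b  =  (a*c)*(x OP y) + (a*d + a*val + b)
        return Gate(v.op, v.left, v.right, w.a * v.a, w.a * v.b + w.a * val + w.b)
    else:
        # a*(val * (c*(x OP y)+d)) + b  =  (a*val*c)*(x OP y) + (a*val*d + b)
        return Gate(v.op, v.left, v.right, w.a * val * v.a, w.a * val * v.b + w.b)

# (3 + (2 * (4 + 5))) as a formula tree, then contract the root.
tree = Gate('+', 3, Gate('*', 2, Gate('+', 4, 5)))
before = evaluate(tree)
after = evaluate(contract(tree))

# A multiplicative root with nontrivial a, b: 2*(5*(1+2)) + 1.
tree2 = Gate('*', 5, Gate('+', 1, 2), a=2, b=1)
check = (evaluate(tree2), evaluate(contract(tree2)))
```

In both cases the contracted tree evaluates to the same value as the original, which is the invariant the parallel iterations rely on.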
{ "domain": "cstheory.stackexchange", "id": 4589, "tags": "ds.algorithms, arithmetic-circuits" }
Where does angular momentum come from when an object gets attracted by a planet?
Question: Consider an object effectively travelling in free space, maybe expelled from another solar system a long time ago. The object enters our solar system and approaches say Jupiter. Obviously while freely moving the object has linear momentum and seemingly no orbital angular momentum. (Also assume that it initially isn't rotating so that it initially doesn't have spin angular momentum). I would like to know what happens to the energy in 2 general cases. Firstly, the object enters orbit around Jupiter. It now seems to have orbital angular momentum and maybe potential energy as well? But no linear momentum? Is this all correct? And what did the gravitational field do to cause the changes in energy? Secondly, the object doesn't enter orbit but is partially deflected. As it moves in a curve for a short time does it have orbital angular momentum at that time which it then loses (or possibly keeps by gaining by rotation, i.e. spin angular momentum). Answer: Obviously while freely moving the object has...seemingly no angular momentum (isn't rotating) Here is one error. If you pick a point along the straight line path of the object, then the angular momentum of the object is $0$. But if you pick your reference point as, say, Jupiter itself, then there is a non-zero angular momentum about this reference point (assuming the object isn't heading right towards Jupiter). This is because the definition of angular momentum is $$\mathbf L=\mathbf r\times\mathbf p$$ Since in the case of using Jupiter as the reference point $\mathbf r\times\mathbf p\neq0$ (these vectors do not point in the same direction), we have a non-zero angular momentum. Furthermore, before having substantial influence from Jupiter, this angular momentum is constant because no net torque acts on our object. Firstly object enters orbit around Jupiter. It now seems to have angular momentum and maybe potential energy as well? But no linear momentum?
Is this all correct and what did the gravitational field do to cause the changes in energy? The object keeps the angular momentum it had when coming into the orbit.$^*$ The object still has a non-zero velocity, so it still has linear momentum, as $\mathbf p=m\mathbf v$. The gravitational field does work on the object. Since gravity is conservative, we can easily consider the energy in terms of gravitational potential energy. As the gravitational potential energy of the object decreases, its kinetic energy (speed) will increase. If gravity is the only force acting on our object, total mechanical energy will be conserved the entire time. It seems like you are thinking "Moving in a straight line $\to$ linear momentum. Moving around a curved path $\to$ angular momentum." This is not the case, as the above shows. Secondly the object doesn't enter orbit but is partially deflected. As it moves in a curve for a short time does it have angular momentum at that time which it then loses (or possibly keeps by gaining rotation). Since gravity is a central force, if this is the only force, and if your reference point is the center of this force, then there is no net torque acting on the object. Hence its angular momentum will be conserved. $^*$If the object "came from infinity" then this will be a hyperbolic orbit, not an elliptical one around Jupiter, unless there is something taking energy from the object.
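The point that straight-line motion carries angular momentum about an off-axis point is easy to verify with $L_z = x p_y - y p_x$ (a 2-D sketch with made-up numbers; "Jupiter" sits at the origin and the object travels along the line $y = b$):

```python
# Angular momentum L_z = x*p_y - y*p_x about the origin (2-D cross product).
def angular_momentum_z(x, y, px, py):
    return x * py - y * px

m, vx, vy = 2.0, 3.0, 0.0   # mass and constant velocity (illustrative units)
b = 5.0                     # impact parameter: the straight path is y = b

samples = [angular_momentum_z(x, b, m * vx, m * vy)
           for x in (-100.0, -1.0, 0.0, 7.0, 100.0)]
# Every sample equals -m*vx*b: nonzero and constant along the whole straight path.
```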
{ "domain": "physics.stackexchange", "id": 67108, "tags": "newtonian-mechanics, angular-momentum, momentum, conservation-laws, orbital-motion" }
Python Method for Text Input from Project Euler
Question: Method Frequently input for problems is given as a 1D or 2D array in a text file. Have knocked out a method to do so, but believe something shorter, clearer and cheaper can be written: def read_array_from_txt(path, dim, typ, sep): ''' @time: O(n) @space: O(n) ''' txt_file_object = open(path,"r") text = txt_file_object.readlines() text = text[0] if dim == 1: if typ == "int": text = [int(num) for num in text.split(sep)] elif typ == "str": text = [let for let in text.split(sep)] else: raise ValueError("Unknown type.") elif dim == 2: if typ == "int": text = [[int(num) for num in line.split(sep)] for line in text] if typ == "str": text = [[let for let in line.split(sep)] for line in text] else: raise ValueError("Unknown type.") else: raise ValueError("Unknown dimension.") txt_file_object.close() return text An Example: Input: encrypted_message.txt 36,22,80,0,0,4,23,25,19,17,88,4,4,19 Output: >>> read_array_from_txt("./encrypted_message.txt", 1, "int", ",") [36,22,80,0,0,4,23,25,19,17,88,4,4,19] Some Other Possible Inputs: 1 37 79 164 155 32 87 39 113 15 18 78 175 140 200 4 160 97 191 100 91 20 69 198 196 2 123 134 10 141 13 12 43 47 3 177 101 179 77 182 117 116 36 103 51 154 162 128 30 3 48 123 134 109 41 17 159 49 136 16 130 141 29 176 2 190 66 153 157 70 114 65 173 104 194 54 are,in,hello,hi,ok,yes,no,is Answer: First of all, let's make use of the with statement so that the file is closed automatically: with open(path, 'r') as txt_file_object: text = txt_file_object.readlines() With this, you don't have to call close() anymore, as the file will close automatically when you exit the with scope. text = text[0] You are only reading the first line of text. Is this really what you want to do? You are using the variable text for two different things: for the input lines and for the output values. This is not very intuitive; in fact, the result can be a list of integers, so why would it be called text? Maybe result would be a better name for it. 
BUT since now you don't have to close at the end of the function, you can return the result directly instead of saving it in a variable: return [int(num) for num in text.split(sep)] Returning will exit the with scope, so again the file will be closed automatically. [let for let in text.split(sep)] This selects all objects inside text.split(sep), so we can return the split list directly: text.split(sep) Similarly, there's a different way of applying a function to every item in a list, which is using map. Maybe the list comprehension feels more natural, so you can keep that if you want; still I'll show you just so you know: # Creates a list by calling 'int' on every item in the list list(map(int, text.split(sep))) You are repeating a lot of code; you have four different results, but they are very similar, so let's try to provide a more generic way. My concern here is handling the two possible dimensions. You are parsing the text in the same way (depending if it's int or str), but when it's two dimensions you do it for every line. So we could use Python's lambdas to decide first what type of parsing (int or str) we are doing, and then just apply it once or multiple times. The lambda can take parameters; the only parameter we need in our case is the input text. We can't just use text directly because sometimes we want to parse the full text, but sometimes only the line: if typ == "int": parse_function = lambda t: list(map(int, t.split(sep))) elif typ == "str": parse_function = lambda t: list(t.split(sep)) else: raise ValueError("Unknown type.") Now parse_function can be used like any other function, taking the text as input. So we can use it when deciding the dimension: if dim == 1: return parse_function(text[0]) elif dim == 2: return [parse_function(line) for line in text] else: raise ValueError("Unknown dimension.") You do well in throwing exceptions for invalid input, but how is the user meant to know what possible values can be used for typ and dim?
You could add that to the docstring. You should also say what the function does in the docstring. Updated code def read_array_from_txt(path, dim, typ, sep): ''' Processes a text file as a 1D or 2D array. :param path: Path to the input file. :param dim: How many dimensions (1 or 2) the array has. :param typ: Whether the elements are read as 'int' or 'str' :param sep: The text that is used to separate between elements. @time: O(n) @space: O(n) ''' with open(path,"r") as txt_file_object: text = txt_file_object.readlines() if typ == "int": parse_function = lambda t: list(map(int, t.split(sep))) elif typ == "str": parse_function = lambda t: list(t.split(sep)) else: raise ValueError("Unknown type.") if dim == 1: return parse_function(text[0]) elif dim == 2: return [parse_function(line) for line in text] else: raise ValueError("Unknown dimension.")
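Putting the reviewer's pieces together, here is the refactor exercised end-to-end on the question's example input (condensed; note parse_function(text[0]) in the 1-D branch, since readlines() returns a list of lines):

```python
import os
import tempfile

def read_array_from_txt(path, dim, typ, sep):
    """Condensed version of the refactored reader; the 1-D branch parses the
    first line only, mirroring the original text[0]."""
    with open(path, "r") as txt_file_object:
        text = txt_file_object.readlines()
    if typ == "int":
        parse_function = lambda t: list(map(int, t.split(sep)))
    elif typ == "str":
        parse_function = lambda t: t.split(sep)
    else:
        raise ValueError("Unknown type.")
    if dim == 1:
        return parse_function(text[0])
    elif dim == 2:
        return [parse_function(line) for line in text]
    raise ValueError("Unknown dimension.")

# Reproduce the question's example with a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("36,22,80,0,0,4,23,25,19,17,88,4,4,19")
    path = f.name
try:
    values = read_array_from_txt(path, 1, "int", ",")
finally:
    os.remove(path)
```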
{ "domain": "codereview.stackexchange", "id": 35803, "tags": "python, programming-challenge, io" }
What does a closed 3-brane look like?
Question: A closed 2-brane looks like a donut, or in scientific terms, a torus. This is a three-dimensional object. I came across this diagram in a book, "The Little Book of String Theory". A torus is a three-dimensional object, so a closed 3-brane would be a four-dimensional object. The author provided an image of a closed 2-brane, but refused to give an answer for an image of a closed 3-brane, saying simply that it was too difficult to draw. I cannot find any pictures of one on the internet that clearly explain it. Has anybody found a picture of a closed 3-brane that they can provide? Thank you. Answer: Take a look at this youtube video, which animates a slice-through of a 4D torus. It may or may not be topologically equivalent to what you are talking about.
{ "domain": "physics.stackexchange", "id": 52960, "tags": "spacetime-dimensions, branes" }
Energy stored in the magnetic field
Question: While deriving the energy density of a magnetic field via using an inductor, why don't we consider the energy stored outside of the inductor also? Initially the field was there outside the inductor (maybe very, very far away), but afterwards it is gone, and along with it the energy stored in it. If this energy outside the inductor doesn't come in the form of heat through the resistor, then where does it go? I am talking about the derivation in which an ideal inductor carrying current i is brought in contact with a resistor and the heat liberated through the resistor ($\frac12LI^2$) = energy stored outside the inductor + energy stored inside the inductor. I don't understand how people put that outside energy term = 0. Just as many field lines exist inside the inductor as pass outside it. Answer: I'll assume that your inductor is a common sort, a long coil of wire. Then as we establish a current through it, it will produce magnetic flux. The lines of flux are closed loops linked with the coil, and therefore partly inside and partly outside the coil. The usual way to derive the energy stored by the inductor is by considering the emf induced in the coil, which we have to do work against, as we increase the current. This emf is equal to the rate of change of flux linkage with the coil, and does not favour field outside or field inside. [It is true that in order to evaluate the flux linkage at any time we may consider 'lines' crossing the internal area of the coil (that is, neglecting non-uniformity of field at the ends of the coil) and calculate $$N \Phi=NBA_\text {int}$$ but we could equally well consider an area outside the coil, stretching from the outside circumference of the coil (around its middle) to infinity and evaluate $$N \Phi=N \int_{A_\text{ext}} \vec{B}.\vec{dA}.$$ This area will be crossed by the same lines as crossed the area inside the coil!]
Another way to evaluate the energy stored is to integrate up the energy stored in the magnetic field in and around the coil, in other words to use $$U=\int _{V} \frac{1}{2\mu_0}B^2(I)\ dV$$ This makes the point very clearly that both 'inside' and 'outside' contribute towards the energy.
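For an ideal long solenoid (uniform field inside, negligible field outside) the circuit bookkeeping $\frac12 LI^2$ and the field integral $\int B^2/2\mu_0\, dV$ agree exactly, which a few lines confirm (the solenoid parameters are made up):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

n = 1000.0             # turns per metre (illustrative)
A = 1e-3               # cross-sectional area, m^2
length = 0.5           # solenoid length, m
I = 2.0                # current, A

L = MU0 * n**2 * A * length   # inductance of an ideal long solenoid
B = MU0 * n * I               # interior field (zero outside in the ideal limit)

energy_from_inductance = 0.5 * L * I**2
energy_from_field = (B**2 / (2 * MU0)) * (A * length)  # u = B^2/(2*mu0) over the interior
```

In the ideal limit the "outside" contribution vanishes because the exterior field does; for a real coil the exterior field is weak but spread over a large volume, and the field integral automatically accounts for it.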
{ "domain": "physics.stackexchange", "id": 45208, "tags": "electromagnetism, energy, magnetic-fields, inductance" }
SQL Group and Filter - Refining down a search including Dates
Question: The below process is designed to pick out only AsbestosUPRNs and get the lowest OverallRiskCategory where they have the newest SurveyDate. I just wanted to ensure this is the best way; it seems neat now. The second query is in reality the same thing, based on two tables. SELECT AsbestosUPRN, MIN(OverallRiskCategory) AS OverallRiskCategory, MAX(SurveyDate) AS SurveyDate FROM TblAsbestos GROUP BY AsbestosUPRN Two Table version: SELECT p.UCARN, MAX(a.OverallRiskCatNumberical) as OverallRiskCatNumberical, MAX(a.SurveyDate) AS SurveyDate FROM TblAsbestos AS a INNER JOIN TblProperty AS p ON p.UPRN = a.AsbestosUPRN GROUP BY p.UCARN Answer: Overall, that looks good. You've followed basic SQL styling rules, capitalising keywords for example, and both queries are consistent. Anyone looking at that is going to struggle to find things to complain about. A couple of little things: the first as in your second query isn't capitalised. I would change it to AS to make the query more consistent. I'd say that the table aliases in the second query should be more meaningful than just a and p. Other than that, good job. I would recommend using the first query; it avoids the join and, presumably, returns the same correct results.
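The single-table aggregate is easy to sanity-check with Python's built-in sqlite3 (table and column names from the question; the rows are invented, and an ORDER BY is added only to make the check deterministic). One subtlety worth confirming against the requirements: MIN and MAX are computed independently per group, so the lowest category need not come from the row with the newest survey date.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TblAsbestos (
        AsbestosUPRN TEXT,
        OverallRiskCategory INTEGER,
        SurveyDate TEXT
    )
""")
conn.executemany(
    "INSERT INTO TblAsbestos VALUES (?, ?, ?)",
    [
        ("P001", 3, "2011-04-01"),
        ("P001", 1, "2012-01-15"),
        ("P002", 2, "2010-06-30"),
    ],
)

rows = conn.execute("""
    SELECT AsbestosUPRN,
           MIN(OverallRiskCategory) AS OverallRiskCategory,
           MAX(SurveyDate) AS SurveyDate
    FROM TblAsbestos
    GROUP BY AsbestosUPRN
    ORDER BY AsbestosUPRN
""").fetchall()
# P001's lowest category (1) and newest date (2012-01-15) happen to share a row
# here, but in general the two aggregates are taken independently per group.
```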
{ "domain": "codereview.stackexchange", "id": 9496, "tags": "sql" }
Algorithm to recognize and delete unwanted sounds and noises
Question: I have a signal that I need to use a denoising algorithm on. What I thought is to divide it into smaller parts and then use a Fourier transform on each one of the parts, but I am not sure what to do next. Can you please help me and give me a few algorithms to recognize noises, if possible an algorithm that will fit the structure I described above? Answer: Provided that the noise is uniform across your sound and that you have regions where only noise is present, you can make use of a technique called spectral subtraction. The whole process consists of 4 steps. Find a noise-only region and apply an FFT on it in order to obtain the noise profile (noise spectrum). The longer the noise region, the better the noise profile (resolution) you'll get (use longer FFT lengths). Apply an FFT on your entire sound. This way, you'll get the spectrum for both the signal (people talking) as well as the noise. Subtract the noise profile from the overall sound spectrum (step 2 minus step 1). Ideally, you'll end up with the signal-only spectrum (of course, some noise will still remain). Perform an inverse FFT on the signal spectrum obtained in step #3 to get a time-domain signal (waveform). If the filtered signal still has some unwanted artifacts, you may try to interpolate the signal spectrum prior to performing the IFFT (step 4). Note that you don't have to break up your signal into smaller chunks (this is not STFT).
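The four steps can be sketched end-to-end with a naive DFT and no external libraries (the "speech" and "noise" here are single tones placed at exact DFT bins — a deliberately easy case, just to show the mechanics; real spectral subtraction works on magnitude spectra exactly as below):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
speech = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]        # wanted tone, bin 5
noise = [0.5 * math.sin(2 * math.pi * 20 * n / N) for n in range(N)]  # noise tone, bin 20
noisy = [s + v for s, v in zip(speech, noise)]

noise_profile = [abs(c) for c in dft(noise)]   # step 1: spectrum of a noise-only region
X = dft(noisy)                                 # step 2: spectrum of the whole sound
cleaned = []
for Xk, Nk in zip(X, noise_profile):           # step 3: subtract magnitudes, keep phase
    magnitude = max(abs(Xk) - Nk, 0.0)         # floor at zero: magnitudes can't go negative
    cleaned.append(cmath.rect(magnitude, cmath.phase(Xk)))
denoised = idft(cleaned)                       # step 4: back to the time domain
```

In this toy case the noise sits in bins the speech does not occupy, so the subtraction removes it almost exactly; with overlapping spectra some residual always remains, as the answer notes.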
{ "domain": "dsp.stackexchange", "id": 5861, "tags": "frequency-spectrum, noise, algorithms, sound, denoising" }
Solution to arithmetic arranger from freecodecamp
Question: I tried my hand at the freecodecamp python algorithm and I came up with a solution for the first question but need help refactoring it. Here is the arithmetic arranger question; the detailed description can be found below: Students in primary school often arrange arithmetic problems vertically to make them easier to solve. For example, "235 + 52" becomes: 235 + 52 ----- Create a function that receives a list of strings that are arithmetic problems and returns the problems arranged vertically and side-by-side. The function should optionally take a second argument. When the second argument is set to True, the answers should be displayed. Example Function Call: arithmetic_arranger(["32 + 698", "3801 - 2", "45 + 43", "123 + 49"]) Output: 32 3801 45 123 + 698 - 2 + 43 + 49 ----- ------ ---- ----- Function Call: arithmetic_arranger(["32 + 8", "1 - 3801", "9999 + 9999", "523 - 49"], True) Output: 32 1 9999 523 + 8 - 3801 + 9999 - 49 ---- ------ ------ ----- 40 -3800 19998 474 Rules The function will return the correct conversion if the supplied problems are properly formatted, otherwise, it will return a string that describes an error that is meaningful to the user. Situations that will return an error: If there are too many problems supplied to the function. The limit is five, anything more will return: Error: Too many problems. The appropriate operators the function will accept are addition and subtraction. Multiplication and division will return an error. Other operators not mentioned in this bullet point will not need to be tested. The error returned will be: Error: Operator must be '+' or '-'. Each number (operand) should only contain digits. Otherwise, the function will return: Error: Numbers must only contain digits. Each operand (aka number on each side of the operator) has a max of four digits in width. Otherwise, the error string returned will be: Error: Numbers cannot be more than four digits.
If the user supplied the correct format of problems, the conversion you return will follow these rules: There should be a single space between the operator and the longest of the two operands, the operator will be on the same line as the second operand, both operands will be in the same order as provided (the first will be the top one and the second will be the bottom. Numbers should be right-aligned. There should be four spaces between each problem. There should be dashes at the bottom of each problem. The dashes should run along the entire length of each problem individually. (The example above shows what this should look like.) The solution can be found below: import operator ops = {"+": operator.add, "-": operator.sub, "*": operator.mul} def arithmetic_arranger(problems, solver=False): # Check problems does not exceed the given max(5) if len(problems) > 5: return "Error: Too many problems." toptier = "" bottomtier = "" lines = "" totals = "" for n in problems: fnumber = n.split()[0] operator = n.split()[1] snumber = n.split()[2] # Handle errors for input: if operator != "+" and operator != "-": return "Error: Operator must be '+' or '-'." if not fnumber.isdigit() or not snumber.isdigit(): return "Error: Numbers must only contain digits." 
if len(fnumber) > 4 or len(snumber) > 4: return "Error: Numbers cannot be more than four digits" # Get total of correct function total = ops[operator](int(fnumber), int(snumber)) # Get distance for longest operator operatorDistance = max(len(fnumber), len(snumber)) + 2 snumber = operator + snumber.rjust(operatorDistance - 1) toptier = toptier + fnumber.rjust(operatorDistance) + (4 * " ") bottomtier = bottomtier + snumber + (4 * " ") lines = lines + len(snumber) * "_" + (4 * " ") totals = totals + str(total).rjust(operatorDistance) + (4 * " ") if solver: print(toptier) print(bottomtier) print(lines) print(totals) if __name__ == "__main__": arithmetic_arranger(["32 + 698", "3801 - 2", "45 + 43", "123 + 49"]) Answer: Use PEP 484 type hints. The too-many-problems and too-many-digits constraints are not good ones, but whatever: we'll keep them because the problem statement requires them. You call split() three times when you should only call it once and tuple-unpack to three substrings. You should not make literal comparisons of the operator to potential strings, nor should you include + and - verbatim in the error message. Instead, derive these from your ops dictionary (adding it was a good idea!). Where possible, pass validation problems from subroutines to your main routine using exceptions. Your len(fnumber) is technically not correct if they expand the problem to allow negative numbers. The minus sign constitutes another character. They want you to limit digit count, not character count. I think a safer check would be comparing to abs() >= 1e4. fnumber and snumber (short for "first" and "second") are non-obvious, and should use something else: number_1, or more easily just x and y since it's pretty obvious what's going on. One potential refactor could look like: Create a simple class representing a problem On the class, have a parse that loads a class instance from a string If parse fails with a ValueError, the string isn't parseable: bail.
If parse succeeds but validate() fails, the string is parseable but invalid: bail. Write a formatting method that returns a tuple of lines for one problem. In your outer method, zip the lines of all of the problems together, join the lines with \n and the groups among each line with however many spaces you want (looks like 4). Suggested import operator from typing import Sequence, NamedTuple, Literal ops = {"+": operator.add, "-": operator.sub} class Problem(NamedTuple): x: int y: int op: Literal['+', '-'] @classmethod def parse(cls, s: str) -> 'Problem': x, op, y = s.split() for n in (x, y): if not n.isdigit(): raise ValueError('Error: Numbers must only contain digits.') return cls(x=int(x), y=int(y), op=op) def validate(self) -> None: for n in (self.x, self.y): if abs(n) >= 1e4: raise ValueError('Error: Numbers cannot be more than four digits.') if self.op not in ops: raise ValueError( 'Error: Operator must be ' + ' or '.join(f"'{o}'" for o in ops.keys()) ) def format_lines(self, solve: bool = False) -> tuple[str, ...]: longest = max(self.x, self.y) width = len(str(longest)) lines = ( f'{self.x:>{width + 2}}', f'{self.op} {self.y:>{width}}', f'{"":->{width+2}}', ) if solve: lines += ( f'{self.answer:>{width+2}}', ) return lines @property def answer(self) -> int: return ops[self.op](self.x, self.y) def arithmetic_arranger(problem_strings: Sequence[str], solve: bool = False) -> None: if len(problem_strings) > 5: print('Error: Too many problems.') return try: problems = [Problem.parse(s) for s in problem_strings] for problem in problems: problem.validate() except ValueError as e: print(e) return lines = zip(*(p.format_lines(solve) for p in problems)) print( '\n'.join( ' '.join(groups) for groups in lines ) ) if __name__ == "__main__": arithmetic_arranger(( "32 + 698", "3801 - 2", "4 + 4553", "123 + 49", "1234 - 9876" ), solve=True)
{ "domain": "codereview.stackexchange", "id": 43353, "tags": "python, algorithm" }
TI-Basic Bouncing Ball Animation
Question: Anyone who has programmed in TI-Basic is likely to have written one of these programs. The concept for this program is simple. The "ball", represented by a single point, originates from the upper left-hand corner of the screen and bounces around the screen leaving a trail as it travels. The initial movement vector for the ball is (1,1), meaning one down, one right. During each pass through a loop, the "ball" is translated by the amount specified in the movement vector. When the ball would go out of bounds, its movement vector is adjusted so that this does not happen. In short, it behaves like this: except I increased the frame rate of the GIF. I've reduced my code to a very short and clean snippet: FnOff PlotsOff AxesOff ClrDraw {1,1→L₁ Repeat getKey Ans→L₂ Pxl-Change(L₁(1),L₁(2 Ans+L₁→L₁ L₂*(1-2not(Ans and Ans≠{62,94 End FnOn PlotsOn AxesOn ClrDraw What I would like to hear in responses can be enumerated in four points: How can I reduce the size of this program? How can I increase the speed of this program? How can I reduce the number of variables used in this program? How can I increase the clarity of this program without sacrificing any of the goals stated above? People responding to this post should also keep in mind that this code was written for and tested on a TI-83 Plus calculator. Answer: On my TI-82, I get an ERR:DATA TYPE on the following line: L₂*(1-2not(Ans and Ans≠{62,94 Apparently, the TI-82 does not support operations not and and on lists. I recommend that you avoid relying on Ans. Stating what you mean explicitly leads to less code that is less fragile and more readable. The following code also has one fewer statement within the loop. {1,1→L₁ L₁→L₂ Repeat getKey Pxl-Change(L₁(1),L₁(2 L₂+2(L₁={0,0})-2(L₁={62,94}→L₂ L₁+L₂→L₁ End
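The suggested update rule can be sanity-checked off-calculator. Below is a quick re-implementation in Python (Python purely for testing, obviously not TI-Basic; the bounds (62, 94) are the TI-83's pixel rows and columns used in the original code, and the function names are mine):

```python
def step(pos, vel, bounds=(62, 94)):
    """One frame: flip a velocity component while the ball sits on an
    edge pixel, then move. Mirrors the TI-Basic lines
    L2+2(L1={0,0})-2(L1={62,94})->L2 followed by L1+L2->L1."""
    vel = tuple(v + 2 * (p == 0) - 2 * (p == b)
                for p, v, b in zip(pos, vel, bounds))
    pos = tuple(p + v for p, v in zip(pos, vel))
    return pos, vel

def simulate(frames, bounds=(62, 94)):
    """Trail of pixels the ball visits, starting like the TI program."""
    pos, vel = (1, 1), (1, 1)
    trail = []
    for _ in range(frames):
        trail.append(pos)  # where Pxl-Change would toggle a pixel
        pos, vel = step(pos, vel, bounds)
    return trail
```

Running this for a few thousand frames shows the ball never leaves the screen, which is the property the reflected velocity is meant to guarantee.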
{ "domain": "codereview.stackexchange", "id": 13756, "tags": "animation, ti-basic" }
iZotope RX's FFT processing?
Question: How does iZotope RX handle the FFT (i.e. how does it manage to perform those processes in the frequency domain without creating glitches)? It's able to, e.g.: Apply another DSP process (even time-domain) onto a spectral part (i.e. apply e.g. a phaser onto individual parts of the FFT). Attenuate or gain spectral parts (without touching the surrounding parts). Is it really frequency-domain filtering, or time-domain processing using time-domain filters? I thought this kind of processing was impossible: Why is it a bad idea to filter by zeroing out FFT bins? Answer: Spectral Repair is more complex than just muting a handful of harmonics. But first, let's just clarify that as far as Linear Time Invariant (LTI) systems are concerned, their processing can be applied, equivalently, either in the time domain (via the operation of convolution) or the frequency domain via either the overlap-add or overlap-save methods. iZotope's Spectral Repair performs interpolation. That is, to remove an unwanted portion of a recording, it will try to "mimic" the characteristics of the spectrum at either end of the unwanted region. Consider the following image: Can't we just remove the railings by copying and pasting the grass from another (and very similar) part of the image and adjusting its brightness? Couldn't we even replicate how the green colour varies in the grass portions and just generate some "new" grass for the regions of the railings? Is it going to look odd? Possibly, if it was overdone, but for small regions it may be good enough to fool the eye. This is what the "Replace" mode (and "Attenuate") does in iZotope, but instead of grass and railing there is background noise and local disturbance. It's not so much attenuation (or "setting harmonics to zero") as "masking" or "hiding" the unwanted sound, since it attempts to make up a good enough patch of harmonics to bury it in by "looking at" the surroundings of the disturbance.
For more information on interpolation please see this link. For an example of "learning the profile and applying a filter" please see this link and this link. Hope this helps.
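As a footnote to the LTI point above: block-wise frequency-domain processing really is interchangeable with time-domain convolution. A minimal overlap-add sketch in NumPy demonstrates this equivalence (this illustrates only the overlap-add method mentioned in the answer, not iZotope's proprietary implementation):

```python
import numpy as np

def overlap_add(x, h, block=64):
    """Filter signal x with FIR h block-by-block in the frequency domain.

    For an LTI system this is mathematically identical to the
    time-domain convolution y = x * h."""
    n_fft = block + len(h) - 1        # long enough for linear convolution
    H = np.fft.rfft(h, n_fft)         # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]  # last segment may be shorter
        Y = np.fft.rfft(seg, n_fft) * H  # multiply spectra = convolve in time
        end = min(start + n_fft, len(y))
        y[start:end] += np.fft.irfft(Y, n_fft)[:end - start]  # overlap-add
    return y
```

The result matches `np.convolve(x, h)` to machine precision, for any block size that keeps `n_fft` at or above the linear-convolution length.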
{ "domain": "dsp.stackexchange", "id": 3612, "tags": "audio, filtering, frequency-domain" }
yet another fizz buzz
Question: FizzBuzz is a well-known exercise among programmers, but I wanted to add an aspect called the "open/closed" principle (from SOLID). FizzBuzz is a very simple programming task, used in software developer job interviews, to determine whether the job candidate can actually write code. It was invented by Imran Ghory, and popularized by Jeff Atwood. Here is a description of the task: Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. App for running an example public class App{ public static void main(String[] args) { FizzBuzz fizzBuzz = new FizzBuzz(); fizzBuzz.addHandler(new FizzHandler()); fizzBuzz.addHandler(new BuzzHandler()); fizzBuzz.print(100); } } Handler public interface Handler { boolean matches(int number); String getMessage(); } FizzHandler, a Handler implementation public class FizzHandler implements Handler { @Override public boolean matches(int number) { return number % 3 == 0; } @Override public String getMessage() { return "Fizz"; } } BuzzHandler, a Handler implementation public class BuzzHandler implements Handler { @Override public boolean matches(int number) { return number % 5 == 0; } @Override public String getMessage() { return "Buzz"; } } FizzBuzz public class FizzBuzz { private List<Handler> handlers = new ArrayList<>(); public void print(int countDestination) { IntStream.range(0,countDestination).forEach(this::printFizzBuzz); } private void printFizzBuzz(int i) { List<Handler> matchingHandler = handlers.stream().filter(h -> h.matches(i)).toList(); String output = matchingHandler.isEmpty()? Integer.toString(i): matchingHandler.stream().map(Handler::getMessage).collect(Collectors.joining()); System.out.println(output); } public void addHandler(Handler handler) { handlers.add(handler); } } Answer: The handlers combine matching and processing into the same class.
Two responsibilities in one module violate the single responsibility principle. To me the handlers look like unnecessary decoration for Predicate<Integer> and Supplier<String>. Combining them together, behind a specialized interface, makes reuse and change more complicated. The FizzBuzz class combines the looping functionality and the handler matching. Again, two responsibilities. It relies on the fact that the user must register the handlers in a specific order, which cannot be deduced from anything in the API. I would rather see the handlers be associated with a priority or some other way where their execution order is made clear in the registration code. I personally associate handler registration with a map. I don't know if others share this, but changing the method name to addLast(Handler) would already make it clear that there is an order. The API does not make it clear that the class also relies on both handlers being executed when they both match. I don't have an API change ready that would rectify this, so at least it should be documented in the JavaDoc. The print(int countDestination) gives no clues about the parameter actually defining a range. It should be renamed to printRange(int start, int end). It should also include error checking for start being greater than end. I'm not sure if printing a range should be a feature at all. The FizzBuzz class could just be a Function<Integer, String> so it could be included in stream operations elsewhere. After all, an IntStream is much more versatile than always forcing the user into a "zero to something" range. The handlers always output to System.out, which closes the main method to change. There is no way to write the output to, for example, a network socket. The FizzBuzz probably should just produce the data and let some other component decide where it gets written.
{ "domain": "codereview.stackexchange", "id": 43723, "tags": "java, fizzbuzz" }
2d cellular automata Wolfram binary codes
Question: This is a similar question to How 2d cellular automata rules works? However, the answer there did not provide me what I am looking for. Specifically, I want to be able to render these 2d cellular automata forms: https://www.wolframscience.com/nks/p173--cellular-automata/ I cannot find any reference anywhere that explains how to change between a real number and a growth rule. In 1d, the situation is well documented, such as here: https://mathworld.wolfram.com/ElementaryCellularAutomaton.html but in 2d the exact mapping of bits is nowhere to be found. I'd really like to make a program, so that I can for instance enter the number 465 and it will draw the pattern 465 from the Wolfram book. The other stack exchange answer I linked to above provides a possible binary mapping, however the method given there does not produce the same numbers as in the Wolfram reference. The binary value of 465 is 111010001 which does not help me at all. This particular shape is based on adding a cell when exactly one neighbor is currently occupied, so shouldn't we expect to have 4 1's in a row, one for each of the four neighbors? And to make this even more confusing, the last two digits seem to be swapped from the description given on the Wolfram page itself... It seems clear from the other Stack Overflow answer that there is not just one possible binary mapping but many, however given that there already exists a guide with pictures referenced by rule numbers I would really like to actually be able to use those specific rule numbers. Thanks. Answer: The description given on the page you linked to is correct: "In each case the base 2 digit sequence for the code number specifies the rule as follows. The last digit specifies what color the center cell should be if all its neighbors were white on the previous step, and it too was white. The second-to-last digit specifies what happens if all the neighbors are white, but the center cell itself is black. 
And each earlier digit then specifies what should happen if progressively more neighbors are black. (Compare page 60.)" What you might be missing is that, if the rule number is odd, the empty lattice is unstable since white cells surrounded by other white cells will spontaneously turn black. Specifically, any rules whose number is congruent to 1 modulo 4 (i.e. whose binary form ends in 01), like 465, are "strobing", i.e. the empty lattice will alternate between all white and all black in each successive generation. In particular, this means that rule 465 cannot correspond to "adding a cell when exactly one neighbor is currently occupied". (That would presumably be rule 686, or 1010101110 in binary.) Instead, as you correctly note, 465 equals 111010001 in binary. Written in five groups of two bits each, that gives 01-11-01-00-01. In each of these groups the rightmost bit in group $k$ (numbered right-to-left from 0 to 4) is 1 if a white cell with $k$ black neighbors will turn black in the next generation, and the leftmost bit is 1 if a black cell with $k$ black neighbors will stay black. This means that, under this rule, a white cell will turn black if it has 0, 2, 3 or 4 black neighbors (since the rightmost bit is 1 in groups 0, 2, 3 and 4 counting from the right) and a black cell will stay black if it has exactly 3 black neighbors (since the leftmost bit is 1 only in group 3). And indeed, simulating this rule for 22 generations, starting from one black pixel on a white background, produces an image matching the one on the linked page. PS. It turns out that rule 465 is the "strobing equivalent" of the state-symmetric rule 558 = $1000101110_2$, which differs from rule 686 by exactly one bit and can be described as "add a cell when exactly one neighbor is occupied, remove a cell when exactly one neighbor is empty".
Started from a single cell, it seems that rules 558 and 686 evolve identically, since from this starting point they apparently never generate a live cell with exactly three live neighbors. Thus, on even-numbered generations, the strobing rule 465 also looks identical to both of them.
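The decoding described above is straightforward to turn into code. A minimal Python sketch (the toroidal wrap-around grid is my own simplification for testing; Wolfram's pictures use an effectively unbounded lattice):

```python
def transition(code, state, black_neighbours):
    """Next colour of a cell under a Wolfram 2D 5-neighbour code number,
    following the decoding above: bit 2k of the code gives the fate of a
    white cell with k black neighbours, bit 2k+1 the fate of a black cell
    with k black neighbours (k = 0..4)."""
    return (code >> (2 * black_neighbours + state)) & 1

def step(grid, code):
    """One synchronous generation on a small toroidal 0/1 grid
    (von Neumann neighbourhood: the four orthogonal neighbours)."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            k = (grid[(i - 1) % h][j] + grid[(i + 1) % h][j]
                 + grid[i][(j - 1) % w] + grid[i][(j + 1) % w])
            nxt[i][j] = transition(code, grid[i][j], k)
    return nxt
```

Evaluating `transition` for code 465 reproduces exactly the table derived above (white turns black with 0, 2, 3 or 4 black neighbours; black survives only with 3), and stepping an all-white grid twice shows the strobing behaviour.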
{ "domain": "cs.stackexchange", "id": 16773, "tags": "binary, cellular-automata" }
how to make 'for loop' short in C?
Question: I made some code about solar system stimulation in C. It is working, but it looks too long. So, Are there some ways to shorten my code? Also this website told me your code is too full to upload this. #include <Windows.h> #include <stdio.h> #include <math.h> #define solar_size 20 #define PI 3.141592654 #define rad angle*180/PI int angle; double sun_x, sun_y, earth_x, earth_y; double x, y, px, py; double earth_speed = 0.05; double x2, y2, x3, y3, x4, y4, x5, y5, x6, y6; int main(void) { double sun; sun = sqrt(solar_size * 10); HWND hwnd = GetForegroundWindow(); HDC hdc = GetWindowDC(hwnd); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Rectangle(hdc, 0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN)); TextOut(hdc, 250, 450, L"solar system Simulation", 23); sun_x = 300 ; sun_y = 240 ; SelectObject(hdc, CreatePen(PS_SOLID, 1, RGB(255, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(255, 0, 0))); Ellipse(hdc, sun_x - sun, sun_y - sun, sun_x + solar_size, sun_y + solar_size); while (1) { for (angle = 0;; angle++) { x = 30 * cos(angle * earth_speed * 2.5) + sun_x; y = 30 * sin(angle * earth_speed * 2.5) + sun_y; x2 = 55 * cos(angle * earth_speed * 1.5) + sun_x; y2 = 55 * sin(angle * earth_speed * 1.5) + sun_y; x3 = 85 * cos(angle * earth_speed) + sun_x; y3 = 85 * sin(angle * earth_speed) + sun_y; x4 = 110 * cos(angle * earth_speed * 0.6) + sun_x; y4 = 110 * sin(angle * earth_speed * 0.6) + sun_y; x5 = 140 * cos(angle * earth_speed * 0.4) + sun_x; y5 = 140 * sin(angle * earth_speed * 0.4) + sun_y; x6 = 180 * cos(angle * earth_speed * 0.15) + sun_x; y6 = 180 * sin(angle * earth_speed * 0.15) + sun_y; SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(120, 120, 120))); SelectObject(hdc, CreateSolidBrush(RGB(120, 120, 120))); Ellipse(hdc, x, y, x + 8, y + 8); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(100, 80, 0))); SelectObject(hdc, CreateSolidBrush(RGB(100, 80, 0))); Ellipse(hdc, x2, y2, x2 + 12, y2 + 12); SelectObject(hdc, CreatePen(PS_SOLID, 3, 
RGB(0, 50, 120))); SelectObject(hdc, CreateSolidBrush(RGB(0, 50, 120))); Ellipse(hdc, x3, y3, x3 + 12, y3 + 12); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(120, 20, 0))); SelectObject(hdc, CreateSolidBrush(RGB(120, 20, 0))); Ellipse(hdc, x4, y4, x4 + 10, y4 + 10); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(200, 80, 20))); SelectObject(hdc, CreateSolidBrush(RGB(200, 80, 20))); Ellipse(hdc, x5, y5, x5 + 17, y5 + 17); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(255, 220, 0))); SelectObject(hdc, CreateSolidBrush(RGB(255, 220, 0))); Ellipse(hdc, x6, y6, x6 + 21, y6 + 21); Sleep(50); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x, y, x + 8, y + 8); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x2, y2, x2 + 12, y2 + 12); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x3, y3, x3 + 12, y3 + 12); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x4, y4, x4 + 10, y4 + 10); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x5, y5, x5 + 17, y5 + 17); SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x6, y6, x6 + 21, y6 + 21); } } } Answer: Haven't tested if the code works. I've simply shortened the code that was given in the question. In the code given in the question, there are variables x,x2,..x6 and y,...y6. These variables are all being used in a similar manner and a lot of code is redundant. 
For instance lines like SelectObject(hdc, CreatePen(PS_SOLID, 1, RGB(255, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(255, 0, 0))); Ellipse(hdc, sun_x - sun, sun_y - sun, sun_x + solar_size, sun_y + solar_size); are constantly occurring in the code and differ only by some constants. So instead of using 6 variables, an array would be better. You could iterate through it and set the values one by one. Also, another array can be used to store the constant values that differ for each variable. #include <Windows.h> #include <stdio.h> #include <math.h> #define solar_size 20 #define PI 3.141592654 #define rad angle*180/PI int angle; double sun_x, sun_y, earth_x, earth_y; double px, py; double earth_speed = 0.05; double x[6],y[6]; int main(void) { double sun; sun = sqrt(solar_size * 10); HWND hwnd = GetForegroundWindow(); HDC hdc = GetWindowDC(hwnd); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Rectangle(hdc, 0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN)); TextOut(hdc, 250, 450, L"solar system Simulation", 23); sun_x = 300 ; sun_y = 240 ; SelectObject(hdc, CreatePen(PS_SOLID, 1, RGB(255, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(255, 0, 0))); Ellipse(hdc, sun_x - sun, sun_y - sun, sun_x + solar_size, sun_y + solar_size); double speedCoefficients[] = {2.5, 1.5, 1, 0.6, 0.4, 0.15}; int trignometricCoefficients[] = {30, 55, 85, 110, 140, 180}; int ellipseCoefficients[]={8, 12, 12, 10, 17, 21}; int r=[120, 100, 0, 120, 200, 255]; int g=[120, 80, 50, 20, 80, 220]; int b=[120, 0, 120, 0, 20, 0]; while (1) { for (angle = 0;; angle++) { for(int i=0;i<6;i++) { x[i] = trignometricCoefficients[i] * cos(angle * earth_speed * speedCoefficients[i]) + sun_x; y[i] = trignometricCoefficients[i] * sin(angle * earth_speed * speedCoefficients[i]) + sun_y; SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(r[i], g[i], b[i]))); SelectObject(hdc, CreateSolidBrush(RGB(r[i], g[i], b[i]))); Ellipse(hdc, x[i], y[i], x[i] + ellipseCoefficients[i], y[i] + 
ellipseCoefficients[i]); } Sleep(50); for(int i=0;i<6;i++) { SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0))); SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0))); Ellipse(hdc, x[i], y[i], x[i] + ellipseCoefficients[i], y[i] + ellipseCoefficients[i]); } } } }
{ "domain": "codereview.stackexchange", "id": 38226, "tags": "beginner, c, image, graphics, visual-studio" }
Oxidation number of methylene blue
Question: I came across methylene blue's structure when it reacts with oxygen. The left is the reduced state, and the right is the oxidized state. However, when I looked at them carefully, the oxidation number of O2 changes normally, but I couldn't find the change of oxidation number in methylene blue. ( the middle N is -3, S is -2, the right N is -3, in both structures. The H is always +1.) What am I missing here? Answer: You missed the carbon atoms, and specifically the carbon atoms in the central ring which are bonded to nitrogen and sulfur. When the methylene blue is in its reduced leuco state, each of these has one bond to the more electronegative atoms and none to hydrogen (the only other element in the molecule), thus you count out the oxidation state as +1 for all four of these carbons. Now you shake the bottle and it turns blue. Note that in this form two of the central carbons now have two bonds to the nitrogen or sulfur, so they have been oxidized from +1 to +2. Which two carbons get oxidized depends on which of multiple contributing structures you draw, but the net result is always you oxidize two carbon atoms by one unit. Half the available contributing structures have the two carbon atoms on the left side of the central ring oxidized, and half have the carbon atoms on the right half oxidized. Since the two halves are equivalent, if you are into fractional oxidation numbers you can say that "on average" the four central carbon atoms each go from +1 to +1.5.
{ "domain": "chemistry.stackexchange", "id": 16220, "tags": "oxidation-state" }
The effect of HCl concentration on the solubility of hydroxyapatite
Question: Recently I conducted an experiment to investigate the effect of HCl concentration on the solubility of hydroxyapatite [Ca5(PO4)3(OH)]. This was done by taking a sample of the solution after hydroxyapatite was left to dissolve in HCl, and conducting an EDTA titration using this solution to find how many calcium ions were present. I have already established that hydroxyapatite is more soluble in solutions of HCl at higher concentrations (lower pH). However, I was wondering what the exact scientific theory behind this reaction was. Answer: The $K_{sp}$ expression for hydroxyapatite corresponds to the dissolution equilibrium: $\ce{ Ca5(PO4)3(OH)_{(s)} -> 5Ca^{2+}_{(aq)} + 3PO4^{3-}_{(aq)} + OH-_{(aq)}}$. When $\ce{HCl_{(aq)}}$ is added, it reacts with the dissolved $\ce{OH-}$ to form water, thereby decreasing the $\ce{OH-}$ concentration. Le Chatelier's principle predicts that more hydroxyapatite will dissolve (shift to the right).
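The Le Chatelier argument can be made quantitative with a back-of-envelope model: hold [OH-] at the value fixed by the pH and solve the solubility product for the molar solubility s. This is a deliberately crude sketch: it ignores protonation of the phosphate ion (which makes the real pH dependence even stronger), and the Ksp value below is only an assumed order of magnitude, not a measured constant.

```python
def hydroxyapatite_solubility(pH, ksp=1e-59):
    """Molar solubility s from Ksp = (5s)^5 (3s)^3 [OH-] at a fixed pH.

    Simplified model: [OH-] is pinned by the pH (via Kw = 1e-14), and
    phosphate protonation is ignored, so this understates the effect.
    The default Ksp is an assumed order of magnitude."""
    oh = 10.0 ** (pH - 14)  # [OH-] fixed by the pH
    return (ksp / (5 ** 5 * 3 ** 3 * oh)) ** (1 / 8)
```

Even this stripped-down model shows the measured trend: as the pH drops, [OH-] drops, and the computed solubility rises monotonically.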
{ "domain": "chemistry.stackexchange", "id": 10584, "tags": "solubility" }
Parallel DNA double-helices with Watson–Crick base-pairing: Why do they not occur?
Question: I know that parallel DNA helices exist and are governed by Hoogsteen base pairing, but why can’t they be possible with Watson-Crick pairing? In the diagram below, if we were to flip one of the strands while keeping the other the same, it appears as though hydrogen bonding is still possible. The only specific suggestions that I could find were because of the DNA replication process and the negative polarity of the hydroxyl group on the phosphates. Moreover, after flipping one strand, the DNA nucleotides form enantiomers. Are these possible reasons, or are there others? Answer: “The only specific suggestions that I could find were because of the DNA replication process and…” No. The explanation can have nothing to do with DNA replication. If the structure does not exist, you can’t replicate it; if it does, Nature will evolve a mechanism. (The related SE question, mentioned by @Gilleain, asked whether it could still replicate if it were parallel, i.e. using the enzymes that have evolved for anti-parallel DNA.) I know that parallel DNA helices exist… Let us clarify this first. Perhaps the most extensive parallel duplex DNA, the structure of which has been determined, is that described by Parvathy et al. The two parallel strands of this are shown below: The following points should be noted: This parallel DNA (and the shorter examples that preceded it) is not a pure stretch of complementary base pairs. It is stabilized by what the authors refer to as “CC+ clamps” at either end. One is left to conclude that without these the duplex would not form. All the complementary base-pairs are of the type AT (actually reverse Watson–Crick base pairs). Presumably GC base-pairs would have destabilized the structure. (You can inspect this structure in three dimensions here. Choose ‘Licorice’ style and colour ‘by chain’ and notice the non-planarity of the three base ‘pairs’ at each end.)
So although the question refers specifically to parallel DNA helices with Watson–Crick base pairs, it should be recognized that extended parallel DNA helices composed of any kind of complementary AT and GC base pairs are not found, and the question applies equally well to them. “In the diagram if we were to flip one of the strands while keeping the other same, hydrogen bonding is still possible” The diagram in the question is two-dimensional; DNA is three-dimensional. It is only by considering the three-dimensional structure of DNA that you can approach this question. So how would one do that? One must consider the free energy of alternative structures in the relevant milieu to determine which will occur (i.e. be more thermodynamically stable). This will tell you whether single DNA strands with parallel sequences will form a double-stranded (ds) structure or not. This will tell you whether ds-parallel DNA is more or less energetically stable than a ds-antiparallel DNA. Hence, even if both can form (which I doubt, without some special circumstances*) the lower thermodynamic free energy of the anti-parallel dsDNA would give organisms adopting it an evolutionary advantage. And the answer to the question? It seems unlikely that one single factor is responsible or it would have been pointed out in elementary text books such as Berg et al. To answer would require a complete theoretical analysis of the structure or structures. First one would have to build a model of a proposed parallel structure that could accommodate Watson–Crick base pairs. This in itself is a problem because there are likely to be many alternative structures. Perhaps there are computer programs that can find the structure with the lowest energy. This would be calculated in the classic manner, calculating the positive contribution of hydrogen bonding (which depends on distance and angle), ionic interaction etc.* against the negative contribution of charge and steric repulsions. *Etc?
The two-dimensional diagram fails to consider the contribution of base stacking (how could it?), which contributes to a considerable extent to the stability of nucleic acid helices, as the original cover design of Stryer’s Biochemistry is a constant reminder:
{ "domain": "biology.stackexchange", "id": 8337, "tags": "biochemistry, dna, structural-biology, 3d-structure" }
If $\underline{u}$ is a vector, does $u$ indicate its magnitude?
Question: If $\underline{u}$ is a vector, does $u$ indicate its magnitude? $|\underline{u}|$ also indicates the magnitude, doesn't it? Answer: $|\underline{u}|$ also indicates the magnitude, doesn't it? - Yes. The problem is that there are a number of conventions which people use and misuse. The notation for a vector can be written as $\underline u, \,\vec u$ and $\mathbf u$. Without any further information perhaps the "safest" way forward is to assume that $u$ represents the component of a vector in a given direction. So $\underline u = u\,\hat u$ where $\hat u$ is unit vector in some given direction and $u$ is the component of vector $\underline u$ in the $\hat u$ direction. As an example, you might write $\underline x = -3 \hat i$ but it could also be written as $\underline x = -3 \hat x$ where $\hat i = \hat x$ as alternative labels for the unit vector pointing in the positive x-direction.
{ "domain": "physics.stackexchange", "id": 86856, "tags": "vectors, conventions, notation" }
Relation between strength and proticity of an acid
Question: The strength of an acid is related to the number of molecules which have dissociated into hydronium ions in aqueous solution of that acid, while proticity is the number of hydronium ions furnished by 1 molecule of that acid in water. But what is the relation between the two? Does a strong acid have a high proticity, or vice versa? Answer: I'm a chemist. I've never heard the term "proticity". If your definition of it is correct, then consider phosphoric acid, H3PO4. It has a proticity of 3. Is it a stronger acid than HCl or H2SO4? There's really NO useful relationship I can see between strength and the number of donate-able protons. The term "acid" has three or four different meanings. The three most common are Arrhenius, Brønsted, and Lewis acids, but note that there are other definitions (usually used in very specialized contexts). The difference between Ar. and Br. is that Ar. applies only to water as the solvent while Br. applies to any compound which will donate an H(+) ion (solvent unspecified and possibly without any solvent present). The most general definition, L., has to do with compounds (or ions) which accept electron density, so it is often used in descriptions of covalent (not (necessarily) ionic) reactions. You can look up the dissociation constants for each of the three HnPO4 species which can give up an H(+) ion. None of them are "strong" acids; on the other hand H2SO4 is a very strong acid while H2CO3 is a very weak acid. (If phosphoric acid were strong, those of us who drink carbonated soda wouldn't have any teeth left, since they add it in order to increase the solubility of the carbonate ion (CO2 + H2O → HCO3(-) + OH(-)…you do the mass balance! It's incomplete the way I've written it).) Both sulfuric and hydrochloric acids can easily have pHs well below 0. (Anhydrous HCl is a gas, and H2SO4 is often sold as 98% H2SO4 in water (what's the molarity of that?!).)
You can also have NaOH solutions in water with a pH well above 14, but don't tell your teacher, s/he may argue with you (but I know better!). (Although, to be truthful, at some concentration NaOH(aq) is no longer usefully thought of as a "water solution"; it's more a solution of water IN NaOH (or in H2SO4).)
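For the parenthetical "what's the molarity of that?!", here is the standard back-of-the-envelope calculation. Note that the density (about 1.84 g/mL for concentrated sulfuric acid) and the molar mass are assumed typical values, not numbers given in the answer:

```python
# Molarity of 98% (w/w) H2SO4 -- density and molar mass are assumed values,
# not stated in the answer above.
density_g_per_mL = 1.84      # typical density of concentrated H2SO4
mass_fraction = 0.98         # 98% by weight
molar_mass = 98.08           # g/mol for H2SO4

grams_acid_per_L = 1000 * density_g_per_mL * mass_fraction
molarity = grams_acid_per_L / molar_mass
print(round(molarity, 1))    # roughly 18.4 mol/L
```

So "concentrated" sulfuric acid is around 18 molar, which is why its pH can sit well below 0.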
{ "domain": "chemistry.stackexchange", "id": 5787, "tags": "acid-base" }
Mutations as a crossover by product
Question: Let's say I'm writing a GA to find an optimal path to travel from point $A$ to point $B$. Genotypes are a list of directions (north, south, east, west) to follow. So a genotype "NENWEE" will move north once, east once, then north again, west once, and finally east twice. The directions are encoded as follows: N : 00 E : 01 W : 10 S : 11 Our first genotype, "NENWEE" (let's call it $P$), will thus be encoded as follows: $00\,01\,00\,10\,01\,01$ Let $Q$ be a second genotype, say "EENEWW", which is encoded as follows: $01\,01\,00\,01\,10\,10$ Now let's do a one-point crossover operation on genotype $Q$, from $P$. The randomly-chosen crossover point is between the $9$-th and $10$-th bit, so bits $10$, $11$ and $12$ from genotype $P$ will replace those same bits from genotype $Q$. Let's call the resulting genotype $Q'$. P : 00 01 00 10 01 01 Q : 01 01 00 01 10 10 Q' : 01 01 00 01 11 01 After decoding $Q'$ we find that the result is "EENESE". However, neither $P$ nor $Q$ contained direction south. My question is, do crossover operators imply a certain degree of mutation by definition? Answer: I have contacted Inman Harvey from the University of Sussex and here is his answer to my question: In your example you have given each genotype six $2$-bit 'genes'. Let's look at how different your example parents $P$ and $Q$ are: In genes 2 (01) and 3 (00), the genes are identical in $P$ and $Q$. In genes 1, 4, 5 and 6 the $2$-bit genes differ. I.e., $2/3$ of the genes differ between parents. If you have $1$-point crossover at any of $12$ positions on the genotype (including here the non-functional case where the crossover point is 'off-the-end'), then: $6$ of the possible crossover points are between genes and hence do not count as sort-of mutations.
$2$ of the possible crossover points are within the $2$-bit genes that are identical in $P$ and $Q$, and hence do not result in any 'crossover-style mutation' in $Q'$. $4$ of the possible crossover points are within non-identical genes, and hence DO result in a crossover-style mutation in $Q'$. So -- with your specific example, $4$ out of $12$, i.e. $1/3$ of the time, crossover will produce a crossover-style mutation. Sounds like a lot. Sounds like maybe you should be worried about this in terms of impacting effective mutation rates... BUT WRONG!! Your example was atypical. You will find in practice, after only a few generations of evolution, your parents $P$ and $Q$ will be genetically very similar to each other. This is the basic SAGA insight; see any of my papers with SAGA in the title. If you doubt this, then run any binary-style GA for as few as $10$ generations and check how similar any $2$ randomly chosen parents are. It won't be (as in your example above) $67\%$ of genes differing between parents; it will be more like on the order of $1\%$ (if your genotypes are, say, $100$ bits long) differing between parents. Don't believe me? Run the tests yourself, it is very easy to do! ... and hence your concern about these crossover-style mutations should, I suggest, diminish to near vanishing point when you take account of the reality that most parents have mostly identical genotypes. Anecdotally and simplistically, if human genotypes are $98\%$ similar to chimpanzee genotypes, then you should expect your average human-human parental couple to be $99.99\%$ genetically identical, and only $0.01\%$ open to the possibilities that worry you above (that $\sim 0.01\%$ is of course important and interesting for different reasons). If an elephant mates with a flea, then you can expect problems -- in fact what you are discussing above relates closely to inter-species breeding problems. Your standard GA does not run into such problems. I hadn't thought of it this way.
So in fact crossover operators may introduce unwanted mutations in early generations (depending on how they are implemented) but these mutations will have little, if any, effect on the actual optimal mutation rate.
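The question's worked example is easy to reproduce in a few lines; this is an illustrative sketch (the encoding and the cut point come from the question, not from any particular GA library):

```python
ENC = {"N": "00", "E": "01", "W": "10", "S": "11"}
DEC = {bits: d for d, bits in ENC.items()}

def encode(genotype):
    return "".join(ENC[d] for d in genotype)

def decode(bits):
    return "".join(DEC[bits[i:i + 2]] for i in range(0, len(bits), 2))

P = encode("NENWEE")            # 000100100101
Q = encode("EENEWW")            # 010100011010

cut = 9                         # crossover point between the 9th and 10th bit
Q_prime = Q[:cut] + P[cut:]     # bits 10-12 of P replace those of Q

child = decode(Q_prime)         # 'EENESE': contains S, absent from both parents
```

The child's fifth gene (11, i.e. S) straddles the cut: its first bit comes from Q and its second from P, which is exactly the "crossover-style mutation" discussed above.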
{ "domain": "cs.stackexchange", "id": 4032, "tags": "genetic-algorithms" }
What is the use of width/height shift in data augmentation?
Question: I'm not sure I understand the use of data augmentation using width shift and height shift. Say I have limited image data, and I want to create new data using Keras' ImageDataGenerator. To classify images, I use CNNs. Since CNNs are translation invariant, aren't the translational shifts from Keras useless, as those shifts will not result in new images per se? (I know new images will be created by the generator, but the CNNs will not learn new features from the pictures, and might instead cause overfitting?) Answer: If a shift were applied on its own, then your reasoning would be correct. But another goal while applying augmentation is to have randomness. This is achieved with multiple augmentation techniques applied together. In that sense, these two can also become effective. E.g., for a zoomed image, adding a vertical shift can crop the image further and eventually result in a new (random) image.
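To make the cropping effect concrete, here is a minimal NumPy sketch of a pure translation. This is an illustration only; Keras' ImageDataGenerator additionally supports fractional shift ranges and configurable border filling via its fill_mode argument:

```python
import numpy as np

def shift_image(img, dy, dx):
    """Shift a 2-D image by (dy, dx) pixels, zero-filling vacated pixels
    (a crude stand-in for a width/height shift with constant fill)."""
    h, w = img.shape
    out = np.zeros_like(img)
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        img[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

img = np.arange(9).reshape(3, 3)
shifted = shift_image(img, 1, 0)   # shift down: the top row of pixels is lost
# Chained with a zoom (crop + resize), the visible window changes each time,
# so the network sees a genuinely different crop of the scene on each epoch.
```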
{ "domain": "datascience.stackexchange", "id": 8977, "tags": "keras, dataset, cnn, image-classification, data-augmentation" }
Rearrange content string
Question: From https://www.codeeval.com/browse/115/: You have a string of words and digits divided by comma. Write a program which separates words with digits. You shouldn't change the order elements. I tried to do it using Haskell (learning Haskell). Can you give me some good practices/shortcuts/advice on how to make this code better? {-# LANGUAGE OverloadedStrings #-} import qualified Data.Text as T import System.Environment (getArgs) main = do args <- getArgs content <- readFile(args !! 0) let fileLines = lines content mapM (\x -> putStrLn (mixedContent (generateContent (wordsWhen (==',') x)) [] [])) fileLines data Content = String String | Num Int deriving (Show) generateContent :: [String] -> [Content] generateContent [] = [] generateContent (x:xs) = if isNumeric x then [(Num (read x))] ++ generateContent xs else [String x] ++ generateContent xs mixedContent :: [Content] -> [Int] -> [String] -> String mixedContent [] d w = foldl (\r x -> r ++ x) "" $ giveResults (generateCommaDelimitedList w) ["|"] (generateCommaDelimitedList (intToChar d)) mixedContent ((String x):xs) d w = mixedContent xs d (w ++ [x]) mixedContent ((Num x):xs) d w = mixedContent xs (d ++ [x]) w giveResults :: [String] -> [String] -> [String] -> [String] giveResults [] _ [] = [] giveResults [] _ d = d giveResults w _ [] = w giveResults w delimitor d = w ++ delimitor ++ d intToChar :: [Int] -> [String] intToChar l = map (\i -> show i) l generateCommaDelimitedList :: [String] -> [String] generateCommaDelimitedList [] = [] generateCommaDelimitedList (x:[]) = [x] generateCommaDelimitedList (x:xs) = (x ++ ","):(generateCommaDelimitedList xs) isInteger s = case reads s :: [(Integer, String)] of [(_, "")] -> True _ -> False isDouble s = case reads s :: [(Double, String)] of [(_, "")] -> True _ -> False isNumeric :: String -> Bool isNumeric s = isInteger s || isDouble s -- split string equivalent wordsWhen :: (Char -> Bool) -> String -> [String] wordsWhen p s = case dropWhile p s of "" -> [] s' -> w 
: wordsWhen p s'' where (w, s'') = break p s' EDIT: I followed the advice given in the answer. I just wanted to show how main should look to respect what the exercise asks for. main = do args <- getArgs content <- readFile(args !! 0) let l = lines content let lf = filter (\x -> length x > 0) l mapM_ (putStrLn . arrange) lf The interesting part is mapM_, which lets me print the re-arranged lines while ignoring the results that mapM would return Answer: I think you've overcomplicated the problem because you're approaching it the way you would in an imperative language. I'll describe the way I would approach the problem. Notice that I do a lot of my "thinking" in GHCi! Your code reads the file and splits it up into lines just fine, so I'll move on to the part where we handle each line of text. First, we need to split the line up into tokens, where the tokens are separated by commas. The words function is close to what we want, but it breaks at whitespace, not commas. So we'll write our own: tokens :: String -> [String] tokens [] = [] tokens as = case break (==',') as of (xs,[]) -> [xs] (xs,_:ys) -> xs:tokens ys I put that in the program I'm writing, load it, and try it out: λ> tokens "wombat,7,789,tiger,33" ["wombat","7","789","tiger","33"] Now we need to separate the words from the integers. So we need a function that will tell us if a token is an integer. The problem statement implies we only need to deal with integers. So a token is an integer if it only contains digits. (The example given didn't have any spaces, but we could allow them, and even decimal points, if we want to.) So how do we tell if a character is a digit? Hmm... there's probably a function to do that for us. The best way to answer questions of the form "Is there a Haskell function to do X" is usually: Figure out what the type signature of the function you want would be. Search Hoogle or Hayoo. If you don't find it in one, try the other.
Usually it doesn't matter much whether you get the order of the input parameters exactly right, but you may need to experiment. What would the type of the function we're interested in be? Logically, it has to be: Char -> Bool Unfortunately, in this case there are a lot of functions with that signature, but eventually we find isDigit in Data.Char. λ> import Data.Char λ> isDigit '7' True λ> isDigit 'w' False Look at the source for that function. (You'll see that it would be easy to write our own function to allow blanks or decimal points as well as digits.) So now we have a way to tell if an individual character is valid in an integer. How do we tell if all of the characters in a token pass the test? Again, I suspect there's a function that applies a boolean test to all elements in a sequence, and tells us if they all pass. The type signature of such a function would be one of the following: (a -> Bool) -> [a] -> Bool [a] -> (a -> Bool) -> Bool Checking Hoogle, we find the all function in Data.List. λ> import Data.List λ> all isDigit "123" True λ> all isDigit "wombat" False We could use this expression as is, but it would be more readable to define a function. isInteger :: String -> Bool isInteger s = all isDigit s We could also write that last line using pointfree notation. isInteger = all isDigit I add that to my program, reload it, and try it out: λ> isInteger "123" True λ> isInteger "wombat" False Next we need to divide our tokens up into those that are integers, and those that aren't. Again, there's probably a function that segregates a list into those items that satisfy a test, and those that don't. It would have this signature (possibly with the arguments switched): (a -> Bool) -> [a] -> ([a],[a]) This time a Hayoo or Hoogle search turns up break and partition. Looking at the documentation, we see that partition is what we want.
Let's try it: λ> partition isInteger ["wombat","7","789","tiger","33"] (["7","789","33"],["wombat","tiger"]) That preserved the order of the elements, but it gave us the integers first. That's easily fixed: λ> partition (not . isInteger) ["wombat","7","789","tiger","33"] (["wombat","tiger"],["7","789","33"]) Let's wrap that in a function to make it more readable: segregate :: [String] -> ([String], [String]) segregate = partition (not . isInteger) I'm going to move a little more quickly now, because you're probably getting the hang of how to approach these problems. Next we need to format the results. Let's focus on re-inserting the comma between tokens. I'll write a dual to the tokens function: untokens :: [String] -> String untokens = concat . intersperse "," Let's try it: λ> untokens ["wombat","tiger"] "wombat,tiger" Now all we need is to combine the two parts: format :: ([String],[String]) -> String format (xs,ys) = untokens xs ++ '|' : untokens ys Let's try it out: λ> format (["wombat","tiger"],["7","789","33"]) "wombat,tiger|7,789,33" Now we can put all the steps together: parse :: String -> String parse = format . segregate . tokens See how simple and readable the parse function is? And it works: λ> parse "wombat,7,789,tiger,33" "wombat,tiger|7,789,33" EDIT: Here's the entire program, with main. import Data.Char import Data.List import System.Environment (getArgs) main = do args <- getArgs content <- readFile (args !! 0) mapM_ (putStrLn . parse) (lines content) tokens :: String -> [String] tokens [] = [] tokens as = case break (==',') as of (xs,[]) -> [xs] (xs,_:ys) -> xs:tokens ys untokens :: [String] -> String untokens = concat . intersperse "," segregate :: [String] -> ([String], [String]) segregate = partition (not . isInteger) isInteger :: String -> Bool isInteger = all isDigit format :: ([String],[String]) -> String format (xs,ys) = untokens xs ++ '|' : untokens ys parse :: String -> String parse = format . segregate . tokens
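For comparison, the same tokens / segregate / format pipeline fits in a few lines of Python (a sketch; str.isdigit plays the role of all isDigit, except that it is False for the empty string):

```python
def parse(line):
    toks = line.split(",")                        # tokens
    words = [t for t in toks if not t.isdigit()]  # segregate, order preserved
    nums = [t for t in toks if t.isdigit()]
    return ",".join(words) + "|" + ",".join(nums) # format

print(parse("wombat,7,789,tiger,33"))  # wombat,tiger|7,789,33
```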
{ "domain": "codereview.stackexchange", "id": 5099, "tags": "haskell" }
Proper naming for a Time class
Question: Which of these names is better: time.getHours() or time.hours()? And why? public class Time implements Serializable { private final int hours; private final int minutes; public static Time from(Calendar calendar) { int hoursIn24HourFormat = calendar.get(Calendar.HOUR_OF_DAY); int minutes = calendar.get(Calendar.MINUTE); return new Time(hoursIn24HourFormat, minutes); } public Time(int hours, int minutes) { this.hours = hours; this.minutes = minutes; } // or maybe the getHours() name is better? public int hours() { return hours; } // or maybe the getMinutes() name is better? public int minutes() { return minutes; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (obj == this) { return true; } if (!(obj instanceof Time)) { return false; } Time other = (Time) obj; return (hours == other.hours) && (minutes == other.minutes); } @Override public int hashCode() { return toMinutesOfDay(); } private int toMinutesOfDay() { return hours * 60 + minutes; } @Override public String toString() { return twoDigitString(hours) + ":" + twoDigitString(minutes); } private static String twoDigitString(int timeComponent) { return (timeComponent < 10) ? ("0" + timeComponent) : String.valueOf(timeComponent); } public boolean before(Time time) { return toMinutesOfDay() < time.toMinutesOfDay(); } public boolean after(Time time) { return toMinutesOfDay() > time.toMinutesOfDay(); } public boolean within(Time fromIncluded, Time toIncluded) { return (!fromIncluded.after(this)) && (!this.after(toIncluded)); } } Answer: I'd name the class TimeOfDay to be absolutely clear about its purpose. Consider adding validation to the constructor (0 ≤ hour ≤ 23 and 0 ≤ minutes ≤ 59). Because before() and after() are predicates, I'd rename them to isBefore(TimeOfDay t) and isAfter(TimeOfDay t). I prefer getHours() over hours(). If you prefer hours(), you might consider just exposing public final int hours, minutes; instead.
Normally, you want to reserve the flexibility to change the internal representation of your object (for example, to store just minutesOfDay instead of hours and minutes). However, since those fields are final, and you're more or less committed to the representation due to serialization, there's not much to be gained by wrapping the value in a method, other than consistency with tradition, and you would already violate tradition slightly anyway by choosing hours() over getHours().
{ "domain": "codereview.stackexchange", "id": 5704, "tags": "java, datetime" }
Where can I find the file where a specific topic was created?
Question: I can see the available topics using rostopic list. However, I would like to see the files (the source code) where they are being published from (or where they have been created). Is this possible somehow? When I execute the command rostopic list I see certain topics which I can't find anywhere inside the ROS package which is supposed to have created them. Originally posted by nbro on ROS Answers with karma: 372 on 2019-03-03 Post score: 0 Answer: I would like to see the files (the source code) where they are being published from (or where they have been created). Is this possible somehow? In general (and with "a single command")? No, I would say you can't. Not with any of the command line tools that are provided "as part of a standard ROS install". If however you know which nodes are publishing or subscribing to those topics (note: topics are not "created"; in fact, in some sense they do not really even "exist"), you could: figure out the package that hosts the/those node(s), retrieve the source code of the package, and use something like grep to find occurrences of the topic name in the code. But note though: due to remapping and dynamic topic name creation, there is no guarantee at all that you will be able to find the publishers this way. Especially with Python nodes, as Python is so flexible that many "crazy" things can be done at runtime, making this sort of sleuthing rather difficult. I see certain topics which I can't find anywhere inside the ROS package which is supposed to have created them. nitpick: nodes can be publishers. Packages can't. Also note that rostopic list shows you a topic name in three cases: the topic is published to, the topic is subscribed to, or the topic is both published and subscribed to. So it could be that a subscriber causes a topic to appear in rostopic list, and that subscriber could be "anywhere". Edit: I'm trying to use this ROS package.
After having launched the Pioneer 3AT, if I type rostopic list, I see a bunch of topics related to the Pioneer, but I can't find the place where they are being published from. Looking at the repository you linked, two things stand out: first, this is not a package, but an entire workspace with one package (pioneer3at_simulation) and two git submodules pointing to other repositories (with even more packages); second, the main package pioneer3at_simulation provides a collection of .launch files (here) that together start (among other things) Gazebo and something called teleop_joy. Gazebo is then configured to load various .world files, urdf_spawner is then used to inject a model of a 3-AT, that loads multiple Gazebo plugins, etc., etc. I haven't checked the two repositories linked in via the git submodules, but I expect those to also provide additional nodes and/or plugins. Gazebo, the teleop nodes and all of the Gazebo plugins will both publish and subscribe to topics, and those will appear in rostopic list. So in order to find "the file where a specific topic was created", you'd have to check the sources of all those packages. Edit2: I haven't checked, but there is a small chance that you could actually figure out where a topic is published or subscribed to without grepping through source code: the ROS log(s). It might be possible to print out the source line number (when configured using ROSCONSOLE_FORMAT) with a sufficiently high logging level (DEBUG or even higher). If this works, I wouldn't recommend doing this in production systems though. Edit 3: For example, using rostopic list, I see the topic /pioneer3at/camera_down/image_raw.
The only place inside that package where I find that topic is in the file rqt_gui_cameras.perspective. This is a good example of what I described in my original answer ("dynamic topic name creation"): that topic name is not something you will find directly in the sources of any of those packages, as it's dynamically created inside a xacro macro (here) that loads a Gazebo camera plugin (here), which finally is instantiated here with the argument "down". Taking all that together, we end up with a Gazebo sensor that uses a Gazebo ROS Camera plugin, has the name camera_down, and publishes on the topic /pioneer3at/camera_down/image_raw (as you can see here). The complete string /pioneer3at/camera_down/image_raw doesn't appear in any of these sources, but if we take the dynamic behaviour of all the involved components into account we can figure it out. Note that in this particular case, grepping the output of xacro after converting the pioneer3at.urdf.xacro file with the proper arguments supplied would have resulted in at least parts of the topic name appearing in the .urdf file. Originally posted by gvdhoorn with karma: 86574 on 2019-03-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by nbro on 2019-03-03: I even tried to use grep to find the topics, but I didn't find them. Anyway, yes, I was aware of the fact that you can't create topics (but I used the term "create" not to use both publish and subscribe). I also know that topics are published by nodes, but nodes reside inside packages. Comment by gvdhoorn on 2019-03-03: I even tried to use grep to find the topics, but I didn't find them If you can give an example of a topic you couldn't find, perhaps I can try to see/explain why that could have been the case. Comment by gvdhoorn on 2019-03-03: I also know that topics are published by nodes, but nodes reside inside packages. True, but: I live in a house, and I make bread sometimes. My house doesn't make bread, I do.
Comment by nbro on 2019-03-03: I'm trying to use this ROS package. After having launched the Pioneer 3AT, if I type rostopic list, I see a bunch of topics related to the Pioneer, but I can't find the place where they are being published from. Comment by nbro on 2019-03-03: Yes, but I am referring to the package inside the workspace pioneer3at_simulation. I only used this package. Comment by gvdhoorn on 2019-03-03: I believe my answer still applies. That pkg contains a nr of launch files that start all sorts of nodes, from a nr of other packages. If you can give an example of a topic name that you can't find (as I asked earlier), perhaps that could be explained. Comment by nbro on 2019-03-03: For example, using rostopic list, I see the topic /pioneer3at/camera_down/image_raw. The only place inside that package where I find that topic is in the file rqt_gui_cameras.perspective
{ "domain": "robotics.stackexchange", "id": 32574, "tags": "ros-melodic, ros-kinetic" }
Is there a gadget that reduces generalized geography to undirected graphs?
Question: The directed Generalized Geography game is well-known to be PSPACE-complete, however, I could not find anything for the undirected version. I saw that in Hans L. Bodlaender, Complexity of path-forming games, Theoretical Computer Science, Volume 110, Issue 1, 15 March 1993, Pages 215-245 (thx to Marzio De Biasi for the link) it is posed as an open problem. Is it still open? In fact, I wonder if there is a simple gadget that we can put in the place of every directed edge to make a straight-forward reduction from the directed version. Is it known that no such gadget can exist? Answer: Undirected (Vertex) Geography is in P. In particular, the game on graph $G$ with starting vertex $v$ is a win for player 1 if and only if every maximum matching of $G$ uses the vertex $v$. This can be checked in polynomial time. The above is Theorem 1.1 from the paper "Undirected Edge Geography", by Fraenkel, Scheinerman and Ullman, Theoretical Computer Science 1993.
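The matching criterion is easy to sanity-check by brute force on tiny graphs. The sketch below (graph and names are illustrative, not from the cited paper) compares a direct game-tree search of undirected vertex geography with the every-maximum-matching test:

```python
from itertools import combinations

def p1_wins(adj, start):
    """Direct search of undirected vertex geography from `start`:
    the player who cannot move to an unvisited neighbour loses."""
    def win(v, visited):
        return any(u not in visited and not win(u, visited | {u})
                   for u in adj[v])
    return win(start, frozenset([start]))

def every_max_matching_uses(edges, v):
    """Enumerate all matchings; check that every maximum one covers v."""
    matchings = [m for size in range(len(edges) + 1)
                 for m in combinations(edges, size)
                 if len({x for e in m for x in e}) == 2 * len(m)]
    best = max(len(m) for m in matchings)
    return all(any(v in e for e in m)
               for m in matchings if len(m) == best)

# Path a-b-c: both maximum matchings {ab} and {bc} use b, so starting
# at b is a win for player 1, while starting at a or c is a loss.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
edges = [("a", "b"), ("b", "c")]
for v in adj:
    assert p1_wins(adj, v) == every_max_matching_uses(edges, v)
```

The exponential enumeration is only for checking the theorem on toy inputs; the point of the theorem is that the matching condition itself is decidable in polynomial time.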
{ "domain": "cstheory.stackexchange", "id": 2887, "tags": "graph-theory, pspace, combinatorial-game-theory, two-player-games" }
Nearly Free Electron Model (Perturbation Theory)
Question: I am having difficulty understanding the degenerate perturbation theory treatment of the nearly free electron model. For a free electron, the energy dispersion relation is $E^{0}=\frac{\hbar^{2}k^{2}}{2m}$. Assuming the 1D lattice constant is $a$, the two states $|k=\frac{\pi}{a}\rangle$ and $|k'=-\frac{\pi}{a}\rangle$ will satisfy the Bragg condition and will be degenerate, since they have the same energy (i.e. $E_{k}^{0}=E_{k'}^{0}=\frac{\hbar^{2}\pi^{2}}{2ma^{2}}$). So assuming the periodic potential is $V$, the first-order energy corrections for the energies of $|k=\frac{\pi}{a}\rangle$ and $|k'=-\frac{\pi}{a}\rangle$ are the eigenvalues of the following matrix: $$\begin{pmatrix} \langle k|V|k\rangle & \langle k|V|k'\rangle \\ \langle k'|V|k\rangle & \langle k'|V|k'\rangle \end{pmatrix}=\begin{pmatrix} 0 & V_{G} \\ V^{*}_{G} & 0 \end{pmatrix}$$ which are $\lambda = \pm\left| V_G\right|$. So am I right to assume that the state $|k'=-\frac{\pi}{a}\rangle$ is pushed down by $\left| V_G\right|$ and the state $|k=\frac{\pi}{a}\rangle$ is pushed up by $\left| V_G\right|$? I know this is incorrect, since the gap opens up in both directions (up and down) at $k=\frac{\pi}{a}$ and $k=-\frac{\pi}{a}$, so I want to know what I am missing. I attached a picture to show the results of perturbation theory. However, this is not correct and we actually get two more states after perturbation and a band gap of $2\left| V_G\right|$. So how can we start with two states before perturbation and then end up with four states after perturbation (see picture)? Could anyone explain why this is the case? Answer: There is a misconception in the question. The states don't somehow multiply when we introduce the periodic potential and shift energies at the Brillouin zone boundary up and down.
Instead, what we are doing is changing the basis from the free-particle plane-wave basis to one that (approximately) diagonalizes the full Hamiltonian (which includes the kinetic energy and periodic potential terms). Once the periodic potential is introduced, the plane-wave states are no longer states of well-defined energy (they are not energy eigenstates), and so they cannot be labeled with energies. Instead, it is particular linear combinations of the plane-wave states that are eigenstates of the Hamiltonian, and it is these states that can be labeled with energies. (More detail below the break.) There is another possible cause of the misconception. We don't shift both states ($k=\pm \pi/a$) both up and down, making four states. Instead, these two states combine to form two new states, one of which is shifted down and one of which is shifted up. When we draw the dispersion relation in the reduced zone scheme, the states at the Brillouin zone boundary with the same energy are the same state (i.e., you kind of have to think of the Brillouin zone as having "periodic boundary conditions", in a sense). Consider the picture below: the dots with the same color are actually the same state. It looks as if, in this picture, we have created four states out of two, but that's just a problem with our picture. We want to diagonalize the full Hamiltonian, including the free-particle term (kinetic energy $\hat{T}$) and the periodic potential $\hat{V}$. We do this approximately by using perturbation theory. Due to the periodic perturbing Hamiltonian, the plane-wave states corresponding to the free-particle dispersion relation that you are showing are no longer energy eigenstates. Instead, you must form linear combinations of the plane-wave states to construct the states of well-defined energy (i.e., energy eigenstates). Thus, you are changing the basis from the set of plane-wave states to the set of energy eigenstates of the full Hamiltonian.
This is difficult to do in general, but for the states at the Brillouin zone boundary (at $k=\pm \pi/a$), we can make a good approximation that the eigenstates are the symmetric and anti-symmetric combinations of the corresponding plane-wave states. This is in fact what results from diagonalizing the two-by-two matrix shown in the OP: the eigenvectors of that matrix are $(1,1)$ and $(1,-1)$, and so the resulting (approximate) eigenstates are just $$ \psi_{\pm}(x) = \frac{e^{\pi i x/a}\pm e^{-\pi i x/a}}{\sqrt{2}}\,. $$ These are clearly no longer plane-wave states. The plane-wave states are not part of the new basis. They still exist, but they're not energy eigenstates, so you cannot label them with energies, as OP is doing in the picture. Part of the problem might be that the symbol $k$ is often used in this context to denote two different (but related) things. For plane-waves, it's just the wave-vector (related to the momentum). But once you've introduced the periodic potential and "folded" the dispersion relation over into the first BZ, $k$ now corresponds to quasi-momentum (aka crystal momentum), the momentum-like quantity conserved in systems with discrete translational invariance. Momentum $k$ labels the plane-wave basis, and quasi-momentum $k$ labels the Bloch basis, constructed using particular linear combinations of the plane-wave states.
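For concreteness, the two-by-two degenerate block can be diagonalized numerically in a couple of lines. Here $|V_G|$ is an arbitrary illustrative number, and $V_G$ is taken real so that $V^*_G = V_G$:

```python
import numpy as np

V_G = 0.3                      # illustrative magnitude of the Fourier component
H = np.array([[0.0, V_G],
              [V_G, 0.0]])     # degenerate block at k = +/- pi/a

energies, states = np.linalg.eigh(H)   # ascending order: -|V_G|, +|V_G|
# The two levels split symmetrically, giving a gap of 2|V_G|.
# The eigenvector columns are the (anti)symmetric plane-wave combinations,
# so every component has magnitude 1/sqrt(2).
```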
{ "domain": "physics.stackexchange", "id": 97427, "tags": "quantum-mechanics, waves, solid-state-physics, perturbation-theory, crystals" }
Why do electrical storms cause metal objects to vibrate?
Question: I am curious about the vibration of metal objects as a sign of an impending lightning strike. This is apparently a common occurrence; so much so that a quick Google search will yield many pages instructing people to head for shelter if their keys (or other small metal objects) start to vibrate in the midst of an electrical storm. One climbing site even mentions that a vibrating rope is a sign of a lightning strike. Can someone explain the mechanism behind this? My guess is that it is a piezoelectric effect, though this contradicts my (limited) understanding that atmospheric electricity is static. Thanks :) Answer: Lightning primarily depends on strong electrostatic fields. They're up to 100 volts per meter in the summer and 500 volts per meter in the winter. These fields are fluctuating. When a certain critical threshold not far from those values is reached, lightning strikes. Does the electrostatic field move a metal? There is no direct electrostatic force acting on the keys, because the keys' total electric charge is basically zero. That's true even if the electric field of the thunderstorm depends on location and time. However, the metallic object gets charged locally. The electric field polarizes the keys. It means that one side of the keys (measured relative to the direction of the electrostatic field of the thunderstorm) is positively charged while the opposite side is negatively charged. The total force acting on this polarized object would be zero if the electrostatic field were uniform and constant in time. But when the electric field is non-uniform (which may also be guaranteed by its being time-dependent, because the signals only propagate at the speed of light), there will be a net force acting on the keys. I am not able to produce any numbers now, but the keys (or metallic object) get the charge $+Q$ coulombs in the upper part of the keys and the opposite charge $-Q$ coulombs on the lower side.
Because the electric field depends on the altitude and oscillates, the electric field may be higher on the upper side of the keys than on the opposite one, and there will be a net force. The electric field may fluctuate. When the object is light enough relative to its length in the relevant direction, the electrostatic force will be enough to move the keys or make them oscillate. What really matters is some ratio of the length to the width, etc. Even bad conductors may get polarized and exhibit the same effect, which is why long ropes, perhaps especially wet ones, can oscillate.
{ "domain": "physics.stackexchange", "id": 24381, "tags": "electricity, atmospheric-science, electrical-engineering" }
What is IR CFT and UV CFT?
Question: What is IR CFT and UV CFT? In many physics-related materials, they often mention IR and UV. I think it is related to regularization (I remember in QFT there is a UV cutoff in some regularization schemes). What is the difference between IR and UV regularization? (It seems to me it is somehow related to frequency, i.e. energy.) Answer: Imagine a QFT with some particle content. Some of these fields will be massless and some massive. For simplicity, consider a massless scalar field and a massive scalar field with mass $M$. If we are working at some energy $E\ll M$, we won't see the massive field (as happened with the Higgs before the LHC, for example). This is the IR CFT. Why IR? Because we are at low energy. Why CFT? Well, the effective QFT that we are seeing under these conditions is, as I've said, made up of massless fields. These kinds of theories are scale invariant, because you don't have any parameter in the theory that fixes an energy scale (a mass, or a coupling, or a characteristic length). Scale-invariant theories have conformal symmetry, and are called CFTs. On the other hand, if we now take the energy to be $E\gg M$, we will see both fields, but now almost all of their energy will come from their momentum and their mass will be negligible. They will behave as two massless fields. That is the UV CFT. Summing up: depending on the energy we are working at, the physics of our theory will change, and this is described by effective theories. It happens that at low enough energies and at high enough energies this effective theory is a CFT, IR and UV respectively. Roughly speaking, regularization is introducing a cutoff so we can hide infinities. Imagine a spin lattice. If we consider that the lattice is infinite, and we want to compute its energy, for example, we are going to obtain that it is infinite, just because we are summing an infinite number of contributions. So usually one says that the lattice has a volume $V$, which then takes arbitrarily high values.
This volume is called a regulator, and since it is related to long distances or low energies, it is called an IR regulator. On the other hand, if we want to treat our lattice as a continuum, we will have problems too, because of having infinite degrees of freedom in a local region. So we introduce a separation length $\epsilon$ between the sites which goes to zero. This is the UV regulator, because it is related to short distances, or high energies.
{ "domain": "physics.stackexchange", "id": 16589, "tags": "quantum-field-theory, renormalization, conformal-field-theory, regularization" }
Route structure for multiple associations in Rails
Question: I'm doing a site at the moment (first Rails site, learning as I code) and I'm concerned that I'm over-complicating the routes. My associations boil down to: class Recipe < ActiveRecord::Base has_and_belongs_to_many :categories belongs_to :book end class Category < ActiveRecord::Base belongs_to :category_type has_and_belongs_to_many :recipes end class Book < ActiveRecord::Base has_many :recipes end I want to end up with URLs like this: /books/3/recipes #to show all recipes from book 3 /category/12/recipes #to show all recipes from category 12 /recipes/3 #to show recipe 3 The good news is I have this all working, but in the course of doing so I feel like I may have strayed a little from "the Rails way" and I was wondering if someone could take a look and let me know if I've gone wrong. The bits I'm particularly concerned about are my routes.rb file (because I appear to be repeating myself): resources :books do resources :recipes end resources :categories do resources :recipes end In particular, the recipes#index action, which seems a little verbose: def index if(params[:category_id]) @recipes = Recipe.find_by_category(params[:category_id]) elsif(params[:book_id]) @recipes = Recipe.where(:book_id => params[:book_id]) else @recipes = Recipe.find(:all) end end Now, it might be that this is all absolutely fine, but I just want to check before I get too much further in! Answer: This is probably fine, though there are other ways to do it, like passing filters to /recipes (?category_id=x or ?book_id=x). Your controller action is probably fine too, though I may have considered making multiple controllers. That said, I'm not sure why you're using find_by_category instead of find_all_by_category or the same .where as you're using for book_id. Also, Recipe.all is more idiomatic in Rails 3 than Recipe.find(:all).
{ "domain": "codereview.stackexchange", "id": 1241, "tags": "ruby, ruby-on-rails" }
How can I argue that $3\mathsf{SAT}\leq_p \mathsf{IndSet}$ is polynomial in time?
Question: Given the reduction $3\mathsf{SAT}\leq_p \mathsf{IndSet}$ as follows: How can I argue that it's in polynomial time? I understand how the reduction works, but even though it appears rather trivial, I can't explain why it's efficient. To place $\mathsf{IndSet}$ in $\mathsf{NP}$-Hard, we will show $3\mathsf{SAT}\leq_p \mathsf{IndSet}$: Given $$\phi=\bigwedge_{i=1}^{m}(x_i\vee y_i\vee z_i)$$ with $m$ clauses, produce the graph $G_\phi$ that contains a triangle for each clause, with vertices of the triangle labeled by the literals of the clause. Add an edge between any two complementary literals from different triangles. Finally, set $k=m$. In our example, we have triangles on $x,y,\overline{z}$ and on $\overline{x},w,z$ plus the edges $(x,\overline{x})$ and $(\overline{z},z)$. We need to prove two directions. First, if $\phi$ is satisfiable, then $G_\phi$ has an independent set of size at least $k$. Secondly, if $G_\phi$ has an independent set of size at least $k$, then $\phi$ is satisfiable. (Note that the latter is the contrapositive of the implication "if $\phi$ is not satisfiable, then $G_\phi$ does not have an independent set of size at least $k$".) For the first direction, consider a satisfying assignment for $\phi$. Take one true literal from every clause, and put the corresponding graph vertex into a set $S$. Observe that $S$ is an independent set of size $k$ (where $k$ is the number of clauses in $\phi$). For the other direction, take an independent set $S$ of size $k$ in $G_\phi$. Observe that $S$ contains exactly one vertex from each triangle (clause), and that $S$ does not contain any conflicting pair of literals (such as $x$ and $\overline{x}$, since any such pair of conflicting literals is connected by an edge in $G_\phi$). Hence, we can assign the value True to all the literals corresponding to the vertices in the set $S$, and thereby satisfy the formula $\phi$.
This reduction is polynomial in time because $\Huge\dots?$ I've looked at many different examples of how this is done, and everything I find online includes everything in the proof except the argument of why this is polynomial. I presume it's being left out because it's trivial, but that doesn't help me when I'm trying to learn how to explain such things. Answer: You are given a formula, and you will construct a graph. You can argue the reduction is polynomial time by analyzing the time you need to construct the graph. The transformation is an algorithm: write each step down, and analyze them. If every step takes polynomial time, you have shown the reduction runs in polynomial time. Scan the clauses of the formula in time that is linear in the number of clauses. Scan the 3 literals in every clause. Build 3 vertices, with labels corresponding to the literals in a clause. This you can do in constant time, clearly. Once you have built a triangle, add it to your graph. With reasonable assumptions, you can do this in linear time, but details naturally depend on the way you represent your graph. Nevertheless, the time taken will be polynomial. Finally, you only need to add edges between any two complementary literals from different triangles. Scan your triangles. For every literal in the triangle, scan through every other triangle, and see if there is a complementary literal. If so, add an edge. This process takes polynomial time; for every triangle you are scanning every other triangle, so think of something quadratic. After every triangle has been scanned, $G_\phi$ has been built.
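To make the counting argument concrete, here is a hypothetical Python sketch (the encoding and all names are mine, not from the original proof): literals are signed integers, and the construction does one linear pass to build the triangles and one quadratic pass to add the complementary edges.

```python
# Hypothetical sketch: build G_phi from a 3-CNF formula given as a list of
# clauses, each a tuple of 3 literals. Positive int k encodes variable x_k,
# negative -k encodes its negation.

def build_indset_graph(clauses):
    vertices = []          # (clause_index, literal) pairs
    edges = set()

    # Linear pass over the clauses: 3 vertices + 3 triangle edges per clause.
    for ci, clause in enumerate(clauses):
        for lit in clause:
            vertices.append((ci, lit))
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add(((ci, clause[a]), (ci, clause[b])))

    # Quadratic pass: connect complementary literals in different triangles.
    for i, (ci, lit) in enumerate(vertices):
        for cj, lit2 in vertices[i + 1:]:
            if ci != cj and lit == -lit2:
                edges.add(((ci, lit), (cj, lit2)))

    return vertices, edges

# Example from the text: (x or y or not z) and (not x or w or z),
# encoded as x=1, y=2, z=3, w=4.
verts, edges = build_indset_graph([(1, 2, -3), (-1, 4, 3)])
```

With two clauses this yields 6 vertices and 8 edges (two triangles plus the edges $(x,\overline{x})$ and $(\overline{z},z)$); the two passes are visibly linear and quadratic in the number of clauses, hence polynomial.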
{ "domain": "cs.stackexchange", "id": 1080, "tags": "complexity-theory, reductions" }
How to improve the binary classification model for text (News Articles) of Recurrent Neural Net with word embedding?
Question: I am trying to do binary classification of news articles using a Recurrent Neural Net with word embedding. Following are the parameters of the model: Data: 8000 labelled news articles (Sports:Non-sports::15:85) Parameters: embedding size = 128 vocabulary size = 100000 No. of LSTM cells in each layer = 128 No. of hidden layers = 2 batch size = 16 epochs = 10000 Result: AUC on training set = 0.60 AUC on testing set = 0.55 As both training and testing error are high, the model is underfitting and requires more data. So I have a couple of doubts here: What would be the optimum data size required? Can we change the parameters to improve AUC? By decreasing the embedding size or the number of neurons, we can reduce the degrees of freedom. Answer: I think you should be careful as to which algorithms you tend to use. A machine learning pipeline should be structured as follows: feature extraction and then your model. These are two things that should be done separately. Feature Extraction This means bag of words, n-grams and word2vec. These are all good choices for text examples. I think bag of words is a good choice in your case. However, if this generates a very sparse matrix then maybe n-grams can be better. You can test all 3 methods. If you have a vocabulary size of 100,000 then you really need to use some extra feature extraction. The Model Theoretically, the more parameters in your model, the more data you need to train it sufficiently; otherwise you will end up with a large amount of variance (overfitting). This means a high error rate on unseen data. Neural networks tend to have a very high number of parameters. Thus they require a lot of data to be trained. But, you have 8000 instances!!! Yes. But, you also have 100,000 features for each instance. This is insufficient even for shallow machine learning models. For a neural network, I usually suggest following this very general rule of thumb: $\#examples = 100 * \#features$. So you will need a MASSIVE amount of data to properly train your neural network model.
Moreover, if you have a skewed dataset then you should expect to need even more training examples, so that the model sees enough examples of each class to distinguish the two. I would suggest a less intensive model. You should try naive Bayes, kernel SVM or kNN for text classification. These methods would do MUCH MUCH better than a neural network, especially considering you have a skewed dataset!! My Suggestion I would start with bag of words and then use a kernel SVM. This should be a good starting point. A recurrent neural network is 0% recommended for the amount of data you have.
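The bag-of-words extraction the answer recommends can be sketched in a few lines. This is a toy illustration of the idea (my own code, not the poster's pipeline, and not a production vectorizer):

```python
# Minimal bag-of-words sketch: map each document to a vector of word counts
# over a fixed vocabulary built from the corpus.
from collections import Counter

def bag_of_words(docs):
    vocab = sorted({w for d in docs for w in d.lower().split()})
    vectors = []
    for d in docs:
        counts = Counter(d.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

vocab, X = bag_of_words(["the match was a great match",
                         "election results are in"])
```

In practice the vectors would be stored sparsely and capped to the most frequent terms; the resulting matrix is what you would feed to naive Bayes or an SVM.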
{ "domain": "datascience.stackexchange", "id": 2602, "tags": "classification, rnn, word-embeddings" }
How do you change atomic valence in Avogadro?
Question: I have a bridging hydroxyl group between two metals, so the oxygen is triply coordinated in this structure. How do I change the valency to three in Avogadro so that these bonds to the metal are displayed? It might also be necessary to play around with the van der Waals radius. Or is there a direct way to enforce a drawn bond between two atoms? Answer: While there are some automatic valence features in Avogadro, you can always turn them off. Bonding is mostly a matter of appearance. For example, let's import $\ce{IF7}$. Notice that the central iodine doesn't show enough bonds. So we'll draw some in. Switch to the draw tool and turn "Adjust Hydrogens" off. Now just draw connections between the atoms. Voila!
{ "domain": "chemistry.stackexchange", "id": 6452, "tags": "software" }
How to determine if a tree $T = (V, E)$ has a perfect matching in $O(|V| + |E|)$ time
Question: This is a problem I've come across while studying on my own; it's from Algorithms by Papadimitriou, Dasgupta and Vazirani. Specifically, the problem statement is: Give a linear-time algorithm that takes as input a tree and determines whether it has a perfect matching: a set of edges that touches each node exactly once. In the context of this book: A tree is an undirected, connected and acyclic graph. A linear-time algorithm on a graph $G = (V,E)$ is something that runs on $O(|V| + |E|)$ time. Thinking on the problem, the following seemed promising: Of course, the tree must have an even number of nodes, otherwise we can exit early and report that there is no perfect matching. Additionally, if at any point we find an isolated vertex, we can do the same. Since we're dealing with a tree, we can search for a leaf $v$, a node with a single incident edge $(u,v)$. This edge must be in the perfect matching, for it's the only one that matches $v$. We then remove the vertices $u$ and $v$ from $T$, along with all edges that involve them, and repeat the process. If we remove all vertices in this manner, we have found a perfect matching. Notice that, while the removal may disconnect the graph, it will remain acyclic and this is what matters. Disconnecting the graph essentially splits the tree into multiple trees, so the iteration continues to make sense even in this case. Now, this problem shows up in the chapter about greedy algorithms, so this approach seems like a natural fit. Moreover, searching on the topic I've found that a tree has at most one perfect matching (see, for instance, this). In other words, if it has one, it is unique, so the algorithm must produce it if it exists. However, when actually thinking about implementation, I don't really see how this runs in $O(|V| + |E|)$. The naive approach appears to be quadratic, because at each step we need to scan for a leaf and update the graph, and neither of these take constant time.
I'm mostly thinking of adjacency list representation, but this seems to hold true for adjacency matrix as well. I've also toyed with precomputing and updating degree values for vertices, but haven't nailed it. Searching more on the topic, I've come across this link, which suggests what's essentially the same algorithm to solve the same problem (number 2 on the exam). However, in the statement and solution, they only claim it to run in polynomial time. How would one implement this algorithm in linear time, or else design another algorithm to solve the problem in linear time? What do we assume about data structures or operation complexities involved? EDIT: Following on sdcvvc's suggestion, here's some C++-esque pseudocode implementing his idea. I think this works. enum class MatchStatus { HAS_MATCH, ROOT_NEEDS_MATCHING, CANNOT_MATCH }; bool hasMatching(tree t) { return hasMatching(t.root) == MatchStatus::HAS_MATCH; } MatchStatus hasMatching(node treeRoot) { // Keep track of whether the current node (root of the current (sub)tree) // is matched to some child node bool isMatched = false; for(node subTreeRoot : treeRoot.children) { MatchStatus subTreeStatus = hasMatching(subTreeRoot); if(subTreeStatus == MatchStatus::CANNOT_MATCH) return MatchStatus::CANNOT_MATCH; // The subtree has a perfect matching *except* its root needs to be matched else if(subTreeStatus == MatchStatus::ROOT_NEEDS_MATCHING) { if(isMatched) // Current node is already matched // The subtree root needs a match but cannot be matched return MatchStatus::CANNOT_MATCH; else // Match the current node to subTreeRoot isMatched = true; } // else if(subTreeStatus == MatchStatus::HAS_MATCH) // continue; } return isMatched ? MatchStatus::HAS_MATCH : MatchStatus::ROOT_NEEDS_MATCHING; } Answer: Suppose you remove vertices of degree 1 as long as possible, but with an additional constraint: you never disconnect the graph.
If you think about it, either you will succeed in removing all vertices (the tree has a matching), or you will end up with a single vertex (which cannot be covered), or you'll be stuck with this situation: ... | * / \ * * i.e. a vertex with two (or more) leaves, which does not have a matching no matter what is the rest of the graph. You can classify trees into (A) coverable, (B) coverable except the root and (C) inherently uncoverable. Suppose you've computed the status of trees $T_1,\dots,T_n$ and would like to compute the status of the tree consisting of a vertex connected to $T_1,\dots,T_n$. If any of $T_i$ is (C), then the result is (C). If all $T_i$ are (A), then the result is coverable except for the root (B). If there is one (B) and everything else is (A), then you can match that vertex with the root and the result is (A). Finally, if the root is connected to more than one (B), the tree can't be covered (C). This is a recursive traversal of the tree, which can be done in linear time.
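The A/B/C classification translates directly into a single linear-time traversal. This Python sketch is my own encoding of the answer's idea (the status strings and the adjacency-dict representation are assumptions, not from the book):

```python
# Sketch of the A/B/C classification: for each subtree return "A" (coverable),
# "B" (coverable except its root) or "C" (uncoverable). The tree is given as
# an adjacency dict; each vertex is visited once, so this runs in O(|V|).
import sys

def has_perfect_matching(adj, root=0):
    sys.setrecursionlimit(10000)  # deep trees would need an iterative version

    def status(v, parent):
        matched = False          # is v already matched to one of its children?
        for c in adj[v]:
            if c == parent:
                continue
            s = status(c, v)
            if s == "C":
                return "C"
            if s == "B":         # the child's root still needs a partner
                if matched:
                    return "C"   # v can match at most one child
                matched = True
            # s == "A": subtree fully matched internally, nothing to do
        return "A" if matched else "B"

    return status(root, -1) == "A"

# Path on 4 vertices 0-1-2-3: has the perfect matching {01, 23}.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Star on 4 vertices (center 0): no perfect matching.
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

The star fails because the center sees two "B" children, exactly the two-leaf situation drawn in the answer.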
{ "domain": "cs.stackexchange", "id": 19348, "tags": "complexity-theory, graphs, time-complexity, trees, matching" }
Conceptual questions on MLP and Perceptrons
Question: I am facing some confusion regarding the terminologies associated with classification and regression problems, especially using the MLP and Perceptron models. These are the following: 1) When the data is linearly inseparable, we use MLP. Here what is meant by "data"--is it the response or the input feature that is linearly inseparable? 2) If it is linearly inseparable then does it mean that the mapping function from input to output will always be non-linear? Hence, we prefer MLP or the latest new models such as deep learning? 3) Linear regression fails in the case of linearly inseparable data or can linear regression work for inseparable data but if the function mapping is nonlinear then it fails? Answer: When the data is linearly inseparable, we use MLP. Here what is meant by "data"--is it the response or the input feature that is linearly inseparable? This means that a linear function of the input features is unable to separate the response. To answer your question a bit more directly: Given only a linear function of the inputs, the response is the thing that's inseparable. If it is linearly inseparable then does it mean that the mapping function from input to output will always be non-linear? Hence, we prefer MLP or the latest new models such as deep learning? Yes. If the mapping from input to output were linear, then the output would necessarily be linearly separable by the input. Linear regression fails in the case of linearly inseparable data or can linear regression work for inseparable data but if the function mapping is nonlinear then it fails? Linear regression will never be able to perfectly separate linearly inseparable data. Consider the following example, where the input features are x1 and x2, and the output is the color: It doesn't matter how you draw a line in the 2D space - you'll never be able to separate the colors. The same idea applies in higher dimensions. I hope that helps!
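The classic concrete instance of linear inseparability is XOR. As an illustration (my own code, and a grid search rather than a proof), no linear threshold function over a range of candidate weights reproduces the XOR labels:

```python
# XOR labels cannot be produced by sign(w1*x1 + w2*x2 + b) for any weights.
# Brute-force a grid of candidate weights and check that none separates XOR.
import itertools

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]  # XOR of the two coordinates

def separates(w1, w2, b):
    return all((w1 * x + w2 * y + b > 0) == bool(t)
               for (x, y), t in zip(points, labels))

grid = [i / 2 for i in range(-8, 9)]   # -4.0 .. 4.0 in steps of 0.5
found = any(separates(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
```

The grid search finds nothing, and in fact no real weights exist: the four required inequalities (b <= 0, w1+w2+b <= 0, w1+b > 0, w2+b > 0) are contradictory, since adding the last two gives w1+w2+2b > 0.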
{ "domain": "datascience.stackexchange", "id": 6168, "tags": "neural-network, perceptron, terminology, mlp" }
Algorithm for sorting within windows
Question: I am writing an app which displays speeches on various topics, with each speech having a number of attributes. I want to give the user the choice to sort a list of speeches by an attribute, even within a previously sorted list of speeches. Consider for example: Suppose the previous list had been sorted by the attribute: "speaker's name", and there were three different speakers (so the list was split into three windows with each window consisting of the same speaker). Now, the user wants to sort the speeches based on the attribute: "length of the speech" such that the new list would now be sorted by speaker's name, and within each window of the speaker's name, the speeches would be sorted by the length (creating a new window within this window for each length of speech in the speaker's window). I would like to give the user the ability to do this with an arbitrary number of attributes. How can I go about doing such sorting, preferably in the most efficient way? P.S: I am just a teenage programmer who has been programming for a few years and can understand algorithms in concrete steps - I have not taken college classes in CS, so please keep that in mind when explaining the algorithms. Answer: What you require is the lexicographic ordering. Let me describe this. Suppose the speeches have two attributes. The first one is "speaker's name" and the second one is the "length". For simplification, let me represent the speaker's name by an integer instead of a character string. Now, suppose the speeches are represented using the following tuples: $(0,0)$, $(1,2)$, $(0,3)$, $(1,4)$, $(2,3)$, $(2,4)$, and $(1,1)$. Here, the second speech has the speaker's name: $1$, and length $= 2$. Requirement 1: You simply want to sort the speeches by the "speaker's name". Just consider the first attribute for comparing any two speeches and apply any sorting algorithm. You will get the following ordering: $(0,0)$, $(0,3)$, $(1,2)$, $(1,4)$, $(1,1)$, $(2,3)$, and $(2,4)$.
Requirement 2: You want to sort first by the "speaker's name" and then by the "length". This is the requirement that you have mentioned in your question. Now, define a new comparison operation between two tuples as follows: $(a,b) < (c,d)$ if and only if $(a<c)$ or $(a = c$ and $b<d)$ Here, we are comparing by giving the "first attribute" more priority. And if the first attribute value is the same, we compare based on the "second attribute". Similarly, you can define $(a,b) > (c,d)$ if and only if ($a>c$) or ($a = c$ and $b>d$). Now, you can simply use any sorting algorithm by replacing the standard comparison operation with the above-defined comparison operation. You will get the following ordering: $(0,0)$, $(0,3)$, $(1,1)$, $(1,2)$, $(1,4)$, $(2,3)$, and $(2,4)$. Requirement 3: Suppose you want to sort by the length and then by the speaker's name. Simply define the comparison operation between two tuples as: $(a,b) < (c,d)$ if and only if $(b<d)$ or $(b = d$ and $a<c)$. Now, again you can use any standard sorting algorithm by replacing the standard comparison operation with this comparison operation. You will get the following ordering: $(0,0)$, $(1,1)$, $(1,2)$, $(0,3)$, $(2,3)$, $(1,4)$, and $(2,4)$. Similarly, you can define a new comparison operation for multiple attributes based on your requirements.
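In languages whose tuples already compare lexicographically, the answer's custom comparison operations come for free. This sketch (mine, using the answer's toy data) reproduces Requirements 2 and 3:

```python
# Python tuples compare lexicographically, which is exactly the ordering the
# answer defines. Toy data from the answer: (speaker, length) pairs.
speeches = [(0, 0), (1, 2), (0, 3), (1, 4), (2, 3), (2, 4), (1, 1)]

# Requirement 2: by speaker's name, then by length (default tuple order).
by_name_then_length = sorted(speeches)

# Requirement 3: by length, then by speaker's name (reorder the key tuple).
by_length_then_name = sorted(speeches, key=lambda s: (s[1], s[0]))
```

For an arbitrary number of attributes, a key like `lambda s: tuple(s[i] for i in priority)` generalizes the same idea, where `priority` is the user's chosen attribute order.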
{ "domain": "cs.stackexchange", "id": 17618, "tags": "algorithms, sorting, algorithm-design" }
To detect isomorphic graphs, is it enough to check if they have the same number of same-degree vertices?
Question: Given two lists of undirected graph edges, e.g. [(1,3),(3,5),(5,1),(5,7)] [(4,5),(2,3),(3,4),(4,2)] In order to check if the two graphs are isomorphic, is it enough to count the vertices with the same degree between them? E.g. the degrees are: Graph 1: vertex 1 → 2, vertex 2 → 0, vertex 3 → 2, vertices 4,6 → 0, vertex 5 → 3, vertex 7 → 1. Graph 2: vertex 1 → 0, vertex 2 → 2, vertex 3 → 2, vertex 4 → 3, vertex 5 → 1, vertices 6,7 → 0. So in our example, both graphs have two vertices of degree 2, one of degree 3, one of degree 1, and three of degree 0. Answer: No. If two graphs have different degree sequences, they are definitely not isomorphic, so the algorithm "half-works". But consider a six-cycle versus two three-cycles. Both graphs have six vertices, all of degree two; they are not isomorphic. Indeed, this counterexample shows that nothing based solely on degrees can work.
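A quick way to see the counterexample (my own check, not from the answer): compute the degree sequences of a six-cycle and of two disjoint three-cycles and observe that they are identical, even though the graphs are not isomorphic.

```python
# Degree sequence of an undirected graph given as an edge list.
# Equal degree sequences are necessary but not sufficient for isomorphism.
from collections import Counter

def degree_sequence(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg.values())

six_cycle = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]
two_triangles = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)]
```

Both degree sequences come out as six vertices of degree two, yet the first graph is connected and the second is not, so they cannot be isomorphic.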
{ "domain": "cs.stackexchange", "id": 3915, "tags": "graphs" }
Simplifying a system output equation
Question: I have a problem deriving the final (simplified) version of the output equation y[n] of the system in the figure below. For this system, I know that $$w[n] = x[n] + aw[n − 1]$$ and $$y[n] = w[n] + bw[n − 1] = x[n] + aw[n − 1] + bw[n − 1] $$ however, the simplification method that yields this form: $$y[n] = x[n] + bx[n − 1] + ay[n − 1]$$ is not clear to me. Could anyone give me any tips/methods for deriving the final form of y[n]? Answer: These problems are most easily solved by using the $\mathcal{Z}$-transform. But it is also possible to do it in the sample domain. You have two equations for expressing $w[n]$ and $w[n-1]$ in terms of $x[n]$ and $y[n]$: $$x[n]=w[n]-aw[n-1]\tag{1}$$ $$y[n]=w[n]+bw[n-1]\tag{2}$$ From $(1)$ and $(2)$, $w[n]$ is obtained as $$w[n]=\frac{1}{a+b}\big(bx[n]+ay[n]\big)\tag{3}$$ Plugging $(3)$ into $(2)$ gives $$y[n]=\frac{1}{a+b}\big(bx[n]+ay[n]+b^2x[n-1]+aby[n-1]\big)\tag{4}$$ Bringing all terms with $y[n]$ to the left side gives $$y[n]\frac{b}{a+b}=\frac{1}{a+b}\big(bx[n]+b^2x[n-1]+aby[n-1]\big)\tag{5}$$ which finally results in $$y[n]=x[n]+bx[n-1]+ay[n-1]\tag{6}$$
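A numerical cross-check of the derivation (my own script, with arbitrary coefficients and input, and zero initial conditions assumed): the original two-stage system and the derived form $(6)$ produce the same output sample by sample.

```python
# Compare the two-stage system w[n] = x[n] + a*w[n-1], y[n] = w[n] + b*w[n-1]
# against the derived single equation y[n] = x[n] + b*x[n-1] + a*y[n-1].
a, b = 0.5, -0.3
x = [1.0, 2.0, -1.0, 0.5, 0.0, 3.0]

# Original two-stage system (zero initial state).
w_prev, y1 = 0.0, []
for xn in x:
    w = xn + a * w_prev
    y1.append(w + b * w_prev)
    w_prev = w

# Derived form (6) (zero initial state).
x_prev, y_prev, y2 = 0.0, 0.0, []
for xn in x:
    yn = xn + b * x_prev + a * y_prev
    y2.append(yn)
    x_prev, y_prev = xn, yn
```

Since the two difference equations describe the same transfer function, the outputs agree to rounding error for any input, which is a handy way to catch algebra mistakes in such derivations.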
{ "domain": "dsp.stackexchange", "id": 8879, "tags": "discrete-signals, linear-systems, homework" }
Why is NP-Complete reduction not reversible?
Question: I have read the question asked here Is polynomial reduction reversible and the logic actually makes sense to me. In other words, if A is polynomially reducible to B, it means that A <= B in terms of hardness. However, when I thought more about the reduction process, I became confused. As far as I can understand, the NP-Complete reduction process is as follows: (1) Find some NP-Complete problem (i.e. A), convert its inputs and output to the inputs and output of the problem we want to prove (i.e. B) (2) Show that if A is YES, then B is YES (3) Show that if B is YES, then A is YES. Then I also read several classic reduction proofs, and here is my confusion. In all the reductions I read, step (1) seems to be a "one-to-one mapping" between the inputs of A and the inputs of B. For example, minimum set cover (MSC) to minimum vertex cover (MVC): edges in MVC are elements in MSC and nodes in MVC are sets in MSC. Since it's a "one-to-one mapping" of inputs of A to inputs of B, if we can convert the inputs of A to the inputs of B, then we can always convert the inputs of B to the inputs of A. In this case, isn't step (1) a reversible process? I am not sure if "one-to-one mapping" is the correct terminology, but the main idea I want to convey is "the specific way" of mapping inputs of A to the inputs of B. Is there any reduction example that maps the inputs of A to the inputs of B, but the mapping is not reversible, i.e. can't map the inputs of B back to A? What's more, no matter if we convert A to B or convert B to A, I think step (2) and step (3) above are still the same. Therefore, I am really confused about which part makes the NP-Complete reduction not reversible? Answer: Here is a concrete example. Let $A$ be the problem in which all instances are NO instances, and let $B$ be the problem with a unique YES instance $1$. The function $f(x) = 0$ is a reduction from $A$ to $B$, but there is no reduction from $B$ to $A$, since there is nothing we can map $1$ into.
The asymmetric part in the reduction is the reduction itself, which is not guaranteed to be invertible. As the example above shows, it doesn't have to be one-to-one either, though usually we can make it so.
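The answer's example can be written out as a toy program; the string encoding below is my own illustration, not part of the original answer.

```python
# Toy version of the answer's example: A is the always-NO language, B accepts
# only the string "1". f(x) = "0" is a valid reduction A <= B, but no
# function can reduce B to A.

def in_A(x):
    return False          # every instance of A is a NO instance

def in_B(x):
    return x == "1"       # B has exactly one YES instance

def f(x):                 # reduction from A to B; note it is not one-to-one
    return "0"

# Correctness of f: x in A  <=>  f(x) in B, for every x.
inputs = ["", "0", "1", "01", "110"]
ok = all(in_A(x) == in_B(f(x)) for x in inputs)

# No reduction g from B to A can exist: "1" is in B, so g("1") would have
# to be a YES instance of A, but A has no YES instances at all.
```

The constant function f also shows that a reduction need not be a one-to-one mapping in the first place; it only has to preserve YES/NO membership in one direction.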
{ "domain": "cs.stackexchange", "id": 19605, "tags": "np-complete, reductions, np-hard, np, polynomial-time-reductions" }
Was GRAVITY built to look at one star?
Question: GRAVITY (shown below) is an interferometric combiner of near-infrared light from the four telescopes collectively called the Very Large Telescope, built in order to make careful astrometric measurements near the dusty galactic center, where very exciting things happen near what we assume to be a supermassive black hole. There have been publications verifying general relativity by watching the star S2 pass through its periapsis around whatever the supermassive-black-hole-sized object is that it orbits. This was a major milestone in GRAVITY's contribution to science. Is S2 still the fastest known star in the galaxy? It may still be used for other things, but primarily... Question: Was GRAVITY built to look at one star? Was the observation of S2 sufficient to justify this ESO effort? Or was it always also expected and required to do many more things? From eso1825 — Science Release: First Successful Test of Einstein’s General Relativity Near Supermassive Black Hole; Culmination of 26 years of ESO observations of the heart of the Milky Way: New infrared observations from the exquisitely sensitive GRAVITY [1], SINFONI and NACO instruments on ESO’s Very Large Telescope (VLT) have now allowed astronomers to follow one of these stars, called S2, as it passed very close to the black hole during May 2018. At the closest point this star was at a distance of less than 20 billion kilometres from the black hole and moving at a speed in excess of 25 million kilometres per hour — almost three percent of the speed of light [2]. The team compared the position and velocity measurements from GRAVITY and SINFONI respectively, along with previous observations of S2 using other instruments, with the predictions of Newtonian gravity, general relativity and other theories of gravity. The new results are inconsistent with Newtonian predictions and in excellent agreement with the predictions of general relativity.
[1] GRAVITY was developed by a collaboration consisting of the Max Planck Institute for Extraterrestrial Physics (Germany), LESIA of Paris Observatory–PSL / CNRS / Sorbonne Université / Univ. Paris Diderot and IPAG of Université Grenoble Alpes / CNRS (France), the Max Planck Institute for Astronomy (Germany), the University of Cologne (Germany), the CENTRA–Centro de Astrofisica e Gravitação (Portugal) and ESO. [2] S2 orbits the black hole every 16 years in a highly eccentric orbit that brings it within twenty billion kilometres — 120 times the distance from Earth to the Sun, or about four times the distance from the Sun to Neptune — at its closest approach to the black hole. This distance corresponds to about 1500 times the Schwarzschild radius of the black hole itself. Source A new instrument called GRAVITY has been shipped to Chile and successfully assembled and tested at the Paranal Observatory. GRAVITY is a second generation instrument for the VLT Interferometer and will allow the measurement of the positions and motions of astronomical objects on scales far smaller than was previously possible. The picture shows the instrument under test at the Paranal Observatory in July 2015. Answer: No, the GRAVITY instrument is multi-purpose. This link gives you all the papers that have cited the instrument description paper. The list of papers shows that it has been used for studying at least: the centres of AGN, close binary systems, discs around young stars, the atmospheres of AGB stars and interferometric imaging of exoplanets. Here is a paragraph from the instrument description paper Inspired by the potential of phase-referenced interferometry to zoom in on the black hole in the Galactic center and to probe its physics down to the event horizon (Paumard et al. 2008), we proposed in 2005 a new instrument named GRAVITY as one of the second generation VLTI instruments (Eisenhauer et al. 2008).
At its target accuracy and sensitivity, GRAVITY will also map with spectro-differential astrometry the broad line regions of active galactic nuclei (AGN), image circumstellar disks in young stellar objects and see their jets evolve in real time, and detect and characterize exo-planets especially around low mass stars and binaries – in short we will “Observe the Universe in motion” (Eisenhauer et al. 2011).
{ "domain": "astronomy.stackexchange", "id": 5432, "tags": "observational-astronomy, interferometry, infrared, instruments, gravity-vlt-interferometer" }
How hot can a beverage get without burning the tongue?
Question: What is the maximum temperature that a beverage can be without burning the tongue? I suspect there's a maximum temperature that the surface tissue of a tongue can withstand without being damaged (causing the burning feeling that lasts even after the heat goes away). So let's say I want to set up a device to keep my coffee at the right temperature to drink without damaging my mouth - I'm interested to know the highest possible temperature that will not damage the surface tissue of the tongue in a way that will result in lingering pain after the heat is gone. Answer: Afaik the tongue is not more heat-tolerant than other parts of the body, so a burn can be caused by drinking a beverage at about 45°C for a long time (more than 5 minutes). The pain threshold of the tongue is around 47°C, so you will feel when it really burns. According to studies the hedonic value of coffee has a maximum around 60°C, and it is significantly lower at safe levels (45°C). Frequent burns of the mouth increase the risk of oral cancer. So I think this will be a hard choice... ;-) A burn is an injury which is caused by application of heat or chemical substances to the external or internal surfaces of the body, which causes destruction of tissues. The minimum temperature for producing a burn is about 44°C for an exposure of about 5-6 hours; about 65°C for two seconds is sufficient to produce burns. Thermal injuries I. With the object of producing standard low-temperature burns in animals, and of studying the area of tissue only partly damaged in a burn, a burning iron has been made capable of applying temperatures from 45°-80°C. to the skin; with this the amount of heat and temperature causing skin damage has been studied, and the macroscopic and microscopic damage due to graded temperatures have been delineated. Graded temperatures of 45°-80°C.
have been applied to the skin of shaved, anæsthetised guinea-pigs, and in some cases rats, for times varying from 10 sec. to 6 and 10 min. Observations have been made of the development of erythema, flare, blanching, blueing, heat fixation, incipient blister formation, œdema, and edge wheal, as also upon the later scab formation and rate of epithelium regeneration. Applications of 47°C. up to 6 minutes produce no visible change. At 50°-55°C. applied for 1 minute and over, there is a critical temperature for the development of permanent and irreversible damage; in animals good scab formation occurs after burning at this temperature. After temperatures of 60°-65°C. the epidermis can be peeled off from the exposed area, leaving a punched-out exposed surface area somewhat like the exposed human blister. 1943 - EXPERIMENTAL THERMAL BURNS, ESPECIALLY THE MODERATE TEMPERATURE BURN According to Green the pain threshold of tongue is around 46-48°C. 1985 - Heat pain thresholds in the oral-facial region In all experiments, the chosen mean preferred temperature for drinking was around 60 °C (140 °F). Black coffee drinkers chose a slightly higher mean temperature than drinkers with added creamer, and they also chose a slightly lower mean temperature when the flavor was stronger. In all cases, consumers tended to choose, on average, temperatures for drinking coffee that were above the oral pain threshold and the burn damage threshold. 2006 - At What Temperatures Do Consumers Like to Drink Coffee?: Mixing Methods Overall, the available results strongly suggest that high-temperature beverage drinking increases the risk of EC. Future studies will require standardized strategies that allow for combining data, and results should be reported by histological subtypes of EC. High-temperature beverages and Foods and Esophageal Cancer Risk -- A Systematic Review
{ "domain": "biology.stackexchange", "id": 3055, "tags": "human-biology, skin, heat" }
Making sulfuric acid from pyrite
Question: I recently found a place with an insane amount of pyrite (we are talking about tons) already smashed into smaller pieces, and I'm wondering if there is any way of converting it into sulfuric acid. I'm aware that this is possible: $$\ce{4 FeS2 + 11 O2 -> 2 Fe2O3 + 8 SO2}$$ $$\ce{2 SO2 + 2 H2O + O2 -> 2 H2SO4}$$ but it requires heating the pyrite to quite some temperature :/ Is there any chemical extraction method available? Answer: There is a direct reaction to convert pyrite to sulfuric acid by treating it with nitric acid. $$\ce{FeS2 + 18HNO3 → Fe(NO3)3 + 2H2SO4 + 15NO2 + 7H2O}$$ Iron(II) disulfide reacts with nitric acid to produce iron(III) nitrate, sulfuric acid, nitrogen dioxide and water. Nitric acid - concentrated solution. The reaction takes place in a boiling solution. (source) But the problem is that iron(III) nitrate is soluble and does not precipitate in the solution. You have to distill off the sulfuric acid. So, in my opinion heating pyrite in oxygen and then oxidising the resulting sulfur dioxide in water to form sulfuric acid is the cheap and efficient way, because it does not require any additional chemicals, as @Ivan mentioned. Also, if you have money/curiosity, try using a cheap oxidizing agent like nitrous or nitric oxide. The advantage of this process is that the products besides sulfuric acid are gases, which can escape easily, but I recommend you stick to the plan.
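For a sense of scale for "tons" of pyrite (a back-of-the-envelope sketch of my own, not from the answer): the roasting route caps the yield by stoichiometry at two moles of H2SO4 per mole of FeS2, one per sulfur atom.

```python
# Stoichiometric ceiling for the roasting route:
#   4 FeS2 + 11 O2 -> 2 Fe2O3 + 8 SO2,   2 SO2 + 2 H2O + O2 -> 2 H2SO4
# i.e. 2 mol H2SO4 per mol FeS2 (one per sulfur atom).

M_FES2 = 119.98    # g/mol, iron(II) disulfide
M_H2SO4 = 98.08    # g/mol, sulfuric acid

def max_acid_kg(pyrite_kg, conversion=1.0):
    """Upper bound on H2SO4 (kg) from pure pyrite; real-world yields are lower."""
    mol_fes2 = pyrite_kg * 1000.0 / M_FES2
    return 2.0 * mol_fes2 * conversion * M_H2SO4 / 1000.0

print(round(max_acid_kg(1000)))   # roughly 1635 kg of acid per tonne of pure FeS2
```

The 100% conversion default is of course optimistic; ore purity and SO2 losses will pull the real figure well below this ceiling.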
{ "domain": "chemistry.stackexchange", "id": 8383, "tags": "inorganic-chemistry, thermodynamics" }
Repeated calls to rosjava parameter server produce unexpected results
Question: During repeated runs with a rosjava master mixing java and C++ nodes (all on Windows, compiled with MSVC), it appeared that a socket connection was dying within the Java code. After some period of time one or more of the C++ nodes would appear to block somewhere down in OS code. Was unable to debug effectively. However, it was noted that the C++ nodes affected were making multiple default parameter calls at 1HZ to the rosjava master: nh.param<std::string>("default_param", default_param, "default_value"); After removing these calls, the nodes worked reliably. It seems like repeated calls to the java implementation of the parameter server was causing some kind of hang up. Anyone experienced this? I'll try to debug more and get a stack trace as time allows. Originally posted by stevefturner on ROS Answers with karma: 26 on 2012-09-10 Post score: 0 Answer: I haven't heard of this issue. Please file an issue and include as much detail (e.g. sample code) as possible so that I may reproduce the issue. http://code.google.com/p/rosjava/issues/list Originally posted by damonkohler with karma: 3838 on 2012-09-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10968, "tags": "ros, rosjava, parameter-server" }
Is this set computable?
Question: Let $B$ be the Busy Beaver function and set $W=\{\langle M \rangle :\text{$M$ stops in less than $B(10^{1000})$ steps on an empty tape}\}$. Is this set computable? I'm not sure how to approach this question. I suspect that this set is computable and have tried to see if it is finite, but haven't reached anything. Answer: Your set is computable. Just run the input machine on the empty input for $B(10^{1000})$ steps. Note that $B(10^{1000})$ is just a constant which can be hardcoded into your code.
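The answer's point can be made concrete with a toy step-bounded simulator. This is my own sketch, not part of the answer: the machine encoding (a dict of transitions) and the small bound are illustrative choices; in the actual decider the bound would be the fixed, astronomically large constant $B(10^{1000})$, hardcoded as a literal.

```python
# Sketch of the decider: simulate M on the empty tape for at most `bound`
# steps.  Machines are encoded (a toy convention) as a dict
# {(state, symbol): (new_state, new_symbol, 'L' or 'R')}; a missing
# transition means the machine halts.

def halts_within(delta, start, bound):
    tape, head, state = {}, 0, start     # empty tape: every cell reads 0
    for _ in range(bound):
        key = (state, tape.get(head, 0))
        if key not in delta:             # no applicable transition: M halted
            return True
        state, symbol, move = delta[key]
        tape[head] = symbol
        head += 1 if move == 'R' else -1
    return False                         # still running after `bound` steps

halter = {}                              # halts immediately (no transitions)
looper = {('q0', 0): ('q0', 0, 'R')}     # runs right forever
print(halts_within(halter, 'q0', 100))   # True
print(halts_within(looper, 'q0', 100))   # False
```

The point is that computability does not require us to *know* the value of $B(10^{1000})$, only that some fixed natural number with that value exists to plug in as `bound`.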
{ "domain": "cs.stackexchange", "id": 14527, "tags": "computability" }
Mass term in the Lagrangian
Question: I have read that the mass term appearing in the electroweak Lagrangian stops it (the Lagrangian) from being gauge invariant. Can someone explain where and why this term is creating the problem? Answer: Let's assume a typical fermionic mass term (interacting leptons and quarks are spin-1/2 particles): $$ \tag 1 \bar{\Psi}\Psi = \bar{\Psi}\left(\frac{1 + \gamma_{5}}{2} + \frac{1 - \gamma_{5}}{2}\right)\Psi = \left| \bar{\Psi}\left( 1 \pm \gamma_{5} \right) = \left( (1 \mp \gamma_{5})\Psi\right)^{\dagger}\gamma_{0} \right| = $$ $$ =\bar{\Psi}_{L}\Psi_{R} + \bar{\Psi}_{R}\Psi_{L}. $$ Then let's assume an $SU(2)\otimes U(1)$ gauge-invariant, realistic theory (the electroweak part of the SM). In this theory, the left-handed representation $\Psi_{L}$ transforms as a doublet under the gauge transformations, while $\Psi_{R}$ transforms as a singlet. So of course, the mass term isn't gauge invariant. But if we assume only a $U(1)$ gauge theory, there are no doublets, so the mass term is indeed gauge invariant (except in the Majorana case, where $\Psi = \hat{C} \Psi$, with $\hat{C}$ the charge conjugation). This is the reason why we must include a (gauge-invariant) interaction of the Yukawa type with scalar doublets. For example, I will illustrate my statement by describing the mechanism by which the charged leptons acquire mass in the Standard Model. We "replace" the mass term $(1)$ by $$ L_{int} = -G\bar{\Phi}_{L}\varphi \Psi_{R} + h.c. $$ Here $\varphi = \begin{pmatrix} \varphi_{1} & \varphi_{2} \end{pmatrix}^{T}$ refers to the doublet of the complex scalar field, and $\Phi_{L} = \begin{pmatrix} \nu_{L} & \Psi_{L}\end{pmatrix}^{T}$. After using the unitary gauge ($\varphi \to \begin{pmatrix} 0 & \sigma \end{pmatrix}^{T}$) and shifting the vacuum ($\sigma \to \sigma + \eta$) we obtain the mass term and the interaction with the Higgs boson: $$ L_{int} = -G\eta (\bar{\Psi}_{L}\Psi_{R} + h.c.) - G\sigma (\bar{\Psi}_{L}\Psi_{R} + h.c.). 
$$ So we have a gauge-invariant mass term. But the price for this is the appearance of a Yukawa interaction with a massive real scalar field.
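To make the first claim explicit (my notation, not part of the original answer): under an $SU(2)_L$ transformation $U$ only the left-handed fields rotate, so the chiral mass term cannot stay invariant, while the Yukawa term survives because the scalar doublet rotates too:

```latex
\Psi_{L} \to U\Psi_{L}, \qquad \Psi_{R} \to \Psi_{R}, \qquad \varphi \to U\varphi
\quad\Longrightarrow\quad
\bar{\Psi}_{L}\Psi_{R} \to \bar{\Psi}_{L}U^{\dagger}\Psi_{R} \neq \bar{\Psi}_{L}\Psi_{R},
\qquad
\bar{\Phi}_{L}\,\varphi\,\Psi_{R} \to \bar{\Phi}_{L}U^{\dagger}U\varphi\,\Psi_{R}
  = \bar{\Phi}_{L}\,\varphi\,\Psi_{R}.
```

The uncancelled $U^{\dagger}$ in the bare mass term is exactly "where the term creates the problem"; sandwiching the doublet $\varphi$ between the chiral fields is what cancels it.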
{ "domain": "physics.stackexchange", "id": 15439, "tags": "mass, lagrangian-formalism, fermions, yang-mills, gauge-invariance" }
Big-O Notation of Anagram solution algorithm
Question: In Solution 1: Checking Off of Problem Solving with Algorithms and Data Structures, just beneath the ActiveCode: 1 extract (included at the bottom of this post for reference), it is stated: each of the n characters in s1 will cause an iteration through up to n characters in the list from s2. ...which to paraphrase (less succinctly), means "each character in s1 could result in all characters in s2 being inspected". The following sentence states: Each of the n positions in the list will be visited once to match a character from s1. Firstly I'm unsure which list is being referred to; I think it means the variable alist, i.e. s2. Is this correct? The number of visits then becomes the sum of the integers from 1 to n. Secondly (and the real point of this post), if each item in the list (variable alist) is inspected for each character in s1, then why is this not simply O(n) = n^2 as per "nested loops"? If n=5, then sum(n) = 15, whereas n^2 = 25. I'm fine with simplifying the equation for summing 1 to n integers to get n^2, but I'm confused about how sum(5) = 15, whereas O(n) = n^2. ActiveCode: 1 Checking Off (active5) def anagramSolution1(s1,s2): alist = list(s2) pos1 = 0 stillOK = True while pos1 < len(s1) and stillOK: pos2 = 0 found = False while pos2 < len(alist) and not found: if s1[pos1] == alist[pos2]: found = True else: pos2 = pos2 + 1 if found: alist[pos2] = None else: stillOK = False pos1 = pos1 + 1 return stillOK print(anagramSolution1('abcd','dcba')) Edit to add Having written this out it struck me that perhaps dropping the (1/2) and (1/2)n from the formula to sum(n) is the reason for the difference between actually summing n, as opposed to just doing n^2. To be absolutely clear, I understand the formula for summing n is n^2 when dropping the non-dominant terms. 
What I don't understand is why we're summing n and not immediately and directly stating it is actually n^2 as a result of the nested loop and this sentence: each of the n characters in s1 will cause an iteration through up to n characters in the list from s2. Answer: Let's be concrete. Suppose s1 = 'abcd' and s2 = 'dcba'. For the a in s1, it will take 4 iterations to find the a in s2. For the b in s1, it will take 3 iterations to find the b in s2. For the c in s1, it will take 2 iterations to find the c in s2. For the d in s1, it will take 1 iteration to find the d in s2. So in this case, it does take sum(i) for i = 1,...,n iterations. Now some quiet meditation should convince you that if s2 is an anagram of s1, it will again take exactly the same number of iterations; the summands will just appear in a different order. (After all, whatever is the first character in s2 will only require 1 iteration to be found, the second character requires 2 iterations, etc.) If s2 is not an anagram of s1, then the algorithm will short-circuit when the first character not in s2 is tested. So then the algorithm will take less than O(n^2) time. But in the worst case, the algorithm takes O(n^2) time.
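To reconcile the sum with the n^2 intuition numerically, one can instrument the inner loop with a counter. This is my own counting variant of the book's code, not the book's own:

```python
def anagram_iterations(s1, s2):
    """Run the checking-off method; return (is_anagram, inner-loop iterations)."""
    alist = list(s2)
    iterations = 0
    for c in s1:
        pos, found = 0, False
        while pos < len(alist) and not found:
            iterations += 1          # one inner-loop comparison
            if c == alist[pos]:
                found = True
            else:
                pos += 1
        if not found:                # short-circuit: not an anagram
            return False, iterations
        alist[pos] = None            # check off the matched character
    return True, iterations

print(anagram_iterations('abcd', 'dcba'))   # (True, 10): 4 + 3 + 2 + 1
```

In the worst case the inner loop really does run "up to n" times for each of the n outer characters, which is why the bound is O(n^2); the exact count for an anagram is n(n+1)/2 = 10 for n = 4, smaller than n^2 = 16 but growing at the same quadratic rate.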
{ "domain": "cs.stackexchange", "id": 3687, "tags": "complexity-theory, algorithm-analysis" }
What's with all the index notation in General Relativity?
Question: I am self-studying General Relativity with Leonard Susskind's lectures from Stanford. The thing that is bothering me is the notation of GR, specifically, the index notation. In simple layman terms first, can someone explain these and then move on to the more precise and physics explanation of why: Why do you need all that index notation? What is it good for? Provide a concrete (not abstract) and a simple example of this notation in action Answering these questions will help me understand more about GR and provide me with the basics of GR. Answer: Because it provides a nice, easy way of dealing with tensors and the operations that exist between them. Along with the summation convention, the index notation massively condenses the equations used in general relativity. It makes manipulations in general relativity as simple as knowing a few rules on how indices can and can't interact with each other. Even someone new to general relativity will be able to see that: $$T^\mu g_{\mu\nu}A_\mu=X_{\nu} \tag{1}$$ is an invalid tensor equation, because we have used the same index three times in one term (and we have the same index down twice). If we wrote this out without the index notation and without the summation convention it would be pretty hard to decipher and would not be immediately obvious that it was invalid. The index/summation conventions are ways of visually decluttering equations in GR without sacrificing important information.
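As a concrete taste of the notation, the summation convention maps directly onto code: numpy's einsum takes index strings that read exactly like the tensor expressions. This is a flat-space toy example of my own, nothing specific to GR:

```python
import numpy as np

# Lowering an index, X_nu = g_{mu nu} X^mu, written just as the index
# expression reads; the repeated index is summed over automatically.
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
X = np.array([2.0, 1.0, 0.0, 0.0])   # contravariant components X^mu

X_lower = np.einsum('mn,m->n', g, X)       # X_nu = g_{mu nu} X^mu
norm2   = np.einsum('mn,m,n->', g, X, X)   # scalar g_{mu nu} X^mu X^nu

print(X_lower)   # first component flips sign under the metric
print(norm2)     # -4 + 1 = -3.0, a timelike squared norm
```

The index-bookkeeping rules the answer describes show up here too: a repeated letter in the subscript string is summed, and an index string reusing the same letter three times is rejected as malformed, much like equation (1) in the answer.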
{ "domain": "physics.stackexchange", "id": 70626, "tags": "general-relativity, tensor-calculus, notation, covariance" }
Frames and coordinate transformations in General Relativity
Question: I am learning the frame formalism in differential geometry and I am trying to reconcile this with applications in general relativity, especially in contexts like the tetrad formalism. Consider a vector field $X$ on a differentiable manifold $\mathcal{M}$. Then, in some coordinate patch we can expand the vector in a basis $\partial_\mu$ as, $X = X^{\mu}(x)~\partial_\mu$ where $X^\mu(x)$ are the components of the vector field. If instead, I use coordinates $y$, then the components transform as, $$X'^\mu(y) = \frac{\partial y^\mu}{\partial x^\nu}~X^{\nu}(x)~.$$ An alternative way to describe these objects is through the introduction of frame fields which are a collection of vector fields $E_a$ with $a = 1,2,3,...,n$ for an $n$ dimensional manifold, which are orthonormal $\langle E_a, E_b\rangle_G = \delta_{ab}$ with respect to some metric $g_{\mu \nu}$ on the manifold. Then, these $n$ vector fields act as a coordinate system at each point $p \in \mathcal{M}$. Since these are vector fields I still have $E_a = E_a^{~\mu}~\partial_\mu$ and similarly I can define a dual frame $e^a = e^a_{~\mu}~dx^\mu$ which are one-forms such that $\langle E_a, e^b\rangle = \delta_{a}^{~b}$. Now, given the same vector field $X$, I can project this vector field into my frame to obtain $$X^a = \langle e^a, X\rangle = e^a_{~\mu}X^{\mu}, $$ which is a scalar under coordinate transformations. However, the frame does transform under $O(n)$ rotations. So, for $M^a_{~b} \in O(n)$, $e'^a = M^a_{~b}~e^b$ and hence, $X^a \to X'^a = M^a_{~b}X^b$ under $O(n)$. Therefore, $X^a$ is a scalar under coordinate transformations but an $O(n)$ vector. This is my confusion. In special relativity for example, one can make a Lorentz transformation (where we apply the above formalism but with $O(1,n-1)$) and move between different inertial frames. 
However, these transformations are also considered coordinate transformations, since I have an invertible coordinate map between the coordinates in the new frame and the old one. Therefore, what really is the difference between a coordinate transformation and a frame change? As a corollary, I understand that going between different frames via an $O(n)$ rotation is also like making a gauge transformation. But I have heard the statement that "diffeomorphisms are much like gauge transformations." How is this relevant in this context, since coordinate invariance is promoted to diffeomorphism invariance in GR? Answer: The difference between a coordinate transformation and a "frame change" is the following. If you think of the rotation as a coordinate transformation, a diffeomorphism, you are rotating both the frame ($e^a \to M^a_{\ \ b} e^b$) and the vectors ($X^a \to M^a_{\ \ b} X^b$), so that the projection of the rotated vector on the rotated frame $X^a = \langle e^a, X \rangle$ is invariant, as you mentioned. However, if you only rotate the frame ($e^a \to M^a_{\ \ b} e^b$, $X^a \to X^a$), you are only rotating the coordinate system, so that the projection of the initial vector on the rotated frame behaves like an $O(n)$ vector. Equivalently, you can fix the frame and only rotate the vectors ($e^a \to e^a$, $X^a \to M^a_{\ \ b} X^b$). How does this relate to physical theories? Let us consider a physical differential equation $$ D(x) \phi(x) = 0 $$ where $D(x)$ is a differential operator written in $x$ coordinates. When we claim that the theory is invariant under rotations, we mean that if $\phi(x)$ is a solution, the rotated solution $\phi(M x)$ is also a solution in the same coordinate system, of the same differential operator $D(x)$ $$ \text{Rotationally invariant theory : } D(x) \phi(x) = 0 \implies D(x) \phi(M x) = 0 $$ This is the concrete meaning of "rotational invariance": rotating the solution without changing the coordinate system yields another physical solution. 
Note that we could equivalently check $D(x) = D(M x)$. This is the standard verification that the differential operator is invariant under rotation. This is to be contrasted with diffeomorphism invariance: $$ \text{Diffeomorphism-invariant theory : } D(x) \phi(x) = 0 \implies D(M x) \phi(M x) = 0 $$ This is satisfied by design, because differential equations are invariant under coordinate reparametrization $x = M x'$. I think this post may be useful for details on diffeomorphism invariance, Diffeomorphism Invariance of General Relativity
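These transformation rules can be checked numerically in flat 2D (my own toy components, not part of the answer): under a change of coordinates the frame components of a vector are scalars, while under a rotation of the frame alone they transform as an $O(n)$ vector.

```python
import numpy as np

e = np.array([[1.0, 0.0],
              [0.0, 2.0]])             # dual-frame components e^a_mu (rows)
X = np.array([3.0, 4.0])               # vector components X^mu
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])             # Jacobian dy/dx of a coordinate change

X_a = e @ X                            # frame components X^a = <e^a, X>

# Coordinate change y = A x: vector components push forward with A, one-form
# components pull back with A^{-1}, so the projection X^a is unchanged.
coord_changed = (e @ np.linalg.inv(A)) @ (A @ X)

# Frame rotation by M in O(2), coordinates held fixed: X^a rotates with M.
theta = 0.3
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
frame_rotated = (M @ e) @ X

print(np.allclose(coord_changed, X_a))      # True: scalar under coordinates
print(np.allclose(frame_rotated, M @ X_a))  # True: O(n) vector under frames
```

The two booleans are exactly the answer's two cases: rotate everything consistently and nothing observable changes; rotate only the frame and the components pick up the $O(n)$ matrix.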
{ "domain": "physics.stackexchange", "id": 96517, "tags": "general-relativity, differential-geometry, reference-frames, coordinate-systems, gauge-invariance" }
while loop lock subscribing
Question: Hi everybody, I'm part of an international project team working on a robot using ROS. We are stuck with a problem about multithreading. The code below never exits the loop because _current_angle is not changed: while(ros::ok() && _current_angle <= -1) { ROS_INFO("[Navigation::Navigation] angle: %g", _current_angle); _motor_power_pub.publish(_motor_power_msg); ros::spinOnce(); loop_rate.sleep(); } You may agree that it should not change by itself, but _current_angle is modified in a callback invoked by the subscriber. The problem, I think, is that during the while loop the subscriber's callback is not executed. How should we manage it then? PS: _current_angle is a class attribute and the code above is in the constructor of this class. Originally posted by Auzias on ROS Answers with karma: 26 on 2012-12-08 Post score: 1 Answer: My bad, sorry. The problem was that I subscribed to the topic (to change _current_angle) after the constructor. Now it's working. About the code (which is on GitHub) I cannot post the link due to low karma: https:// github. com/kissdestinator/FroboMind Originally posted by Auzias with karma: 26 on 2012-12-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12020, "tags": "ros, multithreading" }
How to know if a continuous function can be represented by a finite sum of sinusoids?
Question: I lack mathematical knowledge, notably mathematical vocabulary, so maybe a similar question exists but with a different wording. What I want to know is actually how to tell whether a function, given its properties (for example, a polynomial with natural-number coefficients), can be converted to a finite sum of sinusoids, or in other words whether the Fourier transform of the function will give a finite sum of frequencies (because, as I understand the FT, which decomposes a signal into sinusoids, an infinite sum of frequencies means that the function is not sinusoidal in its nature and thus can only be approximated by sinusoids, through an infinite sum of sinusoids). Is there a theorem or a principle that can tell me whether a function, as long as it does not involve some specific operations, stays perfectly convertible (= no approximation) to a sum of sinusoids? Please avoid complex mathematical notation as much as possible (unless each indeterminate or non-trivial Greek letter is explained); I better understand natural phrasing and analogies. I'm only a beginner in signal processing, which I investigate for the purpose of advancing knowledge in image/graphics processing. Answer: I think a good rule of thumb is this: "If it isn't already written as a finite sum of sinusoids, then it probably can't be written as a finite sum of sinusoids." Most functions are not a finite sum of sinusoids. Polynomials certainly are not, and neither are square waves, triangle waves, or sawtooth waves. It seems like being a finite sum of sinusoids is a property that rarely happens by accident, so to speak.
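In a sampled setting you can watch this rule of thumb with the FFT. The sketch below is mine (numpy assumed); the lengths and frequencies are chosen so every component lands exactly on an FFT bin, avoiding spectral leakage:

```python
import numpy as np

N = 1024
t = np.arange(N) / N

# A genuine finite sum of sinusoids vs. a square wave of integer sample period.
two_tone = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)
square = np.tile([1.0]*32 + [-1.0]*32, N // 64)

def active_bins(x, rel_tol=1e-8):
    """Count FFT bins holding more than a tiny fraction of the peak magnitude."""
    mag = np.abs(np.fft.rfft(x))
    return int(np.sum(mag > rel_tol * mag.max()))

print(active_bins(two_tone))  # 2: exactly the two sinusoids
print(active_bins(square))    # 16: odd harmonics spread up to Nyquist
```

The finite sum occupies exactly as many bins as it has sinusoids; the square wave, which is *not* a finite sum, scatters energy across every odd harmonic the sampling rate can represent — a discrete picture of the infinite Fourier series the question describes.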
{ "domain": "dsp.stackexchange", "id": 12509, "tags": "signal-analysis, fourier-transform, fourier-series, math" }
Why don't main sequence stars continuously expand?
Question: I understand main sequence stars become subgiants when hydrogen is depleted in their cores and they start hydrogen shell burning. But I don't understand why this process is divided into two distinct phases, instead of being a continuous expansion as more and more helium builds up in the core. Why is there a sudden point when the star starts "proper" shell burning and begins to expand? Answer: I assume you are talking about the evolution of moderate mass $1.5 < M/M_{\odot} < 4$ stars after they leave the main sequence. These stars have a core that is now made of He, surrounded by a H-burning shell. The He core starts off with a relatively low mass and gradually accumulates more, due to "ash" from the H-burning shell adding to it. The core is isothermal because it is not generating energy and is kept hot by the overlying H-burning shell. It can be shown that this equilibrium is sustainable (via a density gradient) until the core reaches the Schonberg-Chandrasekhar limit of around 15% of the total stellar mass. It is this phase that leads to a slow progression of the star to the right in the HR diagram at almost constant luminosity and gradually increasing radius. As the core mass grows, it reaches and then exceeds the Schonberg-Chandrasekhar limit (in the mass range of stars considered). The core then begins to contract rapidly, releasing gravitational potential energy that is available to lift the envelope and change the size rapidly on the contraction timescale of the core. The evolution for lower and higher mass stars is different. Lower mass stars achieve a degenerate core prior to reaching the SC limit. Higher mass stars leave the main sequence with a core already above the SC limit. If you really are talking about main sequence stars then there seems to be a false premise. Main sequence stars do get continuously bigger and more luminous during their main sequence lifetimes, due to the changing chemical composition of their cores. 
Here for example are the expected trends for a star like the Sun. There is a gradual acceleration once it exhausts hydrogen, but no discontinuity. There is more of a discontinuity for higher mass stars, as I described above, which takes place during the subgiant phase, not at the end of the main sequence. However, you are correct that there is a relatively sudden transition between core hydrogen burning and shell hydrogen burning (relative to the main sequence lifetime anyway). It is more rapid in higher mass stars; for a star like the Sun, the transition still occurs over a billion years or so. The reason for this is twofold. First, the core is convective - that means that even if the very centre depletes all its hydrogen, a new fuel supply can be mixed in from further out. This means that all parts of the core run out of hydrogen at nearly the same time and once they do so, then convection, which is driven by the energy generation, also stops. Second, the temperature dependence of nuclear reactions is high, and that means that the shell burning reactions are turned on rather suddenly when the shell temperature reaches the ignition point. Of course there cannot be a gap between core burning ceasing and shell burning starting because overall hydrostatic equilibrium must be maintained, but the transition from one to the other is quite quick because of the two factors above.
{ "domain": "physics.stackexchange", "id": 69798, "tags": "stars, stellar-evolution" }
Distributive property of Ket over addition
Question: If: $$|a> = \binom{-4i}{2} $$ $$|b> = \binom{1}{-1+i} $$ Then what will the value of $$|a + b> $$ be? That is, is addition distributive over the ket notation? Answer: As far as I'm aware, writing something like $\lvert a+b\rangle$ does not make sense (is not defined). Here's a quote from the Wiki page on bra-ket notation: Symbols, letters, numbers, or even words—whatever serves as a convenient label—can be used as the label inside a ket, with the $\lvert\rangle$ making clear that the label indicates a vector in vector space. In other words, the symbol $\lvert A\rangle$ has a specific and universal mathematical meaning, while just the "$A$" by itself does not. For example, $\lvert 1\rangle + \lvert 2\rangle$ is not necessarily equal to $\lvert 3\rangle$. Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics through a listing of their quantum numbers. What we put inside of the $\lvert \rangle$ is, essentially, just a name for a vector, and this name may be a number, like $\lvert 0\rangle$ (which we use to denote the first standard basis vector of $\mathbb C^2$), or it could be some other symbol like $\lvert +\rangle$ (used to denote the first element of the Hadamard basis) or $\lvert \uparrow\rangle, \lvert\downarrow\rangle$ (used to denote a "spin up" or "spin down" state). In the latter case, it becomes more clear that what you're asking is not really well-defined. What would $\uparrow + \downarrow$ even mean?
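Whatever label one chooses for it, the sum of the two vectors themselves is perfectly well defined componentwise; with the columns from the question (numpy used here just for the complex arithmetic):

```python
import numpy as np

a = np.array([-4j, 2], dtype=complex)      # the ket |a>
b = np.array([1, -1 + 1j], dtype=complex)  # the ket |b>

s = a + b                                  # the vector |a> + |b>
print(s[0], s[1])                          # components (1 - 4i) and (1 + i)

# The sum is an ordinary vector.  What has no fixed meaning is treating the
# *label* "a+b" as an algebraic expression: labels inside kets are just names.
```

So addition distributes over the vectors the kets denote, but not over the labels written inside the ket brackets.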
{ "domain": "quantumcomputing.stackexchange", "id": 4666, "tags": "quantum-state" }
Why does Ohm's law not work on transformers?
Question: The voltage relation for a transformer is given by $\frac{V_1}{V_2} =\frac{N_2}{N_1}$. If we use Ohm's law and assume the resistances are the same for both inductors, $\frac{I_1 R} { I_2 R}=\frac{N_2}{N_1}$, but in my book they derive it using power equivalence, $I_1 V_1 = I_2 V_2$, which gives $\frac{V_1}{V_2}=\frac{I_2}{I_1}$. Where have I made the mistake? Answer: Where have I made the mistake? For an ideal transformer, the coils are considered to have no ohmic resistance, i.e., R = 0. That is, the coils are viewed as ideal inductors. Also, there are no power losses in the transformer laminations due to eddy currents and no magnetic flux leakage. Hope this helps.
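Numerically, the ideal-transformer relations follow from the turns ratio plus lossless power transfer, with no Ohm's law applied to the (zero-resistance) windings. A sketch with made-up numbers, using the common convention $V_1/V_2 = N_1/N_2$ (the question's book appears to label the ratio the other way around):

```python
def ideal_transformer(V1, I1, N1, N2):
    """Secondary voltage and current of an ideal transformer (lossless,
    zero winding resistance), convention V1/V2 = N1/N2."""
    V2 = V1 * N2 / N1        # voltages scale with the turns ratio
    I2 = V1 * I1 / V2        # losslessness: V1*I1 = V2*I2 fixes the current
    return V2, I2

V2, I2 = ideal_transformer(V1=240.0, I1=2.0, N1=1000, N2=250)
print(V2, I2)   # 60.0 8.0 -> 480 W on both sides
```

Note the current is fixed by power conservation, not by any winding resistance; that is exactly why the Ohm's-law derivation in the question has no foothold in the ideal model.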
{ "domain": "physics.stackexchange", "id": 61350, "tags": "electromagnetism" }
Java Pig Latin Translator
Question: Over the last few days I created this Pig Latin Translator just for fun. I would really appreciate it if anybody could give suggestions of how to improve the code and make the program more efficient. Here is the GitHub link for the project. For anybody who doesn't know the Pig Latin, feel free to look at the README, which explains what Pig Latin is, and the rules of the language. The code is in PigLatin.java. /* -------------------- Program Information -------------------- Name Of Program: PigLatin.java Date of Creation: 7/3/2018 Name of Author(s): Agnik Banerjee Version of Java: 1.8.0_171 Created with: IntelliJ IDEA 2017.3.5 Community Edition -------------------- Program Information -------------------- */ import java.util.Scanner; public class PigLatin { private static Scanner scan = new Scanner(System.in); public static void main(String[] args) { System.out.print("Enter one or more words that you would like to translate to Pig Latin: "); final String userInput = scan.nextLine(); scan.close(); String[] word = userInput.split(" "); // Splits the string into an array of words String output = ""; for (int i = 0; i < word.length; i++) { String pigLatinWord = translateWord(word[i]); // Translates each word individually output += pigLatinWord + " "; // Joins the translated word back into the output } System.out.println("Original Word(s): " + userInput); System.out.println("Translation: " + output + "\n"); } public static String translateWord(String word) { String lowerCaseWord = word.toLowerCase(); int pos = -1; // Position of first vowel char ch; // This for loop finds the index of the first vowel in the word for (int i = 0; i < lowerCaseWord.length(); i++) { ch = lowerCaseWord.charAt(i); if (isVowel(ch)) { pos = i; break; } } if (pos == 0) { // Translating word if the first character is a vowel (Rule 3) return lowerCaseWord + "yay"; // Adding "yay" to the end of string (can also be "way" or just "ay") } else { // Translating word if the first character(s) 
are consonants (Rule 1 and 2) String a = lowerCaseWord.substring(pos); // Extracting all characters in the word beginning from the 1st vowel String b = lowerCaseWord.substring(0, pos); // Extracting all characters located before the first vowel return a + b + "ay"; // Adding "ay" at the end of the extracted words after joining them. } } // This method checks if the character passed is a vowel (the letter "y" is counted as a vowel in this context) public static Boolean isVowel(char ch) { if (ch == 'a' || ch == 'e' || ch == 'i' || ch == 'o' || ch == 'u' || ch == 'y') { return true; } return false; } } Answer: I would suggest the following points to make this programme more object-oriented: Try using a collection object provided by Java instead of using array String[] word = userInput.split(" "); could be List<String> words = Arrays.asList(userInput.split(" ")); or using external library Guava List<String> words = Splitter.on(" ").splitToList(userInput); Avoid comments where you can. In this case I can see you can easily avoid comment if you can wrap up the particular code snippet with the meaningful name. For instance You could take the user input and breaking it in a list in separate class and call the class from your main List<String> words = new UserInputHandler().getInputAsList(); Try to use foreach loop over for loop: You can easily use the foreach loop to iterate over words for (String word : words) { String pigLatinWord = translateWord(word); // Translates each word individually output += pigLatinWord + " "; // Joins the translated word back into the output } Again as suggested before you can wrap the code inside for loop in a separate class so that you could call for (String word : words) { output = new PigLatinTranslator(word, output).translate(); }
{ "domain": "codereview.stackexchange", "id": 36505, "tags": "java, pig-latin" }
rosinstall on ubuntu 11.04
Question: Hi! I am trying to install ROS on Ubuntu 11.04. I tried to follow howtos and similar documents, but it didn't work for me. My main goal is to get a working system with MORSE, ROS and Blender. I managed to install Blender and MORSE, they work together. I also installed python 3.2.1 and ROS electric from the repositories. Now, when I try to run roscore or roslaunch I get the following errors: Traceback (most recent call last): File "/opt/ros/electric/ros/bin/roslaunch", line 2, in <module> from ros import roslaunch ImportError: cannot import name roslaunch I thought that it is maybe because I hadn't yet installed the rosinstall software. So I tried to install it from the repositories using sudo apt-get install python-rosinstall as it was written here: http://www.ros.org/doc/api/rosinstall/html/ Unfortunately, it seems to only work for fuerte not for electric, so I tried to install it using pip: sudo pip install -U rosinstall vcstools Then I try to run rosinstall, but I always get errors like this: Traceback (most recent call last): File "/home/zsarosi/.local/bin/rosinstall", line 5, in <module> from rosinstall.rosinstall_cli import rosinstall_main File "/home/zsarosi/.local/lib/python2.7/site-packages/rosinstall/__init__.py", line 33, in <module> import rosinstall.helpers File "/home/zsarosi/.local/lib/python2.7/site-packages/rosinstall/helpers.py", line 35, in <module> from rosinstall.config_elements import SetupConfigElement File "/home/zsarosi/.local/lib/python2.7/site-packages/rosinstall/config_elements.py", line 38, in <module> from vcstools import VcsClient File "/home/zsarosi/.local/lib/python2.7/site-packages/vcstools/__init__.py", line 45, in <module> from vcstools.tar import TarClient File "/home/zsarosi/.local/lib/python2.7/site-packages/vcstools/tar.py", line 52, in <module> import yaml File "/usr/local/lib/python3.2/dist-packages/yaml/__init__.py", line 284 class YAMLObject(metaclass=YAMLObjectMetaclass): ^ SyntaxError: invalid syntax Since it 
shows directories with the version 2.7 in them instead of 3.2, I suspect that my PYTHONPATH environment variable is still missing something; at the moment it looks like this: /opt/morse/lib/python3/site-packages:/opt/ros/electric/ros/core/roslib/src::/usr/local/lib/python3.2:/usr/local/lib/python3.2/site-packages:/usr/local/lib/python3.2/dist-packages:/home/zsarosi/builds/morse/bindings/pymorse:/opt/morse/lib/python3/dist-packages:/opt/ros/electric/ros/core/roslib/src I am stuck now. Can anyone please help me? Originally posted by zoltan on ROS Answers with karma: 11 on 2012-10-29 Post score: 0 Original comments Comment by Po-Jen Lai on 2012-10-29: Do you need python 3.2? Try installing python2.7? Comment by Lorenz on 2012-10-29: Blender needs python 3 and making morse work with ros is definitely non-trivial. Comment by Po-Jen Lai on 2012-10-29: Sorry for the incorrect comment. Answer: First, rosinstall is only a tool to download sources from version control. It makes life easier when you want to download sources, and that's all. So it will not solve your earlier problem. Traceback (most recent call last): File "/opt/ros/electric/ros/bin/roslaunch", line 2, in <module> from ros import roslaunch ImportError: cannot import name roslaunch This means the roslaunch Python libraries are not on your PYTHONPATH. To get them on your PYTHONPATH, it is usually sufficient to run: source /opt/ros/electric/setup.bash before attempting to run roscore or any ros node or tool. Since your question does not show what exactly you have tried to run when getting the error message, we can only guess what went wrong. As to rosinstall, we have not finalized python3 compatibility. So it might be best for you to not use it for now, unless you manage to run it using python2. Getting that to run might be a good topic for a different question. (One way to cleanly run things using python2 in Ubuntu is to use virtualenv. 
Note that apparently you are running rosinstall using python2.7, therefore when it tries to use yaml for python3, you get the invalid syntax error. So what you would need is to create a virtualenv for python2.7, install yaml into that virtualenv using pip, then run rosinstall using that virtualenv. Then rosinstall would not use the python3 yaml, but the python2 yaml from your environment. Note that for this approach, you would probably also have to install other python libraries for python2.7.) Originally posted by KruseT with karma: 7848 on 2012-10-30 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 11550, "tags": "ros, ubuntu, blender, morse, rosinstall" }
Can anyone help me find the kinematic solution of a robot arm?
Question: I drew a robot arm in SolidWorks, but I'm not so sure how to find the DOF and the forward and inverse kinematics. Can anyone help me? https://www.flickr.com/photos/22576989@N04/16355503864/ Originally posted by s1155031008 on ROS Answers with karma: 1 on 2015-03-30 Post score: 0 Answer: You have to create a URDF and specify the parameters of your robot through that file. Once you do that, you can export your SW parts to the URDF. Originally posted by aak2166 with karma: 593 on 2015-03-30 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21296, "tags": "matlab, solidworks" }
Buchi arithmetic meaning
Question: I am studying this article. But I have trouble understanding Büchi arithmetic. It says in section IV: ... Formulas in this fragment generalise classical integer programming and are of the form $$ \mathbf{A} \boldsymbol{x}=\boldsymbol{c} \wedge \bigwedge_{i \in I} V_{p}\left(x_{i}\right)=y_{i} $$ But I don't understand what is meant by $ \mathbf{A} \boldsymbol{x}=\boldsymbol{c} \wedge \bigwedge_{i \in I} V_{p}\left(x_{i}\right)=y_{i} $. I know that the goal is to find $\boldsymbol{x}$ satisfying the previous equation. But my question is about the meaning of $\boldsymbol{c} \wedge \bigwedge_{i \in I} V_{p}\left(x_{i}\right)=y_{i}$. According to the previous sections of the article, $\boldsymbol{c}$ is a vector, and I want to know what the wedge symbol means in this case. In addition, I want to know, as $V_{p}\left(x_{i}\right)$ is an integer, whether the big wedge notation here is a bitwise AND or something else. As it is the first time I have studied this kind of material, my question may seem naive. However, I appreciate any help containing some references about Büchi arithmetic. Answer: The wedge sign means AND. The formula states that $\mathbf{A}x=c$ and for all $i \in I$, $V_p(x_i) = y_i$.
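For concreteness, here is a small hypothetical instance of such a formula. It assumes, as is standard in Büchi arithmetic, that $V_p(x)$ denotes the largest power of $p$ dividing $x$; the specific numbers are mine, not from the article:

```latex
% One row A = (1, -1), c = (2), I = {1}, p = 2:
\[
\underbrace{x_1 - x_2 = 2}_{\mathbf{A}\boldsymbol{x} = \boldsymbol{c}}
\;\wedge\;
\underbrace{V_2(x_1) = y_1}_{\text{single conjunct for } i \in I = \{1\}}
\]
% A satisfying assignment: x_1 = 8, x_2 = 6, y_1 = 8,
% since 8 - 6 = 2 and 2^3 = 8 is the largest power of 2 dividing 8.
```

So the big wedge is an ordinary logical conjunction over the index set $I$, not a bitwise operation.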
{ "domain": "cs.stackexchange", "id": 19609, "tags": "complexity-theory, computability, automata, arithmetic, buchi-automata" }
Class for RegQueryInfoKey pinvoke
Question: I'd like to make sure I'm following some basic coding best practices, since I've had only a little practice creating my own classes and using pinvoke. I've found this class to be pretty useful and I'd like to put it up online, but not if it's poorly written or could cause issues I haven't considered. I would love any feedback or suggestions for things I should review/revise or completely redo. /// <summary> /// Description: Class for the use of RegQueryInfoKey pinvoke interop in C#. /// Usage Example: DateTime dTime = RegQuery.lastWriteTime("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> using System; using System.Runtime.InteropServices; using System.Text; namespace WindowsForm { public class RegQuery { public const int KEY_QUERY_VALUE = 0x1; static UIntPtr hKey = (UIntPtr)0x80000002; static UIntPtr hKeyVal; static StringBuilder classStr = new StringBuilder(255); static uint classSize = (uint)classStr.Capacity + 1; static uint lpcSubKeys; static uint lpcbMaxSubKeyLen; static uint lpcbMaxClassLen; static uint lpcValues; static uint lpcbMaxValueNameLen; static uint lpcbMaxValueLen; static uint lpcbSecurityDescriptor; static long lpftLastWriteTime; [DllImport("advapi32.dll", EntryPoint = "RegOpenKeyEx")] extern private static int RegOpenKeyEx_DllImport(UIntPtr hKey, string lpSubKey, uint ulOptions, int samDesired, out UIntPtr phkResult); [DllImport("advapi32.dll")] extern private static int RegQueryInfoKey( UIntPtr hkey, StringBuilder lpClass, ref uint lpcbClass, IntPtr lpReserved, out uint lpcSubKeys, out uint lpcbMaxSubKeyLen, out uint lpcbMaxClassLen, out uint lpcValues, out uint lpcbMaxValueNameLen, out uint lpcbMaxValueLen, out uint lpcbSecurityDescriptor, out long lpftLastWriteTime ); public static void doQuery(string fullKey) { string[] hive = fullKey.Split(new char[] { '\\' }, 2); if (String.Equals(hive[0], "HKEY_LOCAL_MACHINE", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKLM",
StringComparison.OrdinalIgnoreCase)) hKey = (UIntPtr)0x80000002; else if (String.Equals(hive[0], "HKEY_CURRENT_USER", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKCU", StringComparison.OrdinalIgnoreCase)) hKey = (UIntPtr)0x80000001; else if (String.Equals(hive[0], "HKEY_CLASSES_ROOT", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKCR", StringComparison.OrdinalIgnoreCase)) hKey = (UIntPtr)0x80000000; else if (String.Equals(hive[0], "HKEY_USERS", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKU", StringComparison.OrdinalIgnoreCase)) hKey = (UIntPtr)0x80000003; else if (String.Equals(hive[0], "HKEY_CURRENT_CONFIG", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKCC", StringComparison.OrdinalIgnoreCase)) hKey = (UIntPtr)0x80000005; RegOpenKeyEx_DllImport(hKey, hive[1], 0, KEY_QUERY_VALUE, out hKeyVal); RegQueryInfoKey(hKeyVal, classStr, ref classSize, IntPtr.Zero, out lpcSubKeys, out lpcbMaxSubKeyLen, out lpcbMaxClassLen, out lpcValues, out lpcbMaxValueNameLen, out lpcbMaxValueLen, out lpcbSecurityDescriptor, out lpftLastWriteTime); } /// <summary> /// A pointer to a buffer that receives the user-defined class of the key. /// Example: int cString = DateTime dTime = RegQuery.classString("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static string classString(string fullKey) { doQuery(fullKey); return classStr.ToString(); } /// <summary> /// A pointer to a variable that receives the number of subkeys that are contained by the specified key. 
/// Example: uint sKeys = RegQuery.subKeys("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint subKeys(string fullKey) { doQuery(fullKey); return lpcSubKeys; } /// <summary> /// A pointer to a variable that receives the size of the key's subkey with the longest name, in Unicode characters, not including the terminating null character. /// Example: uint mSubKeyLen = RegQuery.maxSubKeyLen("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint maxSubKeyLen(string fullKey) { doQuery(fullKey); return lpcbMaxSubKeyLen; } /// <summary> /// A pointer to a variable that receives the size of the longest string that specifies a subkey class, in Unicode characters. The count returned does not include the terminating null character. /// Example: uint mClassLen = RegQuery.maxClassLen("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint maxClassLen(string fullKey) { doQuery(fullKey); return lpcbMaxClassLen; } /// <summary> /// A pointer to a variable that receives the number of values that are associated with the key. /// Example: uint vals = RegQuery.values("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint values(string fullKey) { doQuery(fullKey); return lpcValues; } /// <summary> /// A pointer to a variable that receives the size of the key's longest value name, in Unicode characters. The size does not include the terminating null character. 
/// Example: uint mValueNameLen = RegQuery.maxValueNameLen("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint maxValueNameLen(string fullKey) { doQuery(fullKey); return lpcbMaxValueNameLen; } /// <summary> /// A pointer to a variable that receives the size of the longest data component among the key's values, in bytes. /// Example: uint mValueLen = RegQuery.maxValueLen("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint maxValueLen(string fullKey) { doQuery(fullKey); return lpcbMaxValueLen; } /// <summary> /// A pointer to a variable that receives the size of the key's security descriptor, in bytes. /// Example: uint sDesc = RegQuery.securityDescriptor("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static uint securityDescriptor(string fullKey) { doQuery(fullKey); return lpcbSecurityDescriptor; } /// <summary> /// A pointer to a FILETIME structure that receives the last write time. /// Example: DateTime dTime = RegQuery.lastWriteTime("HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox"); /// </summary> public static DateTime lastWriteTime(string fullKey) { doQuery(fullKey); return DateTime.FromFileTime(lpftLastWriteTime); } } } Answer: Scope of methods and const There is no reason for KEY_QUERY_VALUE to have public scope. The same is true for the doQuery() method. Naming Following the naming guidelines, you should use PascalCasing for names of classes, structs and methods. See also: https://stackoverflow.com/a/1618325/2655508 So public static void doQuery(string fullKey) will become (with casing and scope) private static void DoQuery(string fullKey) and public static uint subKeys(string fullKey) will become public static uint SubKeys(string fullKey), though this is named more like a property.
A method should be named as a verb or a verb phrase. The comment on this method also reads ...that receives the number of subkeys... , so a better name would be GetNumberOfSubKeys. This is also true for your other method names. Braces It is best practice to use braces for if..else, for.. etc. every time, even if it would not be necessary. See: https://codereview.stackexchange.com/a/49212/29371 So the if..else construct in the DoQuery() method would look like if (String.Equals(hive[0], "HKEY_LOCAL_MACHINE", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKLM", StringComparison.OrdinalIgnoreCase)) { hKey = (UIntPtr)0x80000002; } else if (String.Equals(hive[0], "HKEY_CURRENT_USER", StringComparison.OrdinalIgnoreCase) || String.Equals(hive[0], "HKCU", StringComparison.OrdinalIgnoreCase)) { hKey = (UIntPtr)0x80000001; } ..... Also, it is only a matter of taste, but I prefer switch..case over if..else if. Validation You should validate the input parameters of the public methods for correctness. Assume a user of this class will call one of these methods with a parameter like "\\HKEY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox" or null or "HKEYY_Current_user\\software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\Dropbox" GetLastError While pinvoking native DLLs, you should always check if the call to one of the native methods results in an error. You can do this by setting the attribute SetLastError=true like [DllImport("advapi32.dll", SetLastError=true, EntryPoint = "RegOpenKeyEx")] extern private static int RegOpenKeyEx_DllImport(UIntPtr hKey, string lpSubKey, uint ulOptions, int samDesired, out UIntPtr phkResult); and calling Marshal.GetLastWin32Error. See also: https://stackoverflow.com/a/17918729/2655508
{ "domain": "codereview.stackexchange", "id": 8798, "tags": "c#, classes" }
How can I use the NP complexity Venn diagram to quickly see which class of NP problem can be poly reducible to another class?
Question: I'm so bad at solving problems of the type "If $A$ is an NP-complete problem, $B$ is reducible to $A$, then $B$ is..." that I have to come here and ask these silly questions each and every time I encounter them. Is there a good way of using the Venn diagram shown below to tackle this kind of problem? For example, how can I prove that if $A$ is an NP-complete problem and $B$ is reducible to $A$, then $B$ can be NP-hard, using the above diagram? If not possible, what would be another way to drill this into my head? Answer: As Raphael said, the definitions are important, especially if you try to create proofs. These definitions are not always captured in the Venn diagram. The definition of NP-complete is not made clear from the diagram and will not help you to reason about "If $A$ is an NP-complete problem, $B$ is reducible to $A$, then $B$ is...". You cannot see from the diagram that this must mean that $B$ is in NP. How to drill it into your head, if the diagram cannot, is much more complicated to answer. But again, the first step needs to be the definitions. If you know that NP-completeness implies two properties of $A$ by definition, then you can take the next step: what does "$B$ is reducible to $A$" mean with regard to these two properties? Do the properties of $A$ tell you anything about $B$? The second property says that any problem in NP reduces to $A$ in polynomial time. So if $B$ reduces to $A$ in polynomial time, then $B$ is in NP.
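The final step can be spelled out explicitly (a sketch, notation mine): if $f$ is the polynomial-time reduction from $B$ to $A$, and $V_A$ is a polynomial-time verifier witnessing $A \in \mathrm{NP}$, then

```latex
\[
x \in B \iff f(x) \in A \iff \exists w :\; V_A\bigl(f(x), w\bigr) = 1,
\]
% so V_B(x, w) := V_A(f(x), w) is itself a polynomial-time verifier
% for B, which is exactly the certificate definition of B being in NP.
```

This composition argument is what the Venn diagram cannot show: membership in NP is closed under polynomial-time reductions to NP problems.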
{ "domain": "cs.stackexchange", "id": 3822, "tags": "complexity-theory, proof-techniques, complexity-classes, intuition" }
What does sensor_msgs::PointCloud2 mean?
Question: I want to ask a question of somebody who is familiar with PCL in ROS. What does sensor_msgs::PointCloud2 mean? On this page (http://wiki.ros.org/pcl/Tutorials), the Wiki instructs Hydro users to use pcl::PCLPointCloud2 instead of sensor_msgs::PointCloud2 in a callback function. Actually, I can't build the executable of example.cpp from /pcl/Tutorials if I use sensor_msgs::PointCloud2. Is sensor_msgs::PointCloud2 never used on Hydro? Thank you in advance! Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-04-24 Post score: 0 Answer: When it was first being developed, PCL had a very close relationship with ROS. It was only recently that PCL became completely independent of ROS, after some code refactoring. ROS has always had its own point cloud data structure (sensor_msgs::PointCloud and then sensor_msgs::PointCloud2), which was initially used at least partially by PCL. So pcl::PCLPointCloud2 exists today mostly for compatibility with ROS. I am working with PCL in Hydro, so if you let me know where exactly you're having issues (either by asking a new question, which is better, or editing this one) I might be able to give you some pointers. Originally posted by georgebrindeiro with karma: 1264 on 2014-04-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ken_in_JAPAN on 2014-04-24: @georgebrindeiro, Thank you for the quick reply! The one thing I want to know is how to pass a ROS topic (that is, sensor information) to a callback function. I thought that was the role of sensor_msgs::PointCloud. My idea is wrong, isn't it? I will rely on you when I have new issues. Thanks! Comment by georgebrindeiro on 2014-04-27: When using ROS, you use sensor_msgs::PointCloud2. When using PCL, you use PCL types. To convert from ROS to PCL cloud type, use the pcl_conversions::toPCL function. To convert back, use pcl_conversions::fromPCL.
See http://wiki.ros.org/hydro/Migration#PCL Comment by Athoesen on 2014-05-20: Have you had any luck saving PCD files with ROS and PCL? Comment by georgebrindeiro on 2014-05-22: @Athoesen yeah, no problem. if you're having problems with that, though, you should probably start a new thread.
{ "domain": "robotics.stackexchange", "id": 17773, "tags": "pcl, sensor-msgs, ros-hydro, pointcloud" }
getting video from canon vc-c50i camera and sensoray framegrabber
Question: I'm trying to get the video stream from a Canon VC-C50i camera which connects to a Sensoray 311 framegrabber on a MobileRobots Guiabot. I have tried gscam with GSCAM_CONFIG="v4l2src device=/dev/video0 ! ffmpegcolorspace ! video/x-raw-rgb ! identity name=ros" but the image is scrambled. What should the pipeline be for this camera type? Originally posted by Liz Murphy on ROS Answers with karma: 83 on 2011-04-14 Post score: 0 Answer: Try video/x-raw-yuv or put ffmpegcolorspace as the last parameter. Originally posted by Charence with karma: 137 on 2011-07-28 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5361, "tags": "ros, gscam, image-view, camera-drivers, pioneer" }
Show the Lorentz Transformation Matrices Have an Inverse
Question: Assume the Lorentz transformations obey the relationship $$g_{uv}\Lambda^u_{p}\Lambda^v_\sigma = g_{p\sigma},$$ where $g_{uv}$ is the metric tensor of special relativity. How can one show, under that assumption, that the Lorentz matrix $\Lambda^a_b$ has an inverse? Answer: You may order the matrices like this: $$ \Lambda_\rho^\mu g_{\mu\nu} \Lambda^\nu_\sigma = g_{\rho\sigma} $$ I suppose all the letters should have been Greek. They're called mu, nu, rho, sigma, good to learn them. In my form, one may view $\mu$ as the summed over index in the first product on the left hand side and $\nu$ as the summed over index in the second product. So making a convention for a matrix $\Lambda$ so that its components are $\Lambda^\mu_\rho$ where $\rho$ is the row and $\mu$ is the column, the equation above is the matrix equation $$ \Lambda \cdot g \cdot \Lambda^T = g $$ where $T$ means transposition. The matrix on the right hand side is nonsingular, i.e. it has a nonzero determinant, so the factors on the left hand side must also have a nonzero determinant i.e. be invertible.
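As a numerical sanity check of this argument (my own sketch with NumPy and one concrete boost, not part of the original answer):

```python
import numpy as np

# Metric of special relativity with signature (+, -, -, -).
g = np.diag([1.0, -1.0, -1.0, -1.0])

# A concrete Lorentz matrix: a boost along x with velocity beta.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])

# The defining matrix relation Lambda . g . Lambda^T = g from the answer:
assert np.allclose(L @ g @ L.T, g)

# Taking determinants: det(L)^2 det(g) = det(g), so det(L) = +-1 is
# nonzero and Lambda must be invertible.
assert abs(np.linalg.det(L) ** 2 - 1.0) < 1e-9

# The relation even hands us the inverse explicitly:
# L (g L^T g) = (L g L^T) g = g g = identity, since g squared is the identity.
L_inv = g @ L.T @ g
assert np.allclose(L @ L_inv, np.eye(4))
```

The same checks go through for any Lorentz matrix satisfying the assumed relation, since only the relation and det(g) ≠ 0 were used.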
{ "domain": "physics.stackexchange", "id": 4691, "tags": "special-relativity, linear-algebra" }
Eratosthenes Sieve Implementation
Question: My friend and I were doing an Eratosthenes Sieve implementation; it works, but I think there is a mistake with "n" and with the first for condition ("i<=n"). I think that condition can cause a segmentation fault, as there's no criba[n]; am I right? Any suggestion to make it faster? #include <bits/stdc++.h> using namespace std; const long long int n=1000000; bool criba[n+1]; void gencriba(){ memset(criba,true,sizeof(criba)); criba[0]=criba[1]=false; for(long long int i=2;i<=n;++i){ ///This is where <= is used if(criba[i]){ for(long long int j=i;j<=n/i;++j){ criba[j*i]=false; } } } } int main() { gencriba(); int c; cin>>c; cout<<criba[c]<<endl; return 0; } Edit: Decided to change the array size to n+1, so that there's no possibility of getting an error because of an invalid memory address. Answer: Any suggestion to make it faster? No, not really, it looks good as it is. The inner loop termination condition involves division: j <= n / i. You're hoping the optimizing (-O3) compiler hoists the loop-invariant n / i out of the inner loop. I'm willing to take the bet that it does get hoisted, so there's no need to explicitly assign a temp variable. The j * i expression doesn't need to use multiplication -- addition would suffice, e.g. by keeping a running multiple and updating it with += i. Depending on details of your CPU, that might help, or not.
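To make the addition-based inner loop concrete, here is a sketch in Python (my own translation of the idea; names are not from the original code):

```python
def sieve(n):
    """Sieve of Eratosthenes up to and including n.

    The list has n + 1 entries, so is_prime[n] is a valid access --
    mirroring the `bool criba[n+1]` fix mentioned in the question.
    """
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, n + 1):              # the `i <= n` condition
        if is_prime[i]:
            # The original marks j*i for j = i .. n/i; the same
            # composites can be visited by pure addition instead:
            multiple = i * i
            while multiple <= n:
                is_prime[multiple] = False
                multiple += i
    return is_prime

table = sieve(100)
print([p for p in range(101) if table[p]][:10])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Whether the running-sum version is actually faster than the multiply in C++ depends on the CPU, as the answer notes; the logic is identical either way.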
{ "domain": "codereview.stackexchange", "id": 36545, "tags": "c++, sieve-of-eratosthenes" }
Log vs. linear frequency scales of Fourier and wavelet transforms
Question: I'm trying to understand the difference between the output of a Fourier transform and a wavelet transform. A Fourier transform is done via the following function: $$\hat{f}(\xi) = \int^\infty_{-\infty}\ f(t)\ e^{-2\pi i t \xi}\ dt$$ Whereas a wavelet transform is going to use this: $$F(a,b) = \int^\infty_{-\infty}\ f(x)\psi^*_{a,b}(x)\ dx$$ Now, spectrograms use a windowed Fourier transform. When looking at a signal over time, you can get a graph showing you which frequencies were present at any given time. The frequencies are scaled linearly. However, scalograms use a wavelet transform to obtain the same information. I've frequently seen the output scaled logarithmically (in frequency). Is there something about wavelets that makes their output fundamentally logarithmic? I'm having a hard time seeing it in the above formulas. It seems like setting $\xi$ or $a$ & $b$ gives you the frequency you want, and you could calculate results for any frequency you desire. Answer: An FFT can be considered a filter bank. The bandwidth of each filter is inversely proportional to the length of the FFT. The width of each filter sets the spacing such that there is neither too much overlap nor gaps between filters. In an STFT spectrogram, the window width is fixed for the entire FFT, thus the filter spacing is fixed, which turns out to be a linear spacing. With wavelets, the length of each wavelet, being individually adjustable, can be shorter for the higher frequencies and thus have a wider bandwidth, which allows them to be spaced further apart in frequency without leaving gaps between them in the frequency response, thus allowing logarithmic spacing.
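A small sketch of the two spacings described in the answer (the sample rate, window length and wavelet parameters are purely illustrative, not from the original):

```python
import numpy as np

fs = 8000.0   # sample rate, Hz (made up)
N = 256       # STFT window length (made up)

# STFT filter bank: every bin has the same bandwidth fs/N, so the
# center frequencies are linearly spaced.
stft_centers = np.arange(N // 2) * fs / N
assert np.allclose(np.diff(stft_centers), fs / N)   # constant spacing

# Wavelet filter bank: bandwidth grows with frequency, so the center
# frequencies can be geometrically (log-)spaced and still tile the
# frequency axis without gaps.
f_min, n_octaves, voices = 40.0, 6, 4    # 4 voices per octave
k = np.arange(n_octaves * voices + 1)
wavelet_centers = f_min * 2.0 ** (k / voices)
ratios = wavelet_centers[1:] / wavelet_centers[:-1]
assert np.allclose(ratios, 2.0 ** (1 / voices))     # constant ratio
```

The constant difference versus constant ratio between adjacent center frequencies is exactly the linear-versus-logarithmic distinction between spectrograms and scalograms.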
{ "domain": "dsp.stackexchange", "id": 5218, "tags": "fourier-transform, frequency-spectrum, wavelet" }
The math to come to this conclusion? If we lost all the dead space inside our atoms, we would each be able to fit into a particle of lead dust
Question: I found this on The particle physics of you on Symmetry Magazine: The size of an atom is governed by the average location of its electrons. Nuclei are around 100,000 times smaller than the atoms they’re housed in. If the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of lead dust. Could somebody show the math to come to this conclusion? I don't know a lot about physics but I like to learn. My question aims to assess how much I can trust the analogy in the article: The particle physics of you. Answer: Well, this is the sort of question that has two answers: (1) this really makes no sense, because this is all quantum-mechanical and notions like the 'radius of the nucleus / atom' make limited sense; (2) you can just get some approximate numbers and do some kind of Fermi estimate and that will be fine. So, I'll do (2), grabbing values from Wikipedia or wherever I can find them. See the comments for why this is all bogus. Let's assume we're made of carbon: a carbon atom has a radius of about $7\times 10^{-11}\,\mathrm{m}$, and a carbon nucleus has a radius of about $2.2\times 10^{-15}\,\mathrm{m}$. Atoms, of course, are cubes, as are nuclei, so everything packs nicely and we don't have any annoying factors of $\pi$ and sphere-packing nonsense: the volume of a carbon atom is therefore about $2.7\times 10^{-30}\,\mathrm{m}^3$ (multiply the radius by 2 to get the side of the cube) and a nucleus is $8.3\times 10^{-44}\,\mathrm{m}^3$. OK, so: electrons are pointlike, so they take up no space at all. So if we collapse carbon down to its nuclear size (so there's no space in the atom outside the nucleus) then we can fit $$\frac{2.7\times 10^{-30}}{8.3\times 10^{-44}} \approx 3.3\times 10^{13}$$ atoms in the space previously occupied by one. Human beings have a volume of about $66\,\mathrm{l} = 6.6\times 10^{-2}\,\mathrm{m^3}$.
And now if we take a human and remove all the space in their atoms we compress them by a factor of $3.3\times 10^{13}$: their final volume is thus about $2\times 10^{-15}\,\mathrm{m}^3$. Humans also are perfect cubes (at least all the ones I know are) and so this translates as a side length of $1.3\times 10^{-5}\,\mathrm{m}$. This is about 10 microns, which is well within the range of things we'd call 'dust'.
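The whole estimate can be reproduced in a few lines (keeping the answer's tongue-in-cheek cube-shaped atoms, nuclei and humans):

```python
# Back-of-the-envelope numbers from the answer, kept deliberately crude.
r_atom = 7e-11        # carbon atom "radius", m
r_nucleus = 2.2e-15   # carbon nucleus "radius", m

v_atom = (2 * r_atom) ** 3         # cube of side 2r: ~2.7e-30 m^3
v_nucleus = (2 * r_nucleus) ** 3   # ~8.5e-44 m^3

compression = v_atom / v_nucleus   # ~3.2e13 atoms per old atom volume
v_human = 6.6e-2                   # ~66 litres, in m^3
side = (v_human / compression) ** (1 / 3)

print(f"compression ~ {compression:.1e}, side ~ {side * 1e6:.0f} um")
# roughly 3e13 and a bit over 10 microns -- a grain of (lead) dust
```

The small discrepancies against the answer's rounded figures (3.3e13 versus ~3.2e13) come purely from rounding the intermediate volumes.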
{ "domain": "physics.stackexchange", "id": 49242, "tags": "particle-physics" }
Work done by battery?
Question: Why is the work done by a battery Q*V, where V is the emf of the battery and Q is the charge made to flow in the circuit? Please explain in detail and write out the formulas. Answer: As a result of chemical reactions in the battery, if no circuit is connected to the battery, charges will accumulate at the electrodes. The battery will have an open circuit voltage (its emf) between its terminals, and an electric field, $E$, inside the battery between anode and cathode. When a circuit is connected, using conventional current as the flow of positive charge, $Q$, the charge will go from the positive terminal to the negative terminal by way of the circuit, losing potential energy. In order to move the returning charge from its negative terminal to its positive terminal, the field exerts a force of $QE$. In so doing the battery does work moving positive charge from its negative terminal to its positive terminal (increasing the potential energy of the positive charge) equal to $QEd$, where $d$ is the distance in the battery between the electrodes. Voltage, or electrical potential, is defined as the work per unit charge required to move the charge between two points, or $V=\frac{QEd}{Q}=Ed$. Therefore the work done by the battery is $QV$. Hope this helps
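A tiny numeric sketch of $W = QEd = QV$ (all values invented purely for illustration of the formula, not taken from the answer):

```python
import math

# Hypothetical battery: all three numbers are made up.
Q = 0.5       # charge moved through the battery, C
E = 1.8e4     # uniform field between the electrodes, V/m
d = 5.0e-4    # distance between the electrodes, m

V = E * d             # emf as work per unit charge: V = Ed (about 9 V)
W_field = Q * E * d   # work done by the force QE acting over distance d
W_pot = Q * V         # the same work written through the potential

# The two expressions are algebraically identical: QEd = Q(Ed) = QV.
assert math.isclose(W_field, W_pot)
```

The point is just that $V = Ed$ makes the two bookkeepings of the same work interchangeable.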
{ "domain": "physics.stackexchange", "id": 58538, "tags": "electric-circuits, potential, work, potential-energy, batteries" }
Error in Indexing ROS Repository
Question: Hi! The pull request for document indexing has been merged, but even then the package information is not displayed on the ROS Wiki page! Wiki page: http://wiki.ros.org/teleop_keyboard_omni3 Details added in distribution.yaml: teleop_keyboard_omni3: doc: type: git url: https://github.com/YugAjmera/teleop_keyboard_omni3.git version: master source: type: git url: https://github.com/YugAjmera/teleop_keyboard_omni3.git version: master status: maintained Can someone tell me what to do to get the Package Summary? Originally posted by Yug Ajmera on ROS Answers with karma: 19 on 2019-03-06 Post score: 0 Answer: Your PR (ros/rosdistro#20480) was merged 4 hours ago. Doc jobs are only run once every three days. You'll have to have a little more patience. Originally posted by gvdhoorn with karma: 86574 on 2019-03-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Yug Ajmera on 2019-03-06: Ok! Thank You.
{ "domain": "robotics.stackexchange", "id": 32601, "tags": "ros, ros-kinetic, packages" }
Variational Algorithms - Is there a way to avoid discontinuities in optimal variational parameters?
Question: Often algorithms utilize an ansatz $V(a)$ and rely on a classical optimization scheme over a Hamiltonian loss function $L(V(a))$ in order to find the optimal parameters $a$. Due to many factors, such as the periodicity of $V(a)$, the optimization path and the non-convexity of $L$ (others maybe?), two problems with very similar input can result in very different optimal outputs, i.e. $\forall~\delta>0 : |x_i - x_j| \leq \delta,~\exists~\epsilon>0$ such that $ | a^{\text{opt}}_i - a^{\text{opt}}_j| > \epsilon$. I am using non-gradient optimization schemes. I have tried methods such as using constant initial parameters $a^0$ and transformations, e.g. $\sin(a^{\text{opt}})$, but nothing worked. Is there a known way to avoid such discontinuities? Perhaps gradient-based optimization schemes? EDIT: corrected discontinuity definition Answer: Let $C(\theta)$ be an arbitrary quantum circuit parametrized by $\theta \in \mathbb{R}^n$, and let $L(C(\theta))$ be a continuous non-convex objective function we would like to optimize. Given that the pair of initial parameters $\theta_1$ and $\theta_2$ are sufficiently close to a local minimum at $\theta^*$, any deterministic gradient descent method with a sufficiently small step size will result in convergence to that local minimum for both initial $\theta_1$ and $\theta_2$. The two initial parameters don't even have to be close to each other. The only requirement for $\theta_1$ and $\theta_2$ is being in an $\epsilon$-neighborhood of the local minimum at $\theta^*$. In practice, the obtained gradients will be noisy because of circuit sampling. So, there is a probability that even if both $\theta_1$ and $\theta_2$ are very close to a local minimum, the final estimates of the optimal parameters will be different. However, if you reduce the variance of the gradients, then you will get convergence to the same local minimum. A nice example of a quantum natural gradient descent can be found on the Pennylane website.
They provide illustrations, mathematical formulation and code.
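A purely classical toy illustration of the convergence claim (entirely my own construction: the loss, learning rate and starting points are made up, and there is no quantum circuit or sampling noise here):

```python
import math

def loss(t):
    # A non-convex toy landscape with several local minima.
    return math.sin(3 * t) + 0.1 * t ** 2

def grad(t):
    return 3 * math.cos(3 * t) + 0.2 * t

def descend(t0, lr=0.01, steps=5000):
    """Plain deterministic gradient descent with a small fixed step."""
    t = t0
    for _ in range(steps):
        t -= lr * grad(t)
    return t

# Two different starts in the same basin end up at the same local
# minimum, because the gradients are noise-free and the step is small:
a = descend(1.5)
b = descend(1.6)
assert abs(a - b) < 1e-6
assert abs(grad(a)) < 1e-9   # converged to a stationary point
```

With noisy gradients (as from circuit sampling) the two runs would generally differ, which is the answer's point about variance reduction.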
{ "domain": "quantumcomputing.stackexchange", "id": 4606, "tags": "quantum-gate, circuit-construction, continuous-variable" }
Understanding the Bloch sphere
Question: It is usually said that the points on the surface of the Bloch sphere represent the pure states of a single 2-level quantum system. A pure state being of the form: $$ |\psi\rangle = a |0\rangle+b |1\rangle $$ And typically the north and south poles of this sphere correspond to the $|0\rangle$ and $|1\rangle$ states. Image: ("Bloch Sphere" by Glosser.ca - Own work. Licensed under CC BY-SA 3.0 via Commons - https://commons.wikimedia.org/wiki/File:Bloch_Sphere.svg#/media/File:Bloch_Sphere.svg) But isn't this very confusing? If the north and south poles are chosen, then both states are on the same line and not orthogonal anymore, so how can one choose an arbitrary point $p$ on the surface of the sphere and possibly decompose it in terms of $0,1$ states in order to find $a$ and $b$? Does this mean that one shouldn't regard the Bloch sphere as a valid basis for our system and that it's just a visualization aid? I have seen decompositions in terms of the internal angles of the sphere, in the form of: $a=\cos{\theta/2}$ and $b=e^{i\phi}\sin{\theta/2}$ with $\theta$ the polar angle and $\phi$ the azimuthal angle. But I am clueless as to how these are obtained when $0,1$ states are on the same line. Answer: The Bloch sphere is beautifully minimalist. Conventionally, a qubit has four real parameters: $$|\psi\rangle=a e^{i\chi} |0\rangle + b e^{i\varphi} |1\rangle.$$ However, some quick insight reveals that the a-vs-b tradeoff only has one degree of freedom due to the normalization $a^2 + b^2 = 1$, and some more careful insight reveals that, in the way we construct expectation values in QM, you cannot observe χ or φ themselves but only the difference χ – φ, which is 2π-periodic. (This is covered further in the comments below but briefly: QM only predicts averages $\langle \psi|\hat A|\psi\rangle$ and shifting the overall phase of a wave function by some $|\psi\rangle\mapsto e^{i\theta}|\psi\rangle$ therefore cancels itself out in every prediction.)
So if you think at the most abstract about what you need, you just draw a line from 0 to 1 representing the a-vs-b tradeoff: how much is this in one of these two states? Then you draw circles around it: how much is the phase difference? What stops it from being a cylinder is that the phase difference ceases to matter when a = 1 or b = 1, hence the circles must shrink down to points. And voila, you have something which is topologically equivalent to a sphere. The sphere contains all of the information you need for experiments, and nothing else. It’s also physical, a real sphere in 3D space. This is the more shocking fact. Given only the simple picture above, you could be forgiven for thinking that this was all harmless mathematics: no! In fact the quintessential qubit is a spin-½ system, with the Pauli matrices indicating the way that the system is spinning around the x, y, or z axes. This is a system where we identify $$|0\rangle\leftrightarrow|\uparrow\rangle, \\ |1\rangle\leftrightarrow|\downarrow\rangle,$$ and the phase difference comes in by choosing the +x-axis via $$|{+x}\rangle = \sqrt{\frac 12} |0\rangle + \sqrt{\frac 12} |1\rangle.$$ The orthogonal directions of space are not Hilbert-orthogonal in the QM treatment, because that’s just not how the physics of this system works. Hilbert-orthogonal states are incommensurate: if you’re in this state, you’re definitely not in that one. But this system has a spin with a definite total magnitude of $\sqrt{\langle L^2 \rangle} = \sqrt{3/4} \hbar$, but only $\hbar/2$ of it points in the direction that it is “most pointed along,” meaning that it must be distributed on some sort of “ring” around that direction. Accordingly, when you measure that it’s in the +z-direction it turns out that it’s also sort-of half in the +x, half in the –x direction. 
(Here “sort-of” means: it is, if you follow up with an x-measurement, which will “collapse” the system to point → or ← with angular momentum $\hbar/2$ and then it will be in the corresponding “rings” around the x-axis.) Spherical coordinates from complex numbers So let’s ask “which direction is the general spin-½ $|\psi\rangle$ above, most spinning in?” This requires constructing an observable. To give an example observable, if the +z-direction is most-spun-in by a state $|\uparrow\rangle$ then the observable for $z$-spin is the Pauli matrix $$\sigma_z = |\uparrow\rangle\langle\uparrow| - |\downarrow\rangle\langle\downarrow|=\begin{bmatrix}1&0\\0&-1\end{bmatrix},$$which is +1 in the state it's in, -1 in the Hilbert-perpendicular state $\langle \downarrow | \uparrow \rangle = 0.$ Similarly if you look at $$\sigma_x = |\uparrow\rangle \langle \downarrow | + |\downarrow \rangle\langle \uparrow |=\begin{bmatrix}0&1\\1&0\end{bmatrix},$$ you will see that the $|{+x}\rangle$ state defined above is an eigenvector with eigenvalue +1 and similarly there should be a $|{-x}\rangle \propto |\uparrow\rangle - |\downarrow\rangle$ satisfying $\langle {+x}|{-x}\rangle = 0,$ and you can recover $\sigma_x = |{+x}\rangle\langle{+x}| - |{-x}\rangle\langle{-x}|.$ So, let’s now do it generally. 
The state orthogonal to $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ is not too hard to calculate as $|\bar \psi\rangle = \beta^*|0\rangle - \alpha^* |1\rangle,$ so the observable which is +1 in that state or -1 in the opposite state is:$$ \begin{align} |\psi\rangle\langle\psi| - |\bar\psi\rangle\langle\bar\psi| &= \begin{bmatrix}\alpha\\\beta\end{bmatrix}\begin{bmatrix}\alpha^*&\beta^*\end{bmatrix} - \begin{bmatrix}\beta^*\\-\alpha^*\end{bmatrix} \begin{bmatrix}\beta & -\alpha\end{bmatrix}\\ &=\begin{bmatrix}|\alpha|^2 - |\beta|^2 & 2 \alpha\beta^*\\ 2\alpha^*\beta & |\beta|^2 - |\alpha|^2\end{bmatrix} \end{align}$$Writing this as $v_i \sigma_i$ where the $\sigma_i$ are the Pauli matrices we get:$$v_z = |\alpha|^2 - |\beta|^2,\\ v_x + i v_y = 2 \alpha^* \beta.$$ Now here's the magic, let's allow the Bloch prescription of writing $$\alpha=\cos\left(\frac\theta2\right),~~\beta=\sin\left(\frac\theta2\right)e^{i\varphi},$$ we find out that these are:$$\begin{align} v_z &= \cos^2(\theta/2) - \sin^2(\theta/2) &=&~ \cos \theta,\\ v_x &= 2 \cos(\theta/2)\sin(\theta/2) ~\cos(\phi) &=&~ \sin \theta~\cos\phi, \\ v_y &= 2 \cos(\theta/2)\sin(\theta/2) ~\sin(\phi) &=&~ \sin \theta~\sin\phi. \end{align}$$So the Bloch prescription uses a $(\theta, \phi)$ which are simply the spherical coordinates of the point on the sphere which such a $|\psi\rangle$ is “most spinning in the direction of.” So instead of being a purely theoretical visualization, we can say that the spin-½ system, the prototypical qubit, actually spins in the direction given by the Bloch sphere coordinates! (At least, insofar as a spin-up system spins up.) It is ruthlessly physical: you want to wave it away into a mathematical corner and it says, “no, for real systems I’m pointed in this direction in real 3D space and you have to pay attention to me.” How these answer your questions. Yes, N and S are spatially parallel but in the Hilbert space they are orthogonal. 
This Hilbert-orthogonality means that a system cannot be both spin-up and spin-down. Conversely the lack of Hilbert-orthogonality between, say, the z and x directions means that when you measure the z-spin you can still have nonzero measurements of the spin in the x-direction, which is a key feature of such systems. It is indeed a little confusing to have two different notions of “orthogonal,” one for physical space and one for the Hilbert space, but it comes from having two different spaces that you’re looking at. One way to see why the angles are physically very useful is given above. But as mentioned in the first section, you can also view it as a purely mathematical exercise of trying to describe the configuration space with a sphere: then the phase difference, being $2\pi$-periodic, is naturally an ‘azimuthal’ coordinate; therefore the way that the coordinate lies along 0/1 should be a ‘polar’ coordinate with 0 mapping to $|0\rangle$ and $\pi$ mapping to $|1\rangle$. The obvious way to do this is with $\cos(\theta/2)$ mapping from 1 to 0 along this range, as the amplitude for the $|0\rangle$ state; the fact that $\cos^2 + \sin^2 = 1$ means that the $|1\rangle$ state must pick up a $\sin(\theta/2)$ amplitude to match it.
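As a quick numerical sanity check of the algebra above (not part of the original answer; the angle values are arbitrary), one can verify that $v_z = |\alpha|^2 - |\beta|^2$ and $v_x + i v_y = 2\alpha^*\beta$ reduce to the spherical-coordinate unit vector under the Bloch prescription:

```python
import cmath
import math

# Arbitrary point on the sphere (any angles work).
theta, phi = 1.1, 2.3

# Bloch prescription for the amplitudes.
alpha = math.cos(theta / 2)
beta = cmath.exp(1j * phi) * math.sin(theta / 2)

# Coefficients of the Pauli decomposition derived in the answer.
v_z = abs(alpha) ** 2 - abs(beta) ** 2
v_xy = 2 * alpha * beta  # = v_x + i v_y (alpha is real here, so alpha* == alpha)

# They should equal (sin t cos p, sin t sin p, cos t), i.e. spherical coordinates.
assert math.isclose(v_z, math.cos(theta))
assert cmath.isclose(v_xy, math.sin(theta) * cmath.exp(1j * phi))
```

The same check goes through for any $(\theta, \phi)$, since it is just the half-angle identities $\cos^2(\theta/2) - \sin^2(\theta/2) = \cos\theta$ and $2\cos(\theta/2)\sin(\theta/2) = \sin\theta$ in disguise.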
{ "domain": "physics.stackexchange", "id": 73948, "tags": "quantum-mechanics, hilbert-space, quantum-information, bloch-sphere" }
Was the universe a black hole at the beginning?
Question: Big bang cosmology, as far as I understand it, says that the universe was super hot and super dense and super small. It looks like all the current matter, seen and unseen, was compressed into an infinitesimal distance, which means it was a black hole. Is the big bang, and our universe's expansion in this case, hence nothing but black hole evaporation via Hawking radiation? Are we living inside that primordial black hole explosion? Answer: Here is a copy of an answer I wrote some time ago for the Physics FAQ http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/universe.html Is the Big Bang a black hole? This question can be made into several more specific questions with different answers. Why did the universe not collapse and form a black hole at the beginning? Sometimes people find it hard to understand why the Big Bang is not a black hole. After all, the density of matter in the first fraction of a second was much higher than that found in any star, and dense matter is supposed to curve spacetime strongly. At sufficient density there must be matter contained within a region smaller than the Schwarzschild radius for its mass. Nevertheless, the Big Bang manages to avoid being trapped inside a black hole of its own making and paradoxically the space near the singularity is actually flat rather than curving tightly. How can this be? The short answer is that the Big Bang gets away with it because it is expanding rapidly near the beginning and the rate of expansion is slowing down. Space can be flat even when spacetime is not. Spacetime's curvature can come from the temporal parts of the spacetime metric which measure the deceleration of the expansion of the universe. So the total curvature of spacetime is related to the density of matter, but there is a contribution to curvature from the expansion as well as from any curvature of space. 
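To put a rough number on the density argument above (a back-of-envelope illustration added here; the mass figure is an assumed round estimate, not from the original answer): the Schwarzschild radius $r_s = 2GM/c^2$ corresponding to the observable universe's mass is itself of cosmological size, which is why the naive "it must be a black hole" intuition has to be answered by the expansion argument rather than by scale alone.

```python
import math

# Back-of-envelope: Schwarzschild radius for a mass comparable to the
# observable universe. M_universe is an assumed order-of-magnitude estimate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
LY = 9.461e15        # metres per light year
M_universe = 1.5e53  # kg (rough round figure, ordinary + dark matter)

r_s = 2 * G * M_universe / c**2   # Schwarzschild radius in metres
r_s_gly = r_s / LY / 1e9          # in billions of light years

# Comes out to a few tens of billions of light years: cosmological, not stellar.
assert 10 < r_s_gly < 40
```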
The Schwarzschild solution of the gravitational equations is static and demonstrates the limits placed on a static spherical body before it must collapse to a black hole. The Schwarzschild limit does not apply to rapidly expanding matter. What is the distinction between the Big Bang model and a black hole? The standard Big Bang models are the Friedmann-Robertson-Walker (FRW) solutions of the gravitational field equations of general relativity. These can describe open or closed universes. All of these FRW universes have a singularity at their beginning, which represents the Big Bang. Black holes also have singularities. Furthermore, in the case of a closed universe no light can escape, which is just the common definition of a black hole. So what is the difference? The first clear difference is that the Big Bang singularity of the FRW models lies in the past of all events in the universe, whereas the singularity of a black hole lies in the future. The Big Bang is therefore more like a "white hole": the time-reversed version of a black hole. According to classical general relativity white holes should not exist, since they cannot be created for the same (time-reversed) reasons that black holes cannot be destroyed. But this might not apply if they have always existed. But the standard FRW Big Bang models are also different from a white hole. A white hole has an event horizon that is the reverse of a black hole event horizon. Nothing can pass into this horizon, just as nothing can escape from a black hole horizon. Roughly speaking, this is the definition of a white hole. Notice that it would have been easy to show that the FRW model is different from a standard black- or white hole solution such as the static Schwarzschild solutions or rotating Kerr solutions, but it is more difficult to demonstrate the difference from a more general black- or white hole. The real difference is that the FRW models do not have the same type of event horizon as a black- or white hole. 
Outside a white hole event horizon there are world lines that can be traced back into the past indefinitely without ever meeting the white hole singularity, whereas in an FRW cosmology all worldlines originate at the singularity. Even so, could the Big Bang be a black- or white hole? In the previous answer I was careful only to argue that the standard FRW Big Bang model is distinct from a black- or white hole. The real universe may be different from the FRW universe, so can we rule out the possibility that it is a black- or white hole? I am not going to enter into such issues as to whether there was actually a singularity, and I will assume here that general relativity is correct. The previous argument against the Big Bang's being a black hole still applies. The black hole singularity always lies on the future light cone, whereas astronomical observations clearly indicate a hot Big Bang in the past. The possibility that the Big Bang is actually a white hole remains. The major assumption of the FRW cosmologies is that the universe is homogeneous and isotropic on large scales. That is, it looks the same everywhere and in every direction at any given time. There is good astronomical evidence that the distribution of galaxies is fairly homogeneous and isotropic on scales larger than a few hundred million light years. The high level of isotropy of the cosmic background radiation is strong supporting evidence for homogeneity. However, the size of the observable universe is limited by the speed of light and the age of the universe. We see only as far as about ten to twenty thousand million light years, which is about 100 times larger than the scales on which structure is seen in galaxy distributions. Homogeneity has always been a debated topic. The universe itself may well be many orders of magnitude larger than what we can observe, or it may even be infinite. Astronomer Martin Rees compares our view with looking out to sea from a ship in the middle of the ocean. 
As we look out beyond the local disturbances of the waves, we see an apparently endless and featureless seascape. From a ship the horizon will be only a few miles away, and the ocean may stretch for hundreds of miles before there is land. When we look out into space with our largest telescopes, our view is also limited to a finite distance. No matter how smooth it seems, we cannot assume that it continues like that beyond what we can see. So homogeneity is not certain on scales much larger than the observable universe. We might argue in favour of it on philosophical grounds, but we cannot prove it. In that case, we must ask if there is a white hole model for the universe that would be as consistent with observations as the FRW models. Some people initially think that the answer must be no, because white holes (like black holes) produce tidal forces that stretch and compress in different directions. Hence they are quite different from what we observe. This is not conclusive, because it applies only to the spacetime of a black hole in the absence of matter. Inside a star the tidal forces can be absent. A white hole model that fits cosmological observations would have to be the time reverse of a star collapsing to form a black hole. To a good approximation, we could ignore pressure and treat it like a spherical cloud of dust with no internal forces other than gravity. Stellar collapse has been intensively studied since the seminal work of Snyder and Oppenheimer in 1939 and this simple case is well understood. It is possible to construct an exact model of stellar collapse in the absence of pressure by gluing together any FRW solution inside the spherical star and a Schwarzschild solution outside. Spacetime within the star remains homogeneous and isotropic during the collapse. It follows that the time reversal of this model for a collapsing sphere of dust is indistinguishable from the FRW models if the dust sphere is larger than the observable universe. 
In other words, we cannot rule out the possibility that the universe is a very large white hole. Only by waiting many billions of years until the edge of the sphere comes into view could we know. It has to be admitted that if we drop the assumptions of homogeneity and isotropy then there are many other possible cosmological models, including many with non-trivial topologies. This makes it difficult to derive anything concrete from such theories. But this has not stopped some brave and imaginative cosmologists thinking about them. One of the most exciting possibilities was considered by C. Hellaby in 1987, who envisaged the universe being created as a string of beads of isolated white holes that explode independently and coalesce into one universe at a certain moment. This is all described by a single exact solution of general relativity. There is one final twist in the answer to this question. It has been suggested by Stephen Hawking that once quantum effects are accounted for, the distinction between black holes and white holes might not be as clear as it first seems. This is due to "Hawking radiation", a mechanism by which black holes can lose matter. (See the relativity FAQ article on Hawking radiation.) A black hole in thermal equilibrium with surrounding radiation might have to be time symmetric, in which case it would be the same as a white hole. This idea is controversial, but if true it would mean that the universe could be both a white hole and a black hole at the same time. Perhaps the truth is even stranger. In other words, who knows?
{ "domain": "physics.stackexchange", "id": 86937, "tags": "cosmology, black-holes, big-bang" }
What is the change in ratio of histone to protamine in men with infertility called?
Question: I am neither a biology researcher nor a student. In a paper (written by a biology researcher) I am translating into English, there is the following statement which, according to the text, must have a name and I fail to find the English name: It is showed that in men with infertility histone to protamine ratio changes and the change is called ... What is the change called? I guess it's something unnatural/abnormal. Answer: It is termed "abnormal packing". Source: "Sperm Chromatin: Biological and Clinical Applications in Male Infertility and Assisted Reproduction" by Armand Zini & Ashok Agarwal (2011), p 172.
{ "domain": "biology.stackexchange", "id": 4643, "tags": "reproduction, literature" }
What do black holes spin relative to?
Question: What do black holes spin relative to? In other words, what is black hole spin measured in relation to? Spinning black holes are different from non-spinning black holes. For instance, they have smaller event horizons. But what are the spins of black holes measured relative to? Let's first look at an example with common objects. Example Let's say there's a disc on a table that rotates at 60 rpm. When you are standing still it spins at 60 rpm. But if you start running around it, it will move faster or slower relative to you. In this case, the disc has a ground speed, 60 rpm, because it has something to spin in relation to, in this case, the table. Black Holes Now, let's say that there is a spinning black hole. Because there is no control for the black hole to spin relative to, its spin must be relative to an object, for example, you. If you stand still, it spins at a constant rate. But if you start moving around the black hole in the same direction as the rotation, according to Newtonian physics, the black hole would spin at a slower rate relative to you. Since a faster spinning black hole has a smaller event horizon, in the first case, there would be a smaller event horizon. Then how do scientists say that there are spinning and non-spinning black holes? Is that just in relation to Earth? Ideas First Idea My first idea is also one that is more intuitive. When I move around the black hole, the black hole spins slower relative to me and consequently has a larger event horizon. In this idea, the black hole would just behave like a normal object. This would mean that if you went really fast around a black hole, you could get a lot closer to the black hole than if you were standing still. This is kind of like a satellite that orbits Earth. The slower it moves, the easier it is to fall to the Earth. (I know this is a horrible analogy.) Nothing special happens here. 
Second Idea My second idea is that when you move faster around the black hole, the relative rotational speed of the black hole doesn't change. Because of how fast it is/how dense it is and special relativity, moving around the black hole doesn't affect its speed. This is like trying to accelerate past the speed of light. No matter how much energy you spend, your speed barely changes. I don't understand how this one would work. Why won't the rotational speed of the black hole stay the same? Conclusion What do black holes spin relative to? And what happens if you move around it? There are lots of questions that ask how black holes spin, or how fast they spin, but as far as I know, none of them address this question. Answer: But if you start running around it, it will move faster or slower relative to you. In this case, the disc has a ground speed, 60 rpm, because it has something to spin in relation to, in this case, the table. Actually, this is fundamentally incorrect. The spinning of the disk has nothing to do with the table in principle. Acceleration, including spinning, is not relative. It can be measured without reference to any external object. For example, using a ring interferometer, or a gyroscope. It does not matter if the object is a disk or a black hole or anything else, spinning is not relative like inertial motion is. When I move around the black hole, the black hole spins slower relative to me, and consequently has a larger event horizon. The event horizon is a global and invariant feature of the spacetime. Your motion does not change it. Of course, you can use whatever coordinates you like and make the coordinate size change as you wish. However, which events are on the event horizon is unchanged by your motion.
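For the premise in the question that faster spin means a smaller horizon, the standard Kerr result is $r_+ = \frac{GM}{c^2}\left(1 + \sqrt{1 - a_*^2}\right)$ with dimensionless spin $a_* = Jc/(GM^2)$ — and since spin is invariantly defined (as the answer stresses), so is $r_+$. A short sketch (the 10-solar-mass figure is just an example value):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M = 10 * 1.989e30   # example: a 10-solar-mass black hole

def r_plus(a_star: float) -> float:
    """Kerr outer-horizon radius for dimensionless spin 0 <= a_star <= 1."""
    return (G * M / c**2) * (1 + math.sqrt(1 - a_star**2))

# The horizon shrinks monotonically with spin...
assert r_plus(0.0) > r_plus(0.5) > r_plus(0.9) > r_plus(1.0)
# ...from the Schwarzschild radius 2GM/c^2 down to GM/c^2 at extremal spin.
assert math.isclose(r_plus(0.0), 2 * G * M / c**2)
assert math.isclose(r_plus(0.0), 2 * r_plus(1.0))
```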
{ "domain": "physics.stackexchange", "id": 71379, "tags": "general-relativity, black-holes, angular-momentum, reference-frames, kerr-metric" }
How to turn the wheels on a robot that uses ros_control?
Question: I have made a robot with 4 wheels and it uses ros_control. At the moment I can use a python script to make it move forward and backward but I don't know how to make the wheels turn as it only has the /command topic which accepts a Float64 data input so no linear and angular axis inputs like /cmd_vel etc. Am I able to make the robot subscribe to another topic, if yes, how do I do that? Github code for the robot: here. The whole project can be launched using the following command if you want to run it on your own machine: roslaunch ford_bot launch_all.launch Some guidance on this would be very much appreciated. Answer: So I managed to fix the issue. I removed the ros_control controllers I was using and replaced the control for the robot with the skid steer plugin which I found out about in this video and now I can move my robot with the teleop keyboard node that's readily available to send messages on the cmd_vel topic which the robot is already subscribed to. The new code is available on my GitHub. The roslaunch ford_bot launch_all.launch command will launch the robot and gazebo etc. and then you can use the rosrun teleop_twist_keyboard teleop_twist_keyboard.py command to control the robot.
{ "domain": "robotics.stackexchange", "id": 38929, "tags": "ros-melodic, mobile-robot, wheeled-robot, ros-control, gazebo-ros-control" }
Show that $i/2m\int d^3\vec x\hat\pi(\vec x)\partial^2_i\hat\phi(\vec x)=1/(2\pi)^3\int d^3\vec p E(\vec p)\hat a(\vec p)^\dagger\hat a(\vec p)$
Question: Show that the quantum field for the Hamiltonian, $$\hat H=\frac{i}{2m}\int d^3 \vec x\hat{\pi}(\vec x)\partial^2_i\hat{\phi}(\vec x)\tag{1}$$ can be written as $$\int \frac{d^3\vec p}{(2\pi)^3}E(\vec p) \hat a(\vec p)^\dagger\hat a(\vec p)\tag{2}$$ where $$E(\vec p)=\frac{\vec p^2}{2m}$$ $$\hat{\phi}(\vec x)=\int\frac{d^3\vec p}{(2\pi)^3}e^{i\vec p \cdot \vec x}\hat a(\vec p)\tag{3}$$ $$\hat{\pi}(\vec x)=i\hat{\phi}(\vec x)^\dagger=i\int\frac{d^3\vec p}{(2\pi)^3}e^{-i\vec p \cdot \vec x}\hat a(\vec p)^\dagger\tag{4}$$ Differentiating $(3)$ wrt $\vec x$ twice gives $$\partial^2_i\hat\phi(\vec x)=-\int\frac{d^3\vec p}{(2\pi)^3}\vec p^2e^{i\vec p \cdot \vec x}\hat a(\vec p)\tag{5}$$ But now I can't seem to arrive at equation $(2)$ by direct substitution of $(4)$ and $(5)$ into $(1)$, as this leads to quite a mess: $$\begin{align}\hat H &= \frac{i}{2m}\int d^3 \vec x\hat{\pi}(\vec x)\partial^2_i\hat{\phi}(\vec x)\\&=\frac{1}{2m}\int d^3 \vec x \int\frac{d^3\vec p}{(2\pi)^3}e^{-i\vec p \cdot \vec x}\hat a(\vec p)^\dagger\int\frac{d^3\vec p}{(2\pi)^3}\vec p^2e^{i\vec p \cdot \vec x}\hat a(\vec p)\\&=\int d^3 \vec x \int\frac{d^3\vec p}{(2\pi)^3}\hat a(\vec p)^\dagger\int\frac{d^3\vec p}{(2\pi)^3}\frac{\vec p^2}{2m}\hat a(\vec p)\\&=\color{red}{\int d^3 \vec x \int\frac{d^3\vec p}{(2\pi)^3}\int\frac{d^3\vec p}{(2\pi)^3}E(\vec p)\hat a(\vec p)^\dagger\hat a(\vec p)}\\&\ne \int\frac{d^3\vec p}{(2\pi)^3}E(\vec p)\hat a(\vec p)^\dagger\hat a(\vec p)\end{align}$$ Which apart from the last integral after the fourth equality is identical to $(2)$. In the second to third equality I cancelled the exponentials and in the third to fourth equality I commuted the $\hat a(\vec p)^\dagger$ past $E(\vec p)$ $\left(=\frac{\vec p^2}{2m}\right)$ which I think is justified since it is a scalar. But it is this triple integral that is bothering me here, I'm simply not sure how to deal with this. 
If the logic demonstrated above is wrong then my other thought is to Fourier transform $(3)$ and $(4)$ to get explicit expressions for $\hat a(\vec p)$ and $\hat a(\vec p)^\dagger$, which, in the case of $(3)$ will give $$\hat a(\vec p)=\frac{1}{(2 \pi)^3}\int d^3 \vec x \hat \phi(\vec x)e^{-i\vec p \cdot \vec x}$$ but when trying to Fourier transform $(5)$ (to get an expression for $\hat a(\vec p)$) I get a different result, which suggests I am again doing something wrong. I deliberately chose to ask this on P.SE instead of M.SE as I thought it would be better placed in physics, but if this is felt to be a mistake then the question can be migrated. I have typeset all the details relevant for this question, but if specific sources from which I drew the information are requested then I will be more than happy to upload the required background info :-) In short, how do I obtain eqn. $(2)$ from $(1)$, please? Answer: $$\int~d^3\vec{x}\frac{1}{(2\pi)^3}e^{i(\vec{p}-\vec{q})\cdot\vec{x}}=\delta^3(\vec{p}-\vec{q}) \tag 1$$ \begin{align} H&=\int d^3\vec{x}\frac{d^3\vec{p}~d^3\vec{q}}{(2\pi)^3(2\pi)^3}\frac{i}{2m}(i)e^{-i\vec{q}\cdot \vec{x}}a^\dagger(\vec{q})(-1)\vec{p}^2e^{i\vec{p}\cdot\vec{x}}a(\vec{p}) \tag 2 \\ &=\int d^3\vec{p}~d^3\vec{q}\frac{1}{(2\pi)^3}\frac{\vec{p}^2}{2m}a^\dagger_{\vec{q}}a_{\vec{p}}\delta^3(\vec{p}-\vec{q}) \tag 3\\ &=\int \frac{d^3\vec{p}}{(2\pi)^3}~E_{\vec{p}}a^\dagger(\vec{p})a(\vec{p}) \tag 4 \end{align}
{ "domain": "physics.stackexchange", "id": 99012, "tags": "homework-and-exercises, quantum-field-theory, operators, hamiltonian, fourier-transform" }
2d Grid - Iterating by Rows / Cells
Question: The feature Here is a Grid class representing a 2d grid. The class will get templated once it reach a satisfactory state. At this time the cells are int values. This Grid class allow for iteration by rows using simple range-for syntax. Grid grid(8, 8); for (auto&& row_it : grid.Rows()) { for (auto&& cell : row_it) std::cout << std::setw(2) << std::setfill('0') << cell << ','; std::cout << '\n'; } The retrospective It works well, but it is my first time creating custom iterators and I do not think that its is nicely done. More in particular: Are the classes GridRows, RowsIterator and RowIterator really needed? The range-for will automatically call operator* on the type returned from begin(), which seems to force the existence of RowIterator instead of returning directly an int* from RowsIterator::begin How best to minimize and simplify? Standard library compatibility According to this article, custom iterators should include the following properties: https://www.internalpointers.com/post/writing-custom-iterators-modern-cpp iterator_category difference_type value_type pointer reference The complete code #include <iostream> #include <iomanip> struct RowIterator { int* _it, * _end; RowIterator(int* begin, int* end) : _it{ begin }, _end{ end } {} int& operator*() { return *_it; } int* operator->() { return _it; } RowIterator& operator++() { ++_it; return *this; } RowIterator operator++(int) { auto self = *this; ++* this; return *this; } friend bool operator==(RowIterator lhs, RowIterator rhs) { return lhs._it == rhs._it; } friend bool operator!=(RowIterator lhs, RowIterator rhs) { return lhs._it != rhs._it; } int* begin() { return _it; } int* end() { return _end; } }; struct RowsIterator { int* _it, * _end; int _row_length; RowIterator _row_begin, _row_end; RowsIterator(int* begin, int row_length) : _it{ begin }, _end{ begin + row_length }, _row_length{ row_length }, _row_begin{ _it, _end }, _row_end{ _end, _end } {} RowIterator& operator*() { return 
_row_begin; } RowIterator* operator->() { return &_row_begin; } RowsIterator& operator++() { _it += _row_length; _end += _row_length; _row_begin = { _it, _end }; _row_end = { _end, _end }; return *this; } RowsIterator operator++(int) { auto self = *this; ++* this; return *this; } friend bool operator==(RowsIterator lhs, RowsIterator rhs) { return lhs._it == rhs._it; } friend bool operator!=(RowsIterator lhs, RowsIterator rhs) { return lhs._it != rhs._it; } RowIterator begin() const { return _row_begin; } RowIterator end() const { return _row_end; } }; struct GridRows { RowsIterator _begin, _end; GridRows(int* begin, int* end, int row_length) : _begin{ begin, row_length }, _end{ end, row_length } {} RowsIterator begin() { return _begin; } RowsIterator end() { return _end; } }; struct Grid { Grid(int width, int height) : _width{ width }, _height{ height } { size_t size = width * height; _begin = new int[size]; _end = _begin + size; InitializeValues(); } ~Grid() { if (_begin) { delete[] _begin; _begin = nullptr; _end = nullptr; } } void InitializeValues() { for (int i = 0; auto && it : *this) it = ++i; } int* begin() const { return _begin; } int* end() const { return _end; } GridRows Rows() const { return GridRows{ _begin, _end, _width }; } int* _begin, * _end; int _width, _height; }; void IterateWithFor(const Grid& grid) { std::cout << "Iterate though all rows cells\n"; auto&& rows = grid.Rows(); for (auto&& row_it = rows.begin(); row_it != rows.end(); ++row_it) { for (auto&& cell = row_it.begin(); cell != row_it.end(); ++cell) { std::cout << std::setw(2) << std::setfill('0') << *cell << ','; } std::cout << '\n'; } std::cout << '\n'; } void IterateWithRangeFor(const Grid& grid) { std::cout << "Iterate though all rows cells\n"; for (auto&& row_it : grid.Rows()) { for (auto&& cell : row_it) std::cout << std::setw(2) << std::setfill('0') << cell << ','; std::cout << '\n'; } std::cout << '\n'; } int main() { Grid grid(8, 8); IterateWithFor(grid); 
IterateWithRangeFor(grid); return 0; } Thank you Thank you very much for your time and valuable feedback. Answer: RowIterator operator++(int) { auto self = *this; ++* this; return *this; } We should be returning the previous value (self), rather than *this. Grid owns storage using a raw pointer, so needs copy/move constructor and assignment operations. Or better, use a std::vector instead of int* to take care of that automatically. Use std::size_t for width and height, rather than int. That way, we don't need to check that they are not negative. The iterators are missing the necessary type names to be used with standard algorithms. You even mention that in the question, so fix that immediately. Iterator's operator== can be defined = default, and that will also default !=. RowIterator begin() const { return _row_begin; } That's dangerous - we should be returning a const-iterator when the object is const. Also consider implementing the other (const/reverse) begin/end functions. ~Grid() { if (_begin) … } I don't see any way that _begin could be false here. int* begin() const { return _begin; } int* end() const { return _end; } Again, we need to sort out the const-correctness here.
{ "domain": "codereview.stackexchange", "id": 42480, "tags": "c++, array, iterator, c++20" }
What chemical characteristics give types of steel their properties?
Question: I am a mechanical engineer, and just like everyone else I had classes on the crystal structure of metals, phase diagrams, and the various heat treatments. However, even after diving back into that recently and researching online, I can't make sense of how everything fits together. I know the high level, the low level, but can't link the two together. For example: How can the phase diagram of steel be linked to its physical properties? Say, given a certain carbon weight fraction, is it possible to tell from the phase diagram if the steel is going to be hard, resilient, ductile? E.g. cementite is hard and brittle, austenite is ductile and soft? How do heat treatments work? The way I see it, the phase diagram is "steady state", and heat treatments are about spending a specific amount of time in a phase region to add "a little bit of that" etc. (or quickly cooling it to stop the reactions), but I can't link it back to the phase diagram (there is a 3rd dimension, time). Apologies if this is not a very clear question, but this is exactly what I hope to find: clarification on a quite complex topic. Answer: (Source: Roy Beardmore, http://www.roymech.co.uk/ (defunct, via the Internet Archive) As carbon content increases, the ability to resist a sudden impact decreases, as measured by the Charpy Impact Test. As carbon content increases, ultimate tensile strength and Brinell Hardness increases. As far as a phase diagram: increasing $\ce{}$ (cementite) content makes the steel more brittle and hard. Near 0.76% carbon, the steel will be pearlite which is ductile (good for wires). Below 0.76% carbon, there will be increased alpha-iron (ferrite). Above 0.76% there can be Ledeburite. Austenite only exists at high temperature. Depending upon the cooling rate of Austenite, Bainite (slow cooling) or Martensite (rapid cooling) can form.
{ "domain": "chemistry.stackexchange", "id": 3482, "tags": "metal, materials" }
Hardware Requirements for Camera and sensor
Question: Hello, I'm trying to figure out how much computing power I need for my robot. I plan to put 2 cameras that will use OpenCV for detection of colors, shapes, etc., and a laser distance sensor (not a lidar sensor, but a simple single-dimension laser sensor). Some of the sensors and cameras will not be fixed, but will be mounted on moving parts; ROS will have to consider their physical location and insert it in a 3D environment with the model of the robot. I have no idea of the computing power required for these processes. I'm currently considering this configuration: AMD A4-3400 dual-core 2.7GHz 2x 2GB RAM (total 4GB) Is it enough or do I need at least a tri-core processor with 8GB of RAM? PS: sorry for my bad English, I'm Italian. Originally posted by Extar on ROS Answers with karma: 13 on 2013-09-04 Post score: 1 Answer: It depends on the cameras and your speed limit, but if the images are 640x480 and you don't do anything crazy you can achieve real-time with that setup. Originally posted by jcc with karma: 200 on 2013-09-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Extar on 2013-09-05: Thanks for the reply. The cameras will be 640x480 or less, and I hope I don't do anything crazy, so I suppose that setup is enough.
{ "domain": "robotics.stackexchange", "id": 15422, "tags": "ros, hardware, cpu, camera" }
What is the reason for electronic compass calibration?
Question: Most GPS receivers and smart phones contain an "electronic compass", which I understand is generally a Hall effect magnetometer. These devices generally require "calibration", which involves waving the device in an 8-shaped figure. During the motion, the forward axis is tilted about 45 degrees in each direction and the sideways axis inverted there and back; the initial direction does not matter. The device does not need to be put in any special mode for the calibration. This calibration is needed each time the device is turned on. Why is it needed? The Hall effect sensor should measure the magnitude of the magnetic field in a given direction, so I'd expect a pair of sensors to be able to measure the direction and magnitude of the magnetic field in the horizontal plane consistently each time; a needle compass isn't doing anything more. I can understand the other kind of calibration, turning the compass around twice slowly, as needed to compensate for magnetic bias generated by the device itself, but that's only needed once, as expected (the device generates the same bias each time). So what is this every-time calibration compensating for? Answer: The Hall effect acts at right angles to the applied magnetic lines of force. We want to use this effect to determine true north. But the Earth's magnetic field can be considered to consist of multiple components in different directions, so we need some way to disambiguate these and eliminate all but the components that guide us to north. E.g. there is a radial (vertical) component that we don't want to use. This is why we hold the device level and move it in a figure 8: from this motion the associated logic can tell which voltage corresponds to the vertical magnetic field component, because we defined the horizontal plane for the device by holding it level. After calibration, the device knows to ignore that component when determining true north.
The figure 8 thing does this similarly for certain of the other components. EDITED TO ADD: See this site for a video showing the figure 8 motion in three dimensions. According to the site, by waving the device through all three dimensions, the device can tell the orientation of the earth's field (since the strongest component will be the vertical). END EDIT I think that this can vary depending on how accurate the device is and how many components it can measure. But I think you get the idea. We are giving the device inputs in a controlled fashion so that it can ignore them and focus on just the component we care about: in this case the declination. So yes, this is an engineering answer - but it relies on at least two bits of science: 1. (physics) the Hall effect acts at right angles to the applied lines of force 2. (geological) Earth's field may be considered to have 3+ components of which the strongest is actually the vertical (radial). wikipedia:Dipole Model of the Earth's magnetic field NOAA has a site that is related and kind of fun !
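To illustrate the kind of bias the per-power-on calibration has to estimate, here is a hypothetical hard-iron offset sketch in C++. The min/max method and all names are illustrative assumptions, not how any particular phone implements its figure-8 routine: a constant bias from magnetised parts near the sensor shifts the circle of readings off centre, and sweeping the device through many orientations lets the logic recover that displacement so headings come from the re-centred field.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

const double PI = std::acos(-1.0);

struct Sample { double x, y; };  // horizontal magnetometer reading

// Hypothetical hard-iron calibration: as the device sweeps through
// orientations, readings trace a circle displaced by a constant bias.
// The midpoint of the observed extremes estimates that bias.
// 'sweep' must be non-empty.
Sample estimate_offset(const std::vector<Sample>& sweep) {
    double xmin = sweep[0].x, xmax = sweep[0].x;
    double ymin = sweep[0].y, ymax = sweep[0].y;
    for (const Sample& s : sweep) {
        xmin = std::min(xmin, s.x); xmax = std::max(xmax, s.x);
        ymin = std::min(ymin, s.y); ymax = std::max(ymax, s.y);
    }
    return {(xmin + xmax) / 2.0, (ymin + ymax) / 2.0};
}

// Heading from the re-centred field; without subtracting the offset the
// answer would be biased, which is what the calibration compensates for.
double heading_degrees(Sample raw, Sample offset) {
    return std::atan2(raw.y - offset.y, raw.x - offset.x) * 180.0 / PI;
}
```

This is only the two-dimensional, hard-iron part of the story; real devices also estimate soft-iron distortion and the vertical component discussed above.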
{ "domain": "physics.stackexchange", "id": 2052, "tags": "electromagnetism" }
What is the difference between following two finite automata?
Question: I'm solving the problem of drawing the finite automaton for (1+0)*(10), which accepts any string that ends with 10. I've drawn the following diagram for this one: But somewhere in the solution the following diagram is given: I can't understand the difference between these two. Is my solution correct? If it is not, then what changes in my solution will take me to this given solution? Answer: As many have pointed out in the comments, your automaton is non-deterministic. There are two paths you could take from q0 when you read a 1. When running a non-deterministic automaton, you need to keep track of all possible paths and see if any of them end up in an accepting state. If you don't keep track of all the different possible states you could be in, your solution might never accept any string, if it chooses the looping 1 all the time, never leaving q0. Converting a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA) can be done using a simple algorithm. It's quite slow, but for small examples, it's no problem doing it by hand. There are good videos explaining how to do this in slightly different ways. The main idea is that you build a table of all possible states you could be in. For your NFA, I'll label the nodes q0, q1 and q2 from left to right, starting at q0. At q0 we can reach q0 or q1 when reading a 1. At q0 we can reach q0 when reading a 0. That's the first step. Now we have multiple states at the same time. We already did q0 so we have the (q0 or q1) state left. When reading a 1 from (q0 or q1) we can end up in (q0 or q1): q0 can reach (q0 or q1), and q1 cannot reach anywhere. When reading a 0 from (q0 or q1) we can end up in (q0 or q2): q0 can loop back to itself, and q1 can reach q2. And lastly, the (q0 or q2) state. When reading a 0 from (q0 or q2) we can end up in q0: q2 doesn't have any outgoing edges, and q0 can only reach itself. When reading a 1 from (q0 or q2) we can end up in (q0 or q1).
q2 doesn't have any outgoing edges, and q0 can reach (q0 or q1). Those are all our possible state transitions in the NFA. The list of states used in the transitions above is: q0, (q0 or q1), (q0 or q2). The edges are: q0 + 0 -> q0; q0 + 1 -> (q0 or q1); (q0 or q1) + 0 -> (q0 or q2); (q0 or q1) + 1 -> (q0 or q1); (q0 or q2) + 0 -> q0; (q0 or q2) + 1 -> (q0 or q1). You can see that this is the bottom graph in your question, with the states renamed. It's worth noting that real-life regex engines use NFAs because this conversion is very expensive for large automata; it's cheaper and safer to keep track of all states instead of doing the conversion. One notable example is the RE2 library. Most other implementations use backtracking, which is O(2^n) for an n-length string. RE2 is exponential in the expression length, but linear in the input. Developers write the expressions; attackers write the input.
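The "keep track of all possible states" idea can be checked mechanically. Below is a small C++ sketch (the state numbering and function name are mine, not from the solution) that simulates the NFA by tracking the set of reachable states after each input symbol — exactly the sets the subset construction turns into DFA states:

```cpp
#include <cassert>
#include <set>
#include <string>

// Simulate the question's NFA by tracking the set of states we could be
// in: q0 = 0, q1 = 1, q2 = 2. The reachable sets {q0}, {q0,q1}, {q0,q2}
// are precisely the three DFA states of the given solution's diagram.
bool accepts(const std::string& input) {
    std::set<int> current{0};                  // start in q0 only
    for (char c : input) {
        std::set<int> next;
        for (int q : current) {
            if (q == 0) {
                next.insert(0);                // q0 loops on both 0 and 1
                if (c == '1') next.insert(1);  // q0 --1--> q1
            } else if (q == 1 && c == '0') {
                next.insert(2);                // q1 --0--> q2
            }
            // q2 has no outgoing edges
        }
        current = next;
    }
    return current.count(2) > 0;               // accept iff q2 is reachable
}
```

Strings ending in 10 are accepted and everything else is rejected, matching (1+0)*(10).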
{ "domain": "cs.stackexchange", "id": 7148, "tags": "automata, finite-automata" }
Improved Sieve of Eratosthenes
Question: How can I get to a perfect coding for this algorithm? I made the mathematical theorem, which is a development of the Sieve of Eratosthenes algorithm. I might use some help in coding. You can find the description of the code at my paper: Development of Sieve of Eratosthenes and Sieve of Sundaram's proof For more understanding you can check this paper: SEQUENCE ELIMINATION FUNCTION AND THE FORMULAS OF PRIME NUMBERS For the next development see Next level Improved Sieve of Eratosthenes #include <iostream> #include <math.h> #include <cstdlib> using namespace std; int D2SOE(int n1_m) { int n1 = 0, g = 0, z = 0, p = 0,f1=0,f3=0; cout << "\n2 3 "; bool* array = new bool[n1_m]; // Initialising the D2SOE array with false values for (int i = 0; i < n1_m; i++) array[i] = false; // The main elimination theorem for (n1=1;n1<=ceil((sqrt(2*floor((3*n1_m+1)/2.0)+1)-2)/3.0);n1++) { if (array[n1] != 0) continue; z = ((3 * n1 + 1) / 2.0); p = ((2 * z) + 1); f1 = (ceil(((7*p)-2) / 3.0) - ceil((p - 2) / 3.0)); f3 = (ceil(((7*p)-2) / 3.0) - ceil(((5 * p) - 2) / 3.0)); for (g = ceil(((4*((z*z) + z)) - 1) / 3.0); g < n1_m;g+=f1) { array[g] = true; } if ((p +1) % 3 == 0) { for (g = ceil(((4*((z*z) + z)) - 1) / 3.0) + f3; g < n1_m;g+=f1) { array[g] = true; } } else { for (g = ceil(((4*((z*z) + z)) - 1) / 3.0) - f3; g<n1_m;g+=f1) { array[g] = true; }}} // printing for loop for (int n1 = 1; n1 < n1_m; n1++) if (!array[n1]) { z = (((3 * n1) + 1) / 2.0); cout << (2 * z) + 1 << " "; } return 0; } // driver program to test above int main() { int n1_m, N = 0; cout << "\n Enter limit : "; cin >> N; n1_m = ceil((N - 2) / 3.0); D2SOE(n1_m); return 0; } Answer: Separation of concerns We dont't sieve primes for the sake of sieving primes. We want primes because we want to do something with them, not just print. 
Let D2SOE return the array it computed, and int main() int n1_m, N = 0; cout << "\n Enter limit : "; cin >> N; n1_m = ceil((N - 2) / 3.0); auto array = D2SOE(n1_m); print_primes(array, n1_m); return 0; } Overall impression: Unreadable. Please, don't }}}. Indent your code properly. ceil(((4*((z*z) + z)) - 1) / 3.0) seems very important, as it is repeated 3 times. Figure out a good name for it, and compute it once. f1 and f3 deserve better names too. After sieving, I'd expect true for primes. Correctness Didn't check it. However, floating point math in an elementary number-theoretical problem is totally out of place. It may bite you hard when n1_m grows large enough. Comparing a boolean array[n1] to 0, while technically correct, gives a yucky taste. c++ I don't see why you #include <cstdlib>. using namespace std; is always wrong. Don't new. Use std::vector.
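For contrast with the review points above, here is a sketch of the advice applied to a plain Sieve of Eratosthenes (deliberately not the questioner's D2SOE variant): integer arithmetic only, std::vector instead of new[], and the computed array returned so the caller decides what to do with the primes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Plain Sieve of Eratosthenes following the review's advice: no `new`,
// no floating point, and the result is returned, not printed.
std::vector<bool> sieve(std::size_t limit) {
    std::vector<bool> is_prime(limit + 1, true);
    is_prime[0] = false;
    if (limit >= 1) is_prime[1] = false;
    for (std::size_t p = 2; p * p <= limit; ++p) {
        if (!is_prime[p]) continue;
        for (std::size_t m = p * p; m <= limit; m += p) {
            is_prime[m] = false;  // mark multiples composite
        }
    }
    return is_prime;              // true exactly at prime indices
}
```

Note the convention: after sieving, true means prime, as the review suggests.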
{ "domain": "codereview.stackexchange", "id": 43785, "tags": "c++, algorithm, primes, sieve-of-eratosthenes" }
Calorimetry- Calculating change in temperature help
Question: The reaction: Zn(s) + 2 AgNO3 (aq) → 2 Ag(s) + Zn(NO3)2 (aq) takes place in a calorimeter. 30 cm3 of a 1.20 mol dm−3 solution of silver nitrate with an excess of zinc is used. If the value of ∆rH is −365.1 kJ mol−1 and all solutions are assumed to have a density of 1.00 g cm−3 and a specific heat capacity of 4.18 J K−1 g−1, what temperature change is produced? I attempted to calculate the value of Q and got 13,143.6 J. I then substituted this into Q = m × c × ∆T and rearranged the equation to give ∆T. The answer I got is 105, but this is incorrect. Answer: The $\Delta H$ value is $\pu{-365 kJ/mol}$. This is supposed to be "per mole of whatever appears in the equation with a coefficient of $1$", so it is per mole of zinc. It can also be rewritten as: $$\Delta H = -365~\mathrm{kJ\,(mol~Zn)^{-1}} = -365~\mathrm{kJ\,(2~mol~AgNO_3)^{-1}} = -182.5~\mathrm{kJ\,(mol~AgNO_3)^{-1}}$$ And here you are using $\pu{0.036 mol}$ of $\ce{AgNO3}$, so the heat released is half of what you calculated: $\Delta T = \pu{52.3 K}$.
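The arithmetic behind the quoted $\pu{52.3 K}$ can be made explicit (this worked calculation is an addition, not part of the original answer):

$$n(\ce{AgNO3}) = \pu{0.030 dm3} \times \pu{1.20 mol dm-3} = \pu{0.036 mol}$$
$$q = \pu{0.036 mol} \times \pu{182.55 kJ mol-1} \approx \pu{6.57 kJ}$$
$$\Delta T = \frac{q}{mc} = \frac{\pu{6572 J}}{\pu{30 g} \times \pu{4.18 J K-1 g-1}} \approx \pu{52.4 K}$$

which agrees with the answer's $\pu{52.3 K}$ up to rounding.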
{ "domain": "chemistry.stackexchange", "id": 15526, "tags": "calorimetry" }
An efficient way to get the dual graph from a triangulation
Question: An answer to a previous question I asked advised me to find the dual graph of a triangulation. I'm given the graph in the following form: How can I get the dual graph from a list of edges given this way? Answer: The way the triangulation is given to you encapsulates this information, since along with each edge you are given the triangles it is part of. This allows you to reconstruct the edges of the dual graph (and from them, the vertices). You can also recover this information, at higher cost, if the triangulation is given to you as a graph. In polynomial time you can, for example, find all triangles (the vertices of the dual graph) and all pairs of triangles intersecting at an edge (the edges of the dual graph). You can do it naively in $O(n^4)$, though I'm sure this can be brought down significantly, perhaps even all the way to $\tilde{O}(n)$.
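If the triangulation is instead given as a plain list of triangles (vertex-index triples), the edge-to-triangle map the answer mentions can be rebuilt directly in $O(T \log T)$ time for $T$ triangles. Here is a C++ sketch under that assumed input format (the representation and names are mine, not from the question):

```cpp
#include <array>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Map each undirected edge to the triangles containing it; two triangles
// sharing an edge become adjacent vertices of the dual graph. Dual
// vertices are triangle indices; dual edges are returned as index pairs.
using Edge = std::pair<int, int>;

std::vector<std::pair<int, int>>
dual_graph(const std::vector<std::array<int, 3>>& triangles) {
    std::map<Edge, std::vector<int>> edge_to_tris;
    for (int t = 0; t < static_cast<int>(triangles.size()); ++t) {
        for (int i = 0; i < 3; ++i) {
            int a = triangles[t][i], b = triangles[t][(i + 1) % 3];
            if (a > b) std::swap(a, b);   // normalise the undirected edge
            edge_to_tris[{a, b}].push_back(t);
        }
    }
    std::vector<std::pair<int, int>> dual_edges;
    for (const auto& entry : edge_to_tris) {
        const std::vector<int>& tris = entry.second;
        if (tris.size() == 2) {           // interior edge: two triangles meet
            dual_edges.push_back({tris[0], tris[1]});
        }
    }
    return dual_edges;
}
```

Boundary edges belong to only one triangle and contribute no dual edge, which falls out of the size check for free.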
{ "domain": "cs.stackexchange", "id": 10593, "tags": "algorithms, graphs" }
Definition of "structural underdominance"?
Question: In Stathos and Fishman (2014), the authors refer to the concept of structural underdominance. The first time they mention it is in the first paragraph of the second page (left column), and the term is followed by some kind of definition in parentheses. It is written: Furthermore, artificial doubling of chromosomes in sterile plant hybrids can sometimes restore normal meiosis and fertility (Stebbins 1950), a pattern diagnostic of structural underdominance (heterozygote inferiority). In addition to how they use the term "structural underdominance" in this paper, the above definition is confusing to me. Is structural underdominance a specific type of underdominance (heterozygote disadvantage)? Is it the evolutionary process by which a population is led to diminish the average underdominance across its loci? Is it a predisposition of the genetic architecture to display underdominance when a chromosomal rearrangement occurs? In short, the question is: What is the definition of "structural underdominance"? Answer: I understand heterozygote inferiority (also underdominance or heterozygote disadvantage) as the opposite of heterozygote advantage, that is, lower fitness of the heterozygous genotype than either homozygote (as reference, see Hedrick, 2009, p. 119). I haven't seen the term structural underdominance before. However, heterozygote disadvantage can sometimes be seen in species/subspecies hybrids and also chromosomal heterozygotes (see Hedrick, 2009, p. 140), and this might be what the term structural underdominance is describing (structural as in chromosomal). This can then lead to unbalanced gametes with low viability, which means low fitness compared to homozygotes. The quote you have included also talks about plant hybrids, which makes this explanation likely, and it also makes sense since they mention artificial doubling as a solution.
If I have understood the process correctly, chromosomal doubling should lead to better pairing of chromosomes and lower risk of unbalanced chromosome translocations. Also note that Hedrick labels heterozygote disadvantage as an unstable equilibrium (p. 140ff), but I haven't looked more closely at the dynamics behind this.
{ "domain": "biology.stackexchange", "id": 2828, "tags": "evolution, genetics, terminology, chromosome, speciation" }
Homomorphism Languages
Question: Let $h$ be a homomorphism and let $L$ be a language. Writing ${}^*$ for Kleene star, I want to show that $(h^{-1}(L))^* \neq h^{-1}(L^*)$. Can I prove this just by showing that we can have $h^{-1}(L)$ to be empty (because no strings are mapping into $L$) and therefore the right side is the empty set? Doing the same to the left side gives $\emptyset^* = \{\epsilon\}\neq\emptyset$. Answer: Unfortunately both sides contain the empty string. As you say, the Kleene star of the empty set also contains the empty string: $\varnothing^* = \{ \lambda\}$. The Kleene star of a language $L$, equals $L^* = \cup_{i\ge 0} L^i$, where $L^0 = \{\lambda\}$ and $L^i = L \cdot L^{i-1}$ for $i\ge 1$. On the other hand $h(\lambda) = \lambda$ for any morphism $h$, so $\lambda \in h^{-1}(K)$ whenever $\lambda \in K$. As $L^*$ contains the empty string this is the case here. However, this does not mean the two sets are equal. Consider $h:\{a,b\}^*\to \{a\}^*$ defined by $h(a) = a$ and $h(b) = aa$ and $L=\{a\}$.
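To verify the closing counterexample explicitly (this check is an addition to the answer): with $h(a) = a$, $h(b) = aa$ and $L = \{a\}$, only the string $a$ maps onto $a$, so $h^{-1}(L) = \{a\}$ and therefore $(h^{-1}(L))^* = \{a\}^*$. On the other hand, $L^* = \{a\}^*$, and every $w \in \{a,b\}^*$ satisfies

$$h(w) = a^{|w|_a + 2|w|_b} \in \{a\}^*,$$

so $h^{-1}(L^*) = \{a,b\}^*$. In particular $b \in h^{-1}(L^*)$ but $b \notin (h^{-1}(L))^*$, which shows the two languages differ even though both contain $\lambda$.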
{ "domain": "cs.stackexchange", "id": 5341, "tags": "formal-languages, correctness-proof" }
Why is an interleaver often placed between an outer block code and an inner convolution code?
Question: It seems that when doing FEC with a concatenated-code, an interleaver is often placed between the outer block code and the inner convolution code. The explanations I've seen say that this is because the convolution decoder often has errors in bursts. However, if we assume that we're using a Reed-Solomon decoder with 8-bit symbols, then spreading the bursts onto multiple blocks would seem to increase errors rather than decrease them. What am I missing that explains why this interleaving is good? Rough idea of the decode and encode process: Encode: $$\boxed{\textrm{Data}}{\longrightarrow}\boxed{\textrm{RS Encoder}}{\longrightarrow}\boxed{\textrm{Interleaver}}{\longrightarrow}\boxed{\textrm{Convolution Encoder}}{\longrightarrow} \boxed{\textrm{Modulator}}$$ Decode: $$\boxed{\textrm{Demodulator}}{\longrightarrow}\boxed{\textrm{Convolution Decoder}}{\longrightarrow}\boxed{\textrm{De-interleaver}}{\longrightarrow}\boxed{\textrm{RS Decoder}}{\longrightarrow}\boxed{\textrm{Data}}$$ Answer: As the other answers mention, the use of FEC results in post-decoding errors occurring in bursts. Indeed, this happens regardless of whether the code is a convolutional code or a block code. With a $(n,k)$ block code, the decoder output ($k$ bits) from the decoding of one received word is (hopefully with high probability) completely correct, or it has an unknown number of errors in it that can be regarded as a burst error of length $k$. With a convolutional code, the decoder output is mostly correct as the decoder finds the correct path through the trellis, but occasionally the decoder's chosen path deviates from the correct path and later rejoins the correct path during which time there is a burst of errors. In contrast to block codes, there can be multiple isolated burst errors (of variable lengths) in a single transmission using a convolutional code. Also, the number of data bits in a single transmission is far larger than the typical values of $k$ for a block code. 
The idea behind a concatenated coding scheme is that with an inner code suited to the physical channel, and an outer code over a very large symbol alphabet, we can make the burst errors in the inner decoder output look like a single symbol error to the outer code. This is important because the outer code should be a very high-rate code: the net rate is the product of the inner code rate (more or less determined by the channel and the link budget) and the outer code rate. Unfortunately, outer codes over very large alphabets are very expensive to implement, and so interleaved Reed-Solomon codes over smaller alphabets (often GF$(2^8)$) are used (with interleaving at the symbol level, as Jim Clay points out). Because of the interleaving, the burst errors in the inner decoder output become single symbol (byte) errors in the received words of the interleaved Reed-Solomon code. All the above is mostly a rehash of what the answers by Bryan and Jim Clay have already said, but I wish to point out the following. Interleaved Reed-Solomon codewords can be decoded much more efficiently and with smaller delay if they are not de-interleaved first. A Reed-Solomon decoder that can decode interleaved codewords is different from the off-the-shelf standard Reed-Solomon decoders that are available, and the use of such a decoder might not be feasible if the development team does not have control of this aspect of the design. But, if such a decoder is used, the de-interleaver can be moved from between the inner decoder and the outer decoder to just after the outer decoder. The de-interleaver is also smaller since it has to deinterleave a $K\times L$ array instead of an $N \times L$ array (for an $(N,K)$ Reed-Solomon code interleaved to depth $L$). If a delay-scaled Reed-Solomon encoder is used along with the Reed-Solomon decoder for interleaved codewords described above, the interleaver at the transmitter and the de-interleaver at the receiver can be eliminated entirely.
The output of the delay-scaled encoder (see also this paper which is unfortunately behind a paywall) is a set of interleaved Reed-Solomon codewords, but is not the same sequence of bytes that one would get from doing a standard Reed-Solomon encoding followed by interleaving. So, no further interleaving is necessary. The output of the interleaved Reed-Solomon decoder is the same byte sequence in the same order that went into the delay-scaled Reed-Solomon encoder, and so no de-interleaving is necessary at the decoder, either.
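As a toy illustration of the symbol-level interleaving discussed above (a sketch only; the delay-scaled scheme in the cited papers works differently), a rows × cols block interleaver writes symbols row by row — one row per Reed-Solomon codeword — and transmits them column by column, so a burst of up to `rows` consecutive channel errors corrupts at most one symbol in each codeword:

```cpp
#include <cassert>
#include <string>

// Write row-major, read column-major: out[c * rows + r] = in[r * cols + c].
// Consecutive transmitted symbols then come from different rows, so a
// burst of length <= rows hits each row (codeword) at most once.
std::string interleave(const std::string& s, int rows, int cols) {
    std::string out(s.size(), '?');
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            out[c * rows + r] = s[r * cols + c];
    return out;
}

// Swapping the dimensions reads the array back out in the original order.
std::string deinterleave(const std::string& s, int rows, int cols) {
    return interleave(s, cols, rows);
}
```

With 2 codewords of 3 symbols each, "ABCDEF" is transmitted as "ADBECF": a 2-symbol burst in the channel lands in two different codewords, each of which the outer code sees as a single symbol error.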
{ "domain": "dsp.stackexchange", "id": 740, "tags": "digital-communications, channelcoding, forward-error-correction, reed-solomon" }
Huffman compressor in C++
Question: I have this Huffman compressor that can compress both text and binary files by treating each as binary files. (By the way, it is fully compatible with this Java implementation.) See what I have: Code main.cpp #include "bit_string.hpp" #include "byte_counts.hpp" #include "huffman_decoder.hpp" #include "huffman_deserializer.hpp" #include "huffman_encoder.hpp" #include "huffman_serializer.hpp" #include "huffman_tree.hpp" #include <algorithm> #include <cstdint> #include <fstream> #include <iostream> #include <iterator> #include <limits> #include <map> #include <random> #include <set> #include <stdexcept> #include <string> using std::cout; using std::cerr; using std::endl; #define ASSERT(C) if (!(C)) report(#C, __FILE__, __LINE__) void report(const char* condition, const char* file, size_t line) { cerr << "The condition \"" << condition << "\" failed in file \"" << file << "\", line: " << line << "." << endl; exit(1); } static std::string ENCODE_FLAG_SHORT = "-e"; static std::string ENCODE_FLAG_LONG = "--encode"; static std::string DECODE_FLAG_SHORT = "-d"; static std::string DECODE_FLAG_LONG = "--decode"; static std::string HELP_FLAG_SHORT = "-h"; static std::string HELP_FLAG_LONG = "--help"; static std::string VERSION_FLAG_SHORT = "-v"; static std::string VERSION_FLAG_LONG = "--version"; static std::string ENCODED_FILE_EXTENSION = "het"; static std::string BAD_CMD_FORMAT = "Bad command line format."; void test_append_bit(); void test_bit_string(); void test_all(); void exec(int argc, const char *argv[]); void print_help_message(std::string& image_name); void print_version(); std::string get_base_name(const char *arg1); void file_write(std::string& file_name, std::vector<int8_t>& data); std::vector<int8_t> file_read(std::string& file_name); int main(int argc, const char * argv[]) { try { exec(argc, argv); } catch (std::runtime_error& error) { cerr << "Error: " << error.what() << endl; return 1; } //test_all(); return 0; } void file_write(std::string& 
file_name, std::vector<int8_t>& data) { std::ofstream file(file_name, std::ios::out | std::ofstream::binary); std::size_t size = data.size(); char* byte_data = new char[size]; std::copy(data.begin(), data.end(), byte_data); file.write(byte_data, size); file.close(); } std::vector<int8_t> file_read(std::string& file_name) { std::ifstream file(file_name, std::ios::in | std::ifstream::binary); std::vector<int8_t> ret; std::filebuf* pbuf = file.rdbuf(); std::size_t size = pbuf->pubseekoff(0, file.end, file.in); pbuf->pubseekpos(0, file.in); char* buffer = new char[size]; pbuf->sgetn(buffer, size); for (std::size_t i = 0; i != size; ++i) { ret.push_back((int8_t) buffer[i]); } delete[] buffer; file.close(); return std::move(ret); } void do_decode(int argc, const char * argv[]) { if (argc != 4) { throw std::runtime_error{BAD_CMD_FORMAT}; } std::string flag = argv[1]; if (flag != DECODE_FLAG_SHORT and flag != DECODE_FLAG_LONG) { throw std::runtime_error{BAD_CMD_FORMAT}; } std::string source_file = argv[2]; std::string target_file = argv[3]; std::vector<int8_t> encoded_data = file_read(source_file); huffman_deserializer deserializer; huffman_deserializer::result decode_result = deserializer.deserialize(encoded_data); huffman_tree decoder_tree(decode_result.count_map); huffman_decoder decoder; std::vector<int8_t> text = decoder.decode(decoder_tree, decode_result.encoded_text); file_write(target_file, text); } void do_encode(int argc, const char * argv[]) { if (argc != 3) { throw std::runtime_error{BAD_CMD_FORMAT}; } std::string flag = argv[1]; if (flag != ENCODE_FLAG_SHORT and flag != ENCODE_FLAG_LONG) { throw std::runtime_error{BAD_CMD_FORMAT}; } std::string source_file = argv[2]; std::vector<int8_t> text = file_read(source_file); std::map<int8_t, uint32_t> count_map = compute_byte_counts(text); huffman_tree tree(count_map); std::map<int8_t, bit_string> encoder_map = tree.infer_encoder_map(); huffman_encoder encoder; bit_string encoded_text = encoder.encode(encoder_map, 
text); huffman_serializer serializer; std::vector<int8_t> encoded_data = serializer.serialize(count_map, encoded_text); std::string out_file_name = source_file; out_file_name += "."; out_file_name += ENCODED_FILE_EXTENSION; file_write(out_file_name, encoded_data); } void exec(int argc, const char *argv[]) { std::set<std::string> command_line_argument_set; std::for_each(argv + 1, argv + argc, [&command_line_argument_set](const char *s) { command_line_argument_set.insert(std::string{s});}); std::string image_name = get_base_name(argv[0]); auto args_end = command_line_argument_set.end(); if (command_line_argument_set.find(HELP_FLAG_SHORT) != args_end || command_line_argument_set.find(HELP_FLAG_LONG) != args_end) { print_help_message(image_name); exit(0); } if (command_line_argument_set.find(VERSION_FLAG_SHORT) != args_end || command_line_argument_set.find(VERSION_FLAG_LONG) != args_end) { print_version(); exit(0); } if (command_line_argument_set.find(DECODE_FLAG_SHORT) != args_end && command_line_argument_set.find(DECODE_FLAG_LONG) != args_end) { print_help_message(image_name); exit(1); } if (command_line_argument_set.find(ENCODE_FLAG_SHORT) != args_end && command_line_argument_set.find(ENCODE_FLAG_LONG) != args_end) { print_help_message(image_name); exit(0); } bool decode = false; bool encode = false; if (command_line_argument_set.find(DECODE_FLAG_SHORT) != args_end || command_line_argument_set.find(DECODE_FLAG_LONG) != args_end) { decode = true; } if (command_line_argument_set.find(ENCODE_FLAG_SHORT) != args_end || command_line_argument_set.find(ENCODE_FLAG_LONG) != args_end) { encode = true; } if ((!decode and !encode) or (decode and encode)) { print_help_message(image_name); exit(0); } if (decode) { do_decode(argc, argv); } else { do_encode(argc, argv); } } std::string get_base_name(const char *cmd_line) { std::string tmp = cmd_line; if (tmp.empty()) { throw std::runtime_error{"Empty base name string."}; } char file_separator; #ifdef _WIN32 file_separator = '\\'; 
#else file_separator = '/'; #endif int index = (int) tmp.length() - 1; for (; index >= 0; --index) { if (tmp[index] == file_separator) { std::string ret; while (++index < tmp.length()) { ret += tmp[index]; } return ret; } } return tmp; } std::string get_indent(size_t len) { std::string ret; for (size_t i = 0; i != len; ++i) { ret += ' '; } return ret; } void print_help_message(std::string& process_image_name) { std::string preamble = "usage: " + process_image_name + " "; size_t preamble_length = preamble.length(); std::string indent = get_indent(preamble_length); cout << preamble; cout << "[" << HELP_FLAG_SHORT << " | " << HELP_FLAG_LONG << "]\n"; cout << indent << "[" << VERSION_FLAG_SHORT << " | " << VERSION_FLAG_LONG << "]\n"; cout << indent << "[" << ENCODE_FLAG_SHORT << " | " << ENCODE_FLAG_LONG << "] FILE\n"; cout << indent << "[" << DECODE_FLAG_SHORT << " | " << DECODE_FLAG_LONG << "] FILE_FROM FILE_TO\n"; cout << "Where:" << endl; cout << HELP_FLAG_SHORT << ", " << HELP_FLAG_LONG << " Print this message and exit.\n"; cout << VERSION_FLAG_SHORT << ", " << VERSION_FLAG_LONG << " Print the version info and exit.\n"; cout << ENCODE_FLAG_SHORT << ", " << ENCODE_FLAG_LONG << " Encode the text from file.\n"; cout << DECODE_FLAG_SHORT << ", " << DECODE_FLAG_LONG << " Decode the text from file.\n"; } void print_version() { cout << "Huffman compressor C++ tool, version 1.6 (Nov 29, 2016)" << endl; cout << "By Rodion \"rodde\" Efremov" << endl; } void test_append_bit() { bit_string b; ASSERT(b.length() == 0); for (int i = 0; i < 100; ++i) { ASSERT(b.length() == i); b.append_bit(false); } for (int i = 0; i < 30; ++i) { ASSERT(b.length() == 100 + i); b.append_bit(true); } ASSERT(b.length() == 130); for (int i = 0; i < 100; ++i) { ASSERT(b.read_bit(i) == false); } for (int i = 100; i < 130; ++i) { ASSERT(b.read_bit(i) == true); } } void test_append_bits_from() { bit_string b; bit_string c; for (int i = 0; i < 200; ++i) { b.append_bit(false); } for (int i = 0; i < 100; 
++i) { c.append_bit(true); } ASSERT(b.length() == 200); ASSERT(c.length() == 100); b.append_bits_from(c); ASSERT(b.length() == 300); ASSERT(c.length() == 100); for (int i = 0; i < 200; ++i) { ASSERT(b.read_bit(i) == false); } for (int i = 200; i < 300; ++i) { ASSERT(b.read_bit(i) == true); } } void test_read_bit() { bit_string b; b.append_bit(true); b.append_bit(false); b.append_bit(false); b.append_bit(true); b.append_bit(false); ASSERT(b.length() == 5); ASSERT(b.read_bit(0) == true); ASSERT(b.read_bit(3) == true); ASSERT(b.read_bit(1) == false); ASSERT(b.read_bit(2) == false); ASSERT(b.read_bit(4) == false); } void test_read_bit_bad_index_throws() { bit_string b; b.append_bit(true); try { b.read_bit(1); ASSERT(false); } catch (std::runtime_error& err) { } try { b.read_bit(-1); ASSERT(false); } catch (std::runtime_error& err) { } } void test_read_bit_throws_on_empty_string() { bit_string b; try { b.read_bit(0); ASSERT(false); } catch (std::runtime_error& err) { } } void test_remove_last_bit() { bit_string b; b.append_bit(true); b.append_bit(true); b.append_bit(false); b.append_bit(true); b.append_bit(false); ASSERT(b.read_bit(0) == true); ASSERT(b.read_bit(1) == true); ASSERT(b.read_bit(2) == false); ASSERT(b.read_bit(3) == true); ASSERT(b.read_bit(4) == false); for (int i = 5; i > 0; --i) { ASSERT(b.length() == i); b.remove_last_bit(); ASSERT(b.length() == i - 1); } } void test_remove_last_bit_throws_on_empty_string() { bit_string b; try { b.remove_last_bit(); ASSERT(false); } catch (std::runtime_error& err) { } } void test_bit_string_clear() { bit_string b; for (int i = 0; i < 1000; ++i) { ASSERT(b.length() == i); b.append_bit(true); ASSERT(b.length() == i + 1); } b.clear(); ASSERT(b.length() == 0); } void test_bit_string_get_number_of_occupied_bytes() { bit_string b; ASSERT(0 == b.get_number_of_occupied_bytes()); for (int i = 0; i < 100; ++i) { ASSERT(b.get_number_of_occupied_bytes() == i); for (int j = 0; j < 8; ++j) { b.append_bit(true); ASSERT(i + 1 == 
b.get_number_of_occupied_bytes()); } ASSERT(i + 1 == b.get_number_of_occupied_bytes()); } } void test_bit_string_to_byte_array() { bit_string b; for (int i = 0; i < 40; ++i) { b.append_bit(i % 2 == 1); } for (int i = 0; i < 80; ++i) { b.append_bit(i % 2 == 0); } std::vector<int8_t> bytes{b.to_byte_array()}; for (int i = 0; i < 5; ++i) { ASSERT(0xaa == (uint8_t) bytes[i]); } for (int i = 5; i < 15; ++i) { ASSERT(0x55 == bytes[i]); } } void test_bit_string() { test_append_bit(); test_append_bits_from(); test_read_bit(); test_read_bit_bad_index_throws(); test_read_bit_throws_on_empty_string(); test_remove_last_bit(); test_remove_last_bit_throws_on_empty_string(); test_bit_string_clear(); test_bit_string_get_number_of_occupied_bytes(); test_bit_string_to_byte_array(); } std::vector<int8_t> random_text() { std::random_device rd; std::default_random_engine engine(rd()); std::uniform_int_distribution<size_t> uniform_dist_length(0, 10 * 100); std::uniform_int_distribution<int8_t> uniform_dist(std::numeric_limits<int8_t>::min(), std::numeric_limits<int8_t>::max()); size_t len = uniform_dist_length(engine); std::vector<int8_t> ret; for (size_t i = 0; i != len; ++i) { ret.push_back(uniform_dist(engine)); } return ret; } void test_brute_force() { std::vector<int8_t> text = random_text(); std::map<int8_t, uint32_t> count_map = compute_byte_counts(text); huffman_tree tree(count_map); std::map<int8_t, bit_string> encoder_map = tree.infer_encoder_map(); huffman_encoder encoder; bit_string text_bit_string = encoder.encode(encoder_map, text); huffman_serializer serializer; std::vector<int8_t> encoded_data = serializer.serialize(count_map, text_bit_string); huffman_deserializer deserializer; huffman_deserializer::result hdr = deserializer.deserialize(encoded_data); huffman_tree decoder_tree(hdr.count_map); huffman_decoder decoder; ASSERT(hdr.count_map.size() == count_map.size()); for (auto& e : hdr.count_map) { auto iter = count_map.find(e.first); ASSERT(iter != count_map.end()); 
ASSERT(e.second == iter->second); } ASSERT(text_bit_string.length() == hdr.encoded_text.length()); std::vector<int8_t> recovered_text = decoder.decode(decoder_tree, hdr.encoded_text); ASSERT(text.size() == recovered_text.size()); ASSERT(std::equal(text.begin(), text.end(), recovered_text.begin())); } void test_simple_algorithm() { std::string text = "hello world"; std::vector<int8_t> text_vector; for (char c : text) { text_vector.push_back((int8_t) c); } std::map<int8_t, uint32_t> count_map = compute_byte_counts(text_vector); huffman_tree tree(count_map); std::map<int8_t, bit_string> encoder_map = tree.infer_encoder_map(); huffman_encoder encoder; bit_string text_bit_string = encoder.encode(encoder_map, text_vector); huffman_serializer serializer; std::vector<int8_t> encoded_text = serializer.serialize(count_map, text_bit_string); huffman_deserializer deserializer; huffman_deserializer::result hdr = deserializer.deserialize(encoded_text); huffman_tree decoder_tree(hdr.count_map); huffman_decoder decoder; std::vector<int8_t> recovered_text = decoder.decode(decoder_tree, hdr.encoded_text); ASSERT(text.size() == recovered_text.size()); ASSERT(std::equal(text.begin(), text.end(), recovered_text.begin())); } void test_one_byte_text() { std::vector<int8_t> text = { 0x01, 0x01 }; std::map<int8_t, uint32_t> count_map = compute_byte_counts(text); huffman_tree tree(count_map); std::map<int8_t, bit_string> encoder_map = tree.infer_encoder_map(); huffman_encoder encoder; bit_string text_bit_string = encoder.encode(encoder_map, text); huffman_serializer serializer; std::vector<int8_t> serialized_text = serializer.serialize(count_map, text_bit_string); huffman_deserializer deserializer; huffman_deserializer::result hdr = deserializer.deserialize(serialized_text); huffman_tree decoder_tree(hdr.count_map); huffman_decoder decoder; std::vector<int8_t> recovered_text = decoder.decode(decoder_tree, hdr.encoded_text); ASSERT(recovered_text.size() == 2); ASSERT(recovered_text[0] ==
0x1); ASSERT(recovered_text[1] == 0x1); } void test_algorithms() { test_simple_algorithm(); test_one_byte_text(); for (int iter = 0; iter != 100; ++iter) { test_brute_force(); } } void test_all() { test_bit_string(); test_algorithms(); cout << "All tests passed." << endl; } bit_string.hpp #pragma once #ifndef CODERODDE_BIT_STRING #define CODERODDE_BIT_STRING #include <cstdint> #include <iostream> #include <vector> class bit_string { public: constexpr static size_t DEFAULT_NUMBER_OF_UINT64S = 2; constexpr static size_t MODULO_MASK = 0x3F; constexpr static size_t BITS_PER_UINT64 = 64; /********************************** * Constructs an empty bit string. * **********************************/ explicit bit_string(); /*********************************** * Copy constructs this bit string. * ***********************************/ explicit bit_string(const bit_string& to_copy); /******************************** * The move assignment operator. * ********************************/ bit_string& operator=(bit_string&& other); /************************ * The move constructor. * ************************/ bit_string(bit_string&& other); /************************************ * Appends 'bit' to this bit string. * ************************************/ void append_bit(bool bit); /************************************************* * Returns the number of bits in this bit string. * *************************************************/ size_t length() const; /*********************************************** * Appends 'bs' to the tail of this bit string. * ***********************************************/ void append_bits_from(const bit_string& bs); /******************************** * Reads a bit at index 'index'. * ********************************/ bool read_bit(size_t index) const; /********************************************* * Removes the last bit from this bit string.
* *********************************************/ void remove_last_bit(); /******************************************* * Removes all the bits in this bit string. * *******************************************/ void clear(); /*************************************************************************** * Returns the number of bytes it takes to accommodate all the bits in this * * bit string. * ***************************************************************************/ size_t get_number_of_occupied_bytes() const; /*********************************************************************** * Returns an array of bytes holding all the bits from this bit string. * ***********************************************************************/ std::vector<int8_t> to_byte_array() const; /*************************************************************************** * Used for printing the bits in the output stream. Note that for each long * * its bits are printed starting from the lowest bit, which implies that * * the actual longs are printed correctly, yet the bits within each long * * are printed "backwards." * ***************************************************************************/ friend std::ostream& operator<<(std::ostream& out, bit_string& b) { for (size_t i = 0; i != b.length(); ++i) { out << (b.read_bit(i) ? '1' : '0'); } return out; } private: // The actual vector of longs storing the bits. std::vector<uint64_t> storage_longs; // The maximum number of bits this bit string can hold without enlarging the // 'storage_longs'. size_t storage_capacity; // The actual number of bits this string is composed of. size_t size; // Makes sure that the index is within the range. void check_access_index(size_t index) const; // An implementation of the reading operation. Does not check the index. bool read_bit_impl(size_t index) const; // An implementation of the writing operation. Does not check the index. 
void write_bit_impl(size_t index, bool bit); // Makes sure that this bit string can fit 'requested_capacity' bits. void check_bit_array_capacity(size_t requested_capacity); }; #endif // CODERODDE_BIT_STRING bit_string.cpp #include "bit_string.hpp" #include <algorithm> #include <climits> #include <stdexcept> #include <sstream> #include <iostream> #include <utility> bit_string::bit_string() : storage_longs(DEFAULT_NUMBER_OF_UINT64S, 0), storage_capacity{BITS_PER_UINT64 * DEFAULT_NUMBER_OF_UINT64S}, size{0} {} bit_string::bit_string(const bit_string& to_copy) : storage_longs{to_copy.storage_longs}, storage_capacity{to_copy.storage_capacity}, size{to_copy.size} {} bit_string& bit_string::operator=(bit_string &&other) { storage_longs = std::move(other.storage_longs); storage_capacity = other.storage_capacity; size = other.size; return *this; } void bit_string::append_bit(bool bit) { check_bit_array_capacity(size + 1); write_bit_impl(size, bit); ++size; } size_t bit_string::length() const { return size; } void bit_string::append_bits_from(const bit_string& bs) { check_bit_array_capacity(size + bs.size); size_t other_size = bs.size; for (size_t i = 0; i != other_size; ++i) { append_bit(bs.read_bit_impl(i)); } } bool bit_string::read_bit(size_t index) const { check_access_index(index); return read_bit_impl(index); } void bit_string::remove_last_bit() { if (size == 0) { throw std::runtime_error{"Removing a bit from empty bit string."}; } --size; } void bit_string::clear() { size = 0; } size_t bit_string::get_number_of_occupied_bytes() const { return size / CHAR_BIT + ((size % CHAR_BIT == 0) ?
0 : 1); } std::vector<int8_t> bit_string::to_byte_array() const { size_t number_of_bytes = get_number_of_occupied_bytes(); std::vector<int8_t> ret(number_of_bytes); for (size_t i = 0; i != number_of_bytes; ++i) { ret[i] = (int8_t)((storage_longs[i / sizeof(uint64_t)] >> CHAR_BIT * (i % sizeof(uint64_t)))); } return std::move(ret); } void bit_string::check_access_index(size_t index) const { if (size == 0) { throw std::runtime_error{"Accesing an empty bit string."}; } if (index >= size) { std::stringstream ss; ss << "The access index is out of range. Index = " << index << ", length: " << size << "."; throw std::runtime_error{ss.str()}; } } bool bit_string::read_bit_impl(size_t index) const { size_t word_index = index / BITS_PER_UINT64; size_t bit_index = index & MODULO_MASK; uint64_t mask = 1ULL << bit_index; return (storage_longs[word_index] & mask) != 0; } void bit_string::write_bit_impl(size_t index, bool bit) { size_t word_index = index / BITS_PER_UINT64; size_t bit_index = index & MODULO_MASK; if (bit) { uint64_t mask = (1ULL << bit_index); storage_longs[word_index] |= mask; } else { uint64_t mask = ~(1ULL << bit_index); storage_longs[word_index] &= mask; } } void bit_string::check_bit_array_capacity(size_t requested_capacity) { if (requested_capacity > storage_capacity) { size_t requested_words_1 = requested_capacity / BITS_PER_UINT64 + (((requested_capacity & MODULO_MASK) == 0) ? 0 : 1); size_t requested_words_2 = (3 * storage_capacity / 2) / BITS_PER_UINT64; size_t selected_requested_words = std::max(requested_words_1, requested_words_2); storage_longs.resize(selected_requested_words); storage_capacity = storage_longs.size() * BITS_PER_UINT64; } } byte_counts.hpp #pragma once #ifndef BYTE_COUNTS_HPP #define BYTE_COUNTS_HPP #include "huffman_tree.hpp" #include <cstdint> #include <map> /*********************************************************************** * Counts relative frequencies of each character represented by a byte. 
* ***********************************************************************/ std::map<int8_t, uint32_t> compute_byte_counts(std::vector<int8_t>& text); #endif // BYTE_WEIGHTS_HPP byte_counts.cpp #include "huffman_tree.hpp" #include <cstdint> #include <map> #include <vector> using std::map; using std::vector; std::map<int8_t, uint32_t> compute_byte_counts(std::vector<int8_t>& text) { std::map<int8_t, uint32_t> map; for (auto byte : text) { map[byte] += 1; } return map; } huffman_encoder.hpp #ifndef HUFFMAN_ENCODER_HPP #define HUFFMAN_ENCODER_HPP #include "bit_string.hpp" #include <map> #include <vector> class huffman_encoder { public: /*************************************************************************** * Encodes the input "text" using the encoder map 'encoder_map' and returns * * the result bit string. * ***************************************************************************/ bit_string encode(std::map<int8_t, bit_string>& encoder_map, std::vector<int8_t>& text); }; #endif // HUFFMAN_ENCODER_HPP huffman_encoder.cpp #include "bit_string.hpp" #include "huffman_encoder.hpp" #include <map> bit_string huffman_encoder::encode(std::map<int8_t, bit_string>& encoder_map, std::vector<int8_t>& text) { bit_string output_bit_string; size_t text_length = text.size(); for (size_t index = 0; index != text_length; ++index) { int8_t current_byte = text[index]; output_bit_string.append_bits_from(encoder_map[current_byte]); } return output_bit_string; } huffman_tree.hpp #ifndef HUFFMAN_TREE_HPP #define HUFFMAN_TREE_HPP #include "bit_string.hpp" #include <cstdint> #include <map> class huffman_tree { public: /****************************************************** * Build this Huffman tree using the character counts. * ******************************************************/ explicit huffman_tree(std::map<int8_t, uint32_t>& count_map); ~huffman_tree(); /***************************************** * Infers the encoder map from this tree. 
* *****************************************/ std::map<int8_t, bit_string> infer_encoder_map(); /*************************************************************************** * Decodes the next character from the bit string starting at bit with * * index 'start_index'. This method will advance the value of 'start_index' * * by the code word length read from the tree. * ***************************************************************************/ int8_t decode_bit_string(size_t& start_index, bit_string& bits); private: // The actual Huffman tree node type: struct huffman_tree_node { int8_t character; // The character of this node. Ignore if // not a leaf node. uint32_t count; // If a leaf, the count of the character. // Otherwise, the sum of counts of its // left and right child nodes. bool is_leaf; // This node is leaf? huffman_tree_node* left; // The left child node. huffman_tree_node* right; // The right child node. // Construct a new Huffman tree node. huffman_tree_node(int8_t character, uint32_t count, bool is_leaf) : character {character}, count {count}, is_leaf {is_leaf}, left {nullptr}, right {nullptr} {} }; // The root node of this Huffman tree: huffman_tree_node* root; // Merges the two input into a new node that has 'node1' and 'node2' as its // children. The node with lower ... 
huffman_tree_node* merge(huffman_tree_node* node1, huffman_tree_node* node2); // The recursive implementation of the routine that builds the encoder map: void infer_encoder_map_impl(bit_string& bit_string_builder, huffman_tree_node* current_node, std::map<int8_t, bit_string>& map); // Checks that the input count is positive: uint32_t check_count(uint32_t count); // Used for deallocating the memory occupied by the tree nodes: void recursive_node_delete(huffman_tree_node* node); public: // Used for the priority queue: class huffman_tree_node_comparator { public: bool operator()(const huffman_tree_node *const lhs, const huffman_tree_node *const rhs) { if (lhs->count == rhs->count) { // If same weights, order by char value: return lhs->character > rhs->character; } // Otherwise, compare by weights: return lhs->count > rhs->count; } }; }; #endif // HUFFMAN_TREE_HPP huffman_tree.cpp #include "bit_string.hpp" #include "huffman_tree.hpp" #include <algorithm> #include <sstream> #include <stdexcept> #include <cstdint> #include <queue> #include <unordered_map> #include <utility> #include <vector> huffman_tree::huffman_tree(std::map<int8_t, uint32_t>& count_map) { if (count_map.empty()) { std::stringstream ss; ss << "Compressor requires a non-empty text."; throw std::runtime_error{ss.str()}; } std::priority_queue<huffman_tree_node*, std::vector<huffman_tree_node*>, huffman_tree::huffman_tree_node_comparator> queue; std::for_each(count_map.cbegin(), count_map.cend(), [&queue](std::pair<int8_t, uint32_t> p) { queue.push(new huffman_tree_node(p.first, p.second, true)); }); while (queue.size() > 1) { huffman_tree_node* node1 = queue.top(); queue.pop(); huffman_tree_node* node2 = queue.top(); queue.pop(); queue.push(merge(node1, node2)); } root = queue.top(); queue.pop(); } void huffman_tree::recursive_node_delete(huffman_tree::huffman_tree_node* node) { if (node == nullptr) { return; } recursive_node_delete(node->left); recursive_node_delete(node->right); delete node; } huffman_tree::~huffman_tree() {
recursive_node_delete(root); } std::map<int8_t, bit_string> huffman_tree::infer_encoder_map() { std::map<int8_t, bit_string> map; if (root->is_leaf) { root->is_leaf = false; root->left = new huffman_tree_node(root->character, 1, true); bit_string b; b.append_bit(false); map[root->character] = std::move(b); return map; } bit_string code_word; infer_encoder_map_impl(code_word, root, map); return map; } int8_t huffman_tree::decode_bit_string(size_t& index, bit_string& bits) { if (root->is_leaf) { index++; return root->character; } huffman_tree_node* current_node = root; while (!current_node->is_leaf) { bool bit = bits.read_bit(index++); current_node = (bit ? current_node->right : current_node->left); } return current_node->character; } void huffman_tree::infer_encoder_map_impl( bit_string& current_code_word, huffman_tree::huffman_tree_node* node, std::map<int8_t, bit_string>& map) { if (node->is_leaf) { map[node->character] = bit_string(current_code_word); return; } current_code_word.append_bit(false); infer_encoder_map_impl(current_code_word, node->left, map); current_code_word.remove_last_bit(); current_code_word.append_bit(true); infer_encoder_map_impl(current_code_word, node->right, map); current_code_word.remove_last_bit(); } huffman_tree::huffman_tree_node* huffman_tree::merge(huffman_tree_node* node1, huffman_tree_node* node2) { huffman_tree_node* new_node = new huffman_tree_node(0, node1->count + node2->count, false); if (node1->count < node2->count) { new_node->left = node1; new_node->right = node2; } else { new_node->left = node2; new_node->right = node1; } new_node->character = std::max(node1->character, node2->character); return new_node; } uint32_t huffman_tree::check_count(uint32_t count) { if (count == 0) { throw std::runtime_error{"The input count is zero."}; } return count; } huffman_decoder.hpp #ifndef HUFFMAN_DECODER_HPP #define HUFFMAN_DECODER_HPP #include "bit_string.hpp" #include "huffman_tree.hpp" #include <vector> class huffman_decoder { 
public: std::vector<int8_t> decode(huffman_tree& tree, bit_string& encoded_text); }; #endif // HUFFMAN_DECODER_HPP huffman_decoder.cpp #include "huffman_decoder.hpp" std::vector<int8_t> huffman_decoder::decode(huffman_tree& tree, bit_string& encoded_text) { size_t index = 0; size_t bit_string_length = encoded_text.length(); std::vector<int8_t> decoded_text; while (index < bit_string_length) { int8_t character = tree.decode_bit_string(index, encoded_text); decoded_text.push_back(character); } return decoded_text; } huffman_serializer.hpp #ifndef HUFFMAN_SERIALIZER_HPP #define HUFFMAN_SERIALIZER_HPP #include "bit_string.hpp" #include <cstdint> #include <cstdlib> #include <map> #include <vector> class huffman_serializer { public: static const int8_t MAGIC[4]; static const size_t BYTES_PER_WEIGHT_MAP_ENTRY; static const size_t BYTES_PER_CODE_WORD_COUNT_ENTRY; static const size_t BYTES_PER_BIT_COUNT_ENTRY; std::vector<int8_t> serialize(std::map<int8_t, uint32_t>& count_map, bit_string& encoded_text); }; #endif // HUFFMAN_SERIALIZER_HPP huffman_serializer.cpp #include "huffman_serializer.hpp" #include <algorithm> const int8_t huffman_serializer::MAGIC[4] = { (int8_t) 0xC0, (int8_t) 0xDE, (int8_t) 0x0D, (int8_t) 0xDE }; const size_t huffman_serializer::BYTES_PER_WEIGHT_MAP_ENTRY = 5; const size_t huffman_serializer::BYTES_PER_CODE_WORD_COUNT_ENTRY = 4; const size_t huffman_serializer::BYTES_PER_BIT_COUNT_ENTRY = 4; static size_t compute_byte_list_size(std::map<int8_t, uint32_t>& count_map, bit_string& encoded_text) { return sizeof(huffman_serializer::MAGIC) + huffman_serializer::BYTES_PER_CODE_WORD_COUNT_ENTRY + huffman_serializer::BYTES_PER_BIT_COUNT_ENTRY + count_map.size() * huffman_serializer::BYTES_PER_WEIGHT_MAP_ENTRY + encoded_text.get_number_of_occupied_bytes(); } std::vector<int8_t> huffman_serializer::serialize(std::map<int8_t, uint32_t>& count_map, bit_string& encoded_text) { std::vector<int8_t> byte_list; byte_list.reserve(compute_byte_list_size(count_map, 
encoded_text)); // Emit the file type signature magic: for (int8_t magic_byte : huffman_serializer::MAGIC) { byte_list.push_back(magic_byte); } union { uint32_t num; int8_t bytes[4]; } t; t.num = (uint32_t) count_map.size(); byte_list.push_back(t.bytes[0]); byte_list.push_back(t.bytes[1]); byte_list.push_back(t.bytes[2]); byte_list.push_back(t.bytes[3]); t.num = (uint32_t) encoded_text.length(); byte_list.push_back(t.bytes[0]); byte_list.push_back(t.bytes[1]); byte_list.push_back(t.bytes[2]); byte_list.push_back(t.bytes[3]); union { uint32_t count; int8_t bytes[4]; } count_bytes; // Emit the code words: for (const auto& entry : count_map) { int8_t byte = entry.first; byte_list.push_back(byte); uint32_t count = entry.second; count_bytes.count = count; byte_list.push_back(count_bytes.bytes[0]); byte_list.push_back(count_bytes.bytes[1]); byte_list.push_back(count_bytes.bytes[2]); byte_list.push_back(count_bytes.bytes[3]); } std::vector<int8_t> encoded_text_byte_vector = encoded_text.to_byte_array(); std::copy(encoded_text_byte_vector.begin(), encoded_text_byte_vector.end(), std::back_inserter(byte_list)); return byte_list; } huffman_deserializer.hpp #ifndef HUFFMAN_DESERIALIZER_HPP #define HUFFMAN_DESERIALIZER_HPP #include "bit_string.hpp" #include <map> #include <cstdint> #include <vector> class huffman_deserializer { public: struct result { bit_string encoded_text; std::map<int8_t, uint32_t> count_map; }; /******************************************************************** * Returns a struct holding the encoded text and the weight map that * * produced it. 
* ********************************************************************/ result deserialize(std::vector<int8_t>& data); private: // Make sure that the data contains the magic signature: void check_signature(std::vector<int8_t>& data); // Make sure that the data describes the number of code words in the stream // and returns that number: size_t extract_number_of_code_words(std::vector<int8_t>& data); // Make sure that the data describes the number of encoded text bits in the // stream and returns that number: size_t extract_number_of_encoded_text_bits(std::vector<int8_t>& data); // Extracts the actual encoder map from the stream: std::map<int8_t, uint32_t> extract_count_map(std::vector<int8_t>& data, size_t number_of_code_words); // Extracts the actual encoded text from the stream: bit_string extract_encoded_text( const std::vector<int8_t>& data, const std::map<int8_t, uint32_t>& weight_map, const size_t number_of_encoded_text_bits); }; #endif // HUFFMAN_DESERIALIZER_HPP huffman_deserializer.cpp #include "huffman_deserializer.hpp" #include "huffman_serializer.hpp" #include "file_format_error.h" #include <sstream> #include <string> huffman_deserializer::result huffman_deserializer::deserialize(std::vector<int8_t> &data) { check_signature(data); // The number of code words is the same as the number of mappings in the // deserialized weight map. size_t number_of_code_words = extract_number_of_code_words(data); size_t number_of_text_bits = extract_number_of_encoded_text_bits(data); std::map<int8_t, uint32_t> count_map = extract_count_map(data, number_of_code_words); bit_string encoded_text = extract_encoded_text(data, count_map, number_of_text_bits); result ret; ret.count_map = std::move(count_map); ret.encoded_text = std::move(encoded_text); return ret; } void huffman_deserializer::check_signature(std::vector<int8_t>& data) { if (data.size() < sizeof(huffman_serializer::MAGIC)) { std::stringstream ss; ss << "The data is too short to contain " "the mandatory signature. 
Data length: " << data.size() << "."; std::string err_msg = ss.str(); throw file_format_error(err_msg.c_str()); } for (size_t i = 0; i != sizeof(huffman_serializer::MAGIC); ++i) { if (data[i] != huffman_serializer::MAGIC[i]) { throw file_format_error("Bad file type signature."); } } } size_t huffman_deserializer ::extract_number_of_code_words(std::vector<int8_t>& data) { if (data.size() < 8) { std::stringstream ss; ss << "No number of code words in the data. The file is too short: "; ss << data.size() << " bytes."; std::string err_msg = ss.str(); throw file_format_error{err_msg.c_str()}; } union { size_t num; int8_t bytes[sizeof(size_t)]; } t; t.num = 0; t.bytes[0] = data[4]; t.bytes[1] = data[5]; t.bytes[2] = data[6]; t.bytes[3] = data[7]; return t.num; } size_t huffman_deserializer ::extract_number_of_encoded_text_bits(std::vector<int8_t>& data) { if (data.size() < 12) { std::stringstream ss; ss << "No number of encoded text bits. The file is too short: "; ss << data.size() << " bytes."; std::string err_msg = ss.str(); throw file_format_error{err_msg.c_str()}; } union { size_t num; int8_t bytes[8]; } t; t.num = 0; t.bytes[0] = data[8]; t.bytes[1] = data[9]; t.bytes[2] = data[10]; t.bytes[3] = data[11]; return t.num; } std::map<int8_t, uint32_t> huffman_deserializer:: extract_count_map(std::vector<int8_t>& data, size_t number_of_code_words) { std::map<int8_t, uint32_t> count_map; try { size_t data_byte_index = sizeof(huffman_serializer::MAGIC) + huffman_serializer::BYTES_PER_BIT_COUNT_ENTRY + huffman_serializer::BYTES_PER_CODE_WORD_COUNT_ENTRY; union { uint32_t count; int8_t bytes[4]; } count_bytes; for (size_t i = 0; i != number_of_code_words; ++i) { int8_t byte = data.at(data_byte_index++); count_bytes.count = 0; count_bytes.bytes[0] = data.at(data_byte_index++); count_bytes.bytes[1] = data.at(data_byte_index++); count_bytes.bytes[2] = data.at(data_byte_index++); count_bytes.bytes[3] = data.at(data_byte_index++); count_map[byte] = count_bytes.count; } } catch 
(std::out_of_range& error) { std::stringstream ss; ss << "The input data is too short in order to recover the encoding " "map. " << error.what(); std::string err_msg = ss.str(); throw file_format_error{err_msg.c_str()}; } return count_map; } bit_string huffman_deserializer ::extract_encoded_text(const std::vector<int8_t>& data, const std::map<int8_t, uint32_t>& count_map, const size_t number_of_encoded_text_bits) { size_t omitted_bytes = sizeof(huffman_serializer::MAGIC) + huffman_serializer::BYTES_PER_BIT_COUNT_ENTRY + huffman_serializer::BYTES_PER_CODE_WORD_COUNT_ENTRY; omitted_bytes += count_map.size() * huffman_serializer::BYTES_PER_WEIGHT_MAP_ENTRY; bit_string encoded_text; size_t current_byte_index = omitted_bytes; size_t current_bit_index = 0; try { for (size_t bit_index = 0; bit_index != number_of_encoded_text_bits; bit_index++) { bool bit = (data.at(current_byte_index) & (1 << current_bit_index)) != 0; encoded_text.append_bit(bit); if (++current_bit_index == CHAR_BIT) { current_bit_index = 0; current_byte_index++; } } } catch (std::out_of_range& error) { std::stringstream ss; ss << "The input data is too short in order to recover encoded text. " << error.what(); std::string err_msg = ss.str(); throw file_format_error{err_msg.c_str()}; } return encoded_text; } file_format_error.h #ifndef FILE_FORMAT_ERROR_H #define FILE_FORMAT_ERROR_H #include <stdexcept> class file_format_error : public std::runtime_error { public: explicit file_format_error(const char* err_msg) : std::runtime_error{err_msg} {} }; #endif // FILE_FORMAT_ERROR_H Critique request Please tell me anything you have in mind. In particular: Efficiency (correct use of move semantics), Coding and naming conventions, Modularity, API design. Answer: Minor stuff 1) std::string get_indent(size_t len) could use a constructor for basic_string. Like this: std::string s(len, ' '); 2) byte_counts.hpp and bit_string.hpp use #pragma once before the include guards; that's redundant, so pick one mechanism and stick with it.
3) Instead of: bool decode = false; if (command_line_argument_set.find(DECODE_FLAG_SHORT) != args_end || command_line_argument_set.find(DECODE_FLAG_LONG) != args_end) { decode = true; } You can directly write: bool decode = command_line_argument_set.find(DECODE_FLAG_SHORT) != args_end || command_line_argument_set.find(DECODE_FLAG_LONG) != args_end; 4) if ((!decode and !encode) or (decode and encode)) could be if (decode == encode) OOP Your main.cpp is a bit messy. A 700-line file with C functions only should not appear in a C++ project, even for testing purposes. Move In std::vector<int8_t> file_read(std::string& file_name) you use return std::move(ret); return ret; is a case of NRVO, so copy elision is permitted. As ret is an lvalue, the move constructor of std::vector<int8_t> will be used to obtain the return value of file_read. A simple rule for this: if you return a local variable by value, the compiler knows that this variable won't be used later, so it'll use the move constructor. If you want to return an lvalue expression or a non-local variable, using std::move allows you to tell the constructor to move instead of copy. So why not use std::move all the time (even when the compiler does the optimization)? Using std::move on a return value is considered harmful as it can prevent elision. Containers You used a lot of C++11 and C++14 features and it's really nice, I love it. You could have used a std::array instead of static const int8_t MAGIC[4]; to be even better at this. In huffman_serializer::serialize(std::map<int8_t, uint32_t>& count_map, bit_string& encoded_text) you can replace this // Emit the file type signature magic: for (int8_t magic_byte : huffman_serializer::MAGIC) { byte_list.push_back(magic_byte); } by this: byte_list.insert(byte_list.end(), &huffman_serializer::MAGIC[0], &huffman_serializer::MAGIC[4]); Using std::vector::insert is clearer and it generally is the best solution for adding an array into an std::vector.
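To make the std::array suggestion concrete, here is a minimal sketch. The class context is omitted, and emit_signature is my own free-function name, not part of the reviewed code; only the magic bytes themselves come from the original.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Stand-in for huffman_serializer::MAGIC, expressed as a std::array so it
// carries begin()/end()/size() with it:
constexpr std::array<int8_t, 4> MAGIC = {
    (int8_t) 0xC0, (int8_t) 0xDE, (int8_t) 0x0D, (int8_t) 0xDE
};

// Emit the signature with a single insert instead of a push_back loop:
std::vector<int8_t> emit_signature() {
    std::vector<int8_t> byte_list;
    byte_list.insert(byte_list.end(), MAGIC.begin(), MAGIC.end());
    return byte_list;
}
```

With std::array there is no pointer arithmetic on the raw array, and sizeof(MAGIC) bookkeeping becomes MAGIC.size().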
You wrote this: union { uint32_t num; int8_t bytes[4]; } t; t.num = (uint32_t) count_map.size(); byte_list.push_back(t.bytes[0]); byte_list.push_back(t.bytes[1]); byte_list.push_back(t.bytes[2]); byte_list.push_back(t.bytes[3]); I had never encountered this use of a union and I find it interesting. Be careful, though: writing one union member and then reading another is well-defined in C, but in C++ reading an inactive member is technically undefined behavior, even though the major compilers support it as an extension. The portable, sanctioned way to reinterpret bytes in C++ is std::memcpy. Modularity You copy-pasted your code chunk with std::stringstream here and there; it looks like you could do something to encapsulate your error messages.
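For reference, the same serialization can be done with std::memcpy, which is well-defined in C++ (the function names below are mine, not from the reviewed code; like the union version, this emits native byte order):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Append the native-endian bytes of a uint32_t to a byte list without
// reading an inactive union member:
void append_uint32(std::vector<int8_t>& byte_list, uint32_t num) {
    int8_t bytes[4];
    std::memcpy(bytes, &num, sizeof num);  // copy the object representation
    byte_list.insert(byte_list.end(), bytes, bytes + 4);
}

// Read the value back out of the byte list at a given offset:
uint32_t read_uint32(const std::vector<int8_t>& byte_list, size_t offset) {
    uint32_t num;
    std::memcpy(&num, byte_list.data() + offset, sizeof num);
    return num;
}
```

Compilers recognize this pattern and emit the same code as the union trick, so there is no performance cost to being portable.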
{ "domain": "codereview.stackexchange", "id": 23219, "tags": "c++, console, compression" }
What is the polynomial time reduction between these two Hamiltonian cycle problems?
Question: Problem 1: Given an undirected graph, return the edges of a Hamiltonian cycle, or correctly decide that the graph has no such cycle. Problem 2: Given an undirected graph, decide whether or not the graph contains at least one Hamiltonian cycle. What is the polynomial-time reduction of problem 1 to problem 2? Let TSP1 denote the following problem: given a TSP instance in which all edge costs are positive integers, compute the value of an optimal TSP tour. Let TSP2 denote: given a TSP instance in which all edge costs are positive integers, and a positive integer T, decide whether or not there is a TSP tour with total length at most T. Let HAM1 denote: given an undirected graph, either return the edges of a Hamiltonian cycle (a cycle that visits every vertex exactly once), or correctly decide that the graph has no such cycle. Let HAM2 denote: given an undirected graph, decide whether or not the graph contains at least one Hamiltonian cycle. From Roughgarden's online algorithms course. The solution: If TSP2 is polynomial-time solvable, then so is TSP1. If HAM2 is polynomial-time solvable, then so is HAM1. Answer: First, the reduction from P2 to P1 is easy, because if you can produce a Hamiltonian cycle (or correctly report that none exists), then in particular you can decide whether at least one cycle exists. The other way around is more tricky. Notice that P1 can be solved in polynomial time if we have an oracle for P2 (an oracle for P2 means that we can use a subroutine that solves P2). algorithm for P1 with input G = (V,E): E' = {} # edges of the Hamiltonian cycle run subroutine for P2 on G if there is no Ham. cycle: report that there is no Ham. cycle and stop for each e in the original edge set E: run P2 on (V, E - {e}) if (V, E - {e}) contains a Ham. cycle: E <- E - {e} # e is not needed; remove it else: E' <- E' + {e} # e is essential; add it to the cycle return E' It uses the procedure for P2 $O(|E|)$ times, thus we obtain a polynomial-time (Turing) reduction.
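The edge-deletion loop can be sketched in C++. The brute-force has_ham_cycle below merely stands in for the hypothetical polynomial-time P2 subroutine (it is exponential, so only the structure of the reduction matters here); all names are mine.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

using graph = std::vector<std::vector<bool>>;  // adjacency matrix

// Stand-in oracle for P2: brute-force Hamiltonicity test over all vertex
// permutations. Fine for tiny graphs only.
bool has_ham_cycle(const graph& g) {
    std::size_t n = g.size();
    if (n < 3) return false;
    std::vector<std::size_t> perm(n);
    std::iota(perm.begin(), perm.end(), 0);
    do {
        bool ok = true;
        for (std::size_t i = 0; i < n && ok; ++i) {
            ok = g[perm[i]][perm[(i + 1) % n]];
        }
        if (ok) {
            return true;
        }
    } while (std::next_permutation(perm.begin(), perm.end()));
    return false;
}

// P1 via O(|E|) oracle calls: tentatively delete each edge; if the graph
// stays Hamiltonian the edge was not needed, otherwise it is essential and
// goes back. The surviving edges are exactly one Hamiltonian cycle.
std::vector<std::pair<std::size_t, std::size_t>> find_ham_cycle(graph g) {
    std::vector<std::pair<std::size_t, std::size_t>> cycle_edges;
    if (!has_ham_cycle(g)) {
        return cycle_edges;  // empty result: no Hamiltonian cycle exists
    }
    std::size_t n = g.size();
    for (std::size_t u = 0; u < n; ++u) {
        for (std::size_t v = u + 1; v < n; ++v) {
            if (!g[u][v]) continue;
            g[u][v] = false;        // tentatively delete edge {u, v}
            g[v][u] = false;
            if (!has_ham_cycle(g)) {
                g[u][v] = true;     // essential edge: restore it
                g[v][u] = true;
            }
        }
    }
    for (std::size_t u = 0; u < n; ++u) {
        for (std::size_t v = u + 1; v < n; ++v) {
            if (g[u][v]) {
                cycle_edges.push_back({u, v});
            }
        }
    }
    return cycle_edges;
}
```

Correctness of the pruning: after the loop every remaining edge is essential, so every remaining edge lies on any Hamiltonian cycle of the remaining graph, which therefore is exactly one cycle of n edges.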
{ "domain": "cs.stackexchange", "id": 17000, "tags": "algorithms, graphs, reductions, hamiltonian-path, polynomial-time-reductions" }
Questions related to distinguishing diamagnetism and paramagnetism, and when magnetic properties arise
Question: If a transition metal ion has both paired and unpaired electrons, would it be considered to show paramagnetism or diamagnetism? I know that Fe2+, which has an abbreviated electronic configuration of [Ar]3d6, shows paramagnetism even though one of the d orbitals has paired electrons. How about a transition metal ion with 7 or 8 electrons in the d orbitals? Do we determine paramagnetism / diamagnetism based on whether there is a greater number of paired or unpaired electrons, or if a transition metal ion has unpaired electrons, must it be paramagnetic? I was thinking that there are many other orbitals in Fe2+, like the 3s and 3p orbitals, all of which contain paired electrons, so won't that outweigh the unpaired electrons in the 3d orbitals? Also, the teacher taught the ideas of paramagnetism and diamagnetism in the context of transition metals and said that transition metals show magnetic properties. Why don't other species show magnetic properties, since they can also have paired / unpaired electrons?
This phenomenon is explained by crystal field theory. Why don't other species show magnetic properties cause they can also have paired / unpaired electrons? All species do show magnetic properties. The most famous example is diatomic oxygen, which is paramagnetic because of its unpaired electrons. Water, which has all its electrons paired, can be shown to be diamagnetic (again, just search Google - there are even YouTube videos on it).
{ "domain": "chemistry.stackexchange", "id": 4838, "tags": "transition-metals, magnetism" }
Path difference between two rays
Question: I came across the concept of optical path length a few days ago ; a case where parallel beams of light passing through a rectangular prism generate a path difference with respect to the rays not falling on the prism but going parallel sideways. I know that the time lag between the two sets of rays cause the path difference ... 1.But how do we apply the same to a triangular prism (how is the formula for path difference as stated in the above figure derived).. 2.Also what is actually optical path length Guys please respond a bit early ..I have my exams coming in a few days Answer: You need to count the waves. Let the wavelengths be $\lambda_{\rm air}$ and $\lambda_{\rm glass}$ To have the same number of waves in $\rm AB$ in air as $\rm CD$ in glass the following equation must be true $\dfrac {\rm AB}{\lambda_{\rm air}} = \dfrac {\rm CD}{\lambda_{\rm glass}} $ However $\lambda_{\rm glass} = \dfrac {\lambda_{\rm air}}{\mu_{\rm glass}} $ where $\mu_{\rm glass}$ is the refractive index of glass. Putting this into the equation produces $\rm AB = \mu_{\rm glass} \,CD$ $\mu_{\rm glass} \,\rm CD$ is called the optical path length and contains the same number of waves as a length $\rm AB$ of air. So your $\Delta x$ is the difference between: the length of air which contains the same number of waves as a length of glass $\rm QS$ and the length of air $\rm PR$. If there two happened to be the same then if the waves left $P$ and $Q$ in phase then they must have arrived at $R$ and $S$ in phase.
{ "domain": "physics.stackexchange", "id": 39821, "tags": "optics, visible-light, waves, geometric-optics, superposition" }