How do I create a "dummy" module object?
38,687,075
<p>Suppose I have a function that takes a module object as an argument and does something to it -- maybe monkeypatches it. For testing purposes, how do I create a dummy module object to pass to it?</p>
2
2016-07-31T18:20:47Z
38,687,085
<p>Like this (<a href="https://docs.python.org/3/library/types.html#types.ModuleType" rel="nofollow">docs</a>)</p> <pre><code>&gt;&gt;&gt; import types &gt;&gt;&gt; types.ModuleType('my_module') &lt;module 'my_module'&gt; </code></pre>
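<p>Since the question mentions monkeypatching, it may help to note that the resulting object accepts arbitrary attributes just like a real module (a quick sketch):</p>

```python
import types

dummy = types.ModuleType('my_module')
dummy.greet = lambda: 'hello'   # attach whatever the code under test expects

print(dummy.greet())                        # hello
print(isinstance(dummy, types.ModuleType))  # True
```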
1
2016-07-31T18:21:59Z
[ "python", "python-3.x" ]
Program to read multiple URLs and generate a word (a word should only contain alphanumerics) frequency table
38,687,095
<p>I am writing a Python program to read multiple URLs and generate a word (a word only contains letters A-Za-z0-9) frequency table. Output can be stored in files with names url1.txt, url2.txt</p> <p>This is what i have so far:</p> <pre><code>import urllib2 import obo url = 'sample url' response = urllib2.urlopen(url) html = response.read() text = obo.stripTags(html).lower() wordlist = obo.stripNonAlphaNum(text) for s in sorteddict: print str(s) </code></pre>
-1
2016-07-31T18:22:37Z
38,687,169
<p>You can use boilerpipe to easily extract the text: <a href="https://github.com/misja/python-boilerpipe" rel="nofollow">https://github.com/misja/python-boilerpipe</a>.</p> <p>The code might look like this: </p> <pre><code>from boilerpipe.extract import Extractor from collections import Counter urls = ['url1', 'url2', ... ] # A list of the urls you want to fetch # Ask boilerpipe to fetch the data extractors = [Extractor(extractor='ArticleExtractor', url=url) for url in urls] # Ask boilerpipe to extract the text raw_texts = [extractor.getText() for extractor in extractors] # count the occurrences of words in each text word_counts = [Counter(text.split(" ")) for text in raw_texts] </code></pre>
0
2016-07-31T18:31:54Z
[ "python" ]
how does Python handle "from __future__ import division"?
38,687,131
<p>I've looked at the <a href="https://hg.python.org/cpython/file/2.7/Lib/__future__.py" rel="nofollow">source code for __future__.py</a> and it makes no sense to me -- how does this actually work, to change the behavior of division?</p>
1
2016-07-31T18:26:13Z
38,687,330
<p>That module only serves a documentary / introspection purpose; none of the code in it actually <em>does</em> anything.</p> <p>Rather, when Python is compiling a module, it calls <a href="https://hg.python.org/cpython/file/2.7/Python/compile.c#l272" rel="nofollow">PyFuture_FromAST</a> on the module, which checks for <code>from __future__ import</code> statements, and assuming they're valid, <a href="https://hg.python.org/cpython/file/2.7/Python/future.c#l23" rel="nofollow">sets the appropriate flags</a> on a <code>PyFutureFeatures</code> object. The compiler then goes and <a href="https://hg.python.org/cpython/file/2.7/Python/compile.c#l279" rel="nofollow">sets those flags in the compiler context</a> before going ahead and actually compiling the module.</p> <p>For comparison, you can see that in Python 3, <code>__future__.py</code> is still the same and contains all of the same information, but in <code>future.c</code>, <a href="https://hg.python.org/cpython/file/3.0/Python/future.c#l23" rel="nofollow">none of the features actually set any flags</a> because all of those features are enabled by default in Python 3.</p>
3
2016-07-31T18:48:12Z
[ "python" ]
Why is Python "shy" about using memory in Docker?
38,687,134
<p>I'm running a memory intensive python script (pandas, numpy, machine learning) in docker and performance is terrible.</p> <p>In my host machine the script uses more than 10GB of RAM. The dockerized python script uses only 3Gb (com.docker.hyperkit process). I have already changer my docker memory preferences to 10gb (in Mac OS Docker GUI) and run the container with explicit memory limit:</p> <pre><code>docker run -m 10g ... </code></pre> <p>Why the container don't use 10gb as the host application does?</p>
-1
2016-07-31T18:26:59Z
38,687,403
<p>Computer programs use other resources besides memory: CPU, I/O devices and information. I would guess that the behavior you're seeing is the result of some other resource being exhausted. For instance, your I/O device could be causing a bottleneck before memory fills up. This is just a guess, because I have no other information.</p>
1
2016-07-31T18:57:50Z
[ "python", "docker" ]
How to process python dictionary items in the order of value item?
38,687,261
<p>I get the following JSON from a web service. How to get an ordered collection from the following JSON in Python. I want to process key value pairs in the ascending order of "order" number.</p> <pre><code>{ "key1": { "order": 10, "name": "somenameZ" }, "key2": { "order": 3, "name": "somenameY" }, "key3": { "order": 8, "name": "somenameX" } } </code></pre>
0
2016-07-31T18:39:27Z
38,687,286
<p>Sort the <code>dict.items()</code> result with a custom key:</p> <pre><code>ordered = sorted(outerdict.items(), key=lambda kv: kv[1]['order']) for key, item in ordered: # ... </code></pre> <p>If you only need the nested dictionaries, and not the outer keys, you could do the same for <code>dict.values()</code>:</p> <pre><code>ordered = sorted(outerdict.values(), key=lambda v: v['order']) for item in ordered: # ... </code></pre>
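<p>Applied to the JSON from the question, a complete runnable version might look like this (using <code>json.loads</code> to parse the web service response first):</p>

```python
import json

payload = '''{
    "key1": {"order": 10, "name": "somenameZ"},
    "key2": {"order": 3,  "name": "somenameY"},
    "key3": {"order": 8,  "name": "somenameX"}
}'''

outerdict = json.loads(payload)

# Sort the (key, value) pairs by each value's "order" number.
ordered = sorted(outerdict.items(), key=lambda kv: kv[1]['order'])
print([key for key, item in ordered])  # ['key2', 'key3', 'key1']
```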
3
2016-07-31T18:42:06Z
[ "python", "json", "sorting", "dictionary" ]
Counting how many times a row occurs in a matrix (numpy)
38,687,292
<p>Is there a better way to count how many times a given row appears in a numpy 2D array than</p> <pre><code>def get_count(array_2d, row): count = 0 # iterate over rows, compare for r in array_2d[:,]: if np.equal(r, row).all(): count += 1 return count # let's make sure it works array_2d = np.array([[1,2], [3,4]]) row = np.array([1,2]) count = get_count(array_2d, row) assert(count == 1) </code></pre>
4
2016-07-31T18:42:54Z
38,687,313
<p>One simple way would be with <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p> <pre><code>(array_2d == row).all(-1).sum() </code></pre> <hr> <p>Considering memory efficiency, here's one approach treating each row from <code>array_2d</code> as an indexing tuple on an <code>n-dimensional</code> grid and assuming positive numbers in the inputs -</p> <pre><code>dims = np.maximum(array_2d.max(0),row) + 1 array_1d = np.ravel_multi_index(array_2d.T,dims) row_scalar = np.ravel_multi_index(row,dims) count = (array_1d==row_scalar).sum() </code></pre> <p><a href="http://stackoverflow.com/a/38674038/3293881"><strong>Here</strong></a>'s a post discussing the various aspects related to it.</p> <p><strong>Note:</strong> Using <code>np.count_nonzero</code> could be much faster for counting booleans than summation with <code>.sum()</code>. So, do consider using it for both of the above-mentioned approaches. </p> <p>Here's a quick runtime test -</p> <pre><code>In [74]: arr = np.random.rand(10000)&gt;0.5 In [75]: %timeit arr.sum() 10000 loops, best of 3: 29.6 µs per loop In [76]: %timeit np.count_nonzero(arr) 1000000 loops, best of 3: 1.21 µs per loop </code></pre>
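<p>As a quick sanity check, the broadcasting one-liner agrees with the question's loop on a small example:</p>

```python
import numpy as np

array_2d = np.array([[1, 2], [3, 4], [1, 2]])
row = np.array([1, 2])

# Compare every row element-wise, keep rows where all elements match, count them.
count = (array_2d == row).all(-1).sum()
print(count)  # 2
```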
3
2016-07-31T18:46:07Z
[ "python", "numpy" ]
Text file does not get opened in output window. Python (PyCharm IDE)
38,687,319
<p>So I just began learning Python and after making a file named myfile.txt I tried running this bit of code</p> <pre><code>def Main(): f=open("myfile.txt","r") for line in f: print(line) f.close() if __name__ =="__main__": Main() </code></pre> <p>The file doesn't open , I'm just left with a blank output window.Any ideas what I did wrong? I'm using the Pycharm IDE on Windows.Please help.</p>
1
2016-07-31T18:47:05Z
38,687,337
<p>Dedent the <code>if</code>:</p> <pre><code>def Main(): f=open("myfile.txt","r") for line in f: print(line) f.close() if __name__ =="__main__": Main() </code></pre> <p>The <code>if</code> was part of your <code>Main</code> function, so you had a recursive function when the condition was <code>True</code>.</p> <p>I believe you want the <code>if</code> to be tested after you define the function <code>Main</code>.</p>
1
2016-07-31T18:49:18Z
[ "python", "file-io", "pycharm" ]
Text file does not get opened in output window. Python (PyCharm IDE)
38,687,319
<p>So I just began learning Python and after making a file named myfile.txt I tried running this bit of code</p> <pre><code>def Main(): f=open("myfile.txt","r") for line in f: print(line) f.close() if __name__ =="__main__": Main() </code></pre> <p>The file doesn't open , I'm just left with a blank output window.Any ideas what I did wrong? I'm using the Pycharm IDE on Windows.Please help.</p>
1
2016-07-31T18:47:05Z
38,687,385
<p>First you need to make sure your indentation is correct. You should not close the file inside the loop, so that line is indented too far. And to run the program, your <code>if __name__ =="__main__":</code> must be defined outside the function. Try this: </p> <pre><code>def main(): f = open("myfile.txt","r") for line in f: print(line) f.close() if __name__ =="__main__": main() </code></pre> <p>Usually you use the keyword <code>with</code> when handling files. It manages the opening and closing for you. Everything indented inside the <code>with</code> statement is done with the file open. Try this, it does exactly the same:</p> <pre><code>def main(): with open("myfile.txt", "r") as f: for line in f: print(line) if __name__ == '__main__': main() </code></pre>
3
2016-07-31T18:55:49Z
[ "python", "file-io", "pycharm" ]
How to run custom seq2seq learning (using pre-calculated word embeddings) encoder-decoder in Tensorflow?
38,687,369
<p>I need to run a encoder-decoder model in Tensorflow. I see that using the available APIs <code>basic_rnn_seq2seq(encoder_input_data, decoder_input_data, lstm_cell)</code> etc, a encoder-decoder system can be created.</p> <ol> <li>How can we enter the embeddings such as word2vec in such model? I am aware that we can do embedding lookup but as per the API <code>encoder_input_data</code> is a list of 2D Tensor of size batch_size x input_size. How can each word be represented using its respective word embedding in this setup? Even <code>embedding_rnn_seq2seq</code> internally extracts the embeddings. How to give pre-calculated word embeddings as input? </li> <li>How can we get the cost/perplexity through the API?</li> <li>In case of test instances, we may not know the corresponding decoder inputs. How to handle such case?</li> </ol>
0
2016-07-31T18:53:56Z
38,700,227
<p>First question: Probably not the best way, but what I did was, after building the model, before training starts:</p> <pre><code>for v in tf.trainable_variables(): if v.name == 'embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0': assign_op = v.assign(my_word2vec_matrix) session.run(assign_op) # or `assign_op.op.run()` </code></pre> <p><code>my_word2vec_matrix</code> is a matrix of shape vocabulary size x embedding size, filled with my precomputed embedding vectors. Use this (or something similar) if you believe your embeddings are really good. Otherwise the seq2seq model, over time, will come up with its own trained embedding.</p> <p>Second question: In seq2seq.py there is a call to model_with_buckets(), which you can find in python/ops/seq2seq.py. From there the loss is returned.</p> <p>Third question: In the test case, each decoder input is the decoder output from the timestep before (i.e. the first decoder input is a special GO symbol, the second decoder input is the decoder output of the first timestep, the third decoder input is the decoder output of the second timestep, and so on).</p>
2
2016-08-01T13:31:13Z
[ "python", "tensorflow", "deep-learning", "lstm", "language-model" ]
Tkinter Loop Never Exits Cleanly
38,687,396
<p>Python 2.7</p> <p>I have written a <code>run</code> method for my Tkinter GUI rather than using the standard <code>mainloop</code>, and it always exits on an error when I close the window, even after implementing a <code>WM_DELETE_WINDOW</code> protocol as advised elsewhere on SO. I tried invoking <code>exit</code> in the protocol callback and <code>return</code>ing from the loop, but Python always goes through the loop one last time. Why is this?</p> <pre><code>class FrameApp(object): def __init__(self): ... self.rootWin.protocol("WM_DELETE_WINDOW", self.callback_destroy) self.winRunning = False def callback_destroy(self): self.winRunning = False self.rootWin.destroy() # go away, window exit() # GET OUT </code></pre> <p>Here is the run loop:</p> <pre><code> def run(self): last = -infty self.winRunning = True ... while self.winRunning: # 4.a. Calc geometry self.calcFunc( self.get_sliders_as_list() ) # 4.b. Send new coords to segments self.simFrame.transform_contents() # 4.d. Wait remainder of 40ms elapsed = time.time() * 1000 - last if elapsed &lt; 40: time.sleep( (40 - elapsed) / 1000.0 ) # 4.e. Mark beginning of next loop last = time.time() * 1000 # 4.f. Update window if not self.winRunning: # This does not solve the problem return # still tries to call 'update', # and never exits cleanly self.canvas.update() # don't know how to prevent these from being called # again after the window is destroyed self.rootWin.update_idletasks() </code></pre> <p>Result:</p> <blockquote> <p>File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 972, in update_idletasks self.tk.call('update', 'idletasks') _tkinter.TclError: can't invoke "update" command: application has been destroyed</p> </blockquote>
0
2016-07-31T18:56:52Z
38,729,490
<p>Without the mainloop, tkinter cannot get the <code>WM_DELETE_WINDOW</code> message to call your exit function. (Or rather, it can only catch anything during the ~millisecond of the <code>update_idletasks</code> call, as it won't queue since tkinter doesn't have an event loop (and thus queue) going since you never start one.) It can't catch signals if it can't communicate with the Window Manager (system), and it can't communicate if it isn't looping.</p> <p>To solve it, just use the event/main loop. Make your <code>run</code> function save any state it needs and call itself <code>after</code> whatever interval you wish.</p> <hr> <p>On another note, don't use <code>time.sleep</code> with tkinter- it prevents it from doing anything (and also the 40ms remaining of sleep is probably longer than the rest of the loop, so you'd have 41 ms of waiting and 0.5 ms of clickability). Instead, just carefully configure your <code>root.after</code> statements (you can calculate things in them, too)</p>
1
2016-08-02T20:03:29Z
[ "python", "python-2.7", "tkinter", "tkinter-canvas", "event-loop" ]
I try to create an SQL table that contains 10000 rows of random elements. It gives me an error
38,687,476
<p>I'm trying to create an SQL table, it runs by this (this is in an SQL file, which I call at the end of my code '1-fake-mentor-candidates.sql')</p> <pre><code>CREATE TABLE "mentor_candidates" ( first_name varchar(255) NOT NULL, last_name varchar(255) NOT NULL, phone_number varchar(100) NOT NULL, email varchar(255) NOT NULL, city varchar(255) NOT NULL, level integer NOT NULL, birth_year integer NOT NULL ); </code></pre> <p>That's my class, that should create my table, but it gives me back an error message: Traceback (most recent call last):</p> <p>File "fake-mentor-candidates.py", line 39, in </p> <p>FakeMentors.write_sql_file('1-fake-mentor-candidates.sql')</p> <p>File "fake-mentor-candidates.py", line 35, in write_sql_file + cls.level + ");"</p> <p>AttributeError: type object 'FakeMentors' has no attribute 'first_name'</p> <pre><code>class FakeMentors: first_name_to_pick = ['Attila', 'Prezmek', 'John', 'Tim', 'Matthew', 'Andy', 'Giancarlo'] last_name_to_pick = ['Monoczki', 'Szodoray', 'Ciacka', 'Carrey', 'Obama', 'Lebron', 'Hamilton', 'Fisichella'] city_to_pick = ['Budapest', 'Miskolc', 'Krakow', 'Barcelona', 'New York'] phonenumber_to_pick = ['30', '20', '70'] def __init__(self): self.first_name = random.choice(self.first_name_to_pick) self.last_name = random.choice(self.last_name_to_pick) self.birth_year = random.randint(1960, 1995) self.email = self.first_name + self.last_name + str(random.randint(1, 100)) + '@codecool.com' self.city = random.choice(self.city_to_pick) self.phone_number = '+36' + self.random.choice(phonenumber_to_pick) + str(random.randint(100000, 999999)) self.level = random.randint(1, 10) @classmethod def write_sql_file(cls, sql_file): with open(sql_file, 'w') as my_file: my_file.write('TRUNCATE TABLE mentor_candidates;\nBEGIN TRANSACTION;\n') for row in range(0, 10000): sql_line = "INSERT INTO \"mentor_candidates\" "\ + "(first_name,last_name,birth_year,email,city,phone_number,level) "\ + "VALUES"\ + "(\'" + cls.first_name + "\',"\ + "\'" + cls.last_name + "\',"\ + cls.birth_year + ","\ + "\'" + cls.email + "\',"\ + "\'" + cls.city + "\',"\ + "\'" + cls.phone_number + "\',"\ + cls.level + ");" my_file.write(line + '\n') my_file.write("END TRANSACTION;\n") FakeMentors.write_sql_file('1-fake-mentor-candidates.sql') </code></pre> <p>What am I doing wrong here? :/</p>
0
2016-07-31T19:05:50Z
38,687,590
<p>You tried to access instance attributes in a classmethod. You need to create instances:</p> <pre><code>import random class FakeMentors: first_name_to_pick = ['Attila', 'Prezmek', 'John', 'Tim', 'Matthew', 'Andy', 'Giancarlo'] last_name_to_pick = ['Monoczki', 'Szodoray', 'Ciacka', 'Carrey', 'Obama', 'Lebron', 'Hamilton', 'Fisichella'] city_to_pick = ['Budapest', 'Miskolc', 'Krakow', 'Barcelona', 'New York'] phonenumber_to_pick = ['30', '20', '70'] def __init__(self): self.first_name = random.choice(self.first_name_to_pick) self.last_name = random.choice(self.last_name_to_pick) self.birth_year = random.randint(1960, 1995) self.email = self.first_name + self.last_name + str(random.randint(1, 100)) + '@codecool.com' self.city = random.choice(self.city_to_pick) self.phone_number = '+36' + random.choice(self.phonenumber_to_pick) + str(random.randint(100000, 999999)) self.level = random.randint(1, 10) @classmethod def write_sql_file(cls, sql_file): with open(sql_file, 'w') as my_file: my_file.write('TRUNCATE TABLE mentor_candidates;\nBEGIN TRANSACTION;\n') for row in range(0, 10000): entry = cls() my_file.write(('INSERT INTO "mentor_candidates" ' + "(first_name,last_name,birth_year,email,city,phone_number,level) " + "VALUES" + "('{0.first_name}', '{0.last_name}', {0.birth_year}," + "'{0.email}','{0.city}','{0.phone_number}',{0.level});\n").format(entry)) my_file.write("END TRANSACTION;\n") FakeMentors.write_sql_file('1-fake-mentor-candidates.sql') </code></pre>
1
2016-07-31T19:18:59Z
[ "python", "postgresql" ]
Add current element in array + next element in array while iterating through array in Python
38,687,480
<p>What's the best way to add the first element in an array to the next element in the same array, then add the result to the next element of the array, and so on? For example, I have an array:</p> <pre><code>s=[50, 1.2658, 1.2345, 1.2405, 1.2282, 1.2158, 100] </code></pre> <p>I would like the end array to look like the following:</p> <pre><code>new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 100] </code></pre> <p>Thus leaving the minimum and maximum elements of the array unchanged.</p> <p>I started going this route:</p> <pre><code>arr_length=len(s) new_s=[50] for i, item in enumerate(s): if i == 0: new_s.append(new_s[i]+s[i+1]) elif 0&lt;i&lt;=(arr_length-2): new_s.append(new_s[i]+s[i+1]) </code></pre> <p>Currently I get the following list:</p> <pre><code>new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 156.1848] </code></pre> <p>What am I doing wrong that isn't leaving the last item unchanged? </p>
1
2016-07-31T19:06:09Z
38,687,502
<p>The best way is to use <code>numpy.cumsum()</code> on all of your items except the last one, then append the last one to the result of <code>cumsum()</code>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; s=[50, 1.2658, 1.2345, 1.2405, 1.2282, 1.2158, 100] &gt;&gt;&gt; &gt;&gt;&gt; np.append(np.cumsum(s[:-1]), s[-1]) array([ 50. , 51.2658, 52.5003, 53.7408, 54.969 , 56.1848, 100. ]) </code></pre> <p>Or with python (3.X) use <code>itertools.accumulate()</code>:</p> <pre><code>&gt;&gt;&gt; import itertools as it &gt;&gt;&gt; &gt;&gt;&gt; list(it.accumulate(s[:-1])) + s[-1:] [50, 51.2658, 52.500299999999996, 53.74079999999999, 54.968999999999994, 56.184799999999996, 100] </code></pre>
1
2016-07-31T19:08:54Z
[ "python", "arrays" ]
Add current element in array + next element in array while iterating through array in Python
38,687,480
<p>What's the best way to add the first element in an array to the next element in the same array, then add the result to the next element of the array, and so on? For example, I have an array:</p> <pre><code>s=[50, 1.2658, 1.2345, 1.2405, 1.2282, 1.2158, 100] </code></pre> <p>I would like the end array to look like the following:</p> <pre><code>new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 100] </code></pre> <p>Thus leaving the minimum and maximum elements of the array unchanged.</p> <p>I started going this route:</p> <pre><code>arr_length=len(s) new_s=[50] for i, item in enumerate(s): if i == 0: new_s.append(new_s[i]+s[i+1]) elif 0&lt;i&lt;=(arr_length-2): new_s.append(new_s[i]+s[i+1]) </code></pre> <p>Currently I get the following list:</p> <pre><code>new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 156.1848] </code></pre> <p>What am I doing wrong that isn't leaving the last item unchanged? </p>
1
2016-07-31T19:06:09Z
38,687,539
<p>You can use <code>numpy.cumsum()</code>:</p> <pre><code>import numpy as np np.append(np.cumsum(s[:-1]), s[-1]) # array([50., 51.2658, 52.5003, 53.7408, 54.969 , 56.1848, 100.]) </code></pre>
1
2016-07-31T19:12:14Z
[ "python", "arrays" ]
How to use Faker from Factory_boy
38,687,492
<p><code>Factory_boy</code> uses <code>fake-factory (Faker)</code> to generate random values, I would like to generate some random values in my Django tests using Faker directly.</p> <p>Factory_boy docs suggests using <code>factory.Faker</code> and its provider as :</p> <pre><code>class RandomUserFactory(factory.Factory): class Meta: model = models.User first_name = factory.Faker('first_name') </code></pre> <p>But this isn't generating any name:</p> <pre><code>&gt;&gt;&gt; import factory &gt;&gt;&gt; factory.Faker('name') &lt;factory.faker.Faker object at 0x7f1807bf5278&gt; &gt;&gt;&gt; type(factory.Faker('name')) &lt;class 'factory.faker.Faker'&gt; </code></pre> <p>From <code>factory_boy</code> <code>faker.py</code> class <code>factory.Faker('ean', length=10)</code> calls <code>faker.Faker.ean(length=10)</code> but <code>Faker</code> docs says it should show a name:</p> <pre><code>from faker import Faker fake = Faker() fake.name() # 'Lucy Cechtelar' </code></pre> <p>Is there any other way to use <code>Faker</code> instead of setting an instance directly from <code>Faker</code>?</p> <pre><code>from faker import Factory fake = Factory.create() fake.name() </code></pre>
0
2016-07-31T19:07:09Z
38,720,172
<p>You can use faker with factory_boy like this:</p> <pre><code>class RandomUserFactory(factory.Factory): class Meta: model = models.User first_name = factory.Faker('first_name') user = RandomUserFactory() print user.first_name # 'Emily' </code></pre> <p>So you need to instantiate a user with factory_boy and it will call Faker for you.</p> <p>I don't know if you are trying to use this with Django or not, but if you want the factory to save the created user to the database, then you need to extend <code>factory.django.DjangoModelFactory</code> instead of <code>factory.Factory</code>.</p> <p>Hope this helps.</p> <p>Laszlo</p>
0
2016-08-02T12:15:47Z
[ "python", "factory", "django-testing", "faker", "factory-boy" ]
ImportError: No module named rl.algorithms.deepq
38,687,495
<p>I cloned the repo from here: <a href="https://github.com/wingedsheep/rl" rel="nofollow">https://github.com/wingedsheep/rl</a></p> <p>I now tried to run the code, </p> <pre><code>cd rl python examples/runner_lunarlander.py </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File "examples/runner_lunarlander.py", line 10, in &lt;module&gt; from rl.algorithms.deepq import DeepQ ImportError: No module named rl.algorithms.deepq </code></pre> <p>The error comes from line 10:</p> <pre><code>from rl.algorithms.deepq import DeepQ </code></pre> <p>DeepQ is a class in the file deepq.py.</p> <p>I saw init file present in all the folders. </p> <p>I am using anaconda with python 2.7.</p> <p>I can't get how to resolve this. Please help. Thanks.</p>
0
2016-07-31T19:07:37Z
38,687,832
<p>You are getting this error because the module (code) you're trying to run is not on your Python's PYTHONPATH. The PYTHONPATH environment variable tells Python where to look for imports. There are lots of ways of setting it. </p> <p>You could add this to your ~/.bashrc file for a more permanent, user-wide setup:</p> <pre><code>export PYTHONPATH="${PYTHONPATH}:/home/sie/src/" </code></pre> <p>or, for this particular (bash, I assume) session, just run:</p> <pre><code>export PYTHONPATH="${PYTHONPATH}:/home/sie/src/" python examples/runner_lunarlander.py </code></pre> <p>Don't use /home/sie/src/rl, where the root of the clone lives; the parent folder should do the job for you.</p>
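<p>If you'd rather not touch environment variables, the same effect can be had from inside the script by prepending the clone's parent directory to <code>sys.path</code> before the import (a sketch; the path here is just an example):</p>

```python
import os
import sys

# Hypothetical location: the *parent* of the cloned `rl` directory.
repo_parent = os.path.expanduser('~/src')
sys.path.insert(0, repo_parent)

# `import rl.algorithms.deepq` would now also search ~/src/rl/algorithms/deepq.py
print(sys.path[0] == repo_parent)  # True
```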
0
2016-07-31T19:49:27Z
[ "python", "python-2.7", "import", "anaconda" ]
Django1.9: No model matches the given query
38,687,572
<p>I am following a tutorial to create a blog. The code is correct according to the tutorial. The only difference is that I use Django 1.9 instead of 1.8</p> <p>Calling the <code>Post</code> model in the view without </p> <pre><code>publish__year=year, publish__month=month, publish__day=day) </code></pre> <p>doesn't return a 404 Error - <code>No Post matches the given query.</code></p> <p>This is <code>view.py</code></p> <pre><code>def post_detail(request, year, month, day, post): post = get_object_or_404(Post, slug=post, status='published',) #publish__year=year, #publish__month=month, #publish__day=day) return render(request, 'blog/post/detail.html', {'post': post}) </code></pre> <p>The model part looks like that <code>models.py</code></p> <pre><code>class Post(models.Model): ... publish = models.DateTimeField(default = timezone.now) ... </code></pre> <p>Any ideas why the query is not found?</p> <p>EDIT: </p> <p>The URL looks like <code>localhost/blog/2016/07/30/second-post-entry/</code></p> <pre><code>def get_absolute_url(self): return reverse('blog:post_detail', args=[self.publish.year, self.publish.strftime('%m'), self.publish.strftime('%d'), self.slug]) </code></pre> <p>It seems that those are the problem:</p> <pre><code>self.publish.strftime('%m'), # eg. == 07, but publish__month == 7 self.publish.strftime('%d') # eg. == 30, publish__day == 30 </code></pre>
0
2016-07-31T19:17:35Z
38,688,330
<p>As I understand it, you can either change <code>self.publish.strftime('%m')</code> and <code>self.publish.strftime('%d')</code> to <code>self.publish.month</code> and <code>self.publish.day</code>, or convert the passed data into ints: <code>publish__year=int(year), publish__month=int(month), publish__day=int(day)</code>. That should do the trick.</p>
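<p>The mismatch is visible without Django at all: the URL is built from zero-padded strings, while the field lookups expect plain integers (a sketch):</p>

```python
from datetime import datetime

publish = datetime(2016, 7, 30)

url_month = publish.strftime('%m')  # zero-padded string used to build the URL
real_month = publish.month          # plain integer the publish__month lookup expects

print(url_month)                     # 07
print(real_month)                    # 7
print(url_month == str(real_month))  # False
```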
0
2016-07-31T20:54:32Z
[ "python", "django", "view", "model" ]
UnicodeDecodeError for Reading files in Python
38,687,591
<pre><code>pythonNotes = open('E:\\Python Notes.docx','r') read_it_now = pythonNotes.read() print(read_it_now.encode('utf-16')) </code></pre> <p>When I try this code, I get:</p> <p><code>UnicodeDecodeError: 'charmap' can't decode byte 0x8f in position 591 character maps to &lt;undefined&gt;</code></p> <p>I am running this in visual studio with python tools - starting without debugging.</p> <p>I have tried putting <code>enc='utf-8'</code> at the top, throwing it in as a parameter, I've looked at other questions and just couldn't find a solution to this simple issue.</p> <p>Please assist.</p>
-1
2016-07-31T19:19:00Z
38,687,769
<p>This error <a href="http://www.i18nqa.com/debug/bug-double-conversion.html" rel="nofollow">can occur</a> when text that is already in utf-8 format is read in as an 8-bit encoding, and python tries to "decode" it to Unicode: Bytes that have no meaning in the supposed encoding throw a <code>UnicodeDecodeError</code>. But you'll always get an error if you try to read a file as <code>utf-8</code> that is not in the <code>utf-8</code> encoding.</p> <p>In your case, the problem is that a docx file is not a regular text file; no single text encoding can meaningfully import it. See this <a href="http://stackoverflow.com/a/116217/699305">SO answer</a> for directions on how to read it on a low level, or use <a href="https://python-docx.readthedocs.io/en/latest/" rel="nofollow">python-docx</a> to get access to the document in a way that resembles what you see in Word.</p>
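<p>Since a <code>.docx</code> is really a zip archive of XML parts, the standard library is enough to see why reading it as plain text fails: the first two bytes are the zip magic number, not decodable prose (a sketch using an in-memory stand-in for the real file):</p>

```python
import io
import zipfile

# Build a tiny zip in memory as a stand-in for a real .docx file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('word/document.xml', '<w:document>hello</w:document>')

print(buf.getvalue()[:2])  # b'PK' -- the zip signature every .docx starts with
```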
0
2016-07-31T19:42:06Z
[ "python" ]
Download entire directory via FTP using Python
38,687,599
<p>Trying to put together my first useful Python program, with the aim of automating my website backups. I watched a tutorial on how to download a single file, but when it comes to a folder I'm less clear. I'd like to create a local backup of an entire folder from my website via FTP.</p> <p>So far I have come up with this, with some help from <a href="http://stackoverflow.com/questions/5230966/python-ftp-download-all-files-in-directory/12622208#12622208">this question</a>:</p> <pre><code>from ftplib import FTP import os ftp=FTP("ftp.xxxxxxxxx.com") ftp.login("xxxxxxxxxxx","xxxxxxxxxx") #login to FTP account print "Successfully logged in" ftp.cwd("public_html") #change working directory to \public_html\ filenames = ftp.nlst() #create variable to store contents of \public_html\ os.makedirs("C:\\Users\\xxxxxx\\Desktop\\Backup")#create local backup directory os.chdir("C:\\Users\\xxxxxx\\Desktop\\Backup")#change working directory to local backup directory #for loop to download each file individually for a in filenames: ftp.retrbinary("RETR " + a, file.write) file.close() ftp.close() #CLOSE THE FTP CONNECTION print "FTP connection closed. Goodbye" </code></pre> <p>I'm reluctant to run it as I don't want to create a problem on my website if it's wrong. I should note that the filename &amp; extension of the local file should exactly match that of the remote file being downloaded. </p> <p>Any guidance appreciated!</p>
0
2016-07-31T19:19:57Z
38,687,671
<p>You don't need to change your working directory; just save your files under your intended path.</p> <p>For downloading the files, you first need to get the list of file names:</p> <pre><code>file_list = [] ftp.retrlines('LIST', lambda x: file_list.append(x.split())) </code></pre> <p>Then separate the files from the directories, and download them:</p> <pre><code>for info in file_list: ls_type, name = info[0], info[-1] if not ls_type.startswith('d'): with open(name, 'wb') as f: ftp.retrbinary('RETR {}'.format(name), f.write) </code></pre>
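<p>The directory test above just looks at the first character of the permissions column. On typical Unix-style <code>LIST</code> lines it behaves like this (a sketch with sample lines; the exact listing format varies by server):</p>

```python
# Sample LIST output: one directory, one regular file.
sample = [
    'drwxr-xr-x  2 user group 4096 Jul 31 19:00 images',
    '-rw-r--r--  1 user group  220 Jul 31 19:00 index.html',
]

file_list = [line.split() for line in sample]
# Entries whose permissions column starts with 'd' are directories.
names = [info[-1] for info in file_list if not info[0].startswith('d')]
print(names)  # ['index.html']
```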
1
2016-07-31T19:30:05Z
[ "python", "ftp" ]
Trying to position plots next to each other
38,687,737
<p>I am trying to position two plots next to each other, but don't know how to do it right. Could someone please help me. </p> <p>This is the first plot:</p> <pre><code>x_new = np.linspace(dsa[0], dsa[-1], num=len(dsa)*10) coefs = poly.polyfit(dsa, Wechsel, 1) ffit = poly.polyval(x_new, coefs) plt.grid(True) plt.xlabel("Druck p in mbar") plt.ylabel("Minimawechsel N") plt.plot(x_new, ffit, color="red", linestyle="solid", linewidth=1) plt.plot(dsa,Wechsel,'ro', label="Sauerstoff" ) plt.legend(loc=1) </code></pre> <p>This is the second one:</p> <pre><code>x_new1 = np.linspace(dar[0], dar[-1], num=len(dar)*10) coefs1 = poly.polyfit(dar, Wechsel, 1) ffit1 = poly.polyval(x_new1, coefs1) plt.grid(True) plt.xlabel("Druck p in mbar") plt.ylabel("Minimawechsel N") plt.plot(x_new1, ffit1, color="blue", linestyle="solid", linewidth=1) plt.plot(dar,Wechsel, 'ro',color='blue', label="Argon") plt.legend(loc=1) </code></pre> <p>Regards, Alex</p>
0
2016-07-31T19:37:13Z
38,687,790
<p>You can use <code>plt.subplots</code>:</p> <pre><code>f, (ax1, ax2) = plt.subplots(1, 2) x_new = np.linspace(dsa[0], dsa[-1], num=len(dsa)*10) coefs = poly.polyfit(dsa, Wechsel, 1) ffit = poly.polyval(x_new, coefs) ax1.grid(True) ax1.set_xlabel("Druck p in mbar") ax1.set_ylabel("Minimawechsel N") ax1.plot(x_new, ffit, color="red", linestyle="solid", linewidth=1) ax1.plot(dsa,Wechsel,'ro', label="Sauerstoff" ) ax1.legend(loc=1) </code></pre> <p>and similarly with ax2:</p> <pre><code>x_new1 = np.linspace(dar[0], dar[-1], num=len(dar)*10) coefs1 = poly.polyfit(dar, Wechsel, 1) ffit1 = poly.polyval(x_new1, coefs1) ax2.grid(True) ax2.set_xlabel("Druck p in mbar") ax2.set_ylabel("Minimawechsel N") ax2.plot(x_new1, ffit1, color="blue", linestyle="solid", linewidth=1) ax2.plot(dar,Wechsel, 'ro',color='blue', label="Argon") ax2.legend(loc=1) </code></pre>
0
2016-07-31T19:44:01Z
[ "python", "matplotlib" ]
Issue sending mail with smtp python
38,687,747
<p>I'm working on an app that sends an icalendar file to an email address and I have an issue with it. The app works properly in every case except one. I've been testing it on my university's wifi (only students have access to it) and there the mail can't be sent (the app enters a loop and the message never goes out). I attach the code that I use to send the mail. I think the problem may be with the ports (maybe not all ports are open on the university's free wifi). If anyone knows a more robust way to do this it would be nice, because the only problem I have with the app is sending the mail on this particular wifi (on other wifi networks it works properly). Code: </p> <pre><code>import smtplib from email.MIMEMultipart import MIMEMultipart from email.MIMEText import MIMEText from email.MIMEBase import MIMEBase from email import encoders def send_mail(mail): fromaddr = "adress@gmail.com" toaddr = mail.strip() msg = MIMEMultipart('alternative') msg['From'] = "Contact &lt;adress@gmail.com&gt;" msg['To'] = toaddr msg['Subject'] = u"Subject" body = """Body""" msg.attach(MIMEText(body, "html")) filename = "fileattached.ics" part = MIMEBase('application', 'octet-stream',name=filename) part.set_payload(cal.to_ical()) encoders.encode_base64(part) part.add_header('Content-Disposition', "attachment; filename= %s" % filename) msg.attach(part) server = smtplib.SMTP('smtp.gmail.com', 587) server.starttls() server.login(fromaddr, "password") text = msg.as_string() server.sendmail(fromaddr, toaddr, text) server.quit() </code></pre> <p>I don't know if the problem is the port that I'm using to send the mail, but I've been told that the issue may be caused by that. </p>
0
2016-07-31T19:38:36Z
38,688,154
<p>Turn on SMTP session debugging. It should provide some clues.</p> <p><a href="https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.set_debuglevel" rel="nofollow">https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.set_debuglevel</a></p>
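Not part of the original answer, but to illustrate where the switch goes in the question's script (no network connection is made in this sketch):

```python
import smtplib

# In the question's code this would be: smtplib.SMTP('smtp.gmail.com', 587)
# Creating the object without a host does not connect yet.
server = smtplib.SMTP()

# Level 1 prints the whole SMTP conversation to stderr, so you can see
# exactly where the session hangs (e.g. a port blocked by the campus wifi).
server.set_debuglevel(1)

print(server.debuglevel)  # 1
```

With the real connection in place, the stderr trace shows each command (`EHLO`, `STARTTLS`, `AUTH`, ...) and the point where the server stops replying.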
0
2016-07-31T20:31:18Z
[ "python", "email", "smtp", "port", "icalendar" ]
Time difference in python not working for some dates
38,687,780
<p>I have this function to count the difference between two timestamps in seconds.</p> <pre><code>import time from pprint import pprint def diffdates(d1, d2): diff = (time.mktime(time.strptime(d2,"%Y-%m-%dT%H:%M:%S")) - time.mktime(time.strptime(d1, "%Y-%m-%dT%H:%M:%S"))) pprint(d2) pprint(d1) pprint(diff) return diff diffdates('2016-10-11T11:10:00','2016-10-11T16:00:00') </code></pre> <p>When I call it with the given parameters, I expect the result to be</p> <pre><code>4*3600-10*60 = 13800 </code></pre> <p>but I get</p> <pre><code>17400 = 5*3600 - 10*60 </code></pre> <p>I checked the format strings against the documentation in case I have some problem there, but I cannot find any. Also there is no change of time like summer/winter time on that date.</p>
0
2016-07-31T19:43:11Z
38,687,831
<p>Your math is wrong. The delta between 16:00 and 11:10 is 4 hours 50 minutes. Your computation of <code>4*3600-10*60</code> is suggesting 3 hours 50 minutes. You're off by an hour.</p> <p><code>16:00</code> is <code>60*16</code> minutes into the day: <code>960</code></p> <p><code>11:10</code> is <code>60*11+10</code> minutes into the day <code>670</code></p> <p><code>960 - 670</code> is <code>290</code> minutes</p> <p><code>290 minutes * 60</code> is <code>17400</code> seconds.</p>
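The arithmetic can be double-checked with the standard library, using the exact timestamps from the question:

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S"
d1 = datetime.strptime("2016-10-11T11:10:00", fmt)
d2 = datetime.strptime("2016-10-11T16:00:00", fmt)

# 16:00 minus 11:10 is 4 hours 50 minutes, i.e. 290 minutes
diff = (d2 - d1).total_seconds()
print(diff)  # 17400.0
```

So the function in the question is returning the right value; the expected value of 13800 was off by exactly one hour.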
1
2016-07-31T19:49:20Z
[ "python", "datetime" ]
Delete first row in Dataframe each day for certain value only
38,687,786
<p>Is there a way to delete the first row in a Dataframe, each day, for a certain value only? So for example:</p> <pre><code>2014-03-04 10:00:00 -1.0 2014-03-04 10:04:00 1.0 2014-03-04 10:42:00 -1.0 2014-03-05 09:57:00 1.0 2014-03-05 10:05:00 -1.0 2014-03-05 10:30:00 1.0 </code></pre> <p>For each day above, if 1.0 is the first value, the row should be deleted. So in the example above this would see row <code>2014-03-05 09:57:00</code> deleted.</p> <p>I can't think of a way to do this without iterating through the dataframe rows using something like <code>for day in df.index:</code>, which is slow when processing a large dataset.</p>
1
2016-07-31T19:43:41Z
38,687,889
<p>You can first <code>groupby</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.date.html" rel="nofollow"><code>DatetimeIndex.date</code></a> and aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.head.html" rel="nofollow"><code>head</code></a>. Then find the first index of each day where the value of the column is <code>1</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> those rows: </p> <p><em>This solution works nicely if datetimes are not duplicated.</em></p> <pre><code>print (df) col 2014-03-04 10:00:00 -1.0 2014-03-04 10:04:00 1.0 2014-03-04 10:42:00 -1.0 2014-03-05 09:57:00 1.0 2014-03-05 10:05:00 -1.0 2014-03-05 10:30:00 1.0 df1 = df.col.groupby(df.index.date).head(1) print (df1) 2014-03-04 10:00:00 -1.0 2014-03-05 09:57:00 1.0 Name: col, dtype: float64 print (df1[df1 == 1].index) DatetimeIndex(['2014-03-05 09:57:00'], dtype='datetime64[ns]', freq=None) print (df.drop(df1[df1 == 1].index)) col 2014-03-04 10:00:00 -1.0 2014-03-04 10:04:00 1.0 2014-03-04 10:42:00 -1.0 2014-03-05 10:05:00 -1.0 2014-03-05 10:30:00 1.0 </code></pre>
2
2016-07-31T19:57:59Z
[ "python", "pandas" ]
Delete first row in Dataframe each day for certain value only
38,687,786
<p>Is there a way to delete the first row in a Dataframe, each day, for a certain value only? So for example:</p> <pre><code>2014-03-04 10:00:00 -1.0 2014-03-04 10:04:00 1.0 2014-03-04 10:42:00 -1.0 2014-03-05 09:57:00 1.0 2014-03-05 10:05:00 -1.0 2014-03-05 10:30:00 1.0 </code></pre> <p>For each day above, if 1.0 is the first value, the row should be deleted. So in the example above this would see row <code>2014-03-05 09:57:00</code> deleted.</p> <p>I can't think of a way to do this without iterating through the dataframe rows using something like <code>for day in df.index:</code>, which is slow when processing a large dataset.</p>
1
2016-07-31T19:43:41Z
38,688,534
<p>Here is another method: build a boolean mask with the <code>apply</code> method, checking for each group whether its first element meets the condition, and then use the mask for subsetting:</p> <pre><code>import pandas as pd import numpy as np df['date_time'] = pd.to_datetime(df.date_time) df # date_time value #0 2014-03-04 10:00:00 -1 #1 2014-03-04 10:04:00 1 #2 2014-03-04 10:42:00 -1 #3 2014-03-05 09:57:00 1 #4 2014-03-05 10:05:00 -1 #5 2014-03-05 10:30:00 1 # group by the date of the column `date_time` groups = df.groupby(df.date_time.apply(lambda dt: dt.date()))['value'] # create a mask that returns true if the first element of every group is one mask = groups.apply(lambda g: pd.Series((np.arange(g.size) == 0) &amp; (g == 1))) mask # 0 False # 1 False # 2 False # 3 True # 4 False # 5 False # dtype: bool df[~mask] # date_time value #0 2014-03-04 10:00:00 -1 #1 2014-03-04 10:04:00 1 #2 2014-03-04 10:42:00 -1 #4 2014-03-05 10:05:00 -1 #5 2014-03-05 10:30:00 1 </code></pre>
0
2016-07-31T21:25:32Z
[ "python", "pandas" ]
Serialize a string without changes in Django Rest Framework?
38,687,810
<p>I'm using Python's json.dumps() to convert an array to a string and then store it in a Django Model. I'm trying to figure out how I can get Django's REST framework to ignore this field and send it 'as is' without serializing it a second time.</p> <p>For example, if the model looks like this(Both fields are CharFields):</p> <blockquote> <p>name = "E:\" </p> <p>path_with_ids= "[{"name": "E:\", "id": 525}]"</p> </blockquote> <p>I want the REST framework to ignore 'path_with_ids' when serializing so the JSON output will look like this:</p> <blockquote> <p>{ "name": "E:\", "path_with_ids": [ {"name": "E:\", "id": 525} ] }</p> </blockquote> <p>and not like this:</p> <blockquote> <p>{ "name": "E:\", "path_with_ids": "[{\"name\": \"E:\\\", \"id\": 525}]" }</p> </blockquote> <p>I've tried to make another serializer class that spits out the input it gets 'as is' without success:</p> <p><strong>Serializers.py:</strong></p> <pre><code>class PathWithIds(serializers.CharField): def to_representation(self, value): return value.path_with_ids class FolderSerializer(serializers.ModelSerializer): field_to_ignore = PathWithIds(source='path_with_ids') class Meta: model = Folder fields = ['id', 'field_to_ignore'] </code></pre> <p>Please help!</p>
0
2016-07-31T19:46:38Z
38,696,435
<p>I ended up using a wasteful and sickening method of deserializing the array before serializing it again with the REST framework:</p> <p><strong>Serializers.py:</strong></p> <pre><code>import json class PathWithIds(serializers.CharField): def to_representation(self, value): x = json.loads(value) return x class FolderSerializer(serializers.ModelSerializer): array_output = PathWithIds(source='field_to_ignore') class Meta: model = Folder fields = ['id', 'array_output'] </code></pre> <p><strong>Output in the rest API:</strong></p> <blockquote> <p>{ "name": "E:\", "array_output": [ { "name": "E:\", "id": 525 } ] }</p> </blockquote>
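The JSON round-trip at the heart of this workaround can be shown with the standard library alone; the values below are just the ones from the question:

```python
import json

# What the CharField stores: an array already serialized to a string.
stored = '[{"name": "E:\\\\", "id": 525}]'

# json.loads turns it back into Python objects ...
parsed = json.loads(stored)

# ... so when the enclosing payload is serialized again, the field comes out
# as a nested array instead of a doubly-escaped string.
payload = json.dumps({"id": 1, "array_output": parsed})
print(payload)
```

This is exactly the decode-then-re-encode step the custom field's `to_representation` performs.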
1
2016-08-01T10:23:18Z
[ "python", "json", "django", "serialization", "django-rest-framework" ]
Python, create shortcut with two paths and argument
38,687,822
<p>I'm trying to create a shortcut through python that will launch a file in another program with an argument. E.g:</p> <pre><code>"C:\file.exe" "C:\folder\file.ext" argument </code></pre> <p>The code I've tried messing with:</p> <pre><code>from win32com.client import Dispatch import os shell = Dispatch("WScript.Shell") shortcut = shell.CreateShortCut(path) shortcut.Targetpath = r'"C:\file.exe" "C:\folder\file.ext"' shortcut.Arguments = argument shortcut.WorkingDirectory = "C:\" #or "C:\folder\file.ext" in this case? shortcut.save() </code></pre> <p>But i get an error thrown my way:</p> <pre><code>AttributeError: Property '&lt;unknown&gt;.Targetpath' can not be set. </code></pre> <p>I've tried different formats of the string and google doesn't seem to know the solution to this problem</p>
3
2016-07-31T19:48:42Z
38,688,725
<pre><code>from comtypes.client import CreateObject from comtypes.gen import IWshRuntimeLibrary shell = CreateObject("WScript.Shell") shortcut = shell.CreateShortCut(path).QueryInterface(IWshRuntimeLibrary.IWshShortcut) shortcut.TargetPath = "C:\file.exe" args = ["C:\folder\file.ext", argument] shortcut.Arguments = " ".join(args) shortcut.Save() </code></pre> <p><a href="https://github.com/noamraph/dreampie/blob/master/create-shortcuts.py#L134" rel="nofollow">Reference</a></p>
2
2016-07-31T21:57:46Z
[ "python", "shortcut", "wscript" ]
How to define zorder when using 2 y-axis?
38,687,887
<p>I plot using two y-axis, on the left and the right of a matplotlib figure and use <code>zorder</code> to control the position of the plots. I need to define the <code>zorder</code> <em>across</em> axes in the same figure.</p> <hr> <p><strong>Problem</strong></p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.arange(-10,10,0.01) fig, ax1 = plt.subplots( 1, 1, figsize=(9,3) ) ax1.plot( x, np.sin(x), color='red', linewidth=10, zorder=1 ) ax2 = ax1.twinx() ax2.plot( x, x, color='blue', linewidth=10, zorder=-1) </code></pre> <p><a href="http://i.stack.imgur.com/A2oeO.png" rel="nofollow"><img src="http://i.stack.imgur.com/A2oeO.png" alt="enter image description here"></a></p> <p>In the previous diagram, I would expect the blue line to appear <em>behind</em> the red plot. </p> <p><strong>How do I control the <code>zorder</code> when using twin axes?</strong></p> <hr> <p>I am using:</p> <p>python: 3.4.3 + numpy: 1.11.0 + matplotlib: 1.5.1</p>
0
2016-07-31T19:57:52Z
38,688,010
<p>It looks like the two axes have separate z-stacks. The axes are z-ordered with the most recent axis on top, so you need to move the curve you want on top to the last axis you create:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.arange(-10,10,0.01) fig, ax1 = plt.subplots( 1, 1, figsize=(9,3) ) ax1.plot( x, x, color='blue', linewidth=10 ) ax2 = ax1.twinx() ax2.plot( x, np.sin(x), color='red', linewidth=10 ) </code></pre>
0
2016-07-31T20:11:44Z
[ "python", "numpy", "matplotlib" ]
order list of tuples in revlex
38,687,935
<p>I am trying to generate the list of all k tuples on the numbers 0 through n-1, but I want this list to be ordered in revlex. For example, </p> <pre><code>import itertools list(itertools.combinations(range(0, 6), 3)) </code></pre> <p>outputs these tuples in lexicographic ordering:</p> <p>[(0, 1, 2), (0, 1, 3), (0, 1, 4), (0, 1, 5), (0, 2, 3), (0, 2, 4), (0, 2, 5), (0, 3, 4), (0, 3, 5), (0, 4, 5), (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (2, 3, 4), (2, 3, 5), (2, 4, 5), (3, 4, 5)]</p> <p>whereas I would like the output to be ordered by reverse lexicographic:</p> <p>[(0,1,2), (0,1,3), (0,2,3), (1,2,3), (0,1,4), (0,2,4), (1,2,4), (0,3,4), (1,3,4), (2,3,4), (0,1,5), (0,2,5), (1,2,5), (0,3,5), (1,3,5), (2,3,5), (0,4,5), (1,4,5), (2,4,5), (3,4,5)]</p> <p>Thanks!</p>
-1
2016-07-31T20:03:05Z
38,688,954
<p>Your reverse lexicographic order sorts by last item, then next-to-last, etc. One way to do that is to take the range, reverse that range, use itertools to make all combinations from that, reverse each item in that combinations list, then finally reverse the overall list. A list or tuple can be reversed by slicing with <code>[::-1]</code>, so the reversal of <code>mylist</code> is <code>mylist[::-1]</code>. Using this, we can get one complicated expression</p> <pre><code>[i[::-1] for i in itertools.combinations(range(6)[::-1],3)][::-1] </code></pre> <p>The <code>range</code> function has a built-in way to get a decreasing sequence. If we use that, we get</p> <pre><code>[i[::-1] for i in itertools.combinations(range(5,-1,-1),3)][::-1] </code></pre> <p>which does not look any easier. Either of those expressions give the result</p> <pre><code>[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3), (0, 1, 4), (0, 2, 4), (1, 2, 4), (0, 3, 4), (1, 3, 4), (2, 3, 4), (0, 1, 5), (0, 2, 5), (1, 2, 5), (0, 3, 5), (1, 3, 5), (2, 3, 5), (0, 4, 5), (1, 4, 5), (2, 4, 5), (3, 4, 5)] </code></pre> <p>which is what you want.</p> <p>There are routines that can do this, and you could break up that expression into multiple lines with intermediate variables. Either would be more clear than that expression. The multiple lines could be:</p> <pre><code>r = range(6)[::-1] c = itertools.combinations(r, 3) l = [i[::-1] for i in c] rl = l[::-1] </code></pre> <p>Now the variable <code>rl</code> holds your desired list.</p> <p>All this was tested in Python 2.7.12. In Python 3 you probably need to put a <code>list()</code> around <code>r</code>.</p>
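A shorter equivalent, not from the answer above: sort the lexicographic output with the reversed tuple as the sort key. That orders by last element first, then next-to-last, and so on, which is exactly the requested reverse-lexicographic order:

```python
import itertools

# Sort by (last, next-to-last, ..., first) element of each tuple.
combos = sorted(itertools.combinations(range(6), 3), key=lambda t: t[::-1])

print(combos[:4])  # [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(combos[-1])  # (3, 4, 5)
```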
1
2016-07-31T22:37:46Z
[ "python", "list", "sorting" ]
Remove everything besides a certain html tag and its content in Python
38,687,974
<p>I've searched around the internet and I cannot find anything that will exclude everything besides a certain tag and its content inside it.</p> <p>How can I do this with Python (beautifulsoup 4)?</p> <p>I have this html:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;p&gt;&lt;iframe width="1000" height="500" allowfullscreen="allowfullscreen" class="embed" src="#"&gt; &lt;/iframe&gt;&lt;/p&gt; &lt;p&gt;sdkjasdkljasldjad;j dadas dasdadada&lt;/p&gt;</code></pre> </div> </div> </p> <p>I need to remove everything else so the output is like this:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;iframe width="1000" height="500" allowfullscreen="allowfullscreen" class="embed" src="#"&gt; &lt;/iframe&gt;</code></pre> </div> </div> </p> <p>I've come up with this but I don't know how to go further:</p> <pre><code>@register.filter(name='only_iframe') def only_iframe(content): soup = BeautifulSoup(content) for tag in soup.find_all('p', 'strong'): tag.replaceWith('') return soup.get_text() </code></pre>
0
2016-07-31T20:06:49Z
38,688,029
<p>Why not locate the <code>iframe</code> and get its <em>string representation</em>:</p> <pre><code>iframe = soup.find("iframe", class_="embed") print(str(iframe)) </code></pre>
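If BeautifulSoup were unavailable, the same extraction can be sketched with only the standard library's `html.parser`; this is an illustrative alternative, not something the answer depends on:

```python
from html.parser import HTMLParser

class IframeExtractor(HTMLParser):
    """Collect the raw markup of every <iframe>...</iframe> element."""
    def __init__(self):
        super().__init__()
        self._inside = False
        self._parts = []
        self.iframes = []

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            self._inside = True
            attr_text = "".join(' {}="{}"'.format(k, v) for k, v in attrs)
            self._parts = ["<iframe{}>".format(attr_text)]

    def handle_endtag(self, tag):
        if tag == "iframe" and self._inside:
            self._parts.append("</iframe>")
            self.iframes.append("".join(self._parts))
            self._inside = False

    def handle_data(self, data):
        if self._inside:
            self._parts.append(data)

html = ('<p><iframe width="1000" height="500" class="embed" src="#">'
        ' </iframe></p><p>other text to drop</p>')
parser = IframeExtractor()
parser.feed(html)
print(parser.iframes[0])
```

It rebuilds the iframe tag with its attributes and keeps only what was between the opening and closing tags, discarding everything else in the document.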
0
2016-07-31T20:13:52Z
[ "python", "django", "python-3.x" ]
Converting a 1.2GB list of edges into a sparse matrix
38,688,062
<p>I have a 1.2GB list of edges from a graph in a text file. My ubuntu PC has 8GB of RAM. Each line in the input looks like</p> <pre><code>287111206 357850135 </code></pre> <p>I would like to convert it into a sparse adjacency matrix and output that to a file.</p> <p>Some statistics for my data: </p> <pre><code>Number of edges: around 62500000 Number of vertices: around 31250000 </code></pre> <p>I asked much the same question before at <a href="http://stackoverflow.com/a/38667644/2179021">http://stackoverflow.com/a/38667644/2179021</a> and got a great answer. The problem is that I can't get it to work.</p> <p>I first tried np.loadtxt to load in the file but it was very slow and used a huge amount of memory. So instead I moved to pandas.read_csv which is very fast but this caused its own problems. This is my current code:</p> <pre><code>import pandas import numpy as np from scipy import sparse data = pandas.read_csv("edges.txt", sep=" ", header= None, dtype=np.uint32) A = data.as_matrix() print type(A) k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) rows,cols=k3.reshape(A.shape).T M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols))) print type(M) </code></pre> <p>The problem is that the pandas dataframe <code>data</code> is huge and I am effectively making a copy in A which is inefficient. 
However things are even worse as the code crashes with </p> <pre><code>&lt;type 'instancemethod'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 13, in &lt;module&gt; rows,cols=k3.reshape(A.shape).T AttributeError: 'function' object has no attribute 'shape' raph@raph-desktop:~/python$ python make-sparse-matrix.py &lt;type 'numpy.ndarray'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 12, in &lt;module&gt; k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py", line 209, in unique iflag = np.cumsum(flag) - 1 File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2115, in cumsum return cumsum(axis, dtype, out) MemoryError </code></pre> <p>So my questions are:</p> <ol> <li>Can I avoid having both the 1.2GB pandas dataframe and the 1.2GB numpy array copy in memory?</li> <li>Is there some way to get the code to complete in 8GB of RAM?</li> </ol> <p>You can reproduce a test input of the size I am trying to process with:</p> <pre><code>import random #Number of edges, vertices m = 62500000 n = m/2 for i in xrange(m): fromnode = str(random.randint(0, n-1)).zfill(9) tonode = str(random.randint(0, n-1)).zfill(9) print fromnode, tonode </code></pre> <p><strong>Update</strong></p> <p>I have now tried a number of different approaches, all of which have failed. Here is a summary.</p> <ol> <li>Using <a href="http://igraph.org/" rel="nofollow">igraph</a> with <code>g = Graph.Read_Ncol('edges.txt')</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>Using <a href="https://networkit.iti.kit.edu/" rel="nofollow">networkit</a> with <code>G= networkit.graphio.readGraph("edges.txt", networkit.Format.EdgeList, separator=" ", continuous=False)</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>The code above in this question but using np.loadtxt("edges.txt") instead of pandas. 
This uses a huge amount of RAM which crashes my computer.</li> </ol> <p>I then wrote separate code which remapped all the vertex names to number from 1..|V| where |V| is the total number of vertices. This should save the code that imports the edge list from having to build up a table that maps the vertex names. Using this I tried:</p> <ol start="4"> <li>Using this new remapped edge list file I used igraph again with <code>g = Graph.Read_Edgelist("edges-contig.txt")</code>. This now works although it takes 4GB of RAM (which is way more than the theoretical amount it should). However, there is no igraph function to write out a sparse adjacency matrix from a graph. The recommended solution is to <a href="http://stackoverflow.com/a/28179024/2179021">convert the graph to a coo_matrix</a>. Unfortunately this uses a huge amount of RAM which crashes my computer.</li> <li>Using the remapped edge list file I used networkit with <code>G = networkit.readGraph("edges-contig.txt", networkit.Format.EdgeListSpaceOne)</code>. This also works using less than the 4GB that igraph needs. networkit also comes with a function to write Matlab files (which is a form of sparse adjacency matrix that scipy can read). However <code>networkit.graphio.writeMat(G,"test.mat")</code> uses a huge amount of RAM which crashes my computer.</li> </ol> <p>Finally sascha's answer below does complete but takes about 40 minutes.</p>
5
2016-07-31T20:18:19Z
38,688,464
<h2>Updated version</h2> <p>As indicated in the comments, the approach did not fit your use-case. Let's make some changes:</p> <ul> <li>use pandas for reading in the data (instead of numpy: i'm quite surprised np.loadtxt is performing that bad!)</li> <li>use external library <a href="http://www.grantjenks.com/docs/sortedcontainers/" rel="nofollow">sortedcontainers</a> for a more memory-efficient approach (instead of a dictionary)</li> <li>the basic approach is the same</li> </ul> <p>This approach will take <strong>~45 minutes</strong> (which is slow; but you could pickle/save the result so you need to <strong>do it only once</strong>) and <strong>~5 GB</strong> of memory to prepare the sparse-matrix for your data, generated with:</p> <pre><code>import random N = 62500000 for i in xrange(N): print random.randint(10**8,10**9-1), random.randint(10**8,10**9-1) </code></pre> <h3>Code</h3> <pre><code>import numpy as np from scipy.sparse import coo_matrix import pandas as pd from sortedcontainers import SortedList import time # Read data # global memory usage after: one big array df = pd.read_csv('EDGES.txt', delimiter=' ', header=None, dtype=np.uint32) data = df.as_matrix() df = None n_edges = data.shape[0] # Learn mapping to range(0, N_VERTICES) # N_VERTICES unknown # global memory usage after: one big array + one big searchtree print('fit mapping') start = time.time() observed_vertices = SortedList() mappings = np.arange(n_edges*2, dtype=np.uint32) # upper bound on vertices for column in range(data.shape[1]): for row in range(data.shape[0]): # double-loop: slow, but easy to understand space-complexity val = data[row, column] if val not in observed_vertices: observed_vertices.add(val) mappings = mappings[:len(observed_vertices)] n_vertices = len(observed_vertices) end = time.time() print(' secs: ', end-start) print('transform mapping') # Map original data (in-place !) # global memory usage after: one big array + one big searchtree(can be deleted!) 
start = time.time() for column in range(data.shape[1]): for row in range(data.shape[0]): # double-loop: slow, but easy to understand space-complexity val = data[row, column] mapper_pos = observed_vertices.index(val) data[row, column] = mappings[mapper_pos] end = time.time() print(' secs: ', end-start) observed_vertices = None # if not needed anymore mappings = None # if not needed anymore # Create sparse matrix (only caring about a single triangular part for now) # if needed: delete dictionary before as it's not needed anymore! sp_mat = coo_matrix((np.ones(n_edges, dtype=bool), (data[:, 0], data[:, 1])), shape=(n_vertices, n_vertices)) </code></pre> <h2>First version</h2> <p>Here is a <strong>very simple</strong> and <strong>very inefficient</strong> (in regards to time and space) code to build this sparse matrix. I post this code, because i believe it is important to understand the core parts if one is using these in something bigger.</p> <p>Let's see, if this code is efficient enough for your use-case or if it needs work. From distance it's hard to tell, because we don't have your data.</p> <p>The dictionary-part, used for the mapping, is a candidate to blow up your memory. But it's pointless to optimize this without knowing if it's needed at all. 
Especially because this part of the code is dependent on the number of vertices in your graph (and i don't have any knowledge of this cardinality).</p> <pre><code>""" itertools.count usage here would need changes for py2 """ import numpy as np from itertools import count from scipy.sparse import coo_matrix # Read data # global memory usage after: one big array data = np.loadtxt('edges.txt', np.uint32) n_edges = data.shape[0] #print(data) #print(data.shape) # Learn mapping to range(0, N_VERTICES) # N_VERTICES unknown # global memory usage after: one big array + one big dict index_gen = count() mapper = {} for column in range(data.shape[1]): for row in range(data.shape[0]): # double-loop: slow, but easy to understand space-complexity val = data[row, column] if val not in mapper: mapper[val] = next(index_gen) n_vertices = len(mapper) # Map original data (in-place !) # global memory usage after: one big array + one big dict (can be deleted!) for column in range(data.shape[1]): for row in range(data.shape[0]): # double-loop: slow, but easy to understand space-complexity data[row, column] = mapper[data[row, column]] #print(data) # Create sparse matrix (only caring about a single triangular part for now) # if needed: delete dictionary before as it's not needed anymore! sp_mat = coo_matrix((np.ones(n_edges, dtype=bool), (data[:, 0], data[:, 1])), shape=(n_vertices, n_vertices)) #print(sp_mat) </code></pre> <p><strong>Output for edges-10.txt</strong>:</p> <pre><code>[[287111206 357850135] [512616930 441657273] [530905858 562056765] [524113870 320749289] [149911066 964526673] [169873523 631128793] [646151040 986572427] [105290138 382302570] [194873438 968653053] [912211115 195436728]] (10, 2) [[ 0 10] [ 1 11] [ 2 12] [ 3 13] [ 4 14] [ 5 15] [ 6 16] [ 7 17] [ 8 18] [ 9 19]] (0, 10) True (1, 11) True (2, 12) True (3, 13) True (4, 14) True (5, 15) True (6, 16) True (7, 17) True (8, 18) True (9, 19) True </code></pre>
3
2016-07-31T21:16:01Z
[ "python", "pandas", "numpy", "optimization", "scipy" ]
Converting a 1.2GB list of edges into a sparse matrix
38,688,062
<p>I have a 1.2GB list of edges from a graph in a text file. My ubuntu PC has 8GB of RAM. Each line in the input looks like</p> <pre><code>287111206 357850135 </code></pre> <p>I would like to convert it into a sparse adjacency matrix and output that to a file.</p> <p>Some statistics for my data: </p> <pre><code>Number of edges: around 62500000 Number of vertices: around 31250000 </code></pre> <p>I asked much the same question before at <a href="http://stackoverflow.com/a/38667644/2179021">http://stackoverflow.com/a/38667644/2179021</a> and got a great answer. The problem is that I can't get it to work.</p> <p>I first tried np.loadtxt to load in the file but it was very slow and used a huge amount of memory. So instead I moved to pandas.read_csv which is very fast but this caused its own problems. This is my current code:</p> <pre><code>import pandas import numpy as np from scipy import sparse data = pandas.read_csv("edges.txt", sep=" ", header= None, dtype=np.uint32) A = data.as_matrix() print type(A) k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) rows,cols=k3.reshape(A.shape).T M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols))) print type(M) </code></pre> <p>The problem is that the pandas dataframe <code>data</code> is huge and I am effectively making a copy in A which is inefficient. 
However things are even worse as the code crashes with </p> <pre><code>&lt;type 'instancemethod'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 13, in &lt;module&gt; rows,cols=k3.reshape(A.shape).T AttributeError: 'function' object has no attribute 'shape' raph@raph-desktop:~/python$ python make-sparse-matrix.py &lt;type 'numpy.ndarray'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 12, in &lt;module&gt; k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py", line 209, in unique iflag = np.cumsum(flag) - 1 File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2115, in cumsum return cumsum(axis, dtype, out) MemoryError </code></pre> <p>So my questions are:</p> <ol> <li>Can I avoid having both the 1.2GB pandas dataframe and the 1.2GB numpy array copy in memory?</li> <li>Is there some way to get the code to complete in 8GB of RAM?</li> </ol> <p>You can reproduce a test input of the size I am trying to process with:</p> <pre><code>import random #Number of edges, vertices m = 62500000 n = m/2 for i in xrange(m): fromnode = str(random.randint(0, n-1)).zfill(9) tonode = str(random.randint(0, n-1)).zfill(9) print fromnode, tonode </code></pre> <p><strong>Update</strong></p> <p>I have now tried a number of different approaches, all of which have failed. Here is a summary.</p> <ol> <li>Using <a href="http://igraph.org/" rel="nofollow">igraph</a> with <code>g = Graph.Read_Ncol('edges.txt')</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>Using <a href="https://networkit.iti.kit.edu/" rel="nofollow">networkit</a> with <code>G= networkit.graphio.readGraph("edges.txt", networkit.Format.EdgeList, separator=" ", continuous=False)</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>The code above in this question but using np.loadtxt("edges.txt") instead of pandas. 
This uses a huge amount of RAM which crashes my computer.</li> </ol> <p>I then wrote separate code which remapped all the vertex names to number from 1..|V| where |V| is the total number of vertices. This should save the code that imports the edge list from having to build up a table that maps the vertex names. Using this I tried:</p> <ol start="4"> <li>Using this new remapped edge list file I used igraph again with <code>g = Graph.Read_Edgelist("edges-contig.txt")</code>. This now works although it takes 4GB of RAM (which is way more than the theoretical amount it should). However, there is no igraph function to write out a sparse adjacency matrix from a graph. The recommended solution is to <a href="http://stackoverflow.com/a/28179024/2179021">convert the graph to a coo_matrix</a>. Unfortunately this uses a huge amount of RAM which crashes my computer.</li> <li>Using the remapped edge list file I used networkit with <code>G = networkit.readGraph("edges-contig.txt", networkit.Format.EdgeListSpaceOne)</code>. This also works using less than the 4GB that igraph needs. networkit also comes with a function to write Matlab files (which is a form of sparse adjacency matrix that scipy can read). However <code>networkit.graphio.writeMat(G,"test.mat")</code> uses a huge amount of RAM which crashes my computer.</li> </ol> <p>Finally sascha's answer below does complete but takes about 40 minutes.</p>
5
2016-07-31T20:18:19Z
38,692,211
<p>You might want to take a look at the <a href="http://igraph.org/python/" rel="nofollow">igraph</a> project; it is a GPL library of C code designed for this kind of thing, with a nice Python API. I think in your case the Python code would be something like </p> <pre><code>from igraph import Graph g = Graph.Read_Edgelist('edges.txt') g.write_adjacency('adjacency_matrix.txt') </code></pre>
0
2016-08-01T06:25:36Z
[ "python", "pandas", "numpy", "optimization", "scipy" ]
Converting a 1.2GB list of edges into a sparse matrix
38,688,062
<p>I have a 1.2GB list of edges from a graph in a text file. My Ubuntu PC has 8GB of RAM. Each line in the input looks like</p> <pre><code>287111206 357850135 </code></pre> <p>I would like to convert it into a sparse adjacency matrix and output that to a file.</p> <p>Some statistics for my data: </p> <pre><code>Number of edges: around 62500000 Number of vertices: around 31250000 </code></pre> <p>I asked much the same question before at <a href="http://stackoverflow.com/a/38667644/2179021">http://stackoverflow.com/a/38667644/2179021</a> and got a great answer. The problem is that I can't get it to work.</p> <p>I first tried np.loadtxt to load in the file but it was very slow and used a huge amount of memory. So instead I moved to pandas.read_csv which is very fast but this caused its own problems. This is my current code:</p> <pre><code>import pandas import numpy as np from scipy import sparse data = pandas.read_csv("edges.txt", sep=" ", header= None, dtype=np.uint32) A = data.as_matrix() print type(A) k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) rows,cols=k3.reshape(A.shape).T M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols))) print type(M) </code></pre> <p>The problem is that the pandas dataframe <code>data</code> is huge and I am effectively making a copy in A which is inefficient. 
However things are even worse as the code crashes with </p> <pre><code>&lt;type 'instancemethod'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 13, in &lt;module&gt; rows,cols=k3.reshape(A.shape).T AttributeError: 'function' object has no attribute 'shape' raph@raph-desktop:~/python$ python make-sparse-matrix.py &lt;type 'numpy.ndarray'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 12, in &lt;module&gt; k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py", line 209, in unique iflag = np.cumsum(flag) - 1 File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2115, in cumsum return cumsum(axis, dtype, out) MemoryError </code></pre> <p>So my questions are:</p> <ol> <li>Can I avoid having both the 1.2GB pandas dataframe and the 1.2GB numpy array copy in memory?</li> <li>Is there some way to get the code to complete in 8GB of RAM?</li> </ol> <p>You can reproduce a test input of the size I am trying to process with:</p> <pre><code>import random #Number of edges, vertices m = 62500000 n = m/2 for i in xrange(m): fromnode = str(random.randint(0, n-1)).zfill(9) tonode = str(random.randint(0, n-1)).zfill(9) print fromnode, tonode </code></pre> <p><strong>Update</strong></p> <p>I have now tried a number of different approaches, all of which have failed. Here is a summary.</p> <ol> <li>Using <a href="http://igraph.org/" rel="nofollow">igraph</a> with <code>g = Graph.Read_Ncol('edges.txt')</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>Using <a href="https://networkit.iti.kit.edu/" rel="nofollow">networkit</a> with <code>G= networkit.graphio.readGraph("edges.txt", networkit.Format.EdgeList, separator=" ", continuous=False)</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>The code above in this question but using np.loadtxt("edges.txt") instead of pandas. 
This uses a huge amount of RAM which crashes my computer.</li> </ol> <p>I then wrote separate code which remapped all the vertex names to numbers from 1..|V| where |V| is the total number of vertices. This should save the code that imports the edge list from having to build up a table that maps the vertex names. Using this I tried:</p> <ol start="4"> <li>Using this new remapped edge list file I used igraph again with <code>g = Graph.Read_Edgelist("edges-contig.txt")</code>. This now works although it takes 4GB of RAM (which is way more than the theoretical amount it should). However, there is no igraph function to write out a sparse adjacency matrix from a graph. The recommended solution is to <a href="http://stackoverflow.com/a/28179024/2179021">convert the graph to a coo_matrix</a>. Unfortunately this uses a huge amount of RAM which crashes my computer.</li> <li>Using the remapped edge list file I used networkit with <code>G = networkit.readGraph("edges-contig.txt", networkit.Format.EdgeListSpaceOne)</code>. This also works using less than the 4GB that igraph needs. networkit also comes with a function to write Matlab files (which is a form of sparse adjacency matrix that scipy can read). However <code>networkit.graphio.writeMat(G,"test.mat")</code> uses a huge amount of RAM which crashes my computer.</li> </ol> <p>Finally sascha's answer below does complete but takes about 40 minutes.</p>
5
2016-07-31T20:18:19Z
38,734,771
<p>Here's my solution:</p> <pre><code>import numpy as np import pandas as pd import scipy.sparse as ss def read_data_file_as_coo_matrix(filename='edges.txt'): "Read data file and return sparse matrix in coordinate format." data = pd.read_csv(filename, sep=' ', header=None, dtype=np.uint32) rows = data[0] # Not a copy, just a reference. cols = data[1] ones = np.ones(len(rows), np.uint32) matrix = ss.coo_matrix((ones, (rows, cols))) return matrix </code></pre> <p>Pandas does the heavy lifting of parsing using <code>read_csv</code>. And Pandas is already storing the data in columnar format. The <code>data[0]</code> and <code>data[1]</code> just get references, no copies. Then I feed those to <code>coo_matrix</code>. Benchmarked locally:</p> <pre><code>In [1]: %timeit -n1 -r5 read_data_file_as_coo_matrix() 1 loop, best of 5: 14.2 s per loop </code></pre> <p>Then to save a csr-matrix to a file:</p> <pre><code>def save_csr_matrix(filename, matrix): """Save compressed sparse row (csr) matrix to file. Based on http://stackoverflow.com/a/8980156/232571 """ assert filename.endswith('.npz') attributes = { 'data': matrix.data, 'indices': matrix.indices, 'indptr': matrix.indptr, 'shape': matrix.shape, } np.savez(filename, **attributes) </code></pre> <p>Benchmarked locally:</p> <pre><code>In [3]: %timeit -n1 -r5 save_csr_matrix('edges.npz', matrix.tocsr()) 1 loop, best of 5: 13.4 s per loop </code></pre> <p>And later load it back from a file:</p> <pre><code>def load_csr_matrix(filename): """Load compressed sparse row (csr) matrix from file. 
Based on http://stackoverflow.com/a/8980156/232571 """ assert filename.endswith('.npz') loader = np.load(filename) args = (loader['data'], loader['indices'], loader['indptr']) matrix = ss.csr_matrix(args, shape=loader['shape']) return matrix </code></pre> <p>Benchmarked locally:</p> <pre><code>In [4]: %timeit -n1 -r5 load_csr_matrix('edges.npz') 1 loop, best of 5: 881 ms per loop </code></pre> <p>And finally test it all:</p> <pre><code>def test(): "Test data file parsing and matrix serialization." coo_matrix = read_data_file_as_coo_matrix() csr_matrix = coo_matrix.tocsr() save_csr_matrix('edges.npz', csr_matrix) loaded_csr_matrix = load_csr_matrix('edges.npz') # Comparison based on http://stackoverflow.com/a/30685839/232571 assert (csr_matrix != loaded_csr_matrix).nnz == 0 if __name__ == '__main__': test() </code></pre> <p>When running <code>test()</code>, it takes about 30 seconds:</p> <pre><code>$ time python so_38688062.py real 0m30.401s user 0m27.257s sys 0m2.759s </code></pre> <p>And the memory high-water mark was ~1.79 GB.</p> <p>Note that once you've converted the "edges.txt" to "edges.npz" in CSR-matrix format, loading it will take less than a second.</p>
5
2016-08-03T05:04:02Z
[ "python", "pandas", "numpy", "optimization", "scipy" ]
Converting a 1.2GB list of edges into a sparse matrix
38,688,062
<p>I have a 1.2GB list of edges from a graph in a text file. My Ubuntu PC has 8GB of RAM. Each line in the input looks like</p> <pre><code>287111206 357850135 </code></pre> <p>I would like to convert it into a sparse adjacency matrix and output that to a file.</p> <p>Some statistics for my data: </p> <pre><code>Number of edges: around 62500000 Number of vertices: around 31250000 </code></pre> <p>I asked much the same question before at <a href="http://stackoverflow.com/a/38667644/2179021">http://stackoverflow.com/a/38667644/2179021</a> and got a great answer. The problem is that I can't get it to work.</p> <p>I first tried np.loadtxt to load in the file but it was very slow and used a huge amount of memory. So instead I moved to pandas.read_csv which is very fast but this caused its own problems. This is my current code:</p> <pre><code>import pandas import numpy as np from scipy import sparse data = pandas.read_csv("edges.txt", sep=" ", header= None, dtype=np.uint32) A = data.as_matrix() print type(A) k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) rows,cols=k3.reshape(A.shape).T M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols))) print type(M) </code></pre> <p>The problem is that the pandas dataframe <code>data</code> is huge and I am effectively making a copy in A which is inefficient. 
However things are even worse as the code crashes with </p> <pre><code>&lt;type 'instancemethod'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 13, in &lt;module&gt; rows,cols=k3.reshape(A.shape).T AttributeError: 'function' object has no attribute 'shape' raph@raph-desktop:~/python$ python make-sparse-matrix.py &lt;type 'numpy.ndarray'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 12, in &lt;module&gt; k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py", line 209, in unique iflag = np.cumsum(flag) - 1 File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2115, in cumsum return cumsum(axis, dtype, out) MemoryError </code></pre> <p>So my questions are:</p> <ol> <li>Can I avoid having both the 1.2GB pandas dataframe and the 1.2GB numpy array copy in memory?</li> <li>Is there some way to get the code to complete in 8GB of RAM?</li> </ol> <p>You can reproduce a test input of the size I am trying to process with:</p> <pre><code>import random #Number of edges, vertices m = 62500000 n = m/2 for i in xrange(m): fromnode = str(random.randint(0, n-1)).zfill(9) tonode = str(random.randint(0, n-1)).zfill(9) print fromnode, tonode </code></pre> <p><strong>Update</strong></p> <p>I have now tried a number of different approaches, all of which have failed. Here is a summary.</p> <ol> <li>Using <a href="http://igraph.org/" rel="nofollow">igraph</a> with <code>g = Graph.Read_Ncol('edges.txt')</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>Using <a href="https://networkit.iti.kit.edu/" rel="nofollow">networkit</a> with <code>G= networkit.graphio.readGraph("edges.txt", networkit.Format.EdgeList, separator=" ", continuous=False)</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>The code above in this question but using np.loadtxt("edges.txt") instead of pandas. 
This uses a huge amount of RAM which crashes my computer.</li> </ol> <p>I then wrote separate code which remapped all the vertex names to numbers from 1..|V| where |V| is the total number of vertices. This should save the code that imports the edge list from having to build up a table that maps the vertex names. Using this I tried:</p> <ol start="4"> <li>Using this new remapped edge list file I used igraph again with <code>g = Graph.Read_Edgelist("edges-contig.txt")</code>. This now works although it takes 4GB of RAM (which is way more than the theoretical amount it should). However, there is no igraph function to write out a sparse adjacency matrix from a graph. The recommended solution is to <a href="http://stackoverflow.com/a/28179024/2179021">convert the graph to a coo_matrix</a>. Unfortunately this uses a huge amount of RAM which crashes my computer.</li> <li>Using the remapped edge list file I used networkit with <code>G = networkit.readGraph("edges-contig.txt", networkit.Format.EdgeListSpaceOne)</code>. This also works using less than the 4GB that igraph needs. networkit also comes with a function to write Matlab files (which is a form of sparse adjacency matrix that scipy can read). However <code>networkit.graphio.writeMat(G,"test.mat")</code> uses a huge amount of RAM which crashes my computer.</li> </ol> <p>Finally sascha's answer below does complete but takes about 40 minutes.</p>
5
2016-07-31T20:18:19Z
38,770,170
<p>In my answer I consider the case where the ids of the nodes are given by 9-character-long strings, each character from <code>[0-9A-Za-z]</code>. <code>n</code> of these node ids should be mapped onto the values <code>[0,n-1]</code> (which might not be necessary for your application, but is still of general interest).</p> <p>The next considerations, which I'm sure you are aware of, are here for the sake of completeness:</p> <ol> <li>Memory is the bottleneck.</li> <li>There are around <code>10^8</code> strings in the file.</li> <li>A 9-character-long <code>string + int32</code> value pair costs around <code>120</code> bytes in a dictionary, resulting in 12GB memory usage for the file.</li> <li>A string id from the file can be mapped onto an <code>int64</code>: there are 62 different characters -> can be encoded with 6 bits, 9 characters in the string -> 6*9=54&lt;64 bit. See also the <code>toInt64()</code> method further below.</li> <li>There are int64+int32=12 bytes of "real" information => ca. 1.2 GB could be enough; however the cost for such a pair in a dictionary is around 60 bytes (around 6 GB RAM needed).</li> <li>Creating small objects (on the heap) results in a lot of memory overhead, thus bundling these objects in arrays is advantageous. Interesting information about the memory used by Python objects can be found in this tutorial-style <a href="http://code.tutsplus.com/tutorials/understand-how-much-memory-your-python-objects-use--cms-25609" rel="nofollow">article</a>. Interesting experiences with reducing memory usage are described in this <a href="https://guillaume.segu.in/blog/code/487/optimizing-memory-usage-in-python-a-case-study/" rel="nofollow">blog entry</a>.</li> <li>A Python list is out of the question as a data structure, as is a dictionary. <code>array.array</code> could be an alternative, but we use <code>np.array</code> (because there are sorting algorithms for <code>np.array</code> but not <code>array.array</code>).</li> </ol> <p><strong>1. 
step:</strong> reading the file and mapping strings to <code>int64</code>. It is a pain to let an <code>np.array</code> grow dynamically, so we assume we know the number of edges in the file (it would be nice to have it in the header, but it can also be deduced from the file size):</p> <pre><code>import numpy as np def read_nodes(filename, EDGE_CNT): nodes=np.zeros(EDGE_CNT*2, dtype=np.int64) cnt=0 for line in open(filename,"r"): nodes[cnt:cnt+2]=map(toInt64, line.split()) # use map(int, line.split()) for cases without letters cnt+=2 return nodes </code></pre> <p><strong>2. step:</strong> converting the int64-values into values <code>[0,n-1]</code>:</p> <p><em>Possibility A</em>, needs 3*0.8GB: </p> <pre><code>def maps_to_ids(filename, EDGE_CNT): """ return number of different node ids, and the mapped nodes""" nodes=read_nodes(filename, EDGE_CNT) unique_ids, nodes = np.unique(nodes, return_inverse=True) return (len(unique_ids), nodes) </code></pre> <p><em>Possibility B</em>, needs 2*0.8GB, but is somewhat slower:</p> <pre><code>def maps_to_ids(filename, EDGE_CNT): """ return number of different node ids, and the mapped nodes""" nodes=read_nodes(filename, EDGE_CNT) unique_map = np.unique(nodes) for i in xrange(len(nodes)): node_id=np.searchsorted(unique_map, nodes[i]) # faster than bisect.bisect nodes[i]=node_id return (len(unique_map), nodes) </code></pre> <p><strong>3. step:</strong> put it all into coo_matrix:</p> <pre><code>from scipy import sparse def data_as_coo_matrix(filename, EDGE_CNT): node_cnt, nodes = maps_to_ids(filename, EDGE_CNT) rows=nodes[::2]#it is only a view, not a copy cols=nodes[1::2]#it is only a view, not a copy return sparse.coo_matrix((np.ones(len(rows), dtype=bool), (rows, cols)), shape=(node_cnt, node_cnt)) </code></pre> <p>For calling <code>data_as_coo_matrix("data.txt", 62500000)</code>, memory usage peaks at 2.5GB (but with <code>int32</code> instead of <code>int64</code> only 1.5GB are needed). 
It took around 5 minutes on my machine, but my machine is pretty slow...</p> <p>So what is different from your solution?</p> <ol> <li>I get only unique values from <code>np.unique</code> (and not all the indices and the inverse), so there is some memory saved - I can replace the old ids with the new in-place.</li> <li>I have no experience with <code>pandas</code> so maybe there is some copying involved between <code>pandas</code>&lt;-><code>numpy</code> data structures?</li> </ol> <p>What is the difference from sascha's solution?</p> <ol> <li>There is no need to keep the list sorted all the time - it is enough to sort after all items are in the list, which is what <code>np.unique()</code> does. sascha's solution keeps the list sorted the whole time - you have to pay for this with at least a constant factor, even if the running time stays <code>O(n log(n))</code>. I assumed an add operation would be <code>O(n)</code>, but as pointed out it is <code>O(log(n))</code>.</li> </ol> <p>What is the difference from GrantJ's solution? </p> <ol> <li>The size of the resulting sparse matrix is <code>NxN</code>, with <code>N</code> the number of different nodes, and not <code>2^54x2^54</code> (with very many empty rows and columns).</li> </ol> <hr> <p>PS:<br> Here is my idea of how the 9-char string id can be mapped onto an <code>int64</code> value, but I guess this function could become a bottleneck the way it is written and should get optimized.</p> <pre><code>def toInt64(string): res=0L for ch in string: res*=62 if ch &lt;='9': res+=ord(ch)-ord('0') elif ch &lt;='Z': res+=ord(ch)-ord('A')+10 else: res+=ord(ch)-ord('a')+36 return res </code></pre>
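The base-62 mapping in <code>toInt64()</code> can be sanity-checked on a few hand-computable ids. This is a quick illustrative check written in Python 3 style; it is not part of the original answer:

```python
def to_int64(string):
    # Same mapping as the answer's toInt64(): base-62 digits 0-9, A-Z, a-z.
    res = 0
    for ch in string:
        res *= 62
        if ch <= '9':
            res += ord(ch) - ord('0')
        elif ch <= 'Z':
            res += ord(ch) - ord('A') + 10
        else:
            res += ord(ch) - ord('a') + 36
    return res

assert to_int64("000000000") == 0
assert to_int64("000000001") == 1
assert to_int64("00000000A") == 10
assert to_int64("00000000a") == 36
assert to_int64("zzzzzzzzz") == 62**9 - 1  # largest 9-char id, still < 2**63
```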
2
2016-08-04T14:26:36Z
[ "python", "pandas", "numpy", "optimization", "scipy" ]
Converting a 1.2GB list of edges into a sparse matrix
38,688,062
<p>I have a 1.2GB list of edges from a graph in a text file. My Ubuntu PC has 8GB of RAM. Each line in the input looks like</p> <pre><code>287111206 357850135 </code></pre> <p>I would like to convert it into a sparse adjacency matrix and output that to a file.</p> <p>Some statistics for my data: </p> <pre><code>Number of edges: around 62500000 Number of vertices: around 31250000 </code></pre> <p>I asked much the same question before at <a href="http://stackoverflow.com/a/38667644/2179021">http://stackoverflow.com/a/38667644/2179021</a> and got a great answer. The problem is that I can't get it to work.</p> <p>I first tried np.loadtxt to load in the file but it was very slow and used a huge amount of memory. So instead I moved to pandas.read_csv which is very fast but this caused its own problems. This is my current code:</p> <pre><code>import pandas import numpy as np from scipy import sparse data = pandas.read_csv("edges.txt", sep=" ", header= None, dtype=np.uint32) A = data.as_matrix() print type(A) k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) rows,cols=k3.reshape(A.shape).T M=sparse.coo_matrix((np.ones(rows.shape,int),(rows,cols))) print type(M) </code></pre> <p>The problem is that the pandas dataframe <code>data</code> is huge and I am effectively making a copy in A which is inefficient. 
However things are even worse as the code crashes with </p> <pre><code>&lt;type 'instancemethod'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 13, in &lt;module&gt; rows,cols=k3.reshape(A.shape).T AttributeError: 'function' object has no attribute 'shape' raph@raph-desktop:~/python$ python make-sparse-matrix.py &lt;type 'numpy.ndarray'&gt; Traceback (most recent call last): File "make-sparse-matrix.py", line 12, in &lt;module&gt; k1,k2,k3=np.unique(A,return_inverse=True,return_index=True) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py", line 209, in unique iflag = np.cumsum(flag) - 1 File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2115, in cumsum return cumsum(axis, dtype, out) MemoryError </code></pre> <p>So my questions are:</p> <ol> <li>Can I avoid having both the 1.2GB pandas dataframe and the 1.2GB numpy array copy in memory?</li> <li>Is there some way to get the code to complete in 8GB of RAM?</li> </ol> <p>You can reproduce a test input of the size I am trying to process with:</p> <pre><code>import random #Number of edges, vertices m = 62500000 n = m/2 for i in xrange(m): fromnode = str(random.randint(0, n-1)).zfill(9) tonode = str(random.randint(0, n-1)).zfill(9) print fromnode, tonode </code></pre> <p><strong>Update</strong></p> <p>I have now tried a number of different approaches, all of which have failed. Here is a summary.</p> <ol> <li>Using <a href="http://igraph.org/" rel="nofollow">igraph</a> with <code>g = Graph.Read_Ncol('edges.txt')</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>Using <a href="https://networkit.iti.kit.edu/" rel="nofollow">networkit</a> with <code>G= networkit.graphio.readGraph("edges.txt", networkit.Format.EdgeList, separator=" ", continuous=False)</code>. This uses a huge amount of RAM which crashes my computer.</li> <li>The code above in this question but using np.loadtxt("edges.txt") instead of pandas. 
This uses a huge amount of RAM which crashes my computer.</li> </ol> <p>I then wrote separate code which remapped all the vertex names to numbers from 1..|V| where |V| is the total number of vertices. This should save the code that imports the edge list from having to build up a table that maps the vertex names. Using this I tried:</p> <ol start="4"> <li>Using this new remapped edge list file I used igraph again with <code>g = Graph.Read_Edgelist("edges-contig.txt")</code>. This now works although it takes 4GB of RAM (which is way more than the theoretical amount it should). However, there is no igraph function to write out a sparse adjacency matrix from a graph. The recommended solution is to <a href="http://stackoverflow.com/a/28179024/2179021">convert the graph to a coo_matrix</a>. Unfortunately this uses a huge amount of RAM which crashes my computer.</li> <li>Using the remapped edge list file I used networkit with <code>G = networkit.readGraph("edges-contig.txt", networkit.Format.EdgeListSpaceOne)</code>. This also works using less than the 4GB that igraph needs. networkit also comes with a function to write Matlab files (which is a form of sparse adjacency matrix that scipy can read). However <code>networkit.graphio.writeMat(G,"test.mat")</code> uses a huge amount of RAM which crashes my computer.</li> </ol> <p>Finally sascha's answer below does complete but takes about 40 minutes.</p>
5
2016-07-31T20:18:19Z
38,780,599
<p>I tried different methods, apart from the ones already used, and found the following to perform well.</p> <p>Method 1 - Reading the file into a string, parsing the string into a 1-D array using numpy's fromstring.</p> <pre><code>import numpy as np import scipy.sparse as sparse def readEdges(): with open('edges.txt') as f: data = f.read() edges = np.fromstring(data, dtype=np.int32, sep=' ') edges = np.reshape(edges, (edges.shape[0]/2, 2)) ones = np.ones(len(edges), np.uint32) cooMatrix = sparse.coo_matrix((ones, (edges[:,0], edges[:,1]))) %timeit -n5 readEdges() </code></pre> <p>Output:</p> <pre><code>5 loops, best of 3: 13.6 s per loop </code></pre> <p>Method 2 - Same as method 1, except instead of loading the file into a string, using the memory-mapped interface.</p> <pre><code>import contextlib import mmap def readEdgesMmap(): with open('edges.txt') as f: with contextlib.closing(mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)) as m: edges = np.fromstring(m, dtype=np.int32, sep=' ') edges = np.reshape(edges, (edges.shape[0]/2, 2)) ones = np.ones(len(edges), np.uint32) cooMatrix = sparse.coo_matrix((ones, (edges[:,0], edges[:,1]))) %timeit -n5 readEdgesMmap() </code></pre> <p>Output:</p> <pre><code>5 loops, best of 3: 12.7 s per loop </code></pre> <p>Monitored using <code>/usr/bin/time</code>, both methods use a maximum of around ~2GB of memory.</p> <p>A few notes:</p> <ol> <li><p>It seems to do slightly better than pandas <code>read_csv</code>. Using pandas read_csv, the output on the same machine is</p> <p><code>5 loops, best of 3: 16.2 s per loop</code></p></li> <li><p>Conversion from COO to CSR/CSC consumes significant time too. In @GrantJ's answer, it takes less time because the COO matrix initialization is incorrect. The argument needs to be given as a tuple. I wanted to leave a comment there but I don't have commenting rights yet.</p></li> <li><p>My guess on why this is slightly better than pandas <code>read_csv</code> is the prior assumption of 1D data.</p></li> </ol>
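On a toy input, the <code>fromstring</code> parsing used above behaves like this. A small illustrative sketch, assuming the same whitespace-separated layout as the real file:

```python
import numpy as np

# Same parsing idea as readEdges(), applied to an in-memory toy input:
# sep=' ' treats any run of whitespace (including newlines) as a separator.
data = "0 1\n1 2\n2 0\n"
edges = np.fromstring(data, dtype=np.int32, sep=' ')
edges = edges.reshape(len(edges) // 2, 2)
assert edges.tolist() == [[0, 1], [1, 2], [2, 0]]
```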
3
2016-08-05T03:26:08Z
[ "python", "pandas", "numpy", "optimization", "scipy" ]
output a dataframe to a json array
38,688,072
<p>I was wondering if there was a more efficient way to do the following operation.</p> <pre><code># transforms datetime into timestamp in milliseconds t = df.index.values.astype(np.int64) // 10**6 return jsonify(np.c_[t, df.open, df.high, df.low, df.close, df.volume].tolist()) </code></pre> <p>where <code>df</code> is a dataframe containing an index that is a date, and at least (but not only) the following attributes: <code>open</code>, <code>high</code>, <code>low</code>, <code>close</code>, <code>volume</code>. I then output the newly created array as JSON with flask's <code>jsonify</code>. The code above works, but it looks pretty inefficient to me. Any ideas on how to make it nicer/more efficient?</p>
2
2016-07-31T20:20:07Z
38,688,136
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow">to_json()</a> method:</p> <pre><code>In [88]: import pandas_datareader.data as web In [89]: apl = web.get_data_yahoo('AAPL', '2016-07-05', '2016-07-07') In [90]: apl Out[90]: Open High Low Close Volume Adj Close Date 2016-07-05 95.389999 95.400002 94.459999 94.989998 27705200 94.989998 2016-07-06 94.599998 95.660004 94.370003 95.529999 30949100 95.529999 2016-07-07 95.699997 96.500000 95.620003 95.940002 25139600 95.940002 </code></pre> <p>I'll use <code>json.dumps(..., indent=2)</code> in order to make it nicer/readable: </p> <pre><code>In [91]: import json </code></pre> <p><strong>orient='index'</strong></p> <pre><code>In [98]: print(json.dumps(json.loads(apl.to_json(orient='index')), indent=2)) { "1467849600000": { "Close": 95.940002, "High": 96.5, "Open": 95.699997, "Adj Close": 95.940002, "Volume": 25139600, "Low": 95.620003 }, "1467676800000": { "Close": 94.989998, "High": 95.400002, "Open": 95.389999, "Adj Close": 94.989998, "Volume": 27705200, "Low": 94.459999 }, "1467763200000": { "Close": 95.529999, "High": 95.660004, "Open": 94.599998, "Adj Close": 95.529999, "Volume": 30949100, "Low": 94.370003 } } </code></pre> <p><strong>orient='records'</strong> (reset index in order to make column <code>Date</code> visible):</p> <pre><code>In [99]: print(json.dumps(json.loads(apl.reset_index().to_json(orient='records')), indent=2)) [ { "Close": 94.989998, "High": 95.400002, "Open": 95.389999, "Adj Close": 94.989998, "Volume": 27705200, "Date": 1467676800000, "Low": 94.459999 }, { "Close": 95.529999, "High": 95.660004, "Open": 94.599998, "Adj Close": 95.529999, "Volume": 30949100, "Date": 1467763200000, "Low": 94.370003 }, { "Close": 95.940002, "High": 96.5, "Open": 95.699997, "Adj Close": 95.940002, "Volume": 25139600, "Date": 1467849600000, "Low": 95.620003 } ] </code></pre> <p>you can make use of the following <code>to_json()</code> 
parameters:</p> <blockquote> <p><strong>date_format</strong> : {‘epoch’, ‘iso’} </p> <p>Type of date conversion. epoch = epoch milliseconds, iso = ISO8601; the default is epoch.</p> <p><strong>date_unit</strong> : string, default ‘ms’ (milliseconds)</p> <p>The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond, microsecond, and nanosecond respectively.</p> <p><strong>orient :</strong> string</p> <p>The format of the JSON string</p> <ul> <li>split : dict like {index -> [index], columns -> [columns], data -> [values]}</li> <li>records : list like [{column -> value}, ... , {column -> value}]</li> <li>index : dict like {index -> {column -> value}}</li> <li>columns : dict like {column -> {index -> value}}</li> <li>values : just the values array</li> </ul> </blockquote>
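For the array-of-rows shape the question asks for (timestamp first, then the columns), <code>orient='values'</code> after a <code>reset_index</code> is one option. A hedged sketch with made-up numbers; column order follows the frame, so reorder the columns first if it matters:

```python
import json
import pandas as pd

df = pd.DataFrame(
    {"open": [95.39, 94.60], "close": [94.99, 95.53]},  # made-up sample data
    index=pd.to_datetime(["2016-07-05", "2016-07-06"]),
)
out = df.reset_index()
out["index"] = out["index"].astype("int64") // 10**6  # ns -> epoch milliseconds
rows = json.loads(out.to_json(orient="values"))
assert rows[0][0] == 1467676800000  # 2016-07-05 in epoch ms, as in the output above
assert len(rows) == 2
```

The resulting JSON string can be returned directly from a Flask view, skipping the intermediate <code>np.c_</code> copy.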
5
2016-07-31T20:29:24Z
[ "python", "json", "pandas", "numpy", "dataframe" ]
Why is this invalid syntax? (Class function)
38,688,097
<p>The code I am trying to enter (python) looks like this:</p> <pre><code>class RainfallTable: def _init_ (self, njrainfall.txt): self.njrainfall.txt = open(njrainfall.txt, 'r') def close(self): if self.t: self.t.close() self.t = None </code></pre> <p>I am trying to get this class to simply open a file called njrainfall.txt, but I get the following error message when I do so:</p> <pre><code> File "3Homework.py", line 2 def _init_ (self, njrainfall.txt): ^ </code></pre> <p>What am I doing wrong?</p>
0
2016-07-31T20:24:41Z
38,688,121
<p>A function/constructor parameter must be a plain variable name; <code>njrainfall.txt</code> is not a valid identifier because of the dot.</p> <p>You can pass the text file name (ideally with an absolute path) to the init function as a string argument. Note that the constructor must also be spelled with double underscores, <code>__init__</code>:</p> <pre><code>class RainfallTable: def __init__(self, textfile): self.t = open(textfile, 'r') def close(self): if self.t: self.t.close() self.t = None </code></pre> <p>Call it as <code>RainfallTable('njrainfall.txt')</code></p>
0
2016-07-31T20:27:44Z
[ "python", "class" ]
Why is this invalid syntax? (Class function)
38,688,097
<p>The code I am trying to enter (python) looks like this:</p> <pre><code>class RainfallTable: def _init_ (self, njrainfall.txt): self.njrainfall.txt = open(njrainfall.txt, 'r') def close(self): if self.t: self.t.close() self.t = None </code></pre> <p>I am trying to get this class to simply open a file called njrainfall.txt, but I get the following error message when I do so:</p> <pre><code> File "3Homework.py", line 2 def _init_ (self, njrainfall.txt): ^ </code></pre> <p>What am I doing wrong?</p>
0
2016-07-31T20:24:41Z
38,688,127
<p>You've got a <code>.</code> in your variable names, which is invalid syntax for Python. Remove those in your <code>njrainfall.txt</code> variables. A good substitute would be <code>njrainfall_file</code> or something similar. Secondly, the init function is written with two leading and two trailing underscores, like so:</p> <pre><code>def __init__(self, njrainfall_file): </code></pre> <p>Here is some code:</p> <pre><code>class RainfallTable: def __init__(self, njrainfall_file): self.njrainfall_file = open(njrainfall_file, 'r') def close(self): if self.njrainfall_file: self.njrainfall_file.close() self.njrainfall_file = None </code></pre> <p>Make sure to pass <code>njrainfall_file</code> as a filename string!</p>
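Usage would then look something like this. A small hypothetical demo; the file name and contents are made up:

```python
class RainfallTable:
    def __init__(self, njrainfall_file):
        self.njrainfall_file = open(njrainfall_file, 'r')

    def close(self):
        if self.njrainfall_file:
            self.njrainfall_file.close()
            self.njrainfall_file = None

# Hypothetical demo: create a small file, then read it through the class.
with open('njrainfall_demo.txt', 'w') as f:
    f.write('Newark 3.2\n')

table = RainfallTable('njrainfall_demo.txt')
line = table.njrainfall_file.readline().strip()
table.close()
assert line == 'Newark 3.2'
assert table.njrainfall_file is None
```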
3
2016-07-31T20:28:32Z
[ "python", "class" ]
Why is this invalid syntax? (Class function)
38,688,097
<p>The code I am trying to enter (python) looks like this:</p> <pre><code>class RainfallTable: def _init_ (self, njrainfall.txt): self.njrainfall.txt = open(njrainfall.txt, 'r') def close(self): if self.t: self.t.close() self.t = None </code></pre> <p>I am trying to get this class to simply open a file called njrainfall.txt, but I get the following error message when I do so:</p> <pre><code> File "3Homework.py", line 2 def _init_ (self, njrainfall.txt): ^ </code></pre> <p>What am I doing wrong?</p>
0
2016-07-31T20:24:41Z
38,688,224
<p>It looks like you are confusing strings, parameter/variable names, and class objects.</p> <p>The dot you are using in what must be a variable name (<code>njrainfall.txt</code>) looks like either a string with an actual file name, or an object attribute.</p> <p>As other people have pointed out already, you must not use a dot in a variable / parameter name.</p> <p>You should instead use (I have added a try..except to handle the case in which the passed filename does not exist):</p> <pre><code>class RainfallTable: def __init__(self, sRainFallFilename): try: self.rainfallFile = open(sRainFallFilename, 'r') except IOError: self.rainfallFile = None print "file %s does not exist" % sRainFallFilename def close(self): if self.rainfallFile is not None: self.rainfallFile.close() self.rainfallFile = None </code></pre> <p>Notice I have also changed the close() method to use the same attribute as in __init__()</p> <p>Then you can instantiate the class like:</p> <pre><code>myRainFallTable = RainfallTable("NJ.txt") ... myRainFallTable.close() </code></pre>
0
2016-07-31T20:42:01Z
[ "python", "class" ]
Why is this invalid syntax? (Class function)
38,688,097
<p>The code I am trying to enter (python) looks like this:</p> <pre><code>class RainfallTable: def _init_ (self, njrainfall.txt): self.njrainfall.txt = open(njrainfall.txt, 'r') def close(self): if self.t: self.t.close() self.t = None </code></pre> <p>I am trying to get this class to simply open a file called njrainfall.txt, but I get the following error message when I do so:</p> <pre><code> File "3Homework.py", line 2 def _init_ (self, njrainfall.txt): ^ </code></pre> <p>What am I doing wrong?</p>
0
2016-07-31T20:24:41Z
38,690,150
<p>Remove the full stop in <pre>self.njrainfall.txt</pre> and rename the attribute to something without a dot.</p> <p>Like</p> <pre>self.njrainfall</pre>
0
2016-08-01T02:17:21Z
[ "python", "class" ]
Python: Elements in a list not being changed?
38,688,116
<p>I'm having issues with editing elements in a list in Python, I've got the following code: </p> <pre><code>mylist = ["one", "day", "soon"] for x in mylist: x = "New word" print(x) print(mylist) </code></pre> <p>I would expect the output to be "New Word" three times (once for each iteration) and then a list of ["New Word", "New Word", "New Word"], but instead I've been getting:</p> <p>New word <br/> New word <br/> New word<br/> ['one', 'day', 'soon']</p> <p><strong>Question:</strong> Why is the list apparently changing, but then reverting back to its original form when printed as a whole list?</p> <p>Note: I have fixed the issue with the following code, but I am still curious as to why the original method fails</p> <pre><code>mylist2 = ["a", "b", "c"] for x in mylist2 : mylist2[mylist2.index(x)] = "Woo" print(mylist2) </code></pre>
0
2016-07-31T20:26:34Z
38,688,123
<p>The problem is that when you iterate with <code>for x in mylist</code> and then assign a value to <code>x</code>, you are only rebinding the loop variable <code>x</code> to a new object, and that new object is what you are printing; the list element it previously referred to is left untouched. You cannot change the list that way; assign through an index instead, iterating like this:</p> <pre><code>for i in range(len(mylist)): mylist[i] = "New word" </code></pre>
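<p>As a further illustration (a minimal sketch, not from the answer above), <code>enumerate</code> gives you both the index and the value, which also avoids the <code>list.index</code> lookup used in the question's workaround (that lookup returns the first match, so it misbehaves with duplicate elements):</p>

```python
mylist = ["one", "day", "soon"]

# enumerate yields (index, value) pairs, so we can assign
# through the index while still reading each element
for i, x in enumerate(mylist):
    mylist[i] = "New word"

print(mylist)  # ['New word', 'New word', 'New word']
```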
2
2016-07-31T20:28:00Z
[ "python", "list", "python-3.x" ]
How to parse text from a html table element
38,688,137
<p>I'm currently writing a small test webscraper using the python requests and lxml libraries. I'm trying to extract the text from the rows of a table from <a href="https://en.wikipedia.org/wiki/Game_of_Thrones" rel="nofollow">this site</a> using xpaths to uniquely identify the table. Since the table itself can only be identified by its class name and given the fact that the class name isn't unique, I had to use the parent div element in order to order to specify the table. The table in question is that lists the dates of the season order, filming, and airdates for the show Game of thrones, which I'm trying to select with the following path:</p> <pre><code>tree.xpath('//div[@id = "mw-content-text"]//table[@class = "wikitable"]//text()') </code></pre> <p>For some reason, when I print this path in the shell, it returns an empty list. I believe that printing this path would simply display all of the text in the table which I was trying to do in order to ensure I could actually get the contents; however, I would actually need to print each row of the table.</p> <p><strong>Is there something wrong with this xpath? If so, what is the correct way to go about printing the table contents?</strong> </p>
2
2016-07-31T20:29:28Z
38,688,170
<p>The <code>wikitable</code> is too broad of a class to distinguish tables on a wiki page between one another.</p> <p>I would instead rely on the preceding <code>Adaptation schedule</code> label:</p> <pre><code>import requests from lxml.html import fromstring url = "https://en.wikipedia.org/wiki/Game_of_Thrones" response = requests.get(url) root = fromstring(response.content) table = root.xpath(".//h3[span = 'Adaptation schedule']/following-sibling::table")[0] for row in table.xpath(".//tr")[1:]: print([cell.text_content() for cell in row.xpath(".//td")]) </code></pre> <p>Prints:</p> <pre><code>['Season 1', 'March 2, 2010[52]', 'Second half of 2010', 'April 17, 2011', 'June 19, 2011', 'A Game of Thrones'] ['Season 2', 'April 19, 2011[53]', 'Second half of 2011', 'April 1, 2012', 'June 3, 2012', 'A Clash of Kings and some early chapters from A Storm of Swords[54]'] ['Season 3', 'April 10, 2012[55]', 'Second half of 2012', 'March 31, 2013', 'June 9, 2013', 'About the first two-thirds of A Storm of Swords[56][57]'] ['Season 4', 'April 2, 2013[58]', 'Second half of 2013', 'April 6, 2014', 'June 15, 2014', 'The remaining one-third of A Storm of Swords and some elements from A Feast for Crows and A Dance with Dragons[59]'] ['Season 5', 'April 8, 2014[60]', 'Second half of 2014', 'April 12, 2015', 'June 14, 2015', 'A Feast for Crows, A Dance with Dragons and original content,[61] with some late chapters from A Storm of Swords[62] and elements from The Winds of Winter[63][64]'] ['Season 6', 'April 8, 2014[60]', 'Second half of 2015', 'April 24, 2016', 'June 26, 2016', 'Original content and outlined from The Winds of Winter,[65][66] with some late elements from A Feast for Crows and A Dance with Dragons[67]'] ['Season 7', 'April 21, 2016[50]', 'Second half of 2016[49]', 'Mid-2017[5]', 'Mid-2017[5]', 'Original content and outlined from The Winds of Winter and A Dream of Spring[66]'] </code></pre>
2
2016-07-31T20:33:55Z
[ "python", "html", "xpath", "python-requests", "lxml" ]
[RESOLVED]Get all files from my C drive - Python
38,688,228
<p>Here is what I try to do: I would like to get a list of all files that are heavier than 35 MB in my C drive.</p> <p>Here is my code:</p> <pre><code>def getAllFileFromDirectory(directory, temp): files = os.listdir(directory) for file in files: if (os.path.isdir(file)): getAllFileFromDirectory(file, temp) elif (os.path.isfile(file) and os.path.getsize(file) &gt; 35000000): temp.write(os.path.abspath(file)) def getFilesOutOfTheLimit(): basePath = "C:/" tempFile = open('temp.txt', 'w') getAllFileFromDirectory(basePath, tempFile) tempFile.close() print("Get all files ... Done !") </code></pre> <p>For some reason, the interpreter doesn't go in the if-block inside 'getAllFileFromDirectory'.</p> <p>Can someone tell me what I'm doing wrong and why (learning is my aim). How to fix it ?</p> <p>Thanks a lot for your comments.</p>
0
2016-07-31T20:42:16Z
38,688,311
<p>I fixed your code. Your problem was that <code>os.path.isdir</code> can only know if something is a directory if it receives the full path of it. So, I changed the code to the following and it works. Same thing for <code>os.path.getsize</code> and <code>os.path.isfile</code>. Note that the full path is built with <code>os.path.join</code>, so the directory separator is handled correctly at every level of the recursion.</p> <pre><code>import os def getAllFileFromDirectory(directory, temp): files = os.listdir(directory) for file in files: path = os.path.join(directory, file) if (os.path.isdir(path)): if file[0] == '.': continue # i added this because i'm on a UNIX system print(path) getAllFileFromDirectory(path, temp) elif (os.path.isfile(path) and os.path.getsize(path) &gt; 35000000): temp.write(os.path.abspath(path)) def getFilesOutOfTheLimit(): basePath = "/" tempFile = open('temp.txt', 'w') getAllFileFromDirectory(basePath, tempFile) tempFile.close() print("Get all files ... Done !") getFilesOutOfTheLimit() </code></pre>
1
2016-07-31T20:52:18Z
[ "python", "windows", "python-2.7", "python-3.x" ]
How to return tuples and count from list using list comprehension
38,688,254
<p>I have a function which, using list comprehension, returns the list elements in caps, and counts each element. </p> <pre><code>def wordlengths(mywords): upperword = [word.upper() for word in mywords] lenword = [len(i) for i in mywords] return upperword, lenword print(wordlengths(["The", "quick", "brown", "fox"])) </code></pre> <p>This returns:</p> <pre><code>(['THE', 'QUICK', 'BROWN', 'FOX'], [3, 5, 5, 3]) </code></pre> <p>but i need it to return paired tuples like this:</p> <pre><code>[("THE", 3), ("QUICK", 5), ("BROWN", 5), ("FOX", 3)] </code></pre> <p>I tried to use the <code>zip()</code> method with no success. How do I go about doing this?</p>
0
2016-07-31T20:45:18Z
38,688,271
<p>You can use <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow"><code>zip()</code></a>:</p> <pre><code>def wordlengths(mywords): upperword = [word.upper() for word in mywords] lenword = [len(i) for i in mywords] return list(zip(upperword, lenword)) </code></pre> <p>But why not construct the list in a single iteration:</p> <pre><code>def wordlengths(mywords): return [(word.upper(), len(word)) for word in mywords] </code></pre>
4
2016-07-31T20:46:46Z
[ "python", "list", "python-3.x", "list-comprehension" ]
Assert Error is thrown when mocking python3
38,688,258
<p>I am trying to write a test that mocks raising <code>PermissionError</code> on a call to <code>open()</code> when attempting to open a file for reading. However I cannot seem to get the test working. The <code>PermissionError</code> appears to be thrown but my test fails because of this even though I am trying to assert it is thrown. </p> <p>Below contains one of my attempts:</p> <p><strong>fileMethods.py</strong></p> <pre><code>def readfile(myfile): with open(myfile, 'r') as file: filecontent = file.read() file.close() return filecontent </code></pre> <p><strong>fileMethods_test.py</strong></p> <pre><code>def test_readfile_throws_PermissionError(self): with mock.patch('fileMethods.open') as openMock: openMock.side_effect = PermissionError self.assertRaises(PermissionError, fileMethods.readfile('file_to_readin')) </code></pre> <p>Am I missing something obvious or is the way I am testing this method incorrect?</p>
0
2016-07-31T20:45:45Z
38,688,291
<p>The reason this does not work is that you are mocking <code>fileMethods.open</code>, and so the real <code>open</code> function (which would raise the <code>PermissionError</code> you're looking for) is not even called.</p> <p>If the <code>open</code> function is mocked (and therefore the called function will do nothing), you can't assert that the exception normally raised by that function is raised.</p> <p>The way it works is that when you mock a function it will, by default, not do anything. Look into the documentation <a href="https://docs.python.org/3/library/unittest.mock.html#patch" rel="nofollow">here</a> which explains this in further depth.</p>
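<p>A related pitfall worth noting (a self-contained sketch, not the question's exact module layout: here <code>builtins.open</code> is patched and <code>readfile</code> is defined inline) is that <code>assertRaises</code> must receive the callable and its arguments separately; writing <code>self.assertRaises(PermissionError, fileMethods.readfile('file_to_readin'))</code> calls the function before <code>assertRaises</code> can catch anything:</p>

```python
import unittest
from unittest import mock

def readfile(myfile):
    # stand-in for fileMethods.readfile
    with open(myfile, 'r') as f:
        return f.read()

class ReadFileTest(unittest.TestCase):
    def test_readfile_raises_permission_error(self):
        # make every call to open() raise PermissionError
        with mock.patch('builtins.open', side_effect=PermissionError):
            # pass the callable and its argument separately;
            # assertRaises then invokes it and catches the error
            self.assertRaises(PermissionError, readfile, 'file_to_readin')
```

<p>Run it with <code>python -m unittest</code> as usual.</p>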
0
2016-07-31T20:49:12Z
[ "python", "unit-testing", "mocking" ]
Python string not printing properly in PowerShell
38,688,328
<p>I'm having difficulty parsing data with a lot of scientific and international symbols using Python 2.7 so I wrote a toy program that illustrates what is not making sense to me: </p> <pre><code>#!/usr/bin/python # coding=utf-8 str ="35 μg/m3" str = str.decode('utf-8') str = str.encode('utf-8') #ready for printing? print(str) </code></pre> <p>And instead of printing out the original content, I get something different: </p> <p><a href="http://i.stack.imgur.com/1YGh9.png" rel="nofollow"><img src="http://i.stack.imgur.com/1YGh9.png" alt="screen copy"></a></p>
0
2016-07-31T20:54:28Z
38,688,754
<p>The line <code># coding=utf-8</code> only matters for writing Unicode literals and has no effect on plain byte strings. Anyway, assuming that your Python file is UTF-8 encoded, the line <code>str = str.decode('utf-8')</code> gives you a correct unicode string.</p> <p>But as said by Ansgar Wiechers, since you declare your encoding, the simpler way would be to directly use a unicode literal:</p> <pre><code>str = u"35 μg/m3" </code></pre> <p>Put simply, the Windows console has poor support for UTF-8. Common encodings are win1252 (a latin1 variant) or cp850 (a native OEM code page). Unless you want to deal explicitly with the encoding, your best bet is to directly display the <em>unicode</em> string:</p> <pre><code>#!/usr/bin/python # coding=utf-8 str ="35 μg/m3" str = str.decode('utf-8') # str is now a unicode string print(str) </code></pre> <hr> <p>If you want to explicitly use latin1, and provided you use a TrueType font such as Lucida Console or Consolas, you can do:</p> <pre><code>chcp 1252 python .\encoding.py </code></pre> <p>with</p> <pre><code>#!/usr/bin/python # coding=utf-8 str ="35 μg/m3" str = str.decode('utf-8') # str is now a unicode string str = str.encode('latin1') # str is now a latin1-encoded byte string print(str) </code></pre>
0
2016-07-31T22:02:58Z
[ "python", "powershell", "unicode", "encoding" ]
Python string not printing properly in PowerShell
38,688,328
<p>I'm having difficulty parsing data with a lot of scientific and international symbols using Python 2.7 so I wrote a toy program that illustrates what is not making sense to me: </p> <pre><code>#!/usr/bin/python # coding=utf-8 str ="35 μg/m3" str = str.decode('utf-8') str = str.encode('utf-8') #ready for printing? print(str) </code></pre> <p>And instead of printing out the original content, I get something different: </p> <p><a href="http://i.stack.imgur.com/1YGh9.png" rel="nofollow"><img src="http://i.stack.imgur.com/1YGh9.png" alt="screen copy"></a></p>
0
2016-07-31T20:54:28Z
38,688,846
<p>Python 2.7 doesn't use Unicode strings by default, so you basically have 2 options:</p> <ul> <li><p>Define the string as a Unicode string literal (<code>u"..."</code>):</p> <pre><code># coding=utf-8 str = u"35 µg/m3" print(str) </code></pre> <p>This way you can simply use the string as one would expect, so I'd prefer this approach.</p></li> <li><p>Define the string as a regular string literal and decode it:</p> <pre><code># coding=utf-8 str = "35 \xc2\xb5g/m3" print(str.decode('utf-8')) </code></pre> <p>If you use this approach you need to put special characters as their hexadecimal values (<code>µ</code> in UTF-8 is the character sequence 0xC2,0xB5) even if the file is saved as UTF-8.</p></li> </ul> <p>Demonstration:</p> <pre>PS C:\> <b>$PSVersionTable.PSVersion.ToString()</b> 4.0 PS C:\> <b>C:\Python27\python.exe -V</b> Python 2.7.11 PS C:\> <b>Get-Content .\test.py -Encoding UTF8</b> # coding=utf-8 str1 = "35 \xc2\xb5g/m3" print(str1) print(str1.decode('utf-8')) str2 = u"35 µg/m3" print(str2) PS C:\> <b>C:\Python27\python.exe .\test.py</b> 35 ┬Ág/m3 35 µg/m3 35 µg/m3</pre>
0
2016-07-31T22:20:14Z
[ "python", "powershell", "unicode", "encoding" ]
Python string not printing properly in PowerShell
38,688,328
<p>I'm having difficulty parsing data with a lot of scientific and international symbols using Python 2.7 so I wrote a toy program that illustrates what is not making sense to me: </p> <pre><code>#!/usr/bin/python # coding=utf-8 str ="35 μg/m3" str = str.decode('utf-8') str = str.encode('utf-8') #ready for printing? print(str) </code></pre> <p>And instead of printing out the original content, I get something different: </p> <p><a href="http://i.stack.imgur.com/1YGh9.png" rel="nofollow"><img src="http://i.stack.imgur.com/1YGh9.png" alt="screen copy"></a></p>
0
2016-07-31T20:54:28Z
38,702,903
<p>Your decoding/encoding has no effect:</p> <pre><code># coding=utf-8 s1 = "35 μg/m3" s2 = s1.decode('utf-8') s3 = s2.encode('utf-8') #ready for printing? print s1==s3 </code></pre> <p>If your source is UTF-8 as declared, then <code>s1</code> is a byte string that is UTF-8-encoded already. Decoding it to a Unicode string (<code>s2</code>) and re-encoding it as UTF-8 just gives you the original byte string.</p> <p>Next, the Windows console does not default to UTF-8, so printing those bytes will interpret them in the console encoding, which on my system is:</p> <pre><code>import sys print sys.stdout.encoding print s3 </code></pre> <p>Output:</p> <pre><code>cp437 35 ┬╡g/m3 </code></pre> <p>The correct way to print Unicode strings and have them interpreted correctly is to actually print Unicode strings. They will be encoded to the console encoding by Python and display correctly (assuming the console font and encoding support the characters).</p> <pre><code># coding=utf-8 s = u"35 µg/m3" print s </code></pre> <p>Output:</p> <pre><code>35 µg/m3 </code></pre>
0
2016-08-01T15:40:11Z
[ "python", "powershell", "unicode", "encoding" ]
Add a delay after 500 requests scrapy
38,688,347
<p>I have a list of start 2000 urls and I'm using:</p> <pre><code>DOWNLOAD_DELAY = 0.25 </code></pre> <p>For controlling the speed of the requests, But I also want to add a bigger delay after n requests. For example, I want a delay of 0.25 seconds for each request and a delay of 100 seconds each 500 requests.</p> <p>Edit:</p> <p>Sample code:</p> <pre><code>import os from os.path import join import scrapy import time date = time.strftime("%d/%m/%Y").replace('/','_') list_of_pages = {'http://www.lapatilla.com/site/':'la_patilla', 'http://runrun.es/':'runrunes', 'http://www.noticierodigital.com/':'noticiero_digital', 'http://www.eluniversal.com/':'el_universal', 'http://www.el-nacional.com/':'el_nacional', 'http://globovision.com/':'globovision', 'http://www.talcualdigital.com/':'talcualdigital', 'http://www.maduradas.com/':'maduradas', 'http://laiguana.tv/':'laiguana', 'http://www.aporrea.org/':'aporrea'} root_dir = os.getcwd() output_dir = join(root_dir,'data/',date) class TestSpider(scrapy.Spider): name = "news_spider" download_delay = 1 start_urls = list_of_pages.keys() def parse(self, response): if not os.path.exists(output_dir): os.makedirs(output_dir) filename = list_of_pages[response.url] print time.time() with open(join(output_dir,filename), 'wb') as f: f.write(response.body) </code></pre> <p>The list, in this case, is shorter yet the idea is the same. I want to have to levels of delays one for each request and one each 'N' requests. I'm not crawling the links, just saving the main page.</p>
1
2016-07-31T20:57:30Z
38,688,378
<p>You can look into using an <a href="http://doc.scrapy.org/en/latest/topics/autothrottle.html" rel="nofollow">AutoThrottle extension</a> which does not give you a tight control of the delays but instead has its own algorithm of slowing down the spider adjusting it on the fly depending on the response time and number of concurrent requests.</p> <p>If you need more control over the delays at certain stages of the scraping process, you might need a <a href="http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#writing-your-own-downloader-middleware" rel="nofollow"><em>custom middleware</em></a> or a custom extension (similar to AutoThrottle - <a href="https://github.com/scrapy/scrapy/blob/ebef6d7c6dd8922210db8a4a44f48fe27ee0cd16/scrapy/extensions/throttle.py" rel="nofollow">source</a>). </p> <p>You can also change the <a href="http://doc.scrapy.org/en/latest/topics/settings.html#download-delay" rel="nofollow"><code>.download_delay</code> attribute of your spider</a> on the fly. By the way, this is exactly what AutoThrottle extension does under-the-hood - it <a href="https://github.com/scrapy/scrapy/blob/ebef6d7c6dd8922210db8a4a44f48fe27ee0cd16/scrapy/extensions/throttle.py#L28" rel="nofollow">updates the <code>.download_delay</code> value on the fly</a>.</p> <p>Some related topics:</p> <ul> <li><a href="https://github.com/scrapy/scrapy/issues/802" rel="nofollow">Per request delay</a></li> <li><a href="https://github.com/scrapy/scrapy/pull/254" rel="nofollow">Request delay configurable for each Request</a></li> </ul>
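<p>The bookkeeping for the "pause every N requests" part is small; here is a framework-independent sketch (a hypothetical helper class, not a Scrapy API) whose return value you could assign to <code>spider.download_delay</code> from a custom middleware or extension:</p>

```python
class DelaySchedule:
    """Track the request count and decide the delay before the next request."""

    def __init__(self, base_delay=0.25, pause_delay=100.0, every_n=500):
        self.base_delay = base_delay    # short delay between every request
        self.pause_delay = pause_delay  # long pause after every `every_n` requests
        self.every_n = every_n
        self.count = 0

    def next_delay(self):
        self.count += 1
        # every `every_n`-th request gets the long pause instead
        if self.count % self.every_n == 0:
            return self.pause_delay
        return self.base_delay
```

<p>For example, a middleware's <code>process_request</code> could do <code>spider.download_delay = schedule.next_delay()</code> (hypothetical wiring; as noted above, AutoThrottle itself adjusts the delay value on the fly in a similar fashion).</p>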
1
2016-07-31T21:03:43Z
[ "python", "web-scraping", "scrapy" ]
How Can I check if my rectangle(coin) has collided with my chest?
38,688,409
<p>I am currently making a game in pygame.What I am trying to do is check if my rectangle(a yellow coin blitted on it) collides with my chest image.If it does what the program should do is reset the yellow coin back to its initial position of y axis( showed later in here). </p> <p>This is my function to create rectangle and then blit yellow coin's image on it:</p> <pre><code>#Calling Yellow Coin def thing_yellow(thingx,thingy,thingw,thingh,color): pygame.draw.rect(gameDisplay,color,[thingx,thingy,thingw,thingh]) gameDisplay.blit(yellow_coin,(thingx,thingy,thingw,thingh)) </code></pre> <p>This is the function of my chest(treasure chest xD):</p> <pre><code>def chest(x,y): gameDisplay.blit(chestimg,(x,y)) global x, y, x_change chestimg=pygame.image.load('chest.jpg').convert_alpha() </code></pre> <p>My initial size of the yellow coin (in the game loop):</p> <pre><code> #Yellow Coin yellow_startx = random.randrange(-5,display_width) yellow_starty = -600 yellow_speed = 0.5 yellow_width = 46 yellow_height = 54 </code></pre> <p>Now here is the main function.What I basically want to do in this function is to check if the coin is inside the display screen or not.If it is not , I want it back to initial position that is at <code>yellow_starty = -600</code>.Another thing what I am trying to do is to check if it has collided with the top of my chest , so that I can add one point (+1) in the score. 
This is the code for it:</p> <pre><code>if yellow_starty &gt;display_height or yellow_height==chest_height: yellow_starty = 0 - yellow_height yellow_startx= random.randrange(0,display_width) print "Collected" </code></pre> <p>What it is doing is only checks if the coin has gone beyond the screen.I want it to check if it has collided with the chest and only then add a score and NOT if it crosses the screen.</p> <p><strong>Edit:</strong><br> I have tried the following code but it gives me a endless loop of "Collected" messages.Though , that's not what I want.</p> <pre><code>#Check if yellow coin is in the screen if yellow_starty &gt;display_height or yellow_rect.colliderect(chest_rect): yellow_starty = 0 - yellow_height yellow_startx= random.randrange(0,display_width) print "Collected" </code></pre> <p><strong>Edit 2:</strong><br> What I have tried:</p> <pre><code>yellow_coin=pygame.image.load('yellow.png').convert_alpha() rect=yellow_coin.get_rect() </code></pre> <p>The rect gets a rectangle around the coin. Inside my while loop,</p> <pre><code>pygame.draw.rect(gameDisplay, black, rect) </code></pre> <p>As Mr.Python suggested. And</p> <pre><code>if rect.colliderect(chest_rect): print "Works" print "Hi" </code></pre> <p>To check the collision. But what happens is firstly I have a black rectangle on the upper right of my screen.Second, it continuously print "Works" and "Hi" even if it dint collide with the chest.</p> <p>Thanks in advance guys!</p>
0
2016-07-31T21:08:59Z
38,689,115
<p>What you probably want is <code>pygame.Rect.colliderect()</code>. It returns <code>True</code> if two rects collide. You first want to put your coin and your treasure chest dimensions in a <code>pygame.Rect</code> object. I'll use your coin rect as an example: <code>coin_rect = pygame.Rect(thingx,thingy,thingw,thingh)</code>. Then simply call <code>pygame.draw.rect(surface, color, coin_rect)</code> to draw it. (<strong>Note</strong>: make sure to declare <code>coin_rect</code> <strong>outside</strong> any function so that you can use it globally.)</p> <p>Next, down in your while loop (I assume you <strong>do</strong> have a while loop displaying your window), under the if statement where you check if the user wants to close the window, I'd write this:</p> <pre><code>if coin_rect.colliderect(tresure_chest): # run the code you want to happen if a coin # hits the tresure chest </code></pre> <p>This should run your code for when your two rect objects (coin and treasure chest) collide.</p> <p>~Mr.Python</p>
0
2016-07-31T23:07:18Z
[ "python", "image", "python-2.7", "pygame", "rect" ]
AttributeError for function called in another function within same class - Python
38,688,414
<p>When I run the following code I get an AttributeError: 'set' object has no attribute findMean. What am I doing wrong?</p> <pre><code>class BasicStats: def findMean(self, num = {}): length = len(num) sum = 0 for x in num: sum = sum + x mean =sum/length return mean def findVariance(self, num = {}): mean = self.findMean(num) length = len(num) squared_difference = 0 for x in num: squared_difference = squared_difference + (x-mean)**2 variance = squared_difference/length return variance arr = {1, 23, 343.34, 2} findVariance(arr) </code></pre>
1
2016-07-31T21:09:49Z
38,688,436
<p>It's because <code>self</code> in that scope is a <code>set</code>. More specifically, it is <code>arr</code> (which is a set and you pass in as the first argument).</p> <p>The <code>self</code> keyword only works for functions called against an instance of a class (this special type of functions are called methods, read more <a href="http://stackoverflow.com/questions/14086830/python-calling-method-in-class">here</a>.)</p>
3
2016-07-31T21:11:44Z
[ "python" ]
django use models variable in template
38,688,432
<p>I would like to use the values in a choices field in my template. Suppose I had the class:</p> <pre><code>MY_CHOICES = ( ('A1', 'The best steak sauce'), ('B2', 'Very stealthy'), ('C3', 'Missing a P0')) class MyClass(models.Model): my_field = models.CharField(max_length = 2, choices = MY_CHOICES) </code></pre> <p>and my form is:</p> <pre><code>&lt;form method="post"&gt; {% csrf_token %} &lt;select&gt; {% for m in models.MY_CHOICES %} &lt;option&gt;m&lt;/option&gt; {% endfor %} &lt;/select&gt; &lt;/form&gt; </code></pre> <p>What I have here returns an empty select (i.e. one with no options).</p> <p>I looked at <a href="http://stackoverflow.com/questions/20685155/django-use-model-choices-in-modelform">this</a> but couldn't really understand what was going on. Any help would be appreciated, thanks!</p>
2
2016-07-31T21:11:23Z
38,688,646
<p><strong>EDIT:</strong> Added a new solution, since the first only covered a line the author Woody1193 had already implemented but forgot to mention.</p> <p><strong>NEW:</strong></p> <p>I have actually had a similar problem and solved it by creating a custom POST function in javascript.</p> <p>Copied from <a href="https://docs.djangoproject.com/en/1.9/ref/csrf/#ajax" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/csrf/#ajax</a> you can get the cookie with this snippet of code:</p> <pre><code>// using jQuery function getCookie(name) { var cookieValue = null; if (document.cookie &amp;&amp; document.cookie !== '') { var cookies = document.cookie.split(';'); for (var i = 0; i &lt; cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) === (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } var csrftoken = getCookie('csrftoken'); </code></pre> <p>Having read a solution regarding hidden input forms, I was led to an analogous approach for tackling the select problem: <a href="http://stackoverflow.com/questions/133925/javascript-post-request-like-a-form-submit">JavaScript post request like a form submit</a></p> <p>Here comes the customized part. I basically mixed two existing solutions that worked for me (it is essentially the second solution in the link above):</p> <pre><code>// Post to the provided URL with the specified parameters.
function post(path, parameters) { var form = $('&lt;form&gt;&lt;/form&gt;'); form.attr("method", "post"); form.attr("action", path); var csrf_field = $('&lt;input&gt;&lt;/input&gt;'); csrf_field.attr("type", "hidden"); csrf_field.attr("name", "csrfmiddlewaretoken"); csrf_field.attr("value", getCookie('csrftoken')); form.append(csrf_field); $.each(parameters, function(key, value) { var field = $('&lt;input&gt;&lt;/input&gt;'); field.attr("type", "hidden"); field.attr("name", key); field.attr("value", value); form.append(field); }); // The form needs to be a part of the document in // order for us to be able to submit it. $(document.body).append(form); form.submit(); } </code></pre> <p><strong>OLD:</strong></p> <p>I solved it by using the csrf_token in the beginning of the form.</p> <p>Your code would look like:</p> <pre><code>&lt;form method="post"&gt; {% csrf_token %} &lt;!-- all the good stuff --&gt; &lt;/form&gt; </code></pre> <p>You can find a better formulated description on how to (hopefully) solve your problem here: <a href="https://docs.djangoproject.com/en/1.9/ref/csrf/" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/csrf/</a></p>
-1
2016-07-31T21:44:27Z
[ "python", "django" ]
django use models variable in template
38,688,432
<p>I would like to use the values in a choices field in my template. Suppose I had the class:</p> <pre><code>MY_CHOICES = ( ('A1', 'The best steak sauce'), ('B2', 'Very stealthy'), ('C3', 'Missing a P0')) class MyClass(models.Model): my_field = models.CharField(max_length = 2, choices = MY_CHOICES) </code></pre> <p>and my form is:</p> <pre><code>&lt;form method="post"&gt; {% csrf_token %} &lt;select&gt; {% for m in models.MY_CHOICES %} &lt;option&gt;m&lt;/option&gt; {% endfor %} &lt;/select&gt; &lt;/form&gt; </code></pre> <p>What I have here returns an empty select (i.e. one with no options).</p> <p>I looked at <a href="http://stackoverflow.com/questions/20685155/django-use-model-choices-in-modelform">this</a> but couldn't really understand what was going on. Any help would be appreciated, thanks!</p>
2
2016-07-31T21:11:23Z
38,694,920
<p>The simplest solution to this problem is: create a file forms.py in the same directory where models.py is located, and write this code into the file:</p> <p><strong>forms.py</strong></p> <pre><code>from models import MyClass from django import forms class MyForm(forms.ModelForm): class Meta: model = MyClass fields = '__all__' </code></pre> <p>After this, pass your form as context in your respective view, like this:</p> <p><strong>views.py</strong></p> <pre><code>from forms import MyForm # import that created class in previous step from django.core.context_processors import csrf def my_view(request): context = {} context.update(csrf(request)) context["form"] = MyForm() return render(request, "form.html", context) </code></pre> <p>And at the end, use template tags to get the desired output, e.g.</p> <p><strong>form.html</strong></p> <pre><code>&lt;form method="post"&gt; {% csrf_token %} &lt;select&gt; {{ form.as_p }} &lt;/select&gt; &lt;/form&gt; </code></pre> <p>Hope this helps you.</p>
1
2016-08-01T09:09:15Z
[ "python", "django" ]
Adding a text input dialogue box in psychopy using python?
38,688,451
<p>I am using psychopy and python to program a simple psychology experiment. Basically, a foreign word appears on the screen for 8 seconds, followed by 5 seconds of a translation of that word. During the 8 second exposure to the foreign word, participants are instructed to type in a guess as to what the translation might be. When they start typing, their text appears underneath the foreign word that is being displayed on the screen.</p> <p>Here is my question; how can include a dialogue, input text box in my experiment underneath the foreign word where they type and their letters appear, rather than just appearing beneath the word with no border or boundary?</p>
0
2016-07-31T21:13:48Z
38,722,662
<p>I made something similar to what you are describing a while back. Perhaps this will help. You first have some TextStim, and some predefined keys:</p> <pre><code>instruction = psychopy.visual.TextStim(myWindow,color="white") quitKeys = ['escape', 'esc'] ansKeys = ['space', 'return'] keyboardKeys = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] answer = '' </code></pre> <p>And then you have a loop, where inside of it you have something like this (I'm guessing you would also have something relating to the foreign word you are showing)</p> <pre><code>#Loop Starts Here&gt; instruction.setText(u':{0}'.format(answer)) instruction.draw() myWindow.flip() # get some keys. for letter in (keyboardKeys): if psychopy.event.getKeys([letter]): answer += letter if psychopy.event.getKeys(['backspace']): answer = answer[:-1] if psychopy.event.getKeys([quitKeys[0]]): psychopy.core.quit() if psychopy.event.getKeys([ansKeys[1]]): # enter is pressed # and they have given their answer, So some code to check their answer </code></pre> <p>Hope this helps</p>
1
2016-08-02T14:02:53Z
[ "python", "psychopy" ]
Dictionary Sorting based on lower dictionary value
38,688,455
<p>I am working with an API that returns values in this format:</p> <pre><code>raw = [{'Name': 'Erica','Value':12},{'Name':'Sam','Value':8},{'Name':'Joe','Value':60}] </code></pre> <p>I am trying to return the 'name' of the people with the top 2 'values':</p> <pre><code>result = ['Joe','Erica'] </code></pre> <p>What is the most efficient way to complete this?</p>
-6
2016-07-31T21:14:22Z
38,688,474
<pre><code>raw = [{'Name': 'Erica','Value':12},{'Name':'Sam','Value':8},{'Name':'Joe','Value':60}] raw.sort(key=lambda d: d['Value'], reverse=True) result = [] for i in range(2): result.append(raw[i]['Name']) print(result) # =&gt; ['Joe', 'Erica'] </code></pre> <p>Try this.</p>
-1
2016-07-31T21:17:08Z
[ "python", "dictionary" ]
Converting and writing list of strings as binary in Python 3
38,688,488
<p>I'm trying to convert a Python 2.x version of this code:</p> <pre><code>out_chunk = open('out.txt','w+b') chunks.append(out_chunk) # chunk is just a list of strings like ['a', 'b', ...] out_chunk.writelines(chunk) </code></pre> <p>into a Python 3.x version. If I run the above code in Python 3.x directly, I get an error like below, which is expected:</p> <pre><code>Traceback (most recent call last): File "C:/Users/Desktop/es/prog.py", line 145, in &lt;module&gt; ob.external_sort() File "C:/Users/Desktop/es/prog.py", line 70, in my_func out_chunk.writelines(chunk) TypeError: a bytes-like object is required, not 'str' </code></pre> <p>Is there a way to write a list of strings as bytes in Python 3.x? Or should I just write it as a list of strings (and take the performance hit, maybe?)</p>
0
2016-07-31T21:18:43Z
38,688,521
<p>Just don't open the file in binary mode:</p> <pre><code>out_chunk = open('out.txt','w+') </code></pre> <p>Hope it helps!</p>
1
2016-07-31T21:23:31Z
[ "python", "python-2.7", "python-3.x", "io" ]
Converting and writing list of strings as binary in Python 3
38,688,488
<p>I'm trying to convert a Python 2.x version of this code:</p> <pre><code>out_chunk = open('out.txt','w+b') chunks.append(out_chunk) # chunk is just a list of strings like ['a', 'b', ...] out_chunk.writelines(chunk) </code></pre> <p>into a Python 3.x version. If I run the above code in Python 3.x directly, I get an error like below, which is expected:</p> <pre><code>Traceback (most recent call last): File "C:/Users/Desktop/es/prog.py", line 145, in &lt;module&gt; ob.external_sort() File "C:/Users/Desktop/es/prog.py", line 70, in my_func out_chunk.writelines(chunk) TypeError: a bytes-like object is required, not 'str' </code></pre> <p>Is there a way to write a list of strings as bytes in Python 3.x? Or should I just write it as a list of strings (and take the performance hit, maybe?)</p>
0
2016-07-31T21:18:43Z
38,688,648
<p>You opened the file in <em>binary</em> mode, so you'd have to encode your bytes.</p> <p>If you drop the <code>'b'</code> part from the file mode (so open with <code>'w+'</code> rather than <code>'w+b'</code>), you get an implementation of the <a href="https://docs.python.org/3/library/io.html#io.TextIOBase" rel="nofollow"><code>TextIOBase</code> interface</a> instead, which will encode strings for you given an encoding (the default is to use the result of <code>locale.getdefaultencoding()</code>, you probably want to supply an explicit <code>encoding</code> argument to the <code>open()</code> call instead).</p> <p>The alternative would be for you to encode your strings manually, using the <a href="https://docs.python.org/3/library/stdtypes.html#str.encode" rel="nofollow"><code>str.encode()</code> method</a> on each chunk. Leaving encoding to the <code>TextIOBase</code> implementation is going to be a little faster however, because the I/O layer can encode without having to look up a method object on each <code>str</code> chunk, nor do the resulting bytes have to be boxed in a Python <code>bytes</code> object again.</p> <p>Also, for encodings that require a <a href="https://en.wikipedia.org/wiki/Byte_order_mark" rel="nofollow">byte order mark</a>, it is best to leave writing that marker to the file implementation.</p> <p>However, if you could produce <em><code>bytes</code> objects</em> in the first place, you'd avoid having to encode at all.</p>
3
2016-07-31T21:44:32Z
[ "python", "python-2.7", "python-3.x", "io" ]
Dataframe manipulation and aggregation
38,688,502
<p>I have the following dataframe </p> <pre><code> City Status q1 q2 Record 0 Austin Standard N Y Active 1 Dallas Standard N y Active 2 Orlando Standard N N Active 3 Orlando Ex Y Y Inactive 4 Orlando Standard N N Active </code></pre> <p>I'm trying to manipulate it to look like this:</p> <pre><code> Count % All Cities 5 100.0% Active 4 80% Ex 1 20% Standard 4 80% Q1 = Y 1 20% Q2 = Y 2 40% Inactive 1 20% </code></pre> <p>I have resorted to a large piece of code that calculates each percent by breaking each df column into its component statuses (for example, a column for q1yes, a column for q1no, etc) and then fills a dataframe recursively but I feel like I must be missing something.</p> <p>I will also need to break it down by city but I'd like to figure that part out before asking for more help</p>
-2
2016-07-31T21:20:13Z
38,688,579
<p>you can do it this way:</p> <pre><code>In [159]: df.q1 = 'Q1 = ' + df.q1.str.upper() In [160]: df.q2 = 'Q2 = ' + df.q2.str.upper() In [161]: df Out[161]: City Status q1 q2 Record 0 Austin Standard Q1 = N Q2 = Y Active 1 Dallas Standard Q1 = N Q2 = Y Active 2 Orlando Standard Q1 = N Q2 = N Active 3 Orlando Ex Q1 = Y Q2 = Y Inactive 4 Orlando Standard Q1 = N Q2 = N Active In [173]: r = (df.drop('City',1) .....: .apply(lambda x: x.value_counts()) .....: .apply(lambda x: x[x.first_valid_index()], axis=1) .....: .to_frame('Count') .....: .astype(np.int16) .....: ) In [174]: r['pct'] = (r.Count / len(df) * 100).astype(str) + '%' In [175]: r.loc['All Cities'] = [len(df), '100.0%'] In [176]: r Out[176]: Count pct Active 4 80.0% Ex 1 20.0% Inactive 1 20.0% Q1 = N 4 80.0% Q1 = Y 1 20.0% Q2 = N 2 40.0% Q2 = Y 3 60.0% Standard 4 80.0% All Cities 5 100.0% </code></pre> <p>and finally:</p> <pre><code>In [178]: r[~r.index.str.contains('= N')] Out[178]: Count pct Active 4 80.0% Ex 1 20.0% Inactive 1 20.0% Q1 = Y 1 20.0% Q2 = Y 3 60.0% Standard 4 80.0% All Cities 5 100.0% </code></pre>
1
2016-07-31T21:31:31Z
[ "python", "pandas" ]
Notepad++ or python - Compare two lists and change the original changed
38,688,542
<p>I have this original list.</p> <pre><code>John:password123 Daved:Password123 Steve:Password123 Michael:Password123 </code></pre> <hr> <p>The second list is a random of original list and with changed passwords.</p> <pre><code>Michael:p241d111 John:fcvbfdg122 Steve:pdPo134! </code></pre> <p>What I want is to change the original list with new password but same postion.</p> <p>For example:</p> <pre><code>John:fcvbfdg122 Daved:Password123 Steve:pdPo134! Michael:p241d111 </code></pre> <p>How to do that in notepad++ or in python!</p>
-1
2016-07-31T21:26:40Z
38,688,654
<p><strong>test.txt</strong></p> <pre><code>John:password123 Daved:Password123 Steve:Password123 Michael:Password123 </code></pre> <p><strong>test2.txt</strong></p> <pre><code>Michael:p241d111 John:fcvbfdg122 Steve:pdPo134! </code></pre> <p><strong>script.py</strong></p> <pre><code>lst = {} with open("test.txt") as f: for line in f: split = line.strip().split(":") lst[split[0]] = split[1] lst2 = {} with open("test2.txt") as f: for line in f: split = line.strip().split(":") lst2[split[0]] = split[1] final_lst = {} for item in lst: final_lst[item] = lst2.get(item, None) or lst[item] print(final_lst) </code></pre> <p>Here is a simple solution written with Python (note the <code>strip()</code> calls, which drop each line's trailing newline so the stored passwords stay clean).</p> <pre><code>~/temp ❯❯❯ python3 test2.py ⏎ {'Michael': 'p241d111', 'Steve': 'pdPo134!', 'John': 'fcvbfdg122', 'Daved': 'Password123'} </code></pre>
0
2016-07-31T21:46:22Z
[ "python", "regex", "list", "compare", "notepad++" ]
What do the keywords in the eventful api map to?
38,688,666
<p>I have been trying to setup the eventful api using python. The following example is all the documentation I could find on the it:</p> <pre><code>import eventful api = eventful.API('your API key here') # If you need to log in: # api.login('username', 'password') events = api.call('/events/search', q='music', l='San Diego') for event in events['events']['event']: print "%s at %s" % (event['title'], event['venue_name']) </code></pre> <p>What does the <code>q</code> and <code>l</code> stand for in the <code>api.call()</code> method?</p> <p><a href="https://api.eventful.com/libs/python/" rel="nofollow">Here is the link to the python</a></p> <p><a href="https://api.eventful.com/docs" rel="nofollow">The eventful API documentation</a></p>
0
2016-07-31T21:47:39Z
38,688,691
<p>Just found out what the args stood for, this link has them:</p> <p><a href="http://api.eventful.com/tools/tutorials/search" rel="nofollow">http://api.eventful.com/tools/tutorials/search</a></p> <p><em>What: The 'what' argument, also called <strong>'q'</strong> or 'keywords', is used to search by any aspect of an event that isn't part of the category, location or time.</em></p> <p><em>Where: The 'where' argument, also called <strong>'l'</strong> or 'location', is used to search by city, region, postal code (ZIP), country, street address, or venue. It's often used in concert with the 'within' and 'units' parameters to do a radius search.</em></p>
0
2016-07-31T21:51:17Z
[ "python" ]
Sublime Text3 creates Scripts inside Scripts folder inside virtualenv
38,688,707
<p>I'm trying to run Python scripts inside virtualenv from Sublime Text 3. When I activate the virtualenv in ST3 and choose the <code>.py</code>, ST3 creates a <code>Scripts</code> folder inside the preexisting <code>Scripts</code> folder (for a new <code>.py</code>). What is causing this problem and how do I stop this from happening?</p> <p>Following are the detailed steps I follow:</p> <ol> <li>Create <code>virtualenv Someenv</code> from CMD</li> <li>Navigate to <code>Someenv\Scripts</code></li> <li>activate</li> <li><code>pip install somePackage</code></li> <li>Select <code>Virtualenv:New</code> (<code>Virtualenv: Activate</code> does nothing)</li> <li>Paste <code>\path\to\Someenv\Scripts</code> under <code>Virtualenv Path</code></li> <li>Select <code>c:\Python27</code></li> <li><p>ST3 does its thing and produces this message:</p> <p><code>New python executable in C:\Users\Gandalf\Documents\Python_Virtual_Env\Legolas\Scripts\Scripts\python.exe Installing setuptools, pip, wheel...done.</code></p></li> </ol> <p>As you see, ST3 creates a <code>Scripts</code> folder inside the previous <code>Scripts</code> folder. As a result, the packages installed in step 4 are not used. I want to stop the creation of this second <code>Scripts</code> folder. </p>
0
2016-07-31T21:54:39Z
38,689,463
<p>Solved. In ST3, use <code>Virtualenv: Add Directory</code> instead of <code>Virtualenv: New</code>. The latter creates a new virtualenv (hence the new Scripts folder).</p>
0
2016-08-01T00:10:40Z
[ "python", "python-2.7", "sublimetext3", "virtualenv" ]
change specific parts of a string in python (update bootstrap values in phylogenetic trees)
38,688,721
<p>So basically I have a string:</p> <pre><code>string_1 = '(((A,B)123,C)456,(D,E)789)135' </code></pre> <p>Containing a phylogenetic tree with bootstrap values in parenthetical notation (not really important to the question, but in case anyone was wondering). This example tree contains four relationships with four bootstrap values (the numbers following each close parenthesis). I have each of these relationships in a list of lists:</p> <pre><code>list_1 = [['(A,B)', 321], ['((A,B),C)', 654], ['(D,E)', 987], ['(((A,B),C),(D,E))', 531]] </code></pre> <p>each containing a relationship and its updated bootstrap value. All I need to do is to create a final string:</p> <pre><code>final = '(((A,B)321,C)654,(D,E)987)531' </code></pre> <p>where all the bootstrap values are updated to the values in list_1. I have a function to remove bootstrap values:</p> <pre><code>import re def remove_bootstrap(string): matches = re.split(r'(?&lt;=\))\d+\.*\d*', string) matches = ''.join(matches) return matches </code></pre> <p>and code to isolate relationships:</p> <pre><code>list_of_bipart_relationships = [] for bipart_file in list_bipart_files: open_file = open(bipart_file) read_file = open_file.read() length = len(read_file) for index in range(1, length): if read_file[index] == '(': parenthesis_count = 1 for sub_index in range(index + 1, length): if read_file[sub_index] == '(': parenthesis_count += 1 if read_file[sub_index] == ')': parenthesis_count -= 1 if parenthesis_count == 0: bad_relationship = read_file[index:sub_index + 1] relationship_without_values = remove_length(bad_relationship) bootstrap_value = extract(sub_index, length, read_file) pair = [] pair.append(bootstrap_value) pair.append(relationship_without_values) list_of_bipart_relationships.insert(0, pair) break </code></pre> <p>and I am completely at a loss. I cannot figure out how to get the program to recognize a larger relationship once a nested relationship's bootstrap value is updated. Any help would be greatly appreciated!</p>
3
2016-07-31T21:57:14Z
38,725,131
<p>This is a solution using Biopython. First you need to load your trees. If you're using strings, you'll need to load them first as <code>StringIO</code>, as the Parser only accepts file handles:</p> <pre><code>from io import StringIO from Bio.Phylo.NewickIO import Parser string_1 = u'(((A,B)123,C)456,(D,E)789)135' handle = StringIO(string_1) tree = list(Parser(handle).parse())[0] # Assuming one tree per string </code></pre> <p>Now that you have the tree loaded, let's find the clades and update some values. This should be refactored into a function that accepts a list of clade names and returns a list of clades to pass to <code>common_ancestor</code>, but to illustrate:</p> <pre><code>clade_A = list(tree.find_clades(target="A"))[0] clade_B = list(tree.find_clades(target="B"))[0] tree.common_ancestor(clade_A, clade_B).confidence = 321 </code></pre> <p>Now print the tree in Newick format:</p> <pre><code>print(tree.format("newick")) # Outputs # (((A:1.00000,B:1.00000)321.00:1.00000,C:1.00000)456.00:1.00000,(D:1.00000,E:1.00000)789.00:1.00000)135.00:1.00000; </code></pre> <p>Note the confidence value for (A, B) is now 321 instead of 123.</p>
1
2016-08-02T15:48:55Z
[ "python", "biopython", "statistics-bootstrap", "phylogeny" ]
the efficient approach to generate submatrices
38,688,745
<p>The following is a function that can return sub-matrices from two given matrices. The position at which these sub-matrices are taken is the same for both input matrices. The input matrices are of type <code>Numpy array</code>. I would just like to know whether there are more elegant ways to fulfill the same type of task as this function provides.</p> <pre><code>def seg(ma1,ma2,size): rowN = len(ma1) colN = len(ma1[0]) dim1 = random.randint(0,rowN-size) dim2 = random.randint(0,colN-size) return ma1[dim1:dim1+size,dim2:dim2+size], ma2[dim1:dim1+size,dim2:dim2+size] </code></pre>
0
2016-07-31T22:01:22Z
38,688,803
<p>As an alternative approach, we could create the indexing ranges with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html" rel="nofollow"><code>np.ix_</code></a> and index into the input arrays with those, like so -</p> <pre><code>idx = np.ix_(np.arange(size)+dim1,np.arange(size)+dim2) out = ma1[idx], ma2[idx] </code></pre> <hr> <p>Another approach could be suggested using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html" rel="nofollow"><code>np.mgrid</code></a> to create the dense row and column indices. This would require more memory than the previous one, but probably closer to the original code though shorter. Here's how it would look like -</p> <pre><code>R,C = np.mgrid[dim1:dim1+size,dim2:dim2+size] out = ma1[R,C], ma2[R,C] </code></pre> <p>Another way to "shift" the elegance into <code>np.mgrid</code> and push out workload into the latter indexing part would be like so -</p> <pre><code>R,C = np.mgrid[:size,:size] out = ma1[R+dim1,C+dim2], ma2[R+dim1,C+dim2] </code></pre>
0
2016-07-31T22:10:06Z
[ "python", "numpy", "scipy" ]
the efficient approach to generate submatrices
38,688,745
<p>The following is a function that can return sub-matrices from two given matrices. The position at which these sub-matrices are taken is the same for both input matrices. The input matrices are of type <code>Numpy array</code>. I would just like to know whether there are more elegant ways to fulfill the same type of task as this function provides.</p> <pre><code>def seg(ma1,ma2,size): rowN = len(ma1) colN = len(ma1[0]) dim1 = random.randint(0,rowN-size) dim2 = random.randint(0,colN-size) return ma1[dim1:dim1+size,dim2:dim2+size], ma2[dim1:dim1+size,dim2:dim2+size] </code></pre>
0
2016-07-31T22:01:22Z
38,695,437
<p>You likely want to do <code>rowN, colN = ma1.shape</code> instead of</p> <p>rowN = len(ma1)<br> colN = len(ma1[0]) </p> <p>Also, you might want to seed your random number generator.</p>
0
2016-08-01T09:32:56Z
[ "python", "numpy", "scipy" ]
Daylight savings time for GPX and PostgreSQL
38,688,752
<p>So here I'm working with gpx files. Take note of an excerpt of one:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;!-- GPSTrack 2.2.1 — http://bafford.com/gpstrack --&gt; &lt;gpx xmlns="http://www.topografix.com/GPX/1/1"&gt; &lt;trk&gt; &lt;name&gt;&lt;![CDATA[2016-03-31 10-17-54]]&gt;&lt;/name&gt; &lt;trkseg&gt; &lt;trkpt lat="38.704859" lon="-8.970304"&gt;&lt;ele&gt;13.050667&lt;/ele&gt;&lt;time&gt;2016-03-31T09:17:51Z&lt;/time&gt;&lt;!-- hAcc=95.768176 vAcc=10.000000 --&gt;&lt;/trkpt&gt; &lt;trkpt lat="38.704894" lon="-8.970324"&gt;&lt;ele&gt;13.050667&lt;/ele&gt;&lt;time&gt;2016-03-31T09:17:55Z&lt;/time&gt;&lt;!-- hAcc=141.087476 vAcc=10.000000 --&gt;&lt;/trkpt&gt; &lt;trkpt lat="38.704859" lon="-8.970304"&gt;&lt;ele&gt;13.050667&lt;/ele&gt;&lt;time&gt;2016-03-31T09:17:55Z&lt;/time&gt;&lt;!-- hAcc=95.768176 vAcc=10.000000 --&gt;&lt;/trkpt&gt; &lt;trkpt lat="38.704878" lon="-8.970343"&gt;&lt;ele&gt;13.150488&lt;/ele&gt;&lt;time&gt;2016-03-31T09:18:43Z&lt;/time&gt;&lt;!-- hAcc=165.000000 vAcc=10.000000 --&gt;&lt;/trkpt&gt; </code></pre> <p>See the name of the file? It gives us the time 10H17. Now check the time for each point. It's counted with one less hour.</p> <p>I still don't get how the times are messed up here. But this is the beginning of the problem.</p> <p>Now, I'm parsing many of these gpx files and loading them into a PostgreSQL database. More specifically this table:</p> <pre><code>CREATE TABLE IF NOT EXISTS trips ( trip_id SERIAL PRIMARY KEY, start_location TEXT REFERENCES locations(label), end_location TEXT REFERENCES locations(label), start_date TIMESTAMP WITHOUT TIME ZONE NOT NULL, end_date TIMESTAMP WITHOUT TIME ZONE NOT NULL, bounds geography(POLYGONZ, 4326) NOT NULL, points geography(LINESTRINGZ, 4326) NOT NULL, timestamps TIMESTAMP WITHOUT TIME ZONE[] NULL ); </code></pre> <p>Even while using <code>TIMESTAMP WITHOUT TIME ZONE</code> the points are being loaded with the wrong hour (one less hour). This happens only on days when Daylight Saving Time is in effect. The point here: is there any way to check whether a date falls in DST and add one hour to it if it does?</p>
0
2016-07-31T22:02:56Z
38,690,354
<p>If you <em>know</em> the <strong><em>time zone name</em></strong> where this local time is supposed to be set in (assuming <em>'Europe/Vienna'</em> in the example), you can normalize the value to correct UTC time (or <em>any</em> other time zone) with this expression:</p> <pre><code>SELECT '2016-03-31T09:17:51Z'::timestamp AT TIME ZONE 'Europe/Vienna' AT TIME ZONE 'UTC' </code></pre> <p>Result:</p> <pre><code>timezone -------------------- 2016-03-31 07:17:51 </code></pre> <p>Yes, apply <code>AT ZIME ZONE</code> <em>twice</em>.</p> <p>Since you seem to be dealing with different time zones I would consider using <code>timestamptz</code> instead of <code>timestamp</code> in your tables.</p> <p>Be sure to use time zone <strong><em>names</em></strong>, not abbreviations or plain time offsets to get precise adjustments for DST.<br> <sub>I hate the moronic concept of "daylight saving time" - it never saves any daylight but keeps wasting valuable time of people being confused by it.</sub></p> <p>Detailed explanation for all of that:</p> <ul> <li><p><a href="http://stackoverflow.com/questions/9571392/ignoring-timezones-altogether-in-rails-and-postgresql/9576170#9576170">Ignoring timezones altogether in Rails and PostgreSQL</a></p></li> <li><p><a href="http://stackoverflow.com/questions/16086962/how-to-get-a-time-zone-from-a-location-using-latitude-and-longitude-coordinates/16086964#16086964">How to get a time zone from a location using latitude and longitude coordinates?</a></p></li> </ul>
0
2016-08-01T02:50:24Z
[ "python", "xml", "postgresql", "time", "dst" ]
handling large integers in for loop optimized in python
38,688,780
<p>I'm reading a large csv file (500 000 rows) and adding every row to a dict. One example row is: </p> <p><code>6AH8,F,B,0,60541765,60541765,90.52,1</code></p> <p>index 4 - <code>60541765</code> and index 5 - <code>60541765</code> in this case are the same, but that is not always the case. These integers are timestamps that indicate time in milliseconds-after-midnight format.</p> <p>I want to iterate through every row, add the large number to a list, and address that number by its list index.</p> <p>i.e.: </p> <pre><code>timeList = [60541765, 20531765, ..., 80542765] </code></pre> <p>so the row will be: <code>6AH8,F,B,0,0,0,90.52,1</code></p> <p><strong>WHY?</strong> - because the times sometimes occur more than once in the file.</p> <p><strong>My question is:</strong></p> <p>Is there a better way to do this than storing them in a list? </p> <p>If not:</p> <p>How do I iterate through the rows to replace index 4 and 5 the fastest way - right now it takes more than 15 minutes. </p> <p>I'm doing it like this at the moment:</p> <pre><code>timeStampList = [] def putTimeStampsInList(inputDict): for values in inputDict.values(): timestamp1 = values[4] if values[4] not in timeStampList: timeStampList.append(values[4]) </code></pre> <p>----------------------- <strong>Additional information</strong> -----------------------</p> <p>This is an assignment, where I'm supposed to use compression to make a 19MB file smaller without using any 3rd party or framework-provided compression libraries. So I can't use Huffman or LZ77 and copy it.</p> <p>I already have a solution to minimize the</p> <pre><code>index 1 - 6AH8 index 2 and 3 - F,B </code></pre> <p>My issue is the timestamps, which I can't minimize properly in a time-saving way.</p>
0
2016-07-31T22:06:49Z
38,688,928
<p>Your issue is likely that checking whether a number is in a List in python is an O(n) operation, which will need to be performed for every row in your large dataset, making the whole algorithm O(n^2), which is enormous on 500,000 entries.</p> <p>My suggestion would be to add a bit of space complexity O(n) to save on time complexity (making it O(n) on average for typical data):</p> <pre><code>timeStampList = [] timeStampSet = set() def putTimeStampsInList(inputDict): for values in inputDict.values(): timestamp1 = values[4] if values[4] not in timeStampSet: timeStampList.append(values[4]) timeStampSet.add(values[4]) </code></pre> <p>Now checking for membership is a constant time operation, so rather than your code cycling through your gigantic list every single time it needs to check if something is in the List, it can just quickly check if it's in the set that you're creating! This should speed up the time of your algorithm significantly.</p> <p>Once you're done creating the List, you don't need the set anymore, so it won't affect the compression size.</p>
1
2016-07-31T22:34:02Z
[ "python", "for-loop", "optimization", "integer", "iteration" ]
Copy value from matching index in another dataframe after criteria matched
38,688,784
<p>With the test Pandas dataframe below I am trying to copy a value from the matching index in another dataframe after a certain criterion is matched.</p> <p>This is a snip from the dataframe called <code>data2</code>:</p> <pre><code> Signal Value2 2013-01-01 09:00:00 1.0 NaN 2013-01-01 10:00:00 1.0 NaN 2013-01-01 11:00:00 1.0 NaN 2013-01-01 12:00:00 1.0 NaN 2013-01-01 13:00:00 1.0 NaN 2013-01-01 14:00:00 -1.0 NaN </code></pre> <p>and this is a snip from <code>data</code>:</p> <pre><code> value 2013-01-01 09:00:00 9 2013-01-01 10:00:00 10 2013-01-01 11:00:00 11 2013-01-01 12:00:00 12 2013-01-01 13:00:00 13 2013-01-01 14:00:00 14 2013-01-01 15:00:00 15 2013-01-01 16:00:00 16 2013-01-02 09:00:00 33 2013-01-02 10:00:00 34 </code></pre> <p>So when <code>data2</code> <code>Signal</code> at <code>2013-01-01 14:00:00</code> shows <code>-1</code> I want to copy the corresponding <code>value</code> from <code>data</code>, which is <code>14</code>, to <code>data2</code> <code>Value2</code>.</p> <p>Here is the code to test this:</p> <pre><code>import pandas as pd import datetime import numpy as np index = pd.date_range('2013-1-1',periods=100,freq='1h') data = pd.DataFrame(data=list(range(100)), columns=['value'], index=index) signal = 1.0 data2 = pd.DataFrame(data=signal, columns=['Signal'], index=index) data2['Signal']['2013-01-01 14:00:00'] = -1.0 data2['Value2'] = np.nan start = datetime.time(9,0,0) end = datetime.time(16,00,0) data = data.between_time(start,end) </code></pre> <p>This will ultimately be used on a large dataframe and will involve multiple days.</p>
0
2016-07-31T22:07:16Z
38,688,970
<p>Could be something like this?</p> <pre><code>data2.loc[data2.Signal == -1, 'Value2'] = data.loc[data2.Signal == -1, 'value'] </code></pre>
1
2016-07-31T22:40:50Z
[ "python", "pandas" ]
django deleteview NoReverseMatch at
38,688,793
<p>I am new to Django. I keep receiving NoReverseMatch when building an app. Was wondering if someone can point me in the right direction as to where I am going wrong. </p> <p>The app has scenarios, and each scenario has associated emails. When I click on a scenario the app displays the emails associated with the scenario, allowing the user to then delete/edit each of the associated emails. </p> <p>When I try to delete an email I receive a NoReverseMatch if I have the success_url set to the email:index which generated the list of emails for that scenario. If I change success_url to the main screen (scenarios:index) it works; however, this is not ideal, having the user go to the main screen listing the scenarios on each deletion.</p> <p>Here are my url patterns for the email datasource:</p> <pre><code># emails # /scenarios/&lt;scenarioid&gt;/email url(r'^(?P&lt;pk&gt;[0-9]+)/email/$', views.EmailListView.as_view(), name='email-index'), # /scenarios/12/email/&lt;emailid&gt;/delete url(r'^([0-9]+)/email/(?P&lt;pk&gt;[0-9]+)/delete/$', views.EmailDelete.as_view(), name='email-delete'), </code></pre> <p>Here's my views file:</p> <pre><code>class EmailDelete(DeleteView): model = Email success_url = reverse_lazy('scenarios:email-index') class EmailListView(generic.ListView): model = Email template_name = 'scenarios/emailindex.html' context_object_name = 'scenario_emails' print "in email list view" def get_queryset(self): return Email.objects.filter(scenario=self.kwargs['pk']) </code></pre> <p>Here's the template:</p> <pre><code>&lt;td&gt;&lt;form action="{% url 'scenarios:email-delete' email.id %}" method="post"&gt; {% csrf_token %} </code></pre>
0
2016-07-31T22:08:58Z
38,690,928
<p><code>email-delete</code> URL has two capture groups - one non-named <code>([0-9]+)</code> and one named <code>(?P&lt;pk&gt;[0-9]+)</code>, but in the <code>url</code> tag you pass only positional argument - <code>email.id</code>. You must either remove the first group from url pattern or pass two arguments to <code>url</code> tag. Something like this <code>{% url 'scenarios:email-delete' scenario.id pk=email.id %}</code>.</p>
0
2016-08-01T04:14:11Z
[ "python", "django" ]
How to implement multithreading with tornado?
38,688,816
<p>I am working with Python 2.7 with the futures module installed.</p> <p>I am trying to implement multithreading in tornado using ThreadPoolExecutor. </p> <p>Here is the code that I have implemented.</p> <pre><code>from __future__ import absolute_import from base_handler import BaseHandler from tornado import gen from pyrestful import mediatypes from pyrestful.rest import get, post, put, delete from bson.objectid import ObjectId from spark_map import Map from concurrent import futures import tornado class MapService(BaseHandler): MapDB = dict() executor = futures.ProcessPoolExecutor(max_workers=3) @tornado.web.asynchronous @gen.coroutine @post(_path='/map', _type=[str, str]) def postMap(self, inp, out): db = self.settings['db'] function = lambda (x,y): (x,y[0]*2) future = yield db.MapInfo.insert({'input': inp, 'output': out, 'input_function': str(function)}) response = {"inserted ID": str(future)} self.write(response) m = Map(inp, out, function, appName=str(future)) futuree = self.executor.submit(m.operation()) self.MapDB[str(future)] = {'map_object': m, 'running_process_future_object': futuree} self.finish() @tornado.web.asynchronous @gen.coroutine @delete(_path='/map/{_id}', _types=[str]) def deleteMap(self, _id): db = self.settings['db'] future = yield db.MapInfo.find_one({'_id': ObjectId(_id)}) if future is None: raise AttributeError('No entry exists in the database with the provided ID') chk = yield db.MapInfo.remove(future) response = { "Succes": "OK" } self.write(response) self.MapDB[_id]['map_object'].stop() del self.MapDB[_id] self.finish() </code></pre> <p>In the above code, I receive two inputs using the post request in inp and out. Then I perform some operation with them. This operation should last until a delete request is received to stop and remove the process.</p> <p>The problem I am facing is with multiple requests. It only executes the first request while other requests wait for the first one to complete, thus blocking the main IOLoop.</p> <p>So, I want to run each process in a separate thread. How should I implement it?</p>
0
2016-07-31T22:13:45Z
38,701,789
<p>It appears that <code>m.operation()</code> is blocking, so you need to run it on a thread. The way you're doing it blocks the main thread while calling <code>m.operation()</code>, and spawns a thread <em>after</em>:</p> <pre><code>self.executor.submit(m.operation()) </code></pre> <p>You want, instead, to pass the function to a thread which will execute it:</p> <pre><code>self.executor.submit(m.operation) </code></pre> <p>No parens.</p>
1
2016-08-01T14:44:58Z
[ "python", "multithreading", "asynchronous", "tornado" ]
regular expression for shortest match in Python 2.7
38,688,827
<p>I am using Python 2.7. The current code returns <code>hello }{(2) world</code>. If I only want the shortest match, in this case <code>hello</code>, what is the solution in Python 2.7?</p> <pre><code>import re content = '{(1) hello }{(2) world}' reg = '{\(1\)(.*)}' results = re.findall(reg, content) print results[0] </code></pre>
0
2016-07-31T22:16:10Z
38,688,836
<p>Make the wildcard match <a href="http://stackoverflow.com/questions/2301285/what-do-lazy-and-greedy-mean-in-the-context-of-regular-expressions"><em>non-greedy</em></a>:</p> <pre><code>&gt;&gt;&gt; reg = r'{\(1\)(.*?)}' # this ? is important^ &gt;&gt;&gt; results = re.findall(reg, content) &gt;&gt;&gt; print results[0] hello </code></pre>
3
2016-07-31T22:18:14Z
[ "python", "regex", "python-2.7" ]
regular expression for shortest match in Python 2.7
38,688,827
<p>I am using Python 2.7. The current code returns <code>hello }{(2) world</code>. If I only want the shortest match, in this case <code>hello</code>, what is the solution in Python 2.7?</p> <pre><code>import re content = '{(1) hello }{(2) world}' reg = '{\(1\)(.*)}' results = re.findall(reg, content) print results[0] </code></pre>
0
2016-07-31T22:16:10Z
38,690,961
<p>For this kind of situation, a negated character class will also help you:</p> <pre><code>reg = r'{\(1\)([^}]*)}' results = re.findall(reg, content) print results[0] </code></pre>
0
2016-08-01T04:18:54Z
[ "python", "regex", "python-2.7" ]
How to create a new variable from a loop index in Python
38,688,865
<p>I have the following sort of code:</p> <pre><code> ps=[1,2,3,4] for subj in ps: num=num+1 datapath='/home/subj%d' %(int(subj)) if num==1: d1= pd.read_csv(datapath, 'words.csv') if num==2: d2= pd.read_csv(datapath, 'words.csv') if num==3: d3= pd.read_csv(datapath, 'words.csv') if num==4: d4= pd.read_csv(datapath, 'words.csv') </code></pre> <p>which I would like to simplify, i.e. to assign the csv file to a new d[num] variable such as the following, which won't work:</p> <pre><code>ps=[1,2,3,4] for subj in ps: num=num+1 datapath='/home/subj%d' %(int(subj)) d[num] = pd.read_csv(datapath, 'numbers.csv') </code></pre> <p>d[num] should be a different numeric dataset for every loop.</p> <p>Any suggestions please? Thanks!</p>
0
2016-07-31T22:23:34Z
38,688,887
<p>Just do this:</p> <pre><code>paths = map('/home/subj{}'.format, ps) datas = map(pd.read_csv, paths) </code></pre> <p>Now you have a list of DataFrames, but if you want separate variables, you can:</p> <pre><code>d1, d2, d3, d4 = datas </code></pre>
0
2016-07-31T22:27:02Z
[ "python" ]
How to create a new variable from a loop index in Python
38,688,865
<p>I have the following sort of code:</p> <pre><code> ps=[1,2,3,4] for subj in ps: num=num+1 datapath='/home/subj%d' %(int(subj)) if num==1: d1= pd.read_csv(datapath, 'words.csv') if num==2: d2= pd.read_csv(datapath, 'words.csv') if num==3: d3= pd.read_csv(datapath, 'words.csv') if num==4: d4= pd.read_csv(datapath, 'words.csv') </code></pre> <p>which I would like to simplify, i.e. to assign the csv file to a new d[num] variable such as the following, which won't work:</p> <pre><code>ps=[1,2,3,4] for subj in ps: num=num+1 datapath='/home/subj%d' %(int(subj)) d[num] = pd.read_csv(datapath, 'numbers.csv') </code></pre> <p>d[num] should be a different numeric dataset for every loop.</p> <p>Any suggestions please? Thanks!</p>
0
2016-07-31T22:23:34Z
38,688,889
<p>Create a list and append the data.</p> <pre><code>ps=[1,2,3,4] d = [] for subj in ps: datapath = '/home/subj%d' % (subj) d.append(pd.read_csv(datapath, 'numbers.csv')) </code></pre> <p>You can access the data with <code>d[0]</code> to <code>d[3]</code>.</p> <p>Using a list comprehension will lead to the following code:</p> <pre><code>ps = [1,2,3,4] d = [pd.read_csv('/home/subj%d' %(subj), 'numbers.csv') for subj in ps] </code></pre>
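If you want to address the datasets by subject number rather than by list position, a dictionary is the idiomatic substitute for "numbered variables" (a sketch; `load` is a hypothetical stand-in for the `pd.read_csv(...)` call so the example runs anywhere):

```python
def load(subj):
    # hypothetical stand-in for pd.read_csv('/home/subj%d' % subj, 'numbers.csv')
    return "data for subject %d" % subj

ps = [1, 2, 3, 4]
d = {subj: load(subj) for subj in ps}

print(d[3])  # data for subject 3
```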
4
2016-07-31T22:27:21Z
[ "python" ]
Seaborn OS X : seaborn.pairplot() ValueError: 'transform' must be an instance of 'matplotlib.transform.Transform'
38,688,881
<p>The following steps were taken inside of jupyter notebook in an attempt to make <code>seaborn.pairplot()</code> work. An error from <code>/usr/local/lib/python2.7/site-packages/matplotlib/matplotlib/transforms.pyc</code> stopped the function from working.</p> <p>Below are the python library versions:</p> <pre><code>print(matplotlib.__version__, sns.__version__) ('1.5.1', '0.7.1') </code></pre> <p>A csv of the iris dataset was read</p> <pre><code>data = pandas.read_csv('iris.csv') data_no_nans = data.dropna() sns.pairplot(data_no_nans) </code></pre> <p>Error Message:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-4-001343d0343b&gt; in &lt;module&gt;() ----&gt; 1 sns.pairplot(data) /usr/local/lib/python2.7/site-packages/seaborn/linearmodels.pyc in pairplot(data, hue, hue_order, palette, vars, x_vars, y_vars, kind, diag_kind, markers, size, aspect, dropna, plot_kws, diag_kws, grid_kws) 1588 hue_order=hue_order, palette=palette, 1589 diag_sharey=diag_sharey, -&gt; 1590 size=size, aspect=aspect, dropna=dropna, **grid_kws) 1591 1592 # Add the markers here as PairGrid has figured out how many levels of the /usr/local/lib/python2.7/site-packages/seaborn/axisgrid.pyc in __init__(self, data, hue, hue_order, palette, hue_kws, vars, x_vars, y_vars, diag_sharey, size, aspect, despine, dropna) 1253 if despine: 1254 utils.despine(fig=fig) -&gt; 1255 fig.tight_layout() 1256 1257 def map(self, func, **kwargs): /usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in tight_layout(self, renderer, pad, h_pad, w_pad, rect) 1752 renderer, 1753 pad=pad, h_pad=h_pad, w_pad=w_pad, -&gt; 1754 rect=rect) 1755 1756 self.subplots_adjust(**kwargs) /usr/local/lib/python2.7/site-packages/matplotlib/tight_layout.pyc in get_tight_layout_figure(fig, axes_list, subplotspec_list, renderer, pad, h_pad, w_pad, rect) 347 subplot_list=subplot_list, 348 ax_bbox_list=ax_bbox_list, --&gt; 349 pad=pad, h_pad=h_pad, w_pad=w_pad) 350 351 if rect is not None: 
/usr/local/lib/python2.7/site-packages/matplotlib/tight_layout.pyc in auto_adjust_subplotpars(fig, renderer, nrows_ncols, num1num2_list, subplot_list, ax_bbox_list, pad, h_pad, w_pad, rect) 126 tight_bbox_raw = union([ax.get_tightbbox(renderer) for ax in subplots]) 127 tight_bbox = TransformedBbox(tight_bbox_raw, --&gt; 128 fig.transFigure.inverted()) 129 130 row1, col1 = divmod(num1, cols) /usr/local/lib/python2.7/site-packages/matplotlib/matplotlib/transforms.pyc in __init__(self, bbox, transform, **kwargs) 1080 msg = ("'transform' must be an instance of" 1081 " 'matplotlib.transform.Transform'") -&gt; 1082 raise ValueError(msg) 1083 if transform.input_dims != 2 or transform.output_dims != 2: 1084 msg = "The input and output dimensions of 'transform' must be 2" ValueError: 'transform' must be an instance of 'matplotlib.transform.Transform' </code></pre> <p>Regplot works perfectly</p> <pre><code>sns.regplot(x="petal_length", y="petal_width", data=data) </code></pre> <p><strong>EDIT</strong></p> <p>I suspect it has to do with matplotlib's font manager malfunctioning. 
I deleted fontconfig and spicy directories from ~/.cache/ and got a new error message: AttributeError: 'module' object has no attribute 'Locked'</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-3-001343d0343b&gt; in &lt;module&gt;() ----&gt; 1 sns.pairplot(data) /usr/local/lib/python2.7/site-packages/seaborn/linearmodels.pyc in pairplot(data, hue, hue_order, palette, vars, x_vars, y_vars, kind, diag_kind, markers, size, aspect, dropna, plot_kws, diag_kws, grid_kws) 1588 hue_order=hue_order, palette=palette, 1589 diag_sharey=diag_sharey, -&gt; 1590 size=size, aspect=aspect, dropna=dropna, **grid_kws) 1591 1592 # Add the markers here as PairGrid has figured out how many levels of the /usr/local/lib/python2.7/site-packages/seaborn/axisgrid.pyc in __init__(self, data, hue, hue_order, palette, hue_kws, vars, x_vars, y_vars, diag_sharey, size, aspect, despine, dropna) 1253 if despine: 1254 utils.despine(fig=fig) -&gt; 1255 fig.tight_layout() 1256 1257 def map(self, func, **kwargs): /usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in tight_layout(self, renderer, pad, h_pad, w_pad, rect) 1737 """ 1738 -&gt; 1739 from .tight_layout import (get_renderer, get_tight_layout_figure, 1740 get_subplotspec_list) 1741 /usr/local/lib/python2.7/site-packages/matplotlib/tight_layout.py in &lt;module&gt;() 15 from matplotlib.transforms import TransformedBbox, Bbox 16 ---&gt; 17 from matplotlib.font_manager import FontProperties 18 rcParams = matplotlib.rcParams 19 /usr/local/lib/python2.7/site-packages/matplotlib/matplotlib/font_manager.py in &lt;module&gt;() 1448 verbose.report("Using fontManager instance from %s" % _fmcache) 1449 except: -&gt; 1450 _rebuild() 1451 else: 1452 _rebuild() /usr/local/lib/python2.7/site-packages/matplotlib/matplotlib/font_manager.py in _rebuild() 1433 1434 if _fmcache: -&gt; 1435 with cbook.Locked(cachedir): 1436 json_dump(fontManager, _fmcache) 1437 AttributeError: 'module' object has no attribute 'Locked' 
</code></pre>
3
2016-07-31T22:25:58Z
38,708,506
<p>So I am answering my own question now.</p> <p>I solved the problem by using a MacPorts virtual environment:</p> <pre><code>sudo port selfupdate sudo port install python27 sudo port install py27-virtualenv /opt/local/bin/virtualenv-2.7 $HOME/local/python/27 </code></pre> <p>Add to .bash_profile:</p> <pre><code>alias py27="source $HOME/local/python/27/bin/activate" </code></pre> <p>Run in a terminal to activate the virtual env and install the required packages with pip:</p> <pre><code>py27 which pip $HOME/local/python/27/bin/pip pip install ipython etc </code></pre> <p>It is strange that the Homebrew installation bugged out, and I am wondering if there were any conflicting packages installed.</p> <p>Not sure if it's related, but here is my .bash_profile:</p> <pre><code># some settings to prefer homebrew paths in case it exists: if which -s brew ; then PATH="/usr/local/bin:/usr/local/sbin:$PATH" fi # set PATH so it includes user's private bin if it exists if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi </code></pre> <p>This could have been messing things up.</p>
0
2016-08-01T21:35:49Z
[ "python", "matplotlib", "seaborn" ]
Python Minimalmodbus sending value
38,688,893
<p>I need to control the speed of a VSD via Modbus using Python. I have everything working except one part, and that is sending the speed to the VSD.</p> <p>If I send this command the motor will start:</p> <pre><code> vsd.write_register(8192 , 2 , 0) </code></pre> <p>And this command will stop it:</p> <pre><code> vsd.write_register(8192 , 6 , 0) </code></pre> <p>The format of the commands is like this:</p> <pre><code>Id code adrsh adrsl high low crc 01 06 20 00 00 06 02 08 </code></pre> <p>The documentation of the VSD shows the command registers and values as hex, but the Python library requires decimal. If I send simple commands, all works 100%.</p> <p>But when I want to set the speed I have to set the high and the low values. If I send it 0x2710 or 10000, the VSD sends back an error. If I use the tool that came with the VSD and split the value 100.00 (10000) into its two bytes of 27 and 10, then all works fine. How do I send the value to the VSD, given that 0x2710 does not work?</p> <p>Kind regards.</p>
0
2016-07-31T22:28:09Z
38,704,134
<p>You have to set <code>functioncode=6</code> in the <code>write_register</code> call. Then it sends the correct bytes.</p>
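The frame in the question uses Modbus function code 06 (write single register), while minimalmodbus's `write_register` defaults to function code 16, which some drives reject. The library also splits the 16-bit value into high and low bytes for you; the split the vendor tool shows is just base-256 arithmetic (a sketch; the `write_register` call is commented out because it needs the actual serial device):

```python
# 100.00 Hz is sent as the 16-bit value 10000 (0x2710); on the wire it is
# transmitted as two bytes -- exactly the "27" and "10" the vendor tool shows:
high, low = divmod(10000, 256)
print(hex(high), hex(low))  # 0x27 0x10

# With minimalmodbus, request Modbus function code 06 explicitly
# (sketch; 'vsd' is the minimalmodbus Instrument object from the question):
# vsd.write_register(8192, 10000, 0, functioncode=6)
```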
0
2016-08-01T16:51:14Z
[ "python", "modbus" ]
Python 2.7 regex for a range of numbers
38,688,899
<p>What is the regular expression to return a number in the range 274-342 and the rest of the line until '\n'? Here is my attempt.</p> <pre><code>import re text = '333get\n361donuts\n400chickenmcsandwich\n290this\n195foo\n301string' match=re.findall(r'(27[4-9]|8[0-9]|9[0-9]|3[0-3]\d|4[0-2])(.*)', text) </code></pre> <p>The correct regex would return the following result: </p> <pre><code>[('333', 'get'), ('290', 'this'), ('301', 'string')] </code></pre>
-5
2016-07-31T22:29:32Z
38,792,879
<p>You can use <code>'(\d+)(.*)'</code> and then filter the list (note the comparison below is inclusive of both bounds, which <code>range(274, 342)</code> would not be):</p> <pre><code>import re text = '333get\n361donuts\n400chickenmcsandwich\n290this\n195foo\n301string' matches = re.findall(r'(\d+)(.*)', text) matches = [item for item in matches if 274 &lt;= int(item[0]) &lt;= 342] print(matches) # should print : [('333', 'get'), ('290', 'this'), ('301', 'string')] </code></pre>
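If you do want a single regular expression that enforces the bounds itself, the range 274-342 can be decomposed into sub-ranges, one alternative per decade (a sketch; `^` with `re.MULTILINE` keeps the number anchored to the start of each line):

```python
import re

text = '333get\n361donuts\n400chickenmcsandwich\n290this\n195foo\n301string'

# 27[4-9] -> 274-279, 2[89]\d -> 280-299, 3[0-3]\d -> 300-339, 34[0-2] -> 340-342
pattern = r'^(27[4-9]|2[89]\d|3[0-3]\d|34[0-2])(.*)$'
matches = re.findall(pattern, text, re.MULTILINE)

print(matches)  # [('333', 'get'), ('290', 'this'), ('301', 'string')]
```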
1
2016-08-05T15:29:40Z
[ "python", "regex" ]
Python - get combination of data from array of arrays
38,688,982
<p>I have an array which holds other arrays, each holding the possible values of data at a given position. Example: </p> <pre><code>data = [['A'],['A','B'],['A','B'],['A','B','D'],['0','2']] </code></pre> <p>From this data, possible values are (for example): </p> <pre><code>"AAAA0" # (00000) "AAAA2" # (00001) "AAAD0" # (00020) </code></pre> <p>and so on.<br> What I need is to get all possible combinations of data from those single arrays, but the order of data is important: </p> <ul> <li>data from an inner array can only be placed at its position in the outer array (in the above example, only 'A' can be placed at the first position) </li> </ul> <p>Is there some Python module that can handle this (I found itertools, but it's not doing exactly what I need), or maybe someone has an idea how to do this?</p>
0
2016-07-31T22:43:21Z
38,689,025
<p>Try this:</p> <pre><code>data = [['A'], ['A','B'], ['A','B'], ['A','B','D'], ['0','2']] size = 5 def rec(cur): if len(cur) == size: print(cur) return for x in data[len(cur)]: rec(cur + [x]) rec([]) </code></pre> <p>Output:</p> <pre><code>['A', 'A', 'A', 'A', '0'] ['A', 'A', 'A', 'A', '2'] ['A', 'A', 'A', 'B', '0'] ['A', 'A', 'A', 'B', '2'] ['A', 'A', 'A', 'D', '0'] ['A', 'A', 'A', 'D', '2'] ['A', 'A', 'B', 'A', '0'] ['A', 'A', 'B', 'A', '2'] ['A', 'A', 'B', 'B', '0'] ['A', 'A', 'B', 'B', '2'] ['A', 'A', 'B', 'D', '0'] ['A', 'A', 'B', 'D', '2'] ['A', 'B', 'A', 'A', '0'] ['A', 'B', 'A', 'A', '2'] ['A', 'B', 'A', 'B', '0'] ['A', 'B', 'A', 'B', '2'] ['A', 'B', 'A', 'D', '0'] ['A', 'B', 'A', 'D', '2'] ['A', 'B', 'B', 'A', '0'] ['A', 'B', 'B', 'A', '2'] ['A', 'B', 'B', 'B', '0'] ['A', 'B', 'B', 'B', '2'] ['A', 'B', 'B', 'D', '0'] ['A', 'B', 'B', 'D', '2'] </code></pre>
0
2016-07-31T22:50:35Z
[ "python", "arrays", "combinations" ]
Python - get combination of data from array of arrays
38,688,982
<p>I have an array which holds other arrays, each holding the possible values of data at a given position. Example: </p> <pre><code>data = [['A'],['A','B'],['A','B'],['A','B','D'],['0','2']] </code></pre> <p>From this data, possible values are (for example): </p> <pre><code>"AAAA0" # (00000) "AAAA2" # (00001) "AAAD0" # (00020) </code></pre> <p>and so on.<br> What I need is to get all possible combinations of data from those single arrays, but the order of data is important: </p> <ul> <li>data from an inner array can only be placed at its position in the outer array (in the above example, only 'A' can be placed at the first position) </li> </ul> <p>Is there some Python module that can handle this (I found itertools, but it's not doing exactly what I need), or maybe someone has an idea how to do this?</p>
0
2016-07-31T22:43:21Z
38,689,033
<p>I think you need the <code>itertools.product</code> here:</p> <pre><code>import itertools [''.join(p) for p in itertools.product(*data)] #['AAAA0', # 'AAAA2', # 'AAAB0', # 'AAAB2', # 'AAAD0', # 'AAAD2', # 'AABA0', # 'AABA2', # 'AABB0', # 'AABB2', # 'AABD0', # 'AABD2', # 'ABAA0', # 'ABAA2', # 'ABAB0', # 'ABAB2', # 'ABAD0', # 'ABAD2', # 'ABBA0', # 'ABBA2', # 'ABBB0', # 'ABBB2', # 'ABBD0', # 'ABBD2'] </code></pre>
1
2016-07-31T22:51:30Z
[ "python", "arrays", "combinations" ]
Handling exit status of popen in Python
38,689,014
<p>I am trying to handle the exit status with popen, but it gives an error. The code is:</p> <pre><code>import os try: res = os.popen("ping -c 4 www.google.com") except IOError: print "ISPerror: popen" try: #wait = [0,0] wait = os.wait() except IOError: print "ISPerror:os.wait" if wait[1] != 0: print(" os.wait:exit status != 0\n") else: print ("os.wait:"+str(wait)) print("before read") result = res.read() print ("after read:") print ("exiting") </code></pre> <p>But it is giving the following error:</p> <p>close failed in file object destructor: IOError: [Errno 10] No child processes</p>
0
2016-07-31T22:48:52Z
38,689,192
<h1>Error Explanation</h1> <p>It looks like this error is occurring because upon exiting, the program tries to destroy <code>res</code>, which involves calling the <code>res.close()</code> method. But somehow invoking <code>os.wait()</code> has already closed the object. So it's trying to close <code>res</code> twice, resulting in the error. If the call to <code>os.wait()</code> is removed, the error no longer occurs.</p> <pre><code>import os try: res = os.popen("ping -c 4 www.google.com") except IOError: print "ISPerror: popen" print("before read") result = res.read() res.close() # explicitly close the object print("after read: {}".format(result)) print("exiting") </code></pre> <p>But this leaves you with the problem of knowing when the process has finished. And since <code>res</code> just has type <code>file</code>, your options are limited. I would instead move to using <a href="https://docs.python.org/2/library/subprocess.html#replacing-os-popen-os-popen2-os-popen3" rel="nofollow"><code>subprocess.Popen</code></a>.</p> <h1>Using subprocess.Popen</h1> <p>To use <code>subprocess.Popen</code>, you pass your command in as a <code>list</code> of strings. To be able to <a href="http://stackoverflow.com/questions/4514751/pipe-subprocess-standard-output-to-a-variable">access the output of the process</a>, you set the <code>stdout</code> argument to <code>subprocess.PIPE</code>, which allows you to access <code>stdout</code> later on using file operations.
Then, instead of using the regular <code>os.wait()</code> method, <code>subprocess.Popen</code> objects have their own <code>wait</code> method that you call directly on the object; this also <a href="http://stackoverflow.com/questions/5631624/how-to-get-exit-code-when-using-python-subprocess-communicate-method">sets the <code>returncode</code></a> value, which represents the exit status.</p> <pre><code>import subprocess # list of strings representing the command args = ['ping', '-c', '4', 'www.google.com'] try: # stdout = subprocess.PIPE lets you redirect the output res = subprocess.Popen(args, stdout=subprocess.PIPE) except OSError: print "error: popen" exit(-1) # if the subprocess call failed, there's not much point in continuing res.wait() # wait for process to finish; this also sets the returncode variable inside 'res' if res.returncode != 0: print(" os.wait:exit status != 0\n") else: print("os.wait:({},{})".format(res.pid, res.returncode)) # access the output from stdout result = res.stdout.read() print ("after read: {}".format(result)) print ("exiting") </code></pre>
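One caveat with `wait()` plus `stdout=subprocess.PIPE`: if the child writes more output than the pipe buffer holds, `wait()` can deadlock. `communicate()` reads the output and waits in a single step (a sketch using `echo` in place of `ping` so it finishes instantly on any POSIX system):

```python
import subprocess

res = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
out, _ = res.communicate()  # drains stdout, then reaps the process

print(res.returncode)       # 0
print(out.strip())          # b'hello' in Python 3 (bytes), 'hello' in Python 2
```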
0
2016-07-31T23:18:48Z
[ "python" ]
Keep track of items in array with delimiter python
38,689,028
<p>I take in a user-supplied input string array that can look like the following:</p> <pre><code>x=[100.0,150.0,200.0:300.0:10.0,300.0,350.0:400.0:10.0,500.0,600.0:700.0:10.0,800.0,900.0] </code></pre> <p>As these are user-supplied lists, the ordering of the interval slices [e.g., 200.0:300.0:10.0] can vary, as can the individual entries without slices.</p> <p>Then I split on the ':' and ',' delimiters, so I can convert from string to float for use in numpy.r_. I then get the following list:</p> <pre><code>x_arr=[100.0,150.0,200.0,300.0,10.0,300.0,350.0,400.0,10.0,500.0,600.0,700.0,10.0,800.0,900.0] </code></pre> <p>I would like to keep track of the original index where the ":" delimiter existed as well as where the ":" delimiter was absent so that I can reconstruct the original array as a series of floats in the following way:</p> <pre><code>np.r_[100.0, 150.0, slice(200.0,300.0,10.0), 300, slice(350.0,400.0,10.0), 500.0, slice(600,700,10),800,900] </code></pre> <p>The issue is how to keep track of the change in indices from the original array to the new array in a consistent manner. I would appreciate any ideas as to how to best implement this with random user-supplied input.
</p> <p>Here's one way I thought about approaching it:</p> <p>I split the original array on ',' to find the elements that are missing the ":" delimiter:</p> <pre><code>x_no_colon=re.split((','),x) xh=[] for ind in x_no_colon: inds_wo_colon=re.findall(":",ind) xh.append(inds_wo_colon) </code></pre> <p>which using the above example would return the following:</p> <pre><code>xh=[[],[],[":",":"],[],[":",":"],[],[":",":"],[],[]] </code></pre> <p>Then I can identify the indices without colons in the following manner:</p> <pre><code>x_wo_colons = [item for item in range(len(xh)) if xh[item] == []] </code></pre> <p>which would return:</p> <pre><code>x_wo_colons=[0,1,3,6,8,9] </code></pre> <p>Then I find the indices with the ':' delimiter using an array split on ':' :</p> <pre><code>colon_arr=re.split('(:)',x) prelim_x_with_colon=[item for item in range(len(colon_arr)) if colon_arr[item] == ':'] x_w_colon=[] for i in prelim_x_with_colon: if i == 1 and colon_arr[1] != ':': x_w_colon.append(i) elif i == 1 and colon_arr[1] == ':': x_w_colon.append(i-1) else: x_w_colon.append(i-1) </code></pre> <p>With a list of indices where colons exist and don't exist, the only thing to do would be to remove the indices w/o colons from the list w/ colons. The issue I've found here is that it's hard to get the indices correct each time for varying lists. This might be because my approach is convoluted and I'm using two different arrays for the different lists. </p> <p>The issue is how to keep track of the change in indices from the original array to the new array in a consistent manner. I would appreciate any ideas as to how to best implement this with random user-supplied input. </p> <p>Thanks in advance!</p>
2
2016-07-31T22:51:03Z
38,689,231
<p>Are you trying to convert this input string/list in to a list/array of numbers, taking into account that some items look like slices?</p> <p>Here's my experiment with your string (minus the <code>[]</code>). I'll leave a lot of the trial and error in. It might be instructive.</p> <pre><code>In [957]: txt='100.0,150.0,200.0:300.0:10.0,300.0,350.0:400.0:10.0,500.0,600.0:700.0:10.0,800.0,900.0' </code></pre> <p>I assume <code>,</code> is the primary delimiter, <code>:</code> secondary</p> <pre><code>In [958]: txt.split(',') Out[958]: ['100.0', '150.0', '200.0:300.0:10.0', '300.0', '350.0:400.0:10.0', '500.0', '600.0:700.0:10.0', '800.0', '900.0'] </code></pre> <p>define a function to process one of these items:</p> <pre><code>In [960]: def foo(astr): ...: items=astr.split(':') ...: if len(items)==1: ...: return float(items[0]) ...: else: ...: return slice(*[float(i) for i in items]) ...: In [961]: [foo(s) for s in txt.split(',')] Out[961]: [100.0, 150.0, slice(200.0, 300.0, 10.0), 300.0, slice(350.0, 400.0, 10.0), 500.0, slice(600.0, 700.0, 10.0), 800.0, 900.0] In [962]: np.r_[_] Out[962]: array([100.0, 150.0, slice(200.0, 300.0, 10.0), 300.0, slice(350.0, 400.0, 10.0), 500.0, slice(600.0, 700.0, 10.0), 800.0, 900.0], dtype=object) </code></pre> <p>It creates slices like I expected, but <code>np.r_</code> doesn't accept literal slices; it requires the <code>:</code> syntax. Actually it's the Python interpreter that does that, converting the <code>[a:b:c]</code> into <code>slice(a,b,c)</code> object. Seems we addressed that issue recently. 
Rather than fight that, let's jump directly to <code>arange</code> (since <code>np.r_</code> translates the <code>slices</code> to <code>arange</code> or <code>linspace</code> anyways).</p> <pre><code>In [963]: def foo(astr): ...: items=astr.split(':') ...: if len(items)==1: ...: return float(items[0]) ...: else: ...: return np.arange(*[float(i) for i in items]) In [964]: [foo(s) for s in txt.split(',')] Out[964]: [100.0, 150.0, array([ 200., 210., 220., 230., 240., 250., 260., 270., 280., 290.]), 300.0, array([ 350., 360., 370., 380., 390.]), 500.0, array([ 600., 610., 620., 630., 640., 650., 660., 670., 680., 690.]), 800.0, 900.0] In [965]: np.concatenate(_) ... ValueError: zero-dimensional arrays cannot be concatenated </code></pre> <p>Oops, <code>concatenate</code> doesn't like the single numbers;</p> <pre><code>In [966]: def foo(astr): ...: items=astr.split(':') ...: if len(items)==1: ...: return [float(items[0])] ...: else: ...: return np.arange(*[float(i) for i in items]) In [967]: [foo(s) for s in txt.split(',')] Out[967]: [[100.0], [150.0], array([ 200., 210., 220., 230., 240., 250., 260., 270., 280., 290.]), [300.0], array([ 350., 360., 370., 380., 390.]), [500.0], array([ 600., 610., 620., 630., 640., 650., 660., 670., 680., 690.]), [800.0], [900.0]] In [968]: np.concatenate(_) Out[968]: array([ 100., 150., 200., 210., 220., 230., 240., 250., 260., 270., 280., 290., 300., 350., 360., 370., 380., 390., 500., 600., 610., 620., 630., 640., 650., 660., 670., 680., 690., 800., 900.]) </code></pre> <p>Looks good.</p> <p>=======================</p> <p>In a recent answer I did find a way of passing literal <code>slice</code> objects to <code>r_</code>, in a tuple.</p> <pre><code>In [969]: def foo1(astr): ...: items=astr.split(':') ...: if len(items)==1: ...: return float(items[0]) ...: else: ...: return slice(*[float(i) for i in items]) ... 
In [971]: tuple([foo1(s) for s in txt.split(',')]) Out[971]: (100.0, 150.0, slice(200.0, 300.0, 10.0), 300.0, slice(350.0, 400.0, 10.0), 500.0, slice(600.0, 700.0, 10.0), 800.0, 900.0) In [972]: np.r_[tuple([foo1(s) for s in txt.split(',')])] Out[972]: array([ 100., 150., 200., 210., 220., 230., 240., 250., 260., 270., 280., 290., 300., 350., 360., 370., 380., 390., 500., 600., 610., 620., 630., 640., 650., 660., 670., 680., 690., 800., 900.]) </code></pre>
1
2016-07-31T23:25:42Z
[ "python", "arrays", "numpy" ]
Keep track of items in array with delimiter python
38,689,028
<p>I take in a user-supplied input string array that can look like the following:</p> <pre><code>x=[100.0,150.0,200.0:300.0:10.0,300.0,350.0:400.0:10.0,500.0,600.0:700.0:10.0,800.0,900.0] </code></pre> <p>As these are user-supplied lists, the ordering of the interval slices [e.g., 200.0:300.0:10.0] can vary, as can the individual entries without slices.</p> <p>Then I split on the ':' and ',' delimiters, so I can convert from string to float for use in numpy.r_. I then get the following list:</p> <pre><code>x_arr=[100.0,150.0,200.0,300.0,10.0,300.0,350.0,400.0,10.0,500.0,600.0,700.0,10.0,800.0,900.0] </code></pre> <p>I would like to keep track of the original index where the ":" delimiter existed as well as where the ":" delimiter was absent so that I can reconstruct the original array as a series of floats in the following way:</p> <pre><code>np.r_[100.0, 150.0, slice(200.0,300.0,10.0), 300, slice(350.0,400.0,10.0), 500.0, slice(600,700,10),800,900] </code></pre> <p>The issue is how to keep track of the change in indices from the original array to the new array in a consistent manner. I would appreciate any ideas as to how to best implement this with random user-supplied input.
</p> <p>Here's one way I thought about approaching it:</p> <p>I split the original array on ',' to find the elements that are missing the ":" delimiter:</p> <pre><code>x_no_colon=re.split((','),x) xh=[] for ind in x_no_colon: inds_wo_colon=re.findall(":",ind) xh.append(inds_wo_colon) </code></pre> <p>which using the above example would return the following:</p> <pre><code>xh=[[],[],[":",":"],[],[":",":"],[],[":",":"],[],[]] </code></pre> <p>Then I can identify the indices without colons in the following manner:</p> <pre><code>x_wo_colons = [item for item in range(len(xh)) if xh[item] == []] </code></pre> <p>which would return:</p> <pre><code>x_wo_colons=[0,1,3,6,8,9] </code></pre> <p>Then I find the indices with the ':' delimiter using an array split on ':' :</p> <pre><code>colon_arr=re.split('(:)',x) prelim_x_with_colon=[item for item in range(len(colon_arr)) if colon_arr[item] == ':'] x_w_colon=[] for i in prelim_x_with_colon: if i == 1 and colon_arr[1] != ':': x_w_colon.append(i) elif i == 1 and colon_arr[1] == ':': x_w_colon.append(i-1) else: x_w_colon.append(i-1) </code></pre> <p>With a list of indices where colons exist and don't exist, the only thing to do would be to remove the indices w/o colons from the list w/ colons. The issue I've found here is that it's hard to get the indices correct each time for varying lists. This might be because my approach is convoluted and I'm using two different arrays for the different lists. </p> <p>The issue is how to keep track of the change in indices from the original array to the new array in a consistent manner. I would appreciate any ideas as to how to best implement this with random user-supplied input. </p> <p>Thanks in advance!</p>
2
2016-07-31T22:51:03Z
38,689,385
<p>You said the input array is a string, so (using your example):</p> <pre><code>x = '[100.0,150.0,200.0:300.0:10.0,300.0,350.0:400.0:10.0,500.0,600.0:700.0:10.0,800.0,900.0]' </code></pre> <p>Then we split <code>x</code> by <code>,</code> and then the elements by <code>:</code>:</p> <pre><code>x = x[1:-1].split(',') x = ([float(y) for y in elt.split(':')] for elt in x) </code></pre> <p>I made <code>x</code> into a generator, but it is now essentially</p> <pre><code>[[100.0], [150.0], [200.0, 300.0, 10.0], [300.0], [350.0, 400.0, 10.0], [500.0], [600.0, 700.0, 10.0], [800.0], [900.0]] </code></pre> <p>At this point I don't know how to create the array you want with <code>numpy.r_</code>, but I think the same goal can be achieved by</p> <pre><code>x = (y if len(y) == 1 else np.arange(*y) for y in x) result = np.hstack(x) </code></pre> <p>Here <code>np.arange</code> is numpy's <code>range</code> that takes <code>float</code> arguments, and <code>np.hstack</code>, according to its docstring, "Stack arrays in sequence horizontally (column wise)."</p>
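Putting the pieces together end to end on a shortened version of the example input (a runnable sketch of the same approach; numpy required):

```python
import numpy as np

x = '[100.0,150.0,200.0:300.0:10.0,300.0]'

# split into fields, then each field into its ':'-separated floats
parts = [[float(y) for y in elt.split(':')] for elt in x[1:-1].split(',')]

# single values pass through; slice triples expand via np.arange
result = np.hstack([p if len(p) == 1 else np.arange(*p) for p in parts])

print(result.tolist())
# [100.0, 150.0, 200.0, 210.0, 220.0, 230.0, 240.0, 250.0,
#  260.0, 270.0, 280.0, 290.0, 300.0]
```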
1
2016-07-31T23:55:48Z
[ "python", "arrays", "numpy" ]
AttributeError: response object does not have an attribute 'text'
38,689,047
<p>I am trying to use python scrapy tool for extracting the information from the bitcointalk.org website about the users and the public keys that they post in the forum for donation.</p> <p>I found this piece of code online, made changes to it so that it runs on my desired website, but I am running into an error AttributeError response object has no attribute text.</p> <p>Below is the code for reference</p> <pre><code>class BitcointalkSpider(CrawlSpider): name = "bitcointalk" allowed_domains = ["bitcointalk.org"] start_urls = ["https://bitcointalk.org/index.php"] rules = ( Rule(SgmlLinkExtractor(deny=[ 'https://bitcointalk\.org/index\.php\?action=ignore', 'https://bitcointalk\.org/index\.php\?action=profile', ], allow_domains='bitcointalk.org'), callback='parse_item', follow=True), ) def parse_item(self, response): sel = Selector(response) sites = sel.xpath('//tr[contains(@class, "td_headerandpost")]') items = [] for site in sites: item = BitcoinItem() item["membername"] = site.xpath('.//td[@class="poster_info"]/b/a/text()').extract() addresses = site.xpath('.//div[contains(@class, "signature")]/text()').re(r'(1[1-9A-HJ-NP-Za-km-z]{26,33})') if item["membername"] and addresses: addr_list = set() for addr in addresses: if (bcv.check_bc(addr)): addr_list.add(addr) item["address"] = addr_list if len(addr_list) &gt; 0: items.append(item) return items </code></pre> <p>and the error that I am receiving is : </p> <pre><code>Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback yield next(it) File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output for x in result: File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/referer.py", line 22, in &lt;genexpr&gt; return (_set_referer(r) for r in result or ()) File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in &lt;genexpr&gt; return 
(r for r in result or () if _filter(r)) File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in &lt;genexpr&gt; return (r for r in result or () if _filter(r)) File "/usr/local/lib/python2.7/dist-packages/scrapy/spiders/crawl.py", line 72, in _parse_response cb_res = callback(response, **cb_kwargs) or () File "/home/sunil/Desktop/Nikhil/Thesis/mit_bitcoin/bitcoin/spiders/bitcointalk_spider.py", line 24, in parse_item sel = Selector(response) File "/usr/local/lib/python2.7/dist-packages/scrapy/selector/unified.py", line 63, in __init__ text = response.text AttributeError: 'Response' object has no attribute 'text' </code></pre>
0
2016-07-31T22:53:45Z
38,689,544
<p>Something is likely wrong with one of your requests, since it seems like the response from at least one URL you're crawling is not properly formatted. Either the request itself failed, or you're not making requests appropriately.</p> <p><a href="https://github.com/scrapy/scrapy/blob/master/scrapy/selector/unified.py#L63" rel="nofollow">See here</a> for the source of your error.</p> <p>And <a href="http://doc.scrapy.org/en/latest/topics/selectors.html#constructing-selectors" rel="nofollow">see here</a> for a clue as to why your request may be poorly formatted. It looks like <code>Selector</code> expects an <code>HtmlResponse</code> object, or a similar type.</p>
0
2016-08-01T00:26:25Z
[ "python", "scrapy", "attributeerror" ]
when creating script have :TypeError: '_io.TextIOWrapper' object is not subscriptable
38,689,106
<p>I am trying to make my code narrow down words in a listing an input then sorting them into a different list but it threw out this can anyone help me?</p> <pre><code>Traceback (most recent call last): File "C:/Users/dan/Desktop/python/threeword.py", line 4, in &lt;module&gt; word = words[x],words[(x+1)],words[(x+2)] TypeError: '_io.TextIOWrapper' object is not subscriptable words=open("three.txt",'r+') f=open("three1","w") for x in words: word = words[x],words[(x+1)],words[(x+2)] print(word) input=('y or n') if input=="y": f.write(word) x=x+3 elif input=='stop': break else: x=x+3 f.close() </code></pre>
-1
2016-07-31T23:06:01Z
38,689,243
<p>Your problem is that you can't just say <code>words[0]</code> when all you assigned <code>words</code> to is <code>open(filename)</code>. The <code>open()</code> function in Python does not return a list (as you seem to think); instead it returns a <code>file</code> object, as the Python docs say about what <code>open()</code> does:</p> <blockquote> <p>Open a file, returning an object of the file type</p> </blockquote> <p>Instead, do either <code>words = open(filename).readlines()</code> or <code>words = list(words)</code> (both give you a list of lines), and then you can do <code>words[0]</code>.</p> <p>~Mr.Python</p>
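A minimal, self-contained sketch of the difference (the file and its contents here are made up, since the original `three.txt` isn't available):

```python
import os
import tempfile

# Create a small sample file so the example stands on its own.
path = os.path.join(tempfile.mkdtemp(), "three.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

f = open(path)
try:
    f[0]  # a file object is not subscriptable
except TypeError as err:
    print(err)

words = f.readlines()  # now words is a list of strings, one per line
f.close()
print(words[0])  # the first line, 'alpha\n'
```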
0
2016-07-31T23:27:24Z
[ "python" ]
how to get rid of pandas converting large numbers in excel sheet to exponential?
38,689,125
<p>In the excel sheet , i have two columns with large numbers.</p> <p>But when i read the excel file with read_excel() and display the dataframe,</p> <p>those two columns are printed in scientific format with exponential.</p> <p>How can get rid of this format?</p> <p>Thanks </p> <p>Output in Pandas</p> <p><a href="http://i.stack.imgur.com/1JQFW.png" rel="nofollow"><img src="http://i.stack.imgur.com/1JQFW.png" alt="enter image description here"></a></p>
1
2016-07-31T23:08:26Z
38,691,325
<p>The way scientific notation is applied is controlled via pandas' options:</p> <pre><code>import pandas as pd pd.set_option('display.precision',3) pd.DataFrame({'x':[.001]}) x 0 0.001 </code></pre> <p>but</p> <pre><code>pd.DataFrame({'x':[.0001]}) x 0 1.000e-04 </code></pre> <p>but</p> <pre><code>pd.set_option('display.precision',4) pd.DataFrame({'x':[.0001]}) x 0 0.0001 </code></pre> <p>You may see more about how to control pandas output in the <a href="http://pandas.pydata.org/pandas-docs/stable/options.html" rel="nofollow">Options and Settings</a> section of the pandas docs.</p> <p><strong>EDIT</strong></p> <p>If this is simply for presentational purposes, you may convert your data to strings while formatting them on a column-by-column basis:</p> <pre><code>df = pd.DataFrame({'Traded Value':[67867869890077.96,78973434444543.44], 'Deals':[789797, 789878]}) df Deals Traded Value 0 789797 6.786787e+13 1 789878 7.897343e+13 df['Deals'] = df['Deals'].apply(lambda x: '{:d}'.format(x)) df['Traded Value'] = df['Traded Value'].apply(lambda x: '{:.2f}'.format(x)) df Deals Traded Value 0 789797 67867869890077.96 1 789878 78973434444543.44 </code></pre> <p>An alternative, more straightforward method would be to put the following line at the top of your code; it formats floats only:</p> <pre><code>pd.options.display.float_format = '{:.2f}'.format </code></pre>
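The `'{:.2f}'` and `'{:d}'` pieces above are ordinary Python format specs (pandas just applies them per value), so their effect can be checked without pandas:

```python
# The format specs used in the answer above are plain Python
# str.format() specifications.
big = 78973434444543.44        # shown by default as 7.897343e+13 in pandas
print("{:.2f}".format(big))    # fixed-point: no exponent in the output
print("{:d}".format(789878))   # plain integer formatting
print("{:.2f}".format(0.0001)) # rounds to 0.00 at two decimals
```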
0
2016-08-01T05:03:19Z
[ "python", "pandas", "machine-learning", "data-analysis" ]
get UTF-8 encoded hex value for international character
38,689,160
<p>Using Mac OSX and if there is a file encoded with UTF-8 (contains international characters besides ASCII), wondering if any tools or simple command (e.g. in Python 2.7 or shell) we can use to find the related hex (base-16) values (in terms of byte stream)? For example, if I write some Asian characters into the file, I can find the related hex value.</p> <p>My current solution is I open the file and read them byte by byte using Python str. Wondering if any simpler ways without coding. :)</p> <p><strong>Edit 1</strong>, it seems the output of <code>od</code> is not correct,</p> <pre><code>cat ~/Downloads/12 1 od ~/Downloads/12 0000000 000061 0000001 </code></pre> <p><strong>Edit 2</strong>, tried <code>od -t x1</code> options as well,</p> <pre><code>od -t x1 ~/Downloads/12 0000000 31 0000001 </code></pre> <p>thanks in advance, Lin</p>
0
2016-07-31T23:14:40Z
38,689,190
<p>You can use the command <code>iconv</code> to convert between encodings. The basic command is:</p> <pre><code>iconv -f from_encoding -t to_encoding inputfile </code></pre> <p>and you can see a list of supported encodings with</p> <pre><code>iconv --list </code></pre> <p>In your case,</p> <pre><code>iconv -f UTF8 -t UCS-2 inputfile </code></pre> <p>You've also asked to see the hex values. A standard utility that will do this is <code>xxd</code>. You can pipe the results of <code>iconv</code> to <code>xxd</code> as follows:</p> <pre><code>iconv -f UTF8 -t UCS-2 inputfile | xxd </code></pre>
0
2016-07-31T23:18:24Z
[ "python", "python-2.7", "shell", "unicode", "utf-8" ]
get UTF-8 encoded hex value for international character
38,689,160
<p>Using Mac OSX and if there is a file encoded with UTF-8 (contains international characters besides ASCII), wondering if any tools or simple command (e.g. in Python 2.7 or shell) we can use to find the related hex (base-16) values (in terms of byte stream)? For example, if I write some Asian characters into the file, I can find the related hex value.</p> <p>My current solution is I open the file and read them byte by byte using Python str. Wondering if any simpler ways without coding. :)</p> <p><strong>Edit 1</strong>, it seems the output of <code>od</code> is not correct,</p> <pre><code>cat ~/Downloads/12 1 od ~/Downloads/12 0000000 000061 0000001 </code></pre> <p><strong>Edit 2</strong>, tried <code>od -t x1</code> options as well,</p> <pre><code>od -t x1 ~/Downloads/12 0000000 31 0000001 </code></pre> <p>thanks in advance, Lin</p>
0
2016-07-31T23:14:40Z
38,705,587
<p><code>od</code> is the right command, but you need to specify an optional argument <code>-t x1</code>:</p> <pre><code>$ od -t x1 ~/Downloads/12 0000000 31 0000001 </code></pre> <p>If you prefer not to see the file offsets, try adding <code>-A none</code>:</p> <pre><code>$ od -A none -t x1 ~/Downloads/12 31 </code></pre> <p>Additionally, the Linux man page (but not the OS X man page) lists this example: <code>od -A x -t x1z -v</code>, "Display hexdump format output."</p> <p>Reference: <a href="http://www.unix.com/man-page/osx/1/od/" rel="nofollow">http://www.unix.com/man-page/osx/1/od/</a></p>
1
2016-08-01T18:22:13Z
[ "python", "python-2.7", "shell", "unicode", "utf-8" ]
get UTF-8 encoded hex value for international character
38,689,160
<p>Using Mac OSX and if there is a file encoded with UTF-8 (contains international characters besides ASCII), wondering if any tools or simple command (e.g. in Python 2.7 or shell) we can use to find the related hex (base-16) values (in terms of byte stream)? For example, if I write some Asian characters into the file, I can find the related hex value.</p> <p>My current solution is I open the file and read them byte by byte using Python str. Wondering if any simpler ways without coding. :)</p> <p><strong>Edit 1</strong>, it seems the output of <code>od</code> is not correct,</p> <pre><code>cat ~/Downloads/12 1 od ~/Downloads/12 0000000 000061 0000001 </code></pre> <p><strong>Edit 2</strong>, tried <code>od -t x1</code> options as well,</p> <pre><code>od -t x1 ~/Downloads/12 0000000 31 0000001 </code></pre> <p>thanks in advance, Lin</p>
0
2016-07-31T23:14:40Z
38,705,658
<p>I'm not sure exactly what you want, but this script can help you look up the Unicode codepoint and UTF-8 byte sequence for any character. Be sure to save the source as UTF-8.</p> <pre><code># coding: utf8 s = u'我是美国人。' for c in s: print c,'U+{:04X} {}'.format(ord(c),repr(c.encode('utf8'))) </code></pre> <p>Output:</p> <pre><code>我 U+6211 '\xe6\x88\x91' 是 U+662F '\xe6\x98\xaf' 美 U+7F8E '\xe7\xbe\x8e' 国 U+56FD '\xe5\x9b\xbd' 人 U+4EBA '\xe4\xba\xba' 。 U+3002 '\xe3\x80\x82' </code></pre>
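For reference, a Python 3 version of the same idea (there `str` is Unicode by default, so no `u` prefix or `# coding` line is needed, and `bytes.hex()` gives a compact view of the UTF-8 byte sequence):

```python
# Python 3 version: iterate over the characters, printing each one's
# Unicode codepoint and its UTF-8 bytes in hex.
s = '我是美国人。'
for c in s:
    print(c, 'U+{:04X}'.format(ord(c)), c.encode('utf8').hex())
```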
2
2016-08-01T18:27:21Z
[ "python", "python-2.7", "shell", "unicode", "utf-8" ]
How can update my code for a django app without earasing the existing database (Read Description)?
38,689,171
<p>I have a python django application that I published to heroku by connecting to github. I want some people to be able to add some information to the database from the website. If I make changes to the code, push to github and deploy the branch the database will go back to how it was at first. How can update my code for the app without changing the database?</p>
0
2016-07-31T23:16:13Z
38,689,489
<p>If you host your database on a separate server, like with <a href="https://aws.amazon.com/rds/mysql/" rel="nofollow">Amazon RDS</a> or <a href="https://www.heroku.com/postgres" rel="nofollow">Heroku Postgres</a>, and configure your code to connect to this remote host, you should have sufficient decoupling to avoid what you are talking about. </p>
1
2016-08-01T00:15:10Z
[ "python", "django", "database", "heroku", "web" ]
Delay between printing characters in Tkinter
38,689,182
<p>I'm trying to make a program that prints to a scrollable textbox. The only problem is that I want a delay between the characters. I've seen it using the print method but when I try to replace it with a textbox and insert it doesnt do anything. Here is the code using the print method.</p> <pre><code>from time import sleep import sys output = 'Hi... I just wanted to know if you like steak?' for char in output: sys.stdout.write ('%s' % char) sleep (0.1) </code></pre>
-1
2016-07-31T23:17:22Z
38,689,321
<p>Tkinter widgets have a method named <code>after</code> which you can use to run a command in the future. You can set up a function that inserts one letter, then calls itself again after a short delay. </p> <p>I don't know what you mean by a "scrollable textbox" since tkinter doesn't have a "textbox" widget. There is a single line <code>Entry</code> widget and a multiline <code>Text</code> widget, both of which are scrollable. Here's an example using the <code>Entry</code> widget:</p> <pre><code>import Tkinter as tk def insert_slow(widget, string): if len(string) &gt; 0: widget.insert("end", string[0]) widget.xview("end") if len(string) &gt; 1: widget.after(100, insert_slow, widget, string[1:]) root = tk.Tk() entry = tk.Entry() entry.pack() insert_slow(entry, "Hi... I just wanted to know if you like steak?") root.mainloop() </code></pre>
0
2016-07-31T23:41:57Z
[ "python", "user-interface", "tkinter", "textbox" ]
How do I make a triangle of numbers using python loops?
38,689,185
<p>I am trying to achieve this</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 0 1 2 3 4 5 0 1 2 3 4 0 1 2 3 0 1 2 0 1 0 </code></pre> <p>And I'm getting close but now I'm stuck. Here is my current code</p> <pre><code>def triangle(): n = 9 numList = [0,1,2,3,4,5,6,7,8,9] for i in range(10): for i in numList: print(i, end=" ") print() numList[n] = 0 n -= 1 triangle() </code></pre> <p>And this is the current output</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 0 1 2 3 4 5 6 7 0 0 0 1 2 3 4 5 6 0 0 0 0 1 2 3 4 5 0 0 0 0 0 1 2 3 4 0 0 0 0 0 0 1 2 3 0 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>So I'm there in a round about way, except, its backwards, and there is 0's instead of spaces</p>
2
2016-07-31T23:17:57Z
38,689,263
<p>Interesting puzzle; you could try this:</p> <pre><code>n = range(0,10) #set your range while len(n)&lt;20: #loop until the triangle is complete print ''.join(str(e) for e in n[0:10]) #print as a string! n = [' ']+n #prepend blank spaces </code></pre> <p><a href="http://rextester.com/DTSBRB59410" rel="nofollow">here is an example</a></p> <p>You could apply the same logic to your attempt. Basically, I prepend a space to the beginning of <code>n</code> after each loop iteration and then print only the first ten elements. The way I print the list is a little clunky: since I am joining, I need to convert each element to a string.</p>
1
2016-07-31T23:30:13Z
[ "python", "loops" ]
How do I make a triangle of numbers using python loops?
38,689,185
<p>I am trying to achieve this</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 0 1 2 3 4 5 0 1 2 3 4 0 1 2 3 0 1 2 0 1 0 </code></pre> <p>And I'm getting close but now I'm stuck. Here is my current code</p> <pre><code>def triangle(): n = 9 numList = [0,1,2,3,4,5,6,7,8,9] for i in range(10): for i in numList: print(i, end=" ") print() numList[n] = 0 n -= 1 triangle() </code></pre> <p>And this is the current output</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 0 1 2 3 4 5 6 7 0 0 0 1 2 3 4 5 6 0 0 0 0 1 2 3 4 5 0 0 0 0 0 1 2 3 4 0 0 0 0 0 0 1 2 3 0 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>So I'm there in a round about way, except, its backwards, and there is 0's instead of spaces</p>
2
2016-07-31T23:17:57Z
38,689,267
<p>You can try this code:</p> <pre><code>def triangle(): for i in range(10): print i * " ", for j in range(10 - i): print j, print triangle() </code></pre> <p>The code is almost self-explanatory.</p> <p>An online example is <a href="http://www.codeskulptor.org/#user41_988HYxG9MG_1.py" rel="nofollow">here</a></p>
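Since the question's own code uses Python 3's `print()` function, here is the same approach written for Python 3, building each row as a string first (the helper name is my own):

```python
# Python 3 take on the same approach: i leading spaces, then the
# digits 0..(9-i). Building rows as strings makes them easy to check.
def triangle_rows(n=10):
    rows = []
    for i in range(n):
        rows.append(' ' * i + ' '.join(str(j) for j in range(n - i)))
    return rows

for row in triangle_rows():
    print(row)
```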
2
2016-07-31T23:30:45Z
[ "python", "loops" ]
How do I make a triangle of numbers using python loops?
38,689,185
<p>I am trying to achieve this</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 0 1 2 3 4 5 0 1 2 3 4 0 1 2 3 0 1 2 0 1 0 </code></pre> <p>And I'm getting close but now I'm stuck. Here is my current code</p> <pre><code>def triangle(): n = 9 numList = [0,1,2,3,4,5,6,7,8,9] for i in range(10): for i in numList: print(i, end=" ") print() numList[n] = 0 n -= 1 triangle() </code></pre> <p>And this is the current output</p> <pre><code>0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 0 0 1 2 3 4 5 6 7 0 0 0 1 2 3 4 5 6 0 0 0 0 1 2 3 4 5 0 0 0 0 0 1 2 3 4 0 0 0 0 0 0 1 2 3 0 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>So I'm there in a round about way, except, its backwards, and there is 0's instead of spaces</p>
2
2016-07-31T23:17:57Z
38,689,887
<p>The other solutions are fine, but life becomes a little easier with <code>numpy</code>'s <code>arange</code> and the overloaded <code>*</code> operator for strings. Python's built-ins are very powerful.</p> <pre><code>import numpy as np for i in range(10): print ' ' * i + ''.join(str(elt) for elt in np.arange(10 - i)) </code></pre>
0
2016-08-01T01:31:53Z
[ "python", "loops" ]
Creating a save as function in python
38,689,271
<p>I'm trying to create a function to save data taken from multiple Entry widgets in my code and create a new save file storing the data from all the entries.</p> <p>I made a entry list called entries and try to pull from that but can't get it quite right. It will create the file but its always blank. </p> <p>This is the code for my save as function using tkinter widgets.</p> <pre><code>def file_save_as(self): fout = asksaveasfile(mode = 'a', defaultextension = '.txt') with open('fout', 'a') as f: for entry in self.entries: f.write("%s\n" % entry) </code></pre>
-2
2016-07-31T23:31:21Z
38,690,231
<pre><code>def file_save_as(self): fout = asksaveasfilename(defaultextension = '.txt') try: with open(fout, 'w') as output: for x in self.entries: output.write(x.get()) except FileNotFoundError: print("Cancelled save or error in filename") </code></pre>
2
2016-08-01T02:30:21Z
[ "python", "file", "tkinter", "save" ]
Trying to install pip error with configparser
38,689,289
<p>I am trying to install pip and am getting some issues with configparser. Somehow the import configparser is pointing to the system version of python, python 2.7 instead of python 3.5 </p> <pre><code>&gt;&gt; pip install -U pip Traceback (most recent call last): File "/Users/../anaconda3/bin/pip", line 9, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 542, in load_entry_point File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2569, in load_entry_point File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2229, in load File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2235, in resolve File "/Users/../anaconda3/lib/python3.5/site-packages/pip/__init__.py", line 14, in &lt;module&gt; from pip.utils import get_installed_distributions, get_prog File "/Users/../anaconda3/lib/python3.5/site-packages/pip/utils/__init__.py", line 23, in &lt;module&gt; from pip.locations import ( File "/Users/../anaconda3/lib/python3.5/site-packages/pip/locations.py", line 10, in &lt;module&gt; from distutils.command.install import install, SCHEME_KEYS # noqa File "/Users../anaconda3/lib/python3.5/distutils/command/install.py", line 9, in &lt;module&gt; from distutils.core import Command File "/Users/../anaconda3/lib/python3.5/distutils/core.py", line 18, in &lt;module&gt; from distutils.config import PyPIRCCommand File "/Users/../anaconda3/lib/python3.5/distutils/config.py", line 7, in &lt;module&gt; from configparser import RawConfigParser File "/Library/Python/2.7/site-packages/configparser.py", line 12, in &lt;module&gt; from backports.configparser import ( </code></pre>
0
2016-07-31T23:35:37Z
38,689,344
<p>I'm assuming you are using a Linux OS, based on the command. Try installing with <strong>apt-get</strong>:</p> <pre><code>apt-get install python-pip </code></pre> <p>When not logged in as root, use <code>sudo</code> (you need sudo privileges):</p> <pre><code>sudo apt-get install python-pip </code></pre> <p>I used the same command to install it on my Raspberry Pi and on Ubuntu 16.</p>
0
2016-07-31T23:46:39Z
[ "python", "python-2.7", "configparser" ]
Trying to install pip error with configparser
38,689,289
<p>I am trying to install pip and am getting some issues with configparser. Somehow the import configparser is pointing to the system version of python, python 2.7 instead of python 3.5 </p> <pre><code>&gt;&gt; pip install -U pip Traceback (most recent call last): File "/Users/../anaconda3/bin/pip", line 9, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 542, in load_entry_point File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2569, in load_entry_point File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2229, in load File "/Users/../anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 2235, in resolve File "/Users/../anaconda3/lib/python3.5/site-packages/pip/__init__.py", line 14, in &lt;module&gt; from pip.utils import get_installed_distributions, get_prog File "/Users/../anaconda3/lib/python3.5/site-packages/pip/utils/__init__.py", line 23, in &lt;module&gt; from pip.locations import ( File "/Users/../anaconda3/lib/python3.5/site-packages/pip/locations.py", line 10, in &lt;module&gt; from distutils.command.install import install, SCHEME_KEYS # noqa File "/Users../anaconda3/lib/python3.5/distutils/command/install.py", line 9, in &lt;module&gt; from distutils.core import Command File "/Users/../anaconda3/lib/python3.5/distutils/core.py", line 18, in &lt;module&gt; from distutils.config import PyPIRCCommand File "/Users/../anaconda3/lib/python3.5/distutils/config.py", line 7, in &lt;module&gt; from configparser import RawConfigParser File "/Library/Python/2.7/site-packages/configparser.py", line 12, in &lt;module&gt; from backports.configparser import ( </code></pre>
0
2016-07-31T23:35:37Z
38,689,800
<p>Modify <code>PYTHONPATH</code> so that the Python 3.x packages path comes before the 2.x packages path.</p>
0
2016-08-01T01:16:46Z
[ "python", "python-2.7", "configparser" ]
How should I be going about exiting python script when user types 'q'?
38,689,323
<p>I'm trying to add code to exit my curses python script correctly when the user types <code>q</code>. I can't merely do <code>CTRL+C</code> because then curses won't be de-initialized correctly.</p> <p>I haven't found a good solution with getting user input that has a timeout so the program doesn't sit there until the user gives some input. </p> <p>Is there a simple way with creating a second thread that just handles user input and can request the main thread to run a de-init function?</p>
-2
2016-07-31T23:42:11Z
38,707,781
<p>The suggested answer <a href="http://stackoverflow.com/questions/24308583/python3-curses-how-to-press-q-for-ending-program-immediately">Python3 + Curses: How to press “q” for ending program immediately?</a> is a starting point, but (like the suggestion for using a separate thread) is not what is needed.</p> <p>Here is an example, starting from the former:</p> <pre><code>import sys, curses, time def main(sc): sc.nodelay(1) while True: try: sc.addstr(1, 1, time.strftime("%H:%M:%S")) sc.refresh() if sc.getch() == ord('q'): break time.sleep(1) except KeyboardInterrupt: curses.endwin() print "Bye" sys.exit() if __name__=='__main__': curses.wrapper(main) </code></pre> <p>When you press <code>^C</code>, it sends a keyboard interrupt. If you catch that, you can tell curses to cleanup (and restore the terminal modes). After that, exit.</p> <p>A separate thread will not work because it is unlikely that the underlying curses is thread-safe (and improbable that someone has gotten around to using the feature from Python).</p> <p>Further reading:</p> <ul> <li><a href="http://invisible-island.net/ncurses/ncurses.faq.html#multithread" rel="nofollow">Why does (fill in the blank) happen when I use two threads?</a></li> </ul>
1
2016-08-01T20:42:47Z
[ "python", "curses", "python-curses" ]
Copying Attributes from Superclass Instance while Inheriting?
38,689,362
<p>I'm trying to make a subclass that inherits from an instance of its superclass, and bases the majority of its attributes on the superclass attributes. </p> <pre><code>class Thing: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ThingWithD(Thing): def __init__(self, thing, d): self.a = thing.a self.b = thing.b self.c = thing.c self.d = d </code></pre> <p>Is there a more concise way to make the <code>a</code>, <code>b</code>, and <code>c</code> declarations inside <code>ThingWithD.__init__()</code>? </p>
0
2016-07-31T23:50:33Z
38,689,449
<p>With class <code>Thing</code> defined as such:</p> <pre><code>class Thing: def __init__(self, a, b, c): self.a = a self.b = b self.c = c </code></pre> <p>I can think of 3 ways to achieve what you are asking using classical inheritance. The first is to take advantage of your known arguments and explicitly index <code>args</code> to pull out a to c, and d, like so:</p> <pre><code>class ThingWithD(Thing): def __init__(self, *args): self.d = args[-1] a_to_c = args[:-1] super().__init__(*a_to_c) thing_with_d = ThingWithD(1,2,3,4) thing_with_d.a # 1 thing_with_d.d # 4 </code></pre> <p>The second, and best way, would be to convert your params to keyword arguments so that they can be more easily mixed and matched. This is the most scalable solution, and could pave the way for <code>ThingWithE</code> and <code>ThingWithF</code>. </p> <pre><code>class ThingWithD(Thing): def __init__(self, d=None, **kwargs): super().__init__(**kwargs) self.d = d thing_with_d = ThingWithD(a=1,b=2,c=3,d=4) thing_with_d.a # 1 thing_with_d.d # 4 </code></pre> <p>The last way, which seems closest to what you already tried, is to use <code>ThingWithD</code> as a factory class that adds <code>d</code> to a copy of an existing instance.</p> <pre><code>class ThingWithD(Thing): def __init__(self, thing, d): super().__init__(thing.a, thing.b, thing.c) self.d = d thing = Thing(1,2,3) thing_with_d = ThingWithD(thing, 4) thing_with_d.a # 1 thing_with_d.d # 4 </code></pre> <p>This is a strange approach, because here we are actually creating a copy of the original <code>thing</code> instance, and it's unclear why we are inheriting from <code>Thing</code> at all. Instead, we could use a function that does the following.</p> <pre><code>def add_d_to_thing(thing, d): thing.d = d return thing thing = Thing(1,2,3) thing_with_d = add_d_to_thing(thing, 4) thing_with_d.a # 1 thing_with_d.d # 4 </code></pre> <p>This would return the same instance of thing, would add a <code>d</code> attribute, and is easier to read.</p>
1
2016-08-01T00:07:46Z
[ "python", "inheritance", "subclass", "python-3.4", "superclass" ]
Copying Attributes from Superclass Instance while Inheriting?
38,689,362
<p>I'm trying to make a subclass that inherits from an instance of its superclass, and bases the majority of its attributes on the superclass attributes. </p> <pre><code>class Thing: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ThingWithD(Thing): def __init__(self, thing, d): self.a = thing.a self.b = thing.b self.c = thing.c self.d = d </code></pre> <p>Is there a more concise way to make the <code>a</code>, <code>b</code>, and <code>c</code> declarations inside <code>ThingWithD.__init__()</code>? </p>
0
2016-07-31T23:50:33Z
38,689,483
<p>The most concise—and object-oriented—way would probably be to just call the superclass's <code>__init__()</code> method and avoid the repetition:</p> <pre><code>class Thing: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ThingWithD(Thing): def __init__(self, thing, d): super().__init__(thing.a, thing.b, thing.c) # Python 3 only self.d = d thing = Thing(1, 2, 3) thing_with_d = ThingWithD(thing, 4) print('{}, {}'.format(thing_with_d.a, thing_with_d.d)) # -&gt; 1, 4 </code></pre> <p>To do the same thing in Python 2.x, you would need to make <code>Thing</code> a new-style class by explicitly specifying its base class as <code>object</code> and change the call to the superclass constructor as shown below. </p> <p>If you make both of these modifications, the same code will work in both Python 2 and 3.</p> <pre><code>class Thing(object): def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ThingWithD(Thing): def __init__(self, thing, d): super(ThingWithD, self).__init__(thing.a, thing.b, thing.c) self.d = d </code></pre>
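If <code>Thing</code> grows more attributes later, forwarding each one explicitly becomes tedious. A hedged alternative sketch, assuming <code>Thing</code> keeps plain instance attributes in <code>__dict__</code> (no <code>__slots__</code> or properties), copies them all at once with <code>vars()</code>:

```python
class Thing:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class ThingWithD(Thing):
    def __init__(self, thing, d):
        # Copy every instance attribute of the source Thing in one step.
        self.__dict__.update(vars(thing))
        self.d = d

thing_with_d = ThingWithD(Thing(1, 2, 3), 4)
print(thing_with_d.a, thing_with_d.d)  # 1 4
```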
2
2016-08-01T00:14:27Z
[ "python", "inheritance", "subclass", "python-3.4", "superclass" ]
Python Error 'list index out of range' When Using Split Function
38,689,515
<p>Im am trying to pull info from yahoo finance using urllib2 and then using the split function to display the Net Income. When I go to run the program in the Python 2.7.12 shell I have to print the command "yahooNetIncome('')" with the stock symbol in the single quotes. Whenever I do this it comes up with the error "failed in main loop list index out of range". Im rather new to python so I do not fully understand the problem. If someone could please help that would be greatly appreciated. </p> <pre><code>import time import urllib2 from urllib2 import urlopen stock = ['a', 'aa', 'aapl', 'abbv', 'abc', 'abt', 'ace', 'aci', 'acn', 'act', 'adbe', 'adi', 'adm', 'adp'] def yahooNetIncome(stock): try: sourceCode = urllib2.urlopen('https://finance.yahoo.com/quote/' + stock + '/financials').read() NI = sourceCode.split('&lt;span data-reactid=".1vqhh4ora92.1.$0.0.0.3.1.$main-0-Quote-Proxy.$main-0-Quote.0.2.0.2:1:$INCOME_STATEMENT.0.0.$GROSS_PROFIT.1:$0.0.0"&gt;')[1].split('&lt;/span&gt;')[0] print 'Net Income: ', NI except Exception, e: print 'failed in main loop', str(e) </code></pre>
-2
2016-08-01T00:21:04Z
38,689,551
<p>Your 'list index out of range' error means that your call to the <code>split()</code> method didn't find anything to split on. Therefore, there wouldn't be an index <code>1</code> (no splits means the list has only a single element, at index <code>0</code>), causing a list index out of range error to be thrown. </p>
0
2016-08-01T00:28:44Z
[ "python", "python-2.7" ]
Python Error 'list index out of range' When Using Split Function
38,689,515
<p>Im am trying to pull info from yahoo finance using urllib2 and then using the split function to display the Net Income. When I go to run the program in the Python 2.7.12 shell I have to print the command "yahooNetIncome('')" with the stock symbol in the single quotes. Whenever I do this it comes up with the error "failed in main loop list index out of range". Im rather new to python so I do not fully understand the problem. If someone could please help that would be greatly appreciated. </p> <pre><code>import time import urllib2 from urllib2 import urlopen stock = ['a', 'aa', 'aapl', 'abbv', 'abc', 'abt', 'ace', 'aci', 'acn', 'act', 'adbe', 'adi', 'adm', 'adp'] def yahooNetIncome(stock): try: sourceCode = urllib2.urlopen('https://finance.yahoo.com/quote/' + stock + '/financials').read() NI = sourceCode.split('&lt;span data-reactid=".1vqhh4ora92.1.$0.0.0.3.1.$main-0-Quote-Proxy.$main-0-Quote.0.2.0.2:1:$INCOME_STATEMENT.0.0.$GROSS_PROFIT.1:$0.0.0"&gt;')[1].split('&lt;/span&gt;')[0] print 'Net Income: ', NI except Exception, e: print 'failed in main loop', str(e) </code></pre>
-2
2016-08-01T00:21:04Z
38,689,674
<p>You're getting bitten by a couple of things.</p> <p>The out-of-range error you're getting is due to there being no element at index <code>1</code> in the list built in your <code>NI</code> assignment line. When <code>split()</code> doesn't find its separator it returns the whole string as a single-element list, and here the separator isn't found because the string you're feeding into it doesn't exist in the data pulled from the URL. This is the smaller issue.</p> <p>You're probably wondering why that string isn't in there when you can see it pretty clearly with a browser inspector. Here's where you're getting bitten by the bigger issue: the page you're downloading dynamically changes its content via JavaScript. If you use a tool like Curl to dump it straight out to disk without executing JavaScript you'll see that the string you're searching on doesn't exist within the file. Worse, the string you're trying to fetch (the number that you want to assign <code>NI</code> to) doesn't exist either. The JavaScript must be run before it gets displayed. In your browser you're seeing a live display after the JavaScript has been run. When you're pulling in the page via Python (or Curl or any other tool that doesn't behave like a browser and run JavaScript the way the page expects) you won't get the data you're looking for.</p> <p>Quite possibly this is being done by the site owner specifically to prevent the sort of thing you're trying to do.</p>
1
2016-08-01T00:54:09Z
[ "python", "python-2.7" ]
Multiprocessing using map
38,689,526
<p>I have a list of strings and on every string I am doing some changes that you can see in <code>wordify()</code>. Now, to speed this up, I split up the list into sublists using <code>chunked()</code> (the number of sublists is the number of CPU cores - 1). That way I get lists that look like <code>[[,,],[,,],[,,],[,,]]</code> . </p> <p>What I try to achieve:</p> <p>I want to do <code>wordify()</code> on every of these sublists simultaneously, returning the sublists as separate lists. I want to wait until all processes finish and then join these sublists into one list. The approach below does not work.</p> <pre><code>import multiprocessing from multiprocessing import Pool from contextlib import closing def readFiles(): words = [] with open("somefile.txt") as f: w = f.readlines() words = words + w return words def chunked(words, num_cpu): avg = len(words) / float(num_cpu) out = [] last = 0.0 while last &lt; len(words): out.append(words[int(last):int(last + avg)]) last += avg return out def wordify(chunk,wl): wl.append([chunk[word].split(",", 1)[0] for word in range(len(chunk))]) return wl if __name__ == '__main__': num_cpu = multiprocessing.cpu_count() - 1 words = readFiles() chunked = chunked(words, num_cpu) wordlist = [] wordify(words, wordlist) # works with closing(Pool(processes = num_cpu)) as p: p.map(wordify, chunked, wordlist) # fails </code></pre>
0
2016-08-01T00:22:35Z
38,689,558
<p>You have written your code so that you're just passing a single function to <code>map</code>; it's not smart enough to know that you're hoping it passes <code>wordlist</code> into the second argument of your function. </p> <p>TBH partial function application is a bit clunky in Python, but you can use <code>functools.partial</code>. Bind <code>wl</code> by keyword so that each chunk still arrives as the first positional argument:</p> <pre><code>from functools import partial p.map(partial(wordify, wl=wordlist), chunked) </code></pre>
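A self-contained sketch of the binding, without multiprocessing, to show what `partial` does here (the sample data is made up):

```python
from functools import partial

def wordify(chunk, wl):
    # Keep only the part before the first comma of each item.
    wl.append([item.split(",", 1)[0] for item in chunk])
    return wl

wordlist = []
bound = partial(wordify, wl=wordlist)  # wl is fixed; chunk stays positional
for chunk in [["a,1", "b,2"], ["c,3"]]:
    bound(chunk)
print(wordlist)  # [['a', 'b'], ['c']]
```

Note that under a real `Pool`, each worker process gets its own copy of `wordlist`, so mutations made in workers won't be visible in the parent; collecting the return value of `p.map` is the more reliable pattern.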
1
2016-08-01T00:29:39Z
[ "python", "dictionary", "multiprocess" ]
How to get dictionary values from list of lists (any language)?
38,689,547
<p>I most interested in simplicity, so would appreciate solution in any language (also interesting how it would look using LINQ). I tried to do it in Python, but failed.</p> <p>From these two lists:</p> <pre><code>init_li1 = [[234,45,1,86,2,0],[324,6,1],[123,1111,3]] init_li2 = ["Alpha", "Beta", "Gamma"] </code></pre> <p>I would like to get this dictionary:</p> <pre><code>{"Alpha":[234,45,1,86,2,0], "Beta":[324,6,1], "Gamma":[123,1111,3]} </code></pre>
-1
2016-08-01T00:27:28Z
38,689,555
<p>Actually, it's very easy in Python:</p> <pre><code>dictionary = dict(zip(init_li2, init_li1)) </code></pre> <p>You see, the <code>dict</code> constructor accepts an iterable of key/value pairs, like this:</p> <pre><code>dict([('key1', 'value1'), ('key2', 'value2')]) </code></pre> <p>Well, <code>zip()</code> will generate tuples by taking corresponding elements from its arguments. That is:</p> <pre><code>zip([4, 5, 6], [2, 3, 4]) #-&gt; (4, 2), (5, 3), (6, 4) </code></pre> <p>Therefore, we take the keys from <code>init_li2</code>, and the values from <code>init_li1</code>.</p> <p>Note that <code>zip()</code> returns a list in Python 2. If you are working with large lists, that is not memory-efficient. To improve on the memory usage, use <code>itertools.izip</code> instead.</p>
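Checking the one-liner against the exact data from the question:

```python
# Verifying the dict(zip(...)) one-liner with the question's lists.
init_li1 = [[234, 45, 1, 86, 2, 0], [324, 6, 1], [123, 1111, 3]]
init_li2 = ["Alpha", "Beta", "Gamma"]

dictionary = dict(zip(init_li2, init_li1))
print(dictionary)
```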
1
2016-08-01T00:29:05Z
[ "c#", "python", "list", "dictionary" ]