title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
How to efficiently insert bulk data into Cassandra using Python? | 38,852,492 | <p>I have a Python application, built with Flask, that allows importing of many data records (anywhere from 10k-250k+ records at one time). Right now it inserts the records into a Cassandra database one at a time, like this:</p>
<pre><code>for transaction in transactions:
self.transaction_table.insert_record(transaction)
</code></pre>
<p>This process is incredibly slow. Is there a best-practice approach I could use to more efficiently insert this bulk data?</p>
| 0 | 2016-08-09T13:43:13Z | 38,855,747 | <p>The easiest solution is to generate csv files from your data, and import it with the <a href="https://docs.datastax.com/en/cql/3.3/cql/cql_reference/copy_r.html" rel="nofollow">COPY</a> command. That should work well for up to a few million rows. For more complicated scenarios you could use the <a href="http://www.datastax.com/dev/blog/simple-data-importing-and-exporting-with-cassandra" rel="nofollow">sstableloader</a> command.</p>
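<p>To make the CSV route concrete, here is a rough sketch of the export step. The field names, file name, and the keyspace/table in the COPY command are made up; adjust them to match your schema:</p>

```python
import csv
import io

# Hypothetical transaction records; swap in your real column names.
transactions = [
    {"id": 1, "amount": "19.99", "note": "first"},
    {"id": 2, "amount": "5.00", "note": "second"},
]
fieldnames = ["id", "amount", "note"]

buf = io.StringIO()  # use open("transactions.csv", "w") to write a real file
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writerows(transactions)  # no header row: COPY maps columns by position
csv_text = buf.getvalue()
print(csv_text)

# Afterwards, in cqlsh:
#   COPY mykeyspace.transactions (id, amount, note) FROM 'transactions.csv';
```

<p>For the largest imports, sstableloader (mentioned above) streams data far faster than going through the coordinator node.</p>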
| 1 | 2016-08-09T16:12:27Z | [
"python",
"cassandra"
] |
Stipulation of "Good"/"Bad"-Cases in an LDA Model (Using gensim in Python) | 38,852,506 | <p>I am trying to analyze news snippets in order to identify crisis periods.
To do so, I have already downloaded news articles over the past 7 years and have those available.
Now, I am applying an LDA (Latent Dirichlet Allocation) model to this dataset in order to identify which countries show signs of an economic crisis. </p>
<p>I am basing my code on a blog post by Jordan Barber (<a href="https://rstudio-pubs-static.s3.amazonaws.com/79360_850b2a69980c4488b1db95987a24867a.html" rel="nofollow">https://rstudio-pubs-static.s3.amazonaws.com/79360_850b2a69980c4488b1db95987a24867a.html</a>) – here is my code so far:</p>
<pre><code>import os, csv
#create list with text blocks in rows, based on csv file
list=[]
with open('Testfile.csv', 'r') as csvfile:
emails = csv.reader(csvfile)
for row in emails:
list.append(row)
#create doc_set
doc_set=[]
for row in list:
doc_set.append(row[0])
#import plugins - need to install gensim and stop_words manually for fresh python install
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
# list for tokenized documents in loop
texts = []
# loop through document list
for i in doc_set:
# clean and tokenize document string
raw = i.lower()
tokens = tokenizer.tokenize(raw)
# remove stop words from tokens
stopped_tokens = [i for i in tokens if not i in en_stop]
# stem tokens
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
# add tokens to list
texts.append(stemmed_tokens)
# turn our tokenized documents into a id <-> term dictionary
dictionary = corpora.Dictionary(texts)
# convert tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in texts]
# generate LDA model
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word = dictionary, passes=10)
print(ldamodel.print_topics(num_topics=5, num_words=5))
# map topics to documents
doc_lda=ldamodel[corpus]
with open('doc_lda.csv', 'w') as outfile:
writer = csv.writer(outfile)
for row in doc_lda:
writer.writerow(row)
</code></pre>
<p>Essentially, I identify a number of topics (5 in the code above – to be checked), and using the last line I assign each news article a score, which indicates the probability of an article being related to one of these topics.
Now, I can only manually make a qualitative assessment of whether a given topic is related to a crisis, which is a bit unfortunate.
What I would much rather do is tell the algorithm whether an article was published during a crisis and use this additional piece of information to identify topics both for my "crisis years" and for my "non-crisis years". Simply splitting my dataset to consider only topics for my "bads" (i.e. crisis years only) won't work in my opinion, as I would still need to manually select which topics are actually related to a crisis, and which topics would show up anyway (sports news, ...). </p>
<p>So, is there a way to adapt the code to a) incorporate the information of "crisis" vs "non-crisis" and b) automatically choose the optimal number of topics / words to optimize the predictive power of the model?</p>
<p>Thanks a lot in advance!</p>
| 0 | 2016-08-09T13:43:38Z | 38,870,432 | <p><strong>First some suggestions on your specific questions:</strong></p>
<blockquote>
<p>a) incorporate the information of "crisis" vs "non-crisis" </p>
</blockquote>
<p>To do this with a standard LDA model, I'd probably go for mutual information between doc topic proportions and whether docs are in a crisis/non crisis period. </p>
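<p>As a rough, framework-free sketch of that mutual-information idea (the topic assignments and crisis labels below are toy stand-ins; with real data you would take the most probable topic per document from <code>doc_lda</code> and your own period labels):</p>

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """I(X;Y) in nats between two paired discrete sequences."""
    n = float(len(xs))
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy stand-ins: the most probable topic per document (e.g. from doc_lda)
# and a 0/1 flag for whether the document falls in a crisis period.
dominant_topic = [0, 0, 1, 1, 2, 2, 0, 1]
in_crisis      = [1, 1, 0, 0, 0, 0, 1, 0]
mi = mutual_information(dominant_topic, in_crisis)
print(mi)
```

<p>A topic whose assignments carry high mutual information with the crisis flag is a candidate crisis topic; topics like sports news should come out near zero.</p>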
<blockquote>
<p>b) to automatically choose the optimal number of topics / words to optimize the predictive power of the model?</p>
</blockquote>
<p>If you want to do this properly, experiment with many settings for the number of topics and try to use the topic models to predict conflict/nonconflict for held out documents (documents not included in the topic model).</p>
<p>There are many topic model variants that effectively choose the number of topics ("non-parametric" models). It turns out that the Mallet implementation with hyperparameter optimisation effectively does the same, so I'd suggest using that (provide a large number of topics; hyperparameter optimisation will result in many topics with very few assigned words, and these topics are just noise).</p>
<p><strong>And some general comments:</strong></p>
<p>There are many topic model variants out there, and in particular a few that incorporate time. These may be a good choice for you (as they'll better resolve topic changes over time than standard LDA - though standard LDA is a good starting point). </p>
<p>One model I particularly like uses Pitman-Yor word priors (better matching Zipf-distributed words than a Dirichlet), accounts for burstiness in topics and provides clues on junk topics: <a href="https://github.com/wbuntine/topic-models" rel="nofollow">https://github.com/wbuntine/topic-models</a></p>
| 0 | 2016-08-10T10:13:05Z | [
"python",
"python-2.7",
"lda",
"gensim"
] |
Look for existing entry in database before trying to insert | 38,852,561 | <p>I am currently working with Access 2013. I have built a database that revolves around applicants submitting for a Job. The database is set up so that a person can apply for many different jobs, when the same person applies for a job through our website (uses JotForms) it automatically updates the database. </p>
<p>I have a Python script that pulls each applicant's submission information from an email and updates the database. The problem I am running into is that, within the database, I have the applicant's primary email set to "no duplicates", which does not allow the same person to apply for many different jobs: the Python script tries to create a new record in the database, causing an error. </p>
<p>Within my Access form (VBA) or in Python, what do I need to write to tell my database that, if the primary emails are the same, it should only create a new record in the position-applied-for table, related to the person's primary email? </p>
<p>Tables:</p>
<pre><code>tblPerson_Information tblPosition_Applied_for
Personal_ID (PK) Position_ID
First_Name Position_Personal_ID (FK)
Last_Name Date_of_Submission
Clearance_Type
Primary_Phone
Primary_email
Education_Level
</code></pre>
| 1 | 2016-08-09T13:45:58Z | 38,853,815 | <p>Simply look up the email address in the [tblPerson_Information] table:</p>
<pre class="lang-python prettyprint-override"><code>primary_email = 'gord@example.com' # test data
crsr = conn.cursor()
sql = """\
SELECT Personal_ID FROM tblPerson_Information WHERE Primary_email=?
"""
crsr.execute(sql, (primary_email,))
row = crsr.fetchone()
if row is not None:
personal_id = row[0]
print('Email found: tblPerson_Information.Personal_ID = {0}'.format(personal_id))
else:
print('Email not found in tblPerson_Information')
</code></pre>
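<p>The same look-up-then-insert flow, extended to create the person when the email is not found. This sketch runs against an in-memory sqlite3 database so it is self-contained; with Access you would keep your existing connection object, and the <code>?</code> placeholders match the snippet above:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
crsr = conn.cursor()
crsr.execute("CREATE TABLE tblPerson_Information "
             "(Personal_ID INTEGER PRIMARY KEY, Primary_email TEXT UNIQUE)")

def get_or_create_person(crsr, email):
    """Return the Personal_ID for this email, inserting a new row if needed."""
    crsr.execute("SELECT Personal_ID FROM tblPerson_Information "
                 "WHERE Primary_email=?", (email,))
    row = crsr.fetchone()
    if row is not None:
        return row[0]                      # existing applicant
    crsr.execute("INSERT INTO tblPerson_Information (Primary_email) "
                 "VALUES (?)", (email,))
    return crsr.lastrowid                  # newly created applicant

pid1 = get_or_create_person(crsr, "gord@example.com")
pid2 = get_or_create_person(crsr, "gord@example.com")  # same person applying again
```

<p>With the Personal_ID in hand, you then insert into tblPosition_Applied_for unconditionally, so one person can have many applications.</p>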
| 1 | 2016-08-09T14:38:58Z | [
"python",
"database",
"ms-access",
"access-vba",
"ms-access-2013"
] |
How to remove substrings marked with special characters from a string? | 38,852,589 | <p>I have a string in Python:</p>
<pre><code>Tt = "This is a <\"string\">string, It should be <\"changed\">changed to <\"a\">a nummber."
print Tt
'This is a <"string">string, It should be <"changed">changed to <"a">a nummber.'
</code></pre>
<p>You can see that some words repeat in this part: <code><\" \"></code>. </p>
<p>My question is: how can I delete those repeated parts (delimited by the special characters)?</p>
<p>The result should be like: </p>
<pre><code>'This is a string, It should be changed to a nummber.'
</code></pre>
| -2 | 2016-08-09T13:46:56Z | 38,852,628 | <p>Use regular expressions:</p>
<pre><code>import re
Tt = re.sub('<\".*?\">', '', Tt)
</code></pre>
<p>Note the <code>?</code> after <code>*</code>. It makes the expression non-greedy,
so it tries to match as few symbols between <code><\"</code> and <code>\"></code> as possible.</p>
<p>The solution by <em>James</em> will work only in cases where the delimiting substrings
consist of only one character (<code><</code> and <code>></code>). In that case it is possible to use negations like <code>[^>]</code>. If you want to remove a substring delimited by character sequences (e.g. by <code>begin</code> and <code>end</code>), you should use non-greedy regular expressions (i.e. <code>.*?</code>).</p>
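<p>A quick runnable illustration of the greedy vs non-greedy difference with multi-character delimiters (<code>begin</code>/<code>end</code> here are stand-ins for whatever marks your substrings):</p>

```python
import re

s = "keep begin drop end keep begin drop end keep"

# Greedy: .* runs to the LAST 'end', swallowing the middle 'keep'.
print(re.sub(r"begin.*end", "", s))    # 'keep  keep'

# Non-greedy: .*? stops at the NEAREST 'end', as wanted.
print(re.sub(r"begin.*?end", "", s))   # 'keep  keep  keep'
```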
| 5 | 2016-08-09T13:48:41Z | [
"python",
"regex"
] |
How to remove substrings marked with special characters from a string? | 38,852,589 | <p>I have a string in Python:</p>
<pre><code>Tt = "This is a <\"string\">string, It should be <\"changed\">changed to <\"a\">a nummber."
print Tt
'This is a <"string">string, It should be <"changed">changed to <"a">a nummber.'
</code></pre>
<p>You can see that some words repeat in this part: <code><\" \"></code>. </p>
<p>My question is: how can I delete those repeated parts (delimited by the special characters)?</p>
<p>The result should be like: </p>
<pre><code>'This is a string, It should be changed to a nummber.'
</code></pre>
| -2 | 2016-08-09T13:46:56Z | 38,852,687 | <p>I'd use a quick regular expression:</p>
<pre><code>import re
Tt = "This is a <\"string\">string, It should be <\"changed\">changed to <\"a\">a number."
print re.sub("<[^<]+>","",Tt)
#Out: This is a string, It should be changed to a number.
</code></pre>
<p>Ah - similar to Igor's post; he beat me to it by a bit. Rather than making the expression non-greedy, I don't match an expression if it contains another start tag "<" in it, so it will only match a start tag that's followed by an end tag ">".</p>
| 1 | 2016-08-09T13:50:58Z | [
"python",
"regex"
] |
python tool to generate txt file by copying only directory/folder names but not the other file names | 38,852,599 | <p>This is my code: </p>
<pre><code> import os
filenames= os.listdir (".")
file = open("XML.txt", "w")
result = []
for filename in filenames:
result = "<capltestcase name =\""+filename+"\"\n"
file.write(result)
result = "title = \""+filename+"\"\n"
file.write(result)
result = "/>\n"
file.write(result)
file.close()
</code></pre>
<p>My question / help needed:</p>
<p>1) I want to add a standard text "" to the generated txt file, but I can't add it; I get syntax errors. Can somebody help with the code, please?</p>
<p>2) How can I copy just the folder names from the directory instead of the file names? With my code it copies all the file names into the txt.</p>
<p>Thank you, friends.</p>
<p>file.write("\\")</p>
| -2 | 2016-08-09T13:47:13Z | 38,852,815 | <p>use the escape () to write special characters</p>
<pre><code>print("\\<?xml version=\"1.0\" encoding=\"iso-8859-1\"?>\\")
</code></pre>
| 0 | 2016-08-09T13:56:39Z | [
"python"
] |
python tool to generate txt file by copying only directory/folder names but not the other file names | 38,852,599 | <p>This is my code: </p>
<pre><code> import os
filenames= os.listdir (".")
file = open("XML.txt", "w")
result = []
for filename in filenames:
result = "<capltestcase name =\""+filename+"\"\n"
file.write(result)
result = "title = \""+filename+"\"\n"
file.write(result)
result = "/>\n"
file.write(result)
file.close()
</code></pre>
<p>My question / help needed:</p>
<p>1) I want to add a standard text "" to the generated txt file, but I can't add it; I get syntax errors. Can somebody help with the code, please?</p>
<p>2) How can I copy just the folder names from the directory instead of the file names? With my code it copies all the file names into the txt.</p>
<p>Thank you, friends.</p>
<p>file.write("\\")</p>
| -2 | 2016-08-09T13:47:13Z | 38,853,121 | <p>Rather than escaping all those double-quotes, why not embed your string inside single quotes instead? In Python (unlike many other languages) there is no difference between using single or double quotes, provided they are balanced (the same at each end).</p>
<p>If you need the backslashes in the text then use a <a href="http://stackoverflow.com/questions/2081640/what-exactly-do-u-and-r-string-flags-do-in-python-and-what-are-raw-string-l">raw string</a></p>
<pre><code>file.write(r'"\<?xml version="1.0" encoding="iso-8859-1"?>\"')
</code></pre>
<p>That will preserve all the double-quotes and the back-slashes.</p>
| 0 | 2016-08-09T14:09:18Z | [
"python"
] |
Python "split" on empty new line | 38,852,712 | <p>Trying to use a Python split on an "empty" newline but not on any other new lines. I tried a few other examples I found, but none of them seem to work. </p>
<p>Data example:</p>
<pre>
(*,224.0.0.0/4) RPF nbr: 96.34.35.36 Flags: C RPF P
Up: 1w6d
(*,224.0.0.0/24) Flags: D P
Up: 1w6d
(*,224.0.1.39) Flags: S P
Up: 1w6d
(96.34.246.55,224.0.1.39) RPF nbr: 96.34.35.36 Flags: RPF
Up: 1w5d
Incoming Interface List
Bundle-Ether434 Flags: F A, Up: 1w5d
Outgoing Interface List
BVI100 Flags: F, Up: 1w5d
TenGigE0/0/0/3 Flags: F, Up: 1w5d
TenGigE0/0/1/1 Flags: F, Up: 1w5d
TenGigE0/0/1/2 Flags: F, Up: 1w5d
TenGigE0/0/1/3 Flags: F, Up: 1w5d
TenGigE0/1/1/1 Flags: F, Up: 1w5d
TenGigE0/1/1/2 Flags: F, Up: 1w5d
TenGigE0/2/1/0 Flags: F, Up: 1w5d
TenGigE0/2/1/1 Flags: F, Up: 1w5d
TenGigE0/2/1/2 Flags: F, Up: 1w5d
Bundle-Ether234 (0/3/CPU0) Flags: F, Up: 3d16h
Bundle-Ether434 Flags: F A, Up: 1w5d
</pre>
<p>I want to split only on lines that are a newline and nothing else (i.e. empty lines).</p>
<p>Example code is below:</p>
<pre><code>myarray = []
myarray = output.split("\n")
for line in myarray:
print line
print "Next Line"
</code></pre>
<p>I do have the "re" library imported. </p>
| 0 | 2016-08-09T13:52:10Z | 38,852,821 | <p>It's quite easy when you consider what is on empty line. It's just the the newline character, so splitting on empty line would be splitting on two newline characters in sequence (one from the previou non-empty line, one is the 'whole' empty line.</p>
<pre><code>myarray = output.split("\n\n")
for line in myarray:
print line
print "Next Line"
</code></pre>
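<p>If the data might come with \r\n line endings, or with more than one blank line between blocks, a regex split is a more forgiving variant (a sketch):</p>

```python
import re

output = "block one\nline two\n\nblock two\n\n\nblock three\n"

# One or more consecutive blank lines (optionally with \r) act as one separator.
blocks = [b for b in re.split(r"(?:\r?\n){2,}", output) if b.strip()]
for block in blocks:
    print(block)
    print("Next Block")
```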
| 1 | 2016-08-09T13:56:55Z | [
"python"
] |
Python "split" on empty new line | 38,852,712 | <p>Trying to use a Python split on an "empty" newline but not on any other new lines. I tried a few other examples I found, but none of them seem to work. </p>
<p>Data example:</p>
<pre>
(*,224.0.0.0/4) RPF nbr: 96.34.35.36 Flags: C RPF P
Up: 1w6d
(*,224.0.0.0/24) Flags: D P
Up: 1w6d
(*,224.0.1.39) Flags: S P
Up: 1w6d
(96.34.246.55,224.0.1.39) RPF nbr: 96.34.35.36 Flags: RPF
Up: 1w5d
Incoming Interface List
Bundle-Ether434 Flags: F A, Up: 1w5d
Outgoing Interface List
BVI100 Flags: F, Up: 1w5d
TenGigE0/0/0/3 Flags: F, Up: 1w5d
TenGigE0/0/1/1 Flags: F, Up: 1w5d
TenGigE0/0/1/2 Flags: F, Up: 1w5d
TenGigE0/0/1/3 Flags: F, Up: 1w5d
TenGigE0/1/1/1 Flags: F, Up: 1w5d
TenGigE0/1/1/2 Flags: F, Up: 1w5d
TenGigE0/2/1/0 Flags: F, Up: 1w5d
TenGigE0/2/1/1 Flags: F, Up: 1w5d
TenGigE0/2/1/2 Flags: F, Up: 1w5d
Bundle-Ether234 (0/3/CPU0) Flags: F, Up: 3d16h
Bundle-Ether434 Flags: F A, Up: 1w5d
</pre>
<p>I want to split only on lines that are a newline and nothing else (i.e. empty lines).</p>
<p>Example code is below:</p>
<pre><code>myarray = []
myarray = output.split("\n")
for line in myarray:
print line
print "Next Line"
</code></pre>
<p>I do have the "re" library imported. </p>
| 0 | 2016-08-09T13:52:10Z | 38,852,929 | <p>A blank line is just two new lines. So your easiest solution is probably to check for two new lines (UNLESS you expect to have a situation where you'll have more than two blank lines in a row).</p>
<pre><code>import os
myarray = [] #As DeepSpace notes, this is not necessary as split will return a list. No impact to later code, just more typing
myarray = output.split(os.linesep + os.linesep) ##use os.linesep to make this compatible on more systems
</code></pre>
<p>That would be where I'd start anyway</p>
| 1 | 2016-08-09T14:00:59Z | [
"python"
] |
Python: trying to get three elements of a list with slice over a iterator | 38,852,779 | <p>I'm new to python.</p>
<p>I'm trying to create another list from a big one just with 3 elements of that list at a time.</p>
<p>I'm trying this:</p>
<pre><code>my_list = ['test1,test2,test3','test4,test5,test6','test7,test8,test9','test10,test11,test12']
new_three = []
for i in my_list:
item = my_list[int(i):3]
new_three.append(item)
# here I'll write a file with these 3 elements. Next iteration I will write the next three ones, and so on...
</code></pre>
<p>I'm getting this error:</p>
<pre><code>item = my_list[int(i):3]
ValueError: invalid literal for int() with base 10: 'test1,test2,test3'
</code></pre>
<p>I also tried:</p>
<pre><code>from itertools import islice
for i in my_list:
new_three.append(islice(my_list,int(i),3))
</code></pre>
<p>Got the same error. I cannot figure out what I'm doing wrong.</p>
<p>EDIT:</p>
<p>After many tries, with help from here, I managed to make it work: </p>
<pre><code>listrange = []
for i in range(len(li)/3 + 1):
item = li[i*3:(i*3)+3]
listrange.append(item)
</code></pre>
| -2 | 2016-08-09T13:55:07Z | 38,875,607 | <p>Is this what you meant?</p>
<pre><code>my_list = ['test1,test2,test3','test4,test5,test6','test7,test8,test9','test10,test11,test12']
for item in my_list:
print "this is one item from the list :", item
list_of_things = item.split(',')
print "make a list with split on comma:", list_of_things
# you can write list_of_things to disk here
print "--------------------------------"
</code></pre>
<p>In response to comments, if you want to generate a whole new list with the comma separated strings transformed into sublists, that is a list comprehension:</p>
<pre><code>new_list = [item.split(',') for item in my_list]
</code></pre>
<p>And to split it up into groups of three items from the original list, see the answer linked in comments by PM 2Ring, <a href="http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks">What is the most "pythonic" way to iterate over a list in chunks?</a></p>
<p>I have adapted that to your specific case here:</p>
<pre><code>my_list = ['test1,test2,test3','test4,test5,test6','test7,test8,test9','test10,test11,test12']
for i in xrange(0, len(my_list), 3):
# get the next three items from my_list
my_list_segment = my_list[i:i+3]
# here is an example of making a new list with those three
    new_list = [item.split(',') for item in my_list_segment]
print "three items from original list, with string split into sublist"
print my_list_segment
print "-------------------------------------------------------------"
# here is a more practical use of the three items, if you are writing separate files for each three
filename_this_segment = 'temp' # make up a filename, possibly using i/3+1 in the name
with open(filename_this_segment, 'w') as f:
for item in my_list_segment:
list_of_things = item.split(',')
for thing in list_of_things:
# obviously you'll want to format the file somehow, but that's beyond the scope of this question
f.write(thing)
</code></pre>
| 0 | 2016-08-10T14:00:29Z | [
"python",
"list"
] |
How to flatten XML file in Python | 38,852,822 | <p>Is there a library or mechanism I can use to flatten the XML file?</p>
<p>Existing:</p>
<pre><code><A>
<B>
<ConnectionType>a</ConnectionType>
<StartTime>00:00:00</StartTime>
<EndTime>00:00:00</EndTime>
<UseDataDictionary>N</UseDataDictionary>
</code></pre>
<p>Desired:</p>
<pre><code>A.B.ConnectionType = a
A.B.StartTime = 00:00:00
A.B.EndTime = 00:00:00
A.B.UseDataDictionary = N
</code></pre>
| 3 | 2016-08-09T13:56:57Z | 38,853,421 | <p>By using <a href="https://github.com/martinblech/xmltodict" rel="nofollow"><code>xmltodict</code></a> to transform your XML file to a dictionary, in combination with <a href="http://codereview.stackexchange.com/a/21035">this answer</a> to flatten a <code>dict</code>, this should be possible.</p>
<p>Example:</p>
<pre><code># Original code: http://codereview.stackexchange.com/a/21035
from collections import OrderedDict
def flatten_dict(d):
def items():
for key, value in d.items():
if isinstance(value, dict):
for subkey, subvalue in flatten_dict(value).items():
yield key + "." + subkey, subvalue
else:
yield key, value
return OrderedDict(items())
import xmltodict
# Convert to dict
with open('test.xml', 'rb') as f:
xml_content = xmltodict.parse(f)
# Flatten dict
flattened_xml = flatten_dict(xml_content)
# Print in desired format
for k,v in flattened_xml.items():
print('{} = {}'.format(k,v))
</code></pre>
<p>Output:</p>
<pre><code>A.B.ConnectionType = a
A.B.StartTime = 00:00:00
A.B.EndTime = 00:00:00
A.B.UseDataDictionary = N
</code></pre>
| 1 | 2016-08-09T14:22:44Z | [
"python",
"xml"
] |
How to flatten XML file in Python | 38,852,822 | <p>Is there a library or mechanism I can use to flatten the XML file?</p>
<p>Existing:</p>
<pre><code><A>
<B>
<ConnectionType>a</ConnectionType>
<StartTime>00:00:00</StartTime>
<EndTime>00:00:00</EndTime>
<UseDataDictionary>N</UseDataDictionary>
</code></pre>
<p>Desired:</p>
<pre><code>A.B.ConnectionType = a
A.B.StartTime = 00:00:00
A.B.EndTime = 00:00:00
A.B.UseDataDictionary = N
</code></pre>
| 3 | 2016-08-09T13:56:57Z | 38,937,986 | <p>This is not a complete implementation but you could take advantage of <a href="http://lxml.de/api/lxml.etree._ElementTree-class.html#getpath" rel="nofollow">lxml's getpath</a>:</p>
<pre><code>xml = """<A>
<B>
<ConnectionType>a</ConnectionType>
<StartTime>00:00:00</StartTime>
<EndTime>00:00:00</EndTime>
<UseDataDictionary>N
<UseDataDictionary2>G</UseDataDictionary2>
</UseDataDictionary>
</B>
</A>"""
from lxml import etree
from StringIO import StringIO
tree = etree.parse(StringIO(xml))
root = tree.getroot().tag
for node in tree.iter():
for child in node.getchildren():
if child.text.strip():
print("{}.{} = {}".format(root, ".".join(tree.getelementpath(child).split("/")), child.text.strip()))
</code></pre>
<p>Which gives you:</p>
<pre><code>A.B.ConnectionType = a
A.B.StartTime = 00:00:00
A.B.EndTime = 00:00:00
A.B.UseDataDictionary = N
A.B.UseDataDictionary.UseDataDictionary2 = G
</code></pre>
| 0 | 2016-08-13T23:55:16Z | [
"python",
"xml"
] |
Can I iterate over a cursor in pymssql more than once? | 38,852,826 | <p>For example, if I run a sql query in python (using pymssql):</p>
<pre><code>cursor.execute("""SELECT * FROM TABLE""")
</code></pre>
<p>Then I do:</p>
<pre><code>for row in cursor:
print row[0]
</code></pre>
<p>but then I want to loop through the table a second time for a different operation, like this:</p>
<pre><code>for row in cursor:
print row[1]
</code></pre>
<p>(Obviously I could do both of these in 1 loop, this is just for example's sake). Can I do this without re-executing the query again?</p>
| 2 | 2016-08-09T13:57:10Z | 38,852,924 | <p>No, cursors in pymssql function like a generator. Once you get the results from them, they no longer contain the result set.</p>
<p>The only way to do this is to save the query results to an intermediary list.</p>
<p>For example:</p>
<pre><code>import pymssql
database = pymssql.connect()
db_cursor = database.cursor()
db_cursor.execute("""SELECT * FROM Table""")
results = db_cursor.fetchall()
for result in results:
print(result[0])
for result in results:
print(result[1])
</code></pre>
| 0 | 2016-08-09T14:00:51Z | [
"python",
"sql",
"database"
] |
Can I iterate over a cursor in pymssql more than once? | 38,852,826 | <p>For example, if I run a sql query in python (using pymssql):</p>
<pre><code>cursor.execute("""SELECT * FROM TABLE""")
</code></pre>
<p>Then I do:</p>
<pre><code>for row in cursor:
print row[0]
</code></pre>
<p>but then I want to loop through the table a second time for a different operation, like this:</p>
<pre><code>for row in cursor:
print row[1]
</code></pre>
<p>(Obviously I could do both of these in 1 loop, this is just for example's sake). Can I do this without re-executing the query again?</p>
| 2 | 2016-08-09T13:57:10Z | 38,853,081 | <p>No, you can't do that as pymssql cursor (python generator) is almost same as file pointer and each row in cursor is same as each line in file and once you pass over an line you have to seek to start and start again, same case for cursor, you have to run the query again to get data.</p>
| 0 | 2016-08-09T14:07:05Z | [
"python",
"sql",
"database"
] |
REST API POST - Check if each nested object exists. If no, create new. If yes, return existing object | 38,852,863 | <p>I'm somewhat new to <code>RESTful API</code> design practice and am looking for help. The data model I'm working with has a top-level object (e.g. Book) containing a list of nested objects (e.g. Chapters).<br>
Each Chapter is persisted in its own right and has a <code><em>simple_name</em></code> field that is set to <code>unique=True</code> (i.e. only <strong>one</strong> Chapter with <em>Chapter.simple_name</em> = "ch_01" may exist). </p>
<blockquote>
<p><strong>Scenario</strong>: A user POSTs a Book ("myBook_v1") containing Chapters, "ch_01", "ch_02", and "ch_03". The user then edits their book and the
next day POSTs "myBook_v2" containing Chapters, "ch_01", "ch_02",
"ch_03", <strong>and "ch_04"</strong>. Suppose that Chapters "ch_01", "ch_02",
"ch_03" are unchanged from the original POST.</p>
</blockquote>
<p>Currently, since <em>simple_name</em> is required to be unique, the second <code>POST</code> does not pass uniqueness validation and an error response is returned to the user. </p>
<p><strong>Question</strong>: Would the following implementation fit <code>REST</code> design principles? And, most importantly, is this a good (safe) design choice? </p>
<p><strong>Implementation</strong>: Upon Book POST, check each <em>Chapter.simple_name</em> for uniqueness. If <em>Chapter.simple_name</em> already exists in database, skip creation. If it does not exist, create new Chapter. Return complete persisted Book with a mix of newly created Chapters and already existing Chapters. The only criteria for deciding to create new or use existing is whether or not the user-specified <em>simple_name</em> already exists for a Chapter in the database.</p>
| 0 | 2016-08-09T13:58:37Z | 38,854,018 | <p>The idea here is to remove the uniqueness constraint by dropping the associated validator and deal with adding / updating / removing by yourself after (see <a href="https://stackoverflow.com/questions/38438167/unique-validation-on-nested-serializer-on-django-rest-framework">Unique validation on nested serializer on Django Rest Framework</a>)</p>
| 0 | 2016-08-09T14:48:57Z | [
"python",
"django",
"rest",
"django-rest-framework",
"api-design"
] |
Adding new columns to data frame based on the values of multiple columns | 38,852,904 | <p>I have a data frame whose head looks like the following:
</p>
<pre><code>df.head()
Out[660]:
Samples variable value Type
0 PE01I 267N12.3_Beta 0.066517 Beta
1 PE01R R267N12.3_Beta 0.061617 Beta
2 PE02I 267N12.3_Beta 0.071013 Beta
3 PE02R 267N12.3_Beta 0.056623 Beta
4 PE03I 267N12.3_Beta 0.071633 Beta
5 PE01I 267N12.3_FPKM 0.000000 FPKM
6 PE01R 267N12.3_FPKM 0.003430 FPKM
7 PE02I 267N12.3_FPKM 0.272144 FPKM
8 PE02R 267N12.3_FPKM 0.005753 FPKM
9 PE03I 267N12.3_FPKM 0.078708 FPKM
</code></pre>
<p>I want to add new columns, named Beta and FPKM after the entries in the column "Type", filled with the corresponding values from the column "value".
So far I have tried this one-liner:</p>
<pre><code>df['Beta'] = df['Type'].map(lambda x: df.value if x == "Beta" else "FPKM")
</code></pre>
<p>and it give sme following output,</p>
<pre><code>Samples variable value Type Beta
0 PE01I 267N12.3_Beta 0.066517 Beta 0 0.066517 1 0.061617 2 0.07...
1 PE01R 267N12.3_Beta 0.061617 Beta 0 0.066517 1 0.061617 2 0.07...
2 PE02I 267N12.3_Beta 0.071013 Beta 0 0.066517 1 0.061617 2 0.07...
3 PE02R 267N12.3_Beta 0.056623 Beta 0 0.066517 1 0.061617 2 0.07...
4 PE03I 267N12.3_Beta 0.071633 Beta 0 0.066517 1 0.061617 2 0.07...
</code></pre>
<p>The column Beta ends up holding the whole repeated value column in every row.
What I am aiming for is a data frame that looks like this:</p>
<pre><code>Samples variable Beta FPKM
PE01I 267N12.3_Beta 0.066517 0
PE01R 267N12.3_Beta 0.061617 0.00343
PE02I 267N12.3_Beta 0.071013 0.272144
PE02R 267N12.3_Beta 0.056623 0.005753
PE03I 267N12.3_Beta 0.071633 0.078708
</code></pre>
<p>Any help would be really great.
Thank you!</p>
| 1 | 2016-08-09T14:00:01Z | 38,853,142 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>df1 = df.set_index(['Samples','Type']).unstack()
print (df1)
variable value
Type Beta FPKM Beta FPKM
Samples
PE01I 267N12.3_Beta 267N12.3_FPKM 0.066517 0.000000
PE01R R267N12.3_Beta 267N12.3_FPKM 0.061617 0.003430
PE02I 267N12.3_Beta 267N12.3_FPKM 0.071013 0.272144
PE02R 267N12.3_Beta 267N12.3_FPKM 0.056623 0.005753
PE03I 267N12.3_Beta 267N12.3_FPKM 0.071633 0.078708
#remove Multiindex in columns
df1.columns = ['_'.join(col) for col in df1.columns]
df1.reset_index(inplace=True)
print (df1)
Samples variable_Beta variable_FPKM value_Beta value_FPKM
0 PE01I 267N12.3_Beta 267N12.3_FPKM 0.066517 0.000000
1 PE01R R267N12.3_Beta 267N12.3_FPKM 0.061617 0.003430
2 PE02I 267N12.3_Beta 267N12.3_FPKM 0.071013 0.272144
3 PE02R 267N12.3_Beta 267N12.3_FPKM 0.056623 0.005753
4 PE03I 267N12.3_Beta 267N12.3_FPKM 0.071633 0.078708
#if need remove column
print (df1.drop('variable_FPKM', axis=1))
Samples variable_Beta value_Beta value_FPKM
0 PE01I 267N12.3_Beta 0.066517 0.000000
1 PE01R R267N12.3_Beta 0.061617 0.003430
2 PE02I 267N12.3_Beta 0.071013 0.272144
3 PE02R 267N12.3_Beta 0.056623 0.005753
4 PE03I 267N12.3_Beta 0.071633 0.078708
</code></pre>
<p>EDIT by comment:</p>
<p>If you get the error:</p>
<blockquote>
<p>ValueError: Index contains duplicate entries, cannot reshape</p>
</blockquote>
<p>it means you have duplicate values in the <code>index</code> and aggregating is necessary.</p>
<p>A better sample is in <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1463/reshaping-and-pivoting/4771/pivoting-with-aggregating#t=201608240553599927796">SO Documentation - pivoting with aggregating</a>.</p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a>. If <code>aggfunc</code> is <code>np.sum</code> or <code>np.mean</code> (which work with numeric data), string columns are omitted; conversely, <code>''.join</code> works only with string values, so numeric columns are omitted.</p>
<p>Call the function twice with a different <code>aggfunc</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Type': {0: 'Beta', 1: 'Beta', 2: 'Beta', 3: 'Beta', 4: 'Beta', 5: 'FPKM', 6: 'FPKM', 7: 'FPKM', 8: 'FPKM', 9: 'FPKM'}, 'value': {0: 0.066516999999999993, 1: 0.061616999999999998, 2: 0.071012999999999993, 3: 0.056623, 4: 0.071633000000000002, 5: 0.0, 6: 0.0034299999999999999, 7: 0.272144, 8: 0.0057530000000000003, 9: 0.078708}, 'variable': {0: '267N12.3_Beta', 1: 'R267N12.3_Beta', 2: '267N12.3_Beta', 3: '267N12.3_Beta', 4: '267N12.3_Beta', 5: '267N12.3_FPKM', 6: '267N12.3_FPKM', 7: '267N12.3_FPKM', 8: '267N12.3_FPKM', 9: '267N12.3_FPKM'}, 'Samples': {0: 'PE01I', 1: 'PE01I', 2: 'PE02I', 3: 'PE02R', 4: 'PE03I', 5: 'PE01I', 6: 'PE01R', 7: 'PE02I', 8: 'PE02R', 9: 'PE03I'}})
#changed value in second row in column Samples
print (df)
Samples Type value variable
0 PE01I Beta 0.066517 267N12.3_Beta
1 PE01I Beta 0.061617 R267N12.3_Beta
2 PE02I Beta 0.071013 267N12.3_Beta
3 PE02R Beta 0.056623 267N12.3_Beta
4 PE03I Beta 0.071633 267N12.3_Beta
5 PE01I FPKM 0.000000 267N12.3_FPKM
6 PE01R FPKM 0.003430 267N12.3_FPKM
7 PE02I FPKM 0.272144 267N12.3_FPKM
8 PE02R FPKM 0.005753 267N12.3_FPKM
9 PE03I FPKM 0.078708 267N12.3_FPKM
</code></pre>
<pre><code>df1 = df.pivot_table(index='Samples', columns=['Type'], aggfunc=','.join)
print (df1)
variable
Type Beta FPKM
Samples
PE01I 267N12.3_Beta,R267N12.3_Beta 267N12.3_FPKM
PE01R None 267N12.3_FPKM
PE02I 267N12.3_Beta 267N12.3_FPKM
PE02R 267N12.3_Beta 267N12.3_FPKM
PE03I 267N12.3_Beta 267N12.3_FPKM
df2 = df.pivot_table(index='Samples', columns=['Type'], aggfunc=np.mean)
print (df2)
value
Type Beta FPKM
Samples
PE01I 0.064067 0.000000
PE01R NaN 0.003430
PE02I 0.071013 0.272144
PE02R 0.056623 0.005753
PE03I 0.071633 0.078708
df3 = pd.concat([df1, df2], axis=1)
df3.columns = ['_'.join(col) for col in df3.columns]
df3.reset_index(inplace=True)
print (df3)
Samples variable_Beta variable_FPKM value_Beta value_FPKM
0 PE01I 267N12.3_Beta,R267N12.3_Beta 267N12.3_FPKM 0.064067 0.000000
1 PE01R None 267N12.3_FPKM NaN 0.003430
2 PE02I 267N12.3_Beta 267N12.3_FPKM 0.071013 0.272144
3 PE02R 267N12.3_Beta 267N12.3_FPKM 0.056623 0.005753
4 PE03I 267N12.3_Beta 267N12.3_FPKM 0.071633 0.078708
</code></pre>
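<p>As a side note, both aggregations can also be done in a single <code>pivot_table</code> call by passing a dict that maps each column to its own function. A sketch on the same toy frame, rebuilt inline so the snippet is self-contained:</p>

```python
import pandas as pd

# same shape as above: PE01I has two Beta rows, PE01R has no Beta row
df = pd.DataFrame({
    'Samples': ['PE01I', 'PE01I', 'PE02I', 'PE02R', 'PE03I',
                'PE01I', 'PE01R', 'PE02I', 'PE02R', 'PE03I'],
    'Type': ['Beta'] * 5 + ['FPKM'] * 5,
    'variable': ['267N12.3_Beta', 'R267N12.3_Beta', '267N12.3_Beta',
                 '267N12.3_Beta', '267N12.3_Beta'] + ['267N12.3_FPKM'] * 5,
    'value': [0.066517, 0.061617, 0.071013, 0.056623, 0.071633,
              0.0, 0.003430, 0.272144, 0.005753, 0.078708],
})

# one call: join the strings in 'variable', average the numbers in 'value'
res = df.pivot_table(index='Samples', columns='Type',
                     aggfunc={'variable': ','.join, 'value': 'mean'})
res.columns = ['_'.join(col) for col in res.columns]
res.reset_index(inplace=True)
print(res)
```

<p>This yields the same <code>value_Beta</code>/<code>variable_Beta</code> columns as the two-step <code>concat</code> version above.</p>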
| 1 | 2016-08-09T14:10:03Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Adding new columns to data frame based on the values of multiple columns | 38,852,904 | <p>I have a data frame whose head looks like the following:</p>
<pre><code>df.head()
Out[660]:
Samples variable value Type
0 PE01I 267N12.3_Beta 0.066517 Beta
1 PE01R R267N12.3_Beta 0.061617 Beta
2 PE02I 267N12.3_Beta 0.071013 Beta
3 PE02R 267N12.3_Beta 0.056623 Beta
4 PE03I 267N12.3_Beta 0.071633 Beta
5 PE01I 267N12.3_FPKM 0.000000 FPKM
6 PE01R 267N12.3_FPKM 0.003430 FPKM
7 PE02I 267N12.3_FPKM 0.272144 FPKM
8 PE02R 267N12.3_FPKM 0.005753 FPKM
9 PE03I 267N12.3_FPKM 0.078708 FPKM
</code></pre>
<p>And I want to add new columns named Beta and FPKM, taken from the column "Type", and filled with the corresponding values from the column "value".
So far I have tried this one-liner:</p>
<pre><code>df['Beta'] = df['Type'].map(lambda x: df.value if x == "Beta" else "FPKM")
</code></pre>
<p>and it give sme following output,</p>
<pre><code>Samples variable value Type Beta
0 PE01I 267N12.3_Beta 0.066517 Beta 0 0.066517 1 0.061617 2 0.07...
1 PE01R 267N12.3_Beta 0.061617 Beta 0 0.066517 1 0.061617 2 0.07...
2 PE02I 267N12.3_Beta 0.071013 Beta 0 0.066517 1 0.061617 2 0.07...
3 PE02R 267N12.3_Beta 0.056623 Beta 0 0.066517 1 0.061617 2 0.07...
4 PE03I 267N12.3_Beta 0.071633 Beta 0 0.066517 1 0.061617 2 0.07...
</code></pre>
<p>The Beta column ends up holding the entire value column in every row.
What I am aiming for is a data frame which looks like this:</p>
<pre><code>Samples variable Beta FPKM
PE01I 267N12.3_Beta 0.066517 0
PE01R 267N12.3_Beta 0.061617 0.00343
PE02I 267N12.3_Beta 0.071013 0.272144
PE02R 267N12.3_Beta 0.056623 0.005753
PE03I 267N12.3_Beta 0.071633 0.078708
</code></pre>
<p>Any help would be really great. Thank you!</p>
| 1 | 2016-08-09T14:00:01Z | 38,854,278 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow"><code>merge</code></a> after separating them into 2 dataframes based on their <code>Type</code> column.</p>
<pre><code>In [14]: df_1 = df.loc[(df['Type'] == "Beta"), ['Samples', 'variable', 'value']]
In [15]: df_2 = df.loc[(df['Type'] == "FPKM"), ['Samples', 'value']]
In [16]: df_1['Beta'] = df_1['value']
In [17]: df_2['FPKM'] = df_2['value']
In [18]: df_1[['Samples', 'variable', 'Beta']].merge(df_2[['Samples', 'FPKM']], on="Samples")
Out[18]:
Samples variable Beta FPKM
0 PE01I 267N12.3_Beta 0.066517 0.000000
1 PE01R R267N12.3_Beta 0.061617 0.003430
2 PE02I 267N12.3_Beta 0.071013 0.272144
3 PE02R 267N12.3_Beta 0.056623 0.005753
4 PE03I 267N12.3_Beta 0.071633 0.078708
</code></pre>
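<p>For completeness, a one-step alternative sketch: pivot <code>Type</code> directly into columns (this drops the <code>variable</code> column for brevity, and assumes each <code>Samples</code>/<code>Type</code> pair occurs exactly once, as in the question's data):</p>

```python
import pandas as pd

# rebuild the question's frame inline so the snippet is self-contained
df = pd.DataFrame({
    'Samples': ['PE01I', 'PE01R', 'PE02I', 'PE02R', 'PE03I'] * 2,
    'variable': ['267N12.3_Beta', 'R267N12.3_Beta', '267N12.3_Beta',
                 '267N12.3_Beta', '267N12.3_Beta'] + ['267N12.3_FPKM'] * 5,
    'value': [0.066517, 0.061617, 0.071013, 0.056623, 0.071633,
              0.0, 0.003430, 0.272144, 0.005753, 0.078708],
    'Type': ['Beta'] * 5 + ['FPKM'] * 5,
})

# one row per sample, one column per Type; since every Samples/Type pair is
# unique, the default aggregation (mean) simply passes each value through
wide = df.pivot_table(index='Samples', columns='Type', values='value').reset_index()
wide.columns.name = None
print(wide)
```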
| 1 | 2016-08-09T15:01:00Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
How to get screen name instead of user ID with tweepy | 38,853,003 | <p>I'm printing out the tweets from my twitter feed to a CSV and want to get the CSV to show the username instead of the user ID.</p>
<p>My code uses tweepy.</p>
<pre><code>outtweets.append([tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")])
</code></pre>
<p>This is what I use in my loop to get the user ID.
How would I go about getting the screen name instead?</p>
| 0 | 2016-08-09T14:03:55Z | 38,853,350 | <p>See <a href="https://dev.twitter.com/overview/api/tweets" rel="nofollow">https://dev.twitter.com/overview/api/tweets</a> Tweet object and <a href="https://dev.twitter.com/overview/api/users" rel="nofollow">https://dev.twitter.com/overview/api/users</a> User object.</p>
<p>This will reveal that you need to use:</p>
<pre><code>tweet.user.screen_name
</code></pre>
<p>to access the screen name (the @handle). Note that <code>tweet.user.name</code> returns the user's display name instead.</p>
| 2 | 2016-08-09T14:19:47Z | [
"python",
"csv",
"twitter",
"tweepy"
] |
Paramiko & rsync - Get Progress asynchronously while command runs | 38,853,031 | <p>I am using paramiko to run rsync on a remote machine. When I use stdout.readlines() it blocks my program and outputs a ton of lines after the command ends. I know rsync continuously updates its progress output. How do I read the output at regular intervals without waiting for the command to finish (I am transferring a very large file)?</p>
<pre><code>import paramiko
import time
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(...)
stdin, stdout, stderr = ssh.exec_command("rsync...")
counter = 0
while True:
counter += 1
print stdout.readlines(), stderr.readlines(), counter
time.sleep(3)
</code></pre>
| 2 | 2016-08-09T14:05:07Z | 38,853,426 | <p>Instead of using <code>readlines()</code> you should use <code>read([bytes])</code> to progressively read output. <code>readlines()</code> reads all lines until EOF (that's why you see the blocking) and then splits into lines on the <code>\n</code> character.</p>
<p>Instead, do something like this:</p>
<pre><code>while True:
counter += 1
print(stdout.read(2048), stderr.read(2048), counter)
time.sleep(3)
</code></pre>
<p>Note: this loop never terminates on its own; you might want to break out of it once the reads from both stdout and stderr return empty results.</p>
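<p>One way to make the read loop finish cleanly is to stop as soon as a read returns an empty result, which signals EOF. Here is a runnable sketch of that pattern; it uses a local subprocess as a stand-in for the SSH channel so it works without a server, but the loop is the same with paramiko's <code>stdout</code>:</p>

```python
import subprocess
import sys

# stand-in for the remote command; with paramiko you would keep using the
# stdout returned by ssh.exec_command("rsync ...") and read it the same way
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("chunk one"); print("chunk two")'],
    stdout=subprocess.PIPE,
)

collected = b''
while True:
    chunk = proc.stdout.read(2048)  # with paramiko: stdout.read(2048)
    if not chunk:                   # empty read means EOF: command finished
        break
    collected += chunk
    # ...update your progress display here...

proc.wait()
print(collected.decode())
```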
| 0 | 2016-08-09T14:22:55Z | [
"python",
"rsync",
"paramiko"
] |
Append an existing DataFrame with output from def as DataFrame | 38,853,054 | <p>I want to append an existing <code>DataFrame</code> with the output from a defined routine which itself returns a <code>DataFrame</code>. I want to take six of its thirteen columns and append them to the existing dataframe. Here is my code to create the output from the defined routine:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import csv
import pandas
sqrt_annual = 255**(1/2)
path = 'data_prices.csv'
data = pandas.read_csv(path, sep=';', encoding='utf-8-sig')
data['DATE'] = pandas.to_datetime(data['DATE'], format='%Y%m%d')
data = data.sort_values(by=['TICKER', 'DATE'], ascending=[True, True])
def vol(ticker, date, date_prev, date_next):
data_filtered = data[(data.TICKER==ticker) & (data.DATE > date_prev) & (data.DATE < date_next)]
data_filtered['pct_change'] = data_filtered.PRICE.pct_change()
data_filtered['log_ret'] = np.log(data_filtered.PRICE) - np.log(data_filtered.PRICE.shift(1))
data_filtered['rolling_vola_40'] = sqrt_annual * data_filtered.shift(1).log_ret.rolling(center=False,window=40).std()
data_filtered['rolling_vola_80'] = sqrt_annual * data_filtered.shift(1).log_ret.rolling(center=False,window=80).std()
data_filtered['f_rolling_vola_40'] = sqrt_annual * data_filtered.shift(-40).log_ret.rolling(center=False,window=40).std()
data_filtered['f_rolling_vola_80'] = sqrt_annual * data_filtered.shift(-80).log_ret.rolling(center=False,window=80).std()
data_filtered['rolling_vola_prev_annum'] = sqrt_annual * data_filtered[(data.DATE < date)].log_ret.std()
data_filtered['rolling_vola_post_annum'] = sqrt_annual * data_filtered[(data.DATE > date)].log_ret.std()
prev_x = len(data_filtered[(data_filtered.DATE <= date)])-1
post_x = len(data_filtered[(data_filtered.DATE >= date)])-1
if prev_x > 235 and post_x > 235:
return(data_filtered[(data_filtered.DATE == date)])
</code></pre>
<p>For example, the output for <code>print(vol('A UN Equity','2014-11-17','2013-11-14','2015-11-16'))</code> would be:</p>
<pre><code> DATE TICKER PRICE pct_change log_ret rolling_vola_40 \
279 2014-11-17 A UN Equity 41.24 -0.007938 -0.00797 0.253339
rolling_vola_80 f_rolling_vola_40 f_rolling_vola_80 \
279 0.212863 0.247969 0.241233
rolling_vola_prev_annum rolling_vola_post_annum
279 0.217963 0.225887
</code></pre>
<p>I then have the <code>DataFrame</code> that I want to append:</p>
<pre><code>path_static = 'data_static.csv'
data_static = pandas.read_csv(path_static, sep=';', encoding='utf-8-sig')
data_static = data_static[(data_static.DATE_PREV != 0) & (data_static.DATE_NEXT != 0)]
data_static['DATE'] = pandas.to_datetime(data_static['DATE'], format='%Y%m%d')
data_static['DATE_PREV'] = pandas.to_datetime(data_static['DATE_PREV'], format='%Y%m%d')
data_static['DATE_NEXT'] = pandas.to_datetime(data_static['DATE_NEXT'], format='%Y%m%d')
</code></pre>
<p>I now want to take the last six columns and append my current <code>DataFrame</code>. The input for the function is the following:</p>
<pre><code>vol(data_static['TICKER'], data_static['DATE'], data_static['DATE_PREV'], data_static['DATE_NEXT'])
</code></pre>
<p>Anyone with a hint on how I can get this done?</p>
<p>EDIT:
Here is some dummy data.</p>
<p><code>data_static.csv</code> (including headers):</p>
<pre><code>YEAR;DATE;TICKER;LONG_COMP_NAME;ISSUER_INDUSTRY;INDUSTRY_SECTOR;COUNTRY;ACCOUNTING_STANDARD;ACCOUNTING_STANDARD_OVERRIDE;EQY_FUND_CRNCY;INDEX;DATE_PREV;DATE_NEXT
2015;20151116;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20141117;0
2014;20141117;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20131114;20151116
2013;20131114;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20121119;20141117
2012;20121119;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20111115;20131114
2011;20111115;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20101112;20121119
2010;20101112;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20091113;20111115
2009;20091113;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20081114;20101112
2008;20081114;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20071115;20091113
2007;20071115;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20061114;20081114
2006;20061114;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;20051114;20071115
2005;20051114;A UN Equity;Agilent Technologies Inc;Electronic Measur Instr;Industrial;US;US GAAP;MIXED;USD;S&P500;0;20061114
2015;20160111;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20150112;0
2014;20150112;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20140109;20160111
2013;20140109;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20130108;20150112
2012;20130108;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20120109;20140109
2011;20120109;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20110110;20130108
2010;20110110;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20100111;20120109
2009;20100111;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20090112;20110110
2008;20090112;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20080109;20100111
2007;20080109;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20070109;20090112
2006;20070109;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;20060109;20080109
2005;20060109;AA UN Equity;Alcoa Inc;Metal-Aluminum;Basic Materials;US;US GAAP;MIXED;USD;S&P500;0;20070109
</code></pre>
<p><code>data_prices.csv</code> (including headers, caution very long):</p>
<pre><code>DATE;TICKER;PRICE
20151231;A UN Equity;41.81
20151230;A UN Equity;42.17
20151229;A UN Equity;42.36
20151228;A UN Equity;41.78
20151224;A UN Equity;42.14
20151223;A UN Equity;41.77
20151222;A UN Equity;41.22
20151221;A UN Equity;40.83
20151218;A UN Equity;40.1
20151217;A UN Equity;40.78
20151216;A UN Equity;41.43
20151215;A UN Equity;40.81
20151214;A UN Equity;40.25
20151211;A UN Equity;40.19
20151210;A UN Equity;41.25
20151209;A UN Equity;40.98
20151208;A UN Equity;41.17
20151207;A UN Equity;40.7
20151204;A UN Equity;41.1
20151203;A UN Equity;40.15
20151202;A UN Equity;40.42
20151201;A UN Equity;41.06
20151130;A UN Equity;41.82
20151127;A UN Equity;41.97
20151125;A UN Equity;41.34
20151124;A UN Equity;40.67
20151123;A UN Equity;40.03
20151120;A UN Equity;39.28
20151119;A UN Equity;38.5
20151118;A UN Equity;39.34
20151117;A UN Equity;38.38
20151116;A UN Equity;37.33
20151113;A UN Equity;36.77
20151112;A UN Equity;37.49
20151111;A UN Equity;37.66
20151110;A UN Equity;37.98
20151109;A UN Equity;37.92
20151106;A UN Equity;38.14
20151105;A UN Equity;38.3
20151104;A UN Equity;38.34
20151103;A UN Equity;38.27
20151102;A UN Equity;38.59
20151030;A UN Equity;37.76
20151029;A UN Equity;37.7
20151028;A UN Equity;37.52
20151027;A UN Equity;37.05
20151026;A UN Equity;36.83
20151023;A UN Equity;37.11
20151022;A UN Equity;36.09
20151021;A UN Equity;35.9
20151020;A UN Equity;36.32
20151019;A UN Equity;36.23
20151016;A UN Equity;35.78
20151015;A UN Equity;35.58
20151014;A UN Equity;35.05
20151013;A UN Equity;35.63
20151012;A UN Equity;35.99
20151009;A UN Equity;36.23
20151008;A UN Equity;36.01
20151007;A UN Equity;35.54
20151006;A UN Equity;34.9
20151005;A UN Equity;35.34
20151002;A UN Equity;34.67
20151001;A UN Equity;33.74
20150930;A UN Equity;34.33
20150929;A UN Equity;33.74
20150928;A UN Equity;33.37
20150925;A UN Equity;34.45
20150924;A UN Equity;34.55
20150923;A UN Equity;34.95
20150922;A UN Equity;35.05
20150921;A UN Equity;35.69
20150918;A UN Equity;35.74
20150917;A UN Equity;36.4
20150916;A UN Equity;36.52
20150915;A UN Equity;36.15
20150914;A UN Equity;35.55
20150911;A UN Equity;35.96
20150910;A UN Equity;35.86
20150909;A UN Equity;35.54
20150908;A UN Equity;36.21
20150904;A UN Equity;35.06
20150903;A UN Equity;35.75
20150902;A UN Equity;35.53
20150901;A UN Equity;34.75
20150831;A UN Equity;36.31
20150828;A UN Equity;36.51
20150827;A UN Equity;36.64
20150826;A UN Equity;35.62
20150825;A UN Equity;34.36
20150824;A UN Equity;34.68
20150821;A UN Equity;36.23
20150820;A UN Equity;37.51
20150819;A UN Equity;38.4
20150818;A UN Equity;39.02
20150817;A UN Equity;38.82
20150814;A UN Equity;38.65
20150813;A UN Equity;38.56
20150812;A UN Equity;38.89
20150811;A UN Equity;39.42
20150810;A UN Equity;40.47
20150807;A UN Equity;39.99
20150806;A UN Equity;40.12
20150805;A UN Equity;40.72
20150804;A UN Equity;40.62
20150803;A UN Equity;41
20150731;A UN Equity;40.95
20150730;A UN Equity;40.97
20150729;A UN Equity;40.4
20150728;A UN Equity;40.45
20150727;A UN Equity;39.61
20150724;A UN Equity;39.31
20150723;A UN Equity;40.25
20150722;A UN Equity;40.33
20150721;A UN Equity;39.57
20150720;A UN Equity;40.06
20150717;A UN Equity;39.95
20150716;A UN Equity;40.34
20150715;A UN Equity;40.13
20150714;A UN Equity;40.49
20150713;A UN Equity;39.96
20150710;A UN Equity;39.4
20150709;A UN Equity;38.92
20150708;A UN Equity;38.75
20150707;A UN Equity;39.79
20150706;A UN Equity;39.36
20150702;A UN Equity;39.58
20150701;A UN Equity;39.26
20150630;A UN Equity;38.58
20150629;A UN Equity;38.74
20150626;A UN Equity;40.02
20150625;A UN Equity;40.05
20150624;A UN Equity;40.19
20150623;A UN Equity;39.6
20150622;A UN Equity;39.81
20150619;A UN Equity;39.49
20150618;A UN Equity;39.9
20150617;A UN Equity;39.6
20150616;A UN Equity;39.79
20150615;A UN Equity;39.52
20150612;A UN Equity;39.84
20150611;A UN Equity;40.53
20150610;A UN Equity;40.52
20150609;A UN Equity;40.12
20150608;A UN Equity;39.95
20150605;A UN Equity;40.31
20150604;A UN Equity;40.54
20150603;A UN Equity;41.1
20150602;A UN Equity;41.11
20150601;A UN Equity;40.92
20150529;A UN Equity;41.19
20150528;A UN Equity;41.75
20150527;A UN Equity;42.61
20150526;A UN Equity;42.06
20150522;A UN Equity;42.5
20150521;A UN Equity;42.32
20150520;A UN Equity;42.61
20150519;A UN Equity;42.37
20150518;A UN Equity;42.63
20150515;A UN Equity;42.04
20150514;A UN Equity;42.05
20150513;A UN Equity;41.81
20150512;A UN Equity;41.91
20150511;A UN Equity;42.62
20150508;A UN Equity;42.5
20150507;A UN Equity;41.8
20150506;A UN Equity;41.59
20150505;A UN Equity;41.59
20150504;A UN Equity;41.94
20150430;A UN Equity;41.37
20150429;A UN Equity;41.96
20150428;A UN Equity;42.18
20150427;A UN Equity;41.98
20150424;A UN Equity;42.49
20150423;A UN Equity;42.62
20150422;A UN Equity;42.71
20150421;A UN Equity;42.89
20150420;A UN Equity;43.19
20150417;A UN Equity;42.98
20150416;A UN Equity;43.13
20150415;A UN Equity;43.38
20150414;A UN Equity;43.07
20150413;A UN Equity;43.04
20150410;A UN Equity;43.55
20150409;A UN Equity;42.49
20150408;A UN Equity;42.26
20150407;A UN Equity;42.44
20150402;A UN Equity;42.05
20150401;A UN Equity;41.39
20150331;A UN Equity;41.55
20150330;A UN Equity;41.72
20150327;A UN Equity;41.11
20150326;A UN Equity;40.7
20150325;A UN Equity;40.81
20150324;A UN Equity;41.09
20150323;A UN Equity;42.2
20150320;A UN Equity;42.21
20150319;A UN Equity;42.21
20150318;A UN Equity;42.12
20150317;A UN Equity;41.58
20150316;A UN Equity;41.81
20150313;A UN Equity;40.87
20150312;A UN Equity;41.1
20150311;A UN Equity;40.85
20150310;A UN Equity;40.63
20150309;A UN Equity;41.74
20150306;A UN Equity;41.53
20150305;A UN Equity;42.22
20150304;A UN Equity;42
20150303;A UN Equity;42.26
20150302;A UN Equity;42.7
20150227;A UN Equity;42.21
20150226;A UN Equity;42.36
20150225;A UN Equity;42.2
20150224;A UN Equity;42.06
20150223;A UN Equity;41.73
20150220;A UN Equity;41.95
20150219;A UN Equity;41.15
20150218;A UN Equity;41.54
20150217;A UN Equity;40.52
20150213;A UN Equity;40.15
20150212;A UN Equity;40.02
20150211;A UN Equity;39.33
20150210;A UN Equity;39.67
20150209;A UN Equity;39.04
20150206;A UN Equity;39.34
20150205;A UN Equity;39.53
20150204;A UN Equity;39.11
20150203;A UN Equity;39.62
20150202;A UN Equity;38.69
20150130;A UN Equity;37.77
20150129;A UN Equity;38.46
20150128;A UN Equity;38
20150127;A UN Equity;38.75
20150126;A UN Equity;39.15
20150123;A UN Equity;38.81
20150122;A UN Equity;39.65
20150121;A UN Equity;38.16
20150120;A UN Equity;37.93
20150116;A UN Equity;38.25
20150115;A UN Equity;38.01
20150114;A UN Equity;39.06
20150113;A UN Equity;39.55
20150112;A UN Equity;40.11
20150109;A UN Equity;40.59
20150108;A UN Equity;40.89
20150107;A UN Equity;39.7
20150106;A UN Equity;39.18
20150105;A UN Equity;39.8
20150102;A UN Equity;40.56
20141231;A UN Equity;40.94
20141230;A UN Equity;41.37
20141229;A UN Equity;41.33
20141224;A UN Equity;41.13
20141223;A UN Equity;41.37
20141222;A UN Equity;41.88
20141219;A UN Equity;41.38
20141218;A UN Equity;40.7
20141217;A UN Equity;39.76
20141216;A UN Equity;38.47
20141215;A UN Equity;38.68
20141212;A UN Equity;39.72
20141211;A UN Equity;40.62
20141210;A UN Equity;40.34
20141209;A UN Equity;41.4
20141208;A UN Equity;41.51
20141205;A UN Equity;42.3
20141204;A UN Equity;42.27
20141203;A UN Equity;42.23
20141202;A UN Equity;41.98
20141201;A UN Equity;41.59
20141128;A UN Equity;42.74
20141126;A UN Equity;42.74
20141125;A UN Equity;42.71
20141124;A UN Equity;42.25
20141121;A UN Equity;42.25
20141120;A UN Equity;41.26
20141119;A UN Equity;40.8
20141118;A UN Equity;40.8
20141117;A UN Equity;41.24
20141114;A UN Equity;41.57
20141113;A UN Equity;41.45
20141112;A UN Equity;41.45
20141111;A UN Equity;41.66
20141110;A UN Equity;41.53
20141107;A UN Equity;40.93
20141106;A UN Equity;41.37
20141105;A UN Equity;40.13
20141104;A UN Equity;40.18
20141103;A UN Equity;40.84
20141031;A UN Equity;39.53
20141030;A UN Equity;38.8435
20141029;A UN Equity;38.9722
20141028;A UN Equity;39.0866
20141027;A UN Equity;38.6075
20141024;A UN Equity;38.6504
20141023;A UN Equity;38.4073
20141022;A UN Equity;37.9354
20141021;A UN Equity;38.6147
20141020;A UN Equity;37.4348
20141017;A UN Equity;37.3776
20141016;A UN Equity;36.9771
20141015;A UN Equity;37.0343
20141014;A UN Equity;37.0343
20141013;A UN Equity;37.7494
20141010;A UN Equity;38.3573
20141009;A UN Equity;39.3512
20141008;A UN Equity;40.3738
20141007;A UN Equity;39.3298
20141006;A UN Equity;40.4525
20141003;A UN Equity;40.6956
20141002;A UN Equity;39.9233
20141001;A UN Equity;40.1879
20140930;A UN Equity;40.7456
20140929;A UN Equity;40.8672
20140926;A UN Equity;40.4024
20140925;A UN Equity;40.6813
20140924;A UN Equity;41.4464
20140923;A UN Equity;40.7456
20140922;A UN Equity;41.1532
20140919;A UN Equity;41.6538
20140918;A UN Equity;41.7611
20140917;A UN Equity;42.1115
20140916;A UN Equity;41.4107
20140915;A UN Equity;41.4178
20140912;A UN Equity;41.9041
20140911;A UN Equity;41.1818
20140910;A UN Equity;41.2534
20140909;A UN Equity;41.0817
20140908;A UN Equity;41.2248
20140905;A UN Equity;41.4178
20140904;A UN Equity;41.2534
20140903;A UN Equity;41.5179
20140902;A UN Equity;41.5251
20140829;A UN Equity;40.8744
20140828;A UN Equity;40.8672
20140827;A UN Equity;41.0317
20140826;A UN Equity;41.3678
20140825;A UN Equity;41.2891
20140822;A UN Equity;41.2176
20140821;A UN Equity;41.4679
20140820;A UN Equity;41.711
20140819;A UN Equity;41.8755
20140818;A UN Equity;41.8469
20140815;A UN Equity;41.1747
20140814;A UN Equity;39.7731
20140813;A UN Equity;39.1725
20140812;A UN Equity;38.8578
20140811;A UN Equity;39.4728
20140808;A UN Equity;39.5014
20140807;A UN Equity;39.3655
20140806;A UN Equity;39.7517
20140805;A UN Equity;39.5014
20140804;A UN Equity;40.2308
20140801;A UN Equity;40.0949
20140731;A UN Equity;40.1092
20140730;A UN Equity;40.6026
20140729;A UN Equity;40.0663
20140728;A UN Equity;40.2522
20140725;A UN Equity;40.4239
20140724;A UN Equity;40.6455
20140723;A UN Equity;41.096
20140722;A UN Equity;41.2748
20140721;A UN Equity;40.5526
20140718;A UN Equity;40.1521
20140717;A UN Equity;39.437
20140716;A UN Equity;40.7313
20140715;A UN Equity;40.5168
20140714;A UN Equity;40.6312
20140711;A UN Equity;40.4739
20140710;A UN Equity;40.3953
20140709;A UN Equity;40.6884
20140708;A UN Equity;40.8672
20140707;A UN Equity;41.5608
20140703;A UN Equity;41.804
20140702;A UN Equity;41.5108
20140701;A UN Equity;41.6538
20140630;A UN Equity;41.0746
20140627;A UN Equity;41.1175
20140626;A UN Equity;41.3463
20140625;A UN Equity;41.4536
20140624;A UN Equity;41.2248
20140623;A UN Equity;41.5537
20140620;A UN Equity;42.0257
20140619;A UN Equity;41.9184
20140618;A UN Equity;42.3832
20140617;A UN Equity;42.0972
20140616;A UN Equity;41.7039
20140613;A UN Equity;41.8326
20140612;A UN Equity;41.7611
20140611;A UN Equity;42.1258
20140610;A UN Equity;42.1115
20140609;A UN Equity;42.3903
20140606;A UN Equity;42.1973
20140605;A UN Equity;41.9041
20140604;A UN Equity;41.0817
20140603;A UN Equity;40.9173
20140602;A UN Equity;40.6813
20140530;A UN Equity;40.717
20140529;A UN Equity;40.7313
20140528;A UN Equity;40.8386
20140527;A UN Equity;40.2451
20140523;A UN Equity;40.1736
20140522;A UN Equity;39.7159
20140521;A UN Equity;39.3226
20140520;A UN Equity;38.8364
20140519;A UN Equity;39.3083
20140516;A UN Equity;39.3512
20140515;A UN Equity;38.9651
20140514;A UN Equity;39.9376
20140513;A UN Equity;40.6384
20140512;A UN Equity;40.338
20140509;A UN Equity;39.6087
20140508;A UN Equity;39.5443
20140507;A UN Equity;39.3512
20140506;A UN Equity;39.3369
20140505;A UN Equity;39.5658
20140502;A UN Equity;39.0008
20140430;A UN Equity;38.6433
20140429;A UN Equity;38.1141
20140428;A UN Equity;38.4216
20140425;A UN Equity;38.8936
20140424;A UN Equity;39.53
20140423;A UN Equity;39.3369
20140422;A UN Equity;39.3083
20140417;A UN Equity;39.0795
20140416;A UN Equity;38.3859
20140415;A UN Equity;37.9783
20140414;A UN Equity;37.6493
20140411;A UN Equity;37.7351
20140410;A UN Equity;38.6504
20140409;A UN Equity;39.7374
20140408;A UN Equity;39.0866
20140407;A UN Equity;38.865
20140404;A UN Equity;39.7374
20140403;A UN Equity;40.4739
20140402;A UN Equity;40.524
20140401;A UN Equity;40.3023
20140331;A UN Equity;39.9877
20140328;A UN Equity;39.1081
20140327;A UN Equity;39.1081
20140326;A UN Equity;39.4513
20140325;A UN Equity;39.4442
20140324;A UN Equity;39.5157
20140321;A UN Equity;40.1092
20140320;A UN Equity;40.5454
20140319;A UN Equity;40.4954
20140318;A UN Equity;40.6098
20140317;A UN Equity;39.9018
20140314;A UN Equity;39.6444
20140313;A UN Equity;39.9376
20140312;A UN Equity;40.8029
20140311;A UN Equity;40.6598
20140310;A UN Equity;41.5823
20140307;A UN Equity;41.7754
20140306;A UN Equity;42.0257
20140305;A UN Equity;41.332
20140304;A UN Equity;41.3177
20140303;A UN Equity;40.5526
20140228;A UN Equity;40.7099
20140227;A UN Equity;40.3023
20140226;A UN Equity;40.8315
20140225;A UN Equity;40.6813
20140224;A UN Equity;40.5812
20140221;A UN Equity;40.288
20140220;A UN Equity;40.9673
20140219;A UN Equity;40.002
20140218;A UN Equity;39.437
20140214;A UN Equity;39.5085
20140213;A UN Equity;42.9624
20140212;A UN Equity;42.8265
20140211;A UN Equity;42.7121
20140210;A UN Equity;42.1973
20140207;A UN Equity;42.4905
20140206;A UN Equity;41.6824
20140205;A UN Equity;41.0674
20140204;A UN Equity;41.3392
20140203;A UN Equity;40.1521
20140131;A UN Equity;41.5823
20140130;A UN Equity;42.5548
20140129;A UN Equity;41.2248
20140128;A UN Equity;41.5465
20140127;A UN Equity;41.6896
20140124;A UN Equity;41.3821
20140123;A UN Equity;42.5262
20140122;A UN Equity;43.5702
20140121;A UN Equity;43.513
20140117;A UN Equity;43.4129
20140116;A UN Equity;43.2628
20140115;A UN Equity;43.1483
20140114;A UN Equity;42.8194
20140113;A UN Equity;42.1401
20140110;A UN Equity;42.1401
20140109;A UN Equity;41.7682
20140108;A UN Equity;41.7539
20140107;A UN Equity;41.0817
20140106;A UN Equity;40.5025
20140103;A UN Equity;40.7027
20140102;A UN Equity;40.195
20131231;A UN Equity;40.8958
20131230;A UN Equity;40.9888
20131227;A UN Equity;40.8815
20131224;A UN Equity;40.9244
20131223;A UN Equity;41.1461
20131220;A UN Equity;40.7313
20131219;A UN Equity;41.0317
20131218;A UN Equity;41.3964
20131217;A UN Equity;40.3094
20131216;A UN Equity;39.5014
20131213;A UN Equity;39.4442
20131212;A UN Equity;39.5157
20131211;A UN Equity;39.2154
20131210;A UN Equity;39.5729
20131209;A UN Equity;39.3727
20131206;A UN Equity;39.2082
20131205;A UN Equity;38.2571
20131204;A UN Equity;38.3144
20131203;A UN Equity;37.8495
20131202;A UN Equity;38.0498
20131129;A UN Equity;38.3072
20131127;A UN Equity;38.4788
20131126;A UN Equity;38.3644
20131125;A UN Equity;37.9783
20131122;A UN Equity;38.5074
20131121;A UN Equity;38.6862
20131120;A UN Equity;38.2786
20131119;A UN Equity;38.486
20131118;A UN Equity;38.6504
20131115;A UN Equity;39.2797
20131114;A UN Equity;36.1405
20131113;A UN Equity;36.6124
20131112;A UN Equity;36.6124
20131111;A UN Equity;36.684
20131108;A UN Equity;36.3121
20131107;A UN Equity;35.7901
20131106;A UN Equity;36.498
20131105;A UN Equity;36.6196
20131104;A UN Equity;36.6196
20131101;A UN Equity;36.5409
20131031;A UN Equity;36.2978
20131030;A UN Equity;36.5767
20131029;A UN Equity;36.97
20131028;A UN Equity;36.8198
20131025;A UN Equity;37.0916
20131024;A UN Equity;36.5552
20131023;A UN Equity;36.2191
20131022;A UN Equity;36.4837
20131021;A UN Equity;37.1774
20131018;A UN Equity;37.8924
20131017;A UN Equity;37.6493
20131016;A UN Equity;36.8699
20131015;A UN Equity;36.3836
20131014;A UN Equity;36.7269
20131011;A UN Equity;36.7984
20131010;A UN Equity;36.4623
20131009;A UN Equity;35.7186
20131008;A UN Equity;35.8473
20131007;A UN Equity;36.5624
20131004;A UN Equity;37.0272
</code></pre>
| 0 | 2016-08-09T14:06:06Z | 38,875,817 | <p>I was able to get the solution the following way:</p>
<pre><code>data_store = pandas.DataFrame(columns=('TICKER', 'DATE', 'rolling_vola_40', 'rolling_vola_80', 'f_rolling_vola_40', 'f_rolling_vola_80', 'rolling_vola_prev_annum', 'rolling_vola_post_annum'))
for index, row in data_static.iterrows():
data_output = vol(row['TICKER'], row['DATE'], row['DATE_PREV'], row['DATE_NEXT'])
    if data_output is not None:
data_store = data_store.append(data_output[['TICKER', 'DATE', 'rolling_vola_40', 'rolling_vola_80', 'f_rolling_vola_40', 'f_rolling_vola_80', 'rolling_vola_prev_annum', 'rolling_vola_post_annum']])
data_static = pandas.merge(data_static, data_store[['TICKER', 'DATE', 'rolling_vola_40', 'rolling_vola_80', 'f_rolling_vola_40', 'f_rolling_vola_80', 'rolling_vola_prev_annum', 'rolling_vola_post_annum']], how='left', on=['TICKER', 'DATE'])
data_static.to_csv('test.csv', sep=';', encoding='utf-8')
</code></pre>
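<p>A follow-up note on performance: <code>DataFrame.append</code> copies the whole frame on every call, so with many tickers it is usually faster to collect the pieces in a plain list and call <code>pandas.concat</code> once at the end. A sketch of that pattern with dummy stand-ins for <code>vol</code> and <code>data_static</code> (both stand-ins are invented here purely for illustration):</p>

```python
import pandas as pd

# hypothetical stand-in for vol(): returns a one-row frame, or None when the
# data window is too short (mirroring the real routine's behaviour)
def vol(ticker):
    if ticker == 'TOO_SHORT':
        return None
    return pd.DataFrame({'TICKER': [ticker], 'DATE': ['2014-11-17'],
                         'rolling_vola_40': [0.25]})

data_static = pd.DataFrame({'TICKER': ['A UN Equity', 'TOO_SHORT', 'AA UN Equity']})

frames = []
for _, row in data_static.iterrows():
    out = vol(row['TICKER'])
    if out is not None:        # clearer than comparing against type(None)
        frames.append(out)

# single concat at the end instead of repeated append
data_store = pd.concat(frames, ignore_index=True)
print(data_store)
```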
| 0 | 2016-08-10T14:09:56Z | [
"python",
"pandas",
"dataframe",
"return"
] |
Reading data from a VSD (Windows Visio Binary) File in Python (Linux) with OLE Tools is very unclear, is there any other way to extract the data? | 38,853,094 | <p>I am trying to read the contents of a Visio Binary .VSD file which contains information from a graph I have made. </p>
<p>I have tried using the OLE Tools and OLEFile but cannot correctly read the contents. I can view the file with the OLETools. When I dump the contents and view it with the 'xxd' command (in terminal) I can't clearly see the text that I saved within the file. There is a lot of extra \x00, \xff etc. and other characters within the file, which when removed make it worse. I've done the exact same with a .doc file and I have been able to open and clearly read the contents. </p>
<p>Can anyone please point me in the correct direction if I am doing this wrong or rather in the direction of other tools that work fine?</p>
| 0 | 2016-08-09T14:07:46Z | 38,855,285 | <p>You have really picked a strong enemy :)</p>
<p>Unlike other office apps Visio .vsd binary file format is not exactly Microsoft's "compound document", that's basically just a wrapper. The format was created by Visio Corp back in 199x, and AFAIK was never actually publicly documented.</p>
<p>I would really recommend you NOT to go with binary .VSD if possible. Latest Visio supports standard openxml format (.vsdx) which is just a bunch of zipped xml files basically.</p>
<p>AFAIK the only known third-party library to understand binary .vsd is aspose diagrams, but it's not free.</p>
| 0 | 2016-08-09T15:49:54Z | [
"python",
"linux",
"ole",
"visio"
] |
Reading data from a VSD (Windows Visio Binary) File in Python (Linux) with OLE Tools is very unclear, is there any other way to extract the data? | 38,853,094 | <p>I am trying to read the contents of a Visio Binary .VSD file which contains information from a graph I have made. </p>
<p>I have tried using the OLE Tools and OLEFile but cannot correctly read the contents. I can view the file with the OLETools. When I dump the contents and view it with the 'xxd' command (in terminal) I can't clearly see the text that I saved within the file. There is a lot of extra \x00, \xff etc. and other characters within the file, which when removed make it worse. I've done the exact same with a .doc file and I have been able to open and clearly read the contents. </p>
<p>Can anyone please point me in the correct direction if I am doing this wrong or rather in the direction of other tools that work fine?</p>
| 0 | 2016-08-09T14:07:46Z | 38,856,029 | <p>Thanks for all the help. </p>
<p>I have found a way to extract plain text from the file and convert it to XHTML and parse that. The main problem is that now I lose any structure the original document may have had. </p>
<p>The tools are libvisio-tools
<a href="https://launchpad.net/ubuntu/trusty/+package/libvisio-tools" rel="nofollow">https://launchpad.net/ubuntu/trusty/+package/libvisio-tools</a></p>
<p>Installing gives you the following programs
vsd2xhtml, vsd2raw, vsd2text
which can be run from terminal to convert the files</p>
| 0 | 2016-08-09T16:27:39Z | [
"python",
"linux",
"ole",
"visio"
] |
Using greater than expression to filter text file lines? | 38,853,162 | <p>I have a text file with multiple lines and want to find which lines have values greater than 85%.</p>
<pre class="lang-none prettyprint-override"><code>'workdata worka worka1 size 84% total'
'workdata workb workb1 size 89% total'
'workdata workc workc1 size 63% total'
'workdata workd workd1 size 94% total'
</code></pre>
<p>Can someone please show how I can get just the sentences with 85% or more in the fifth column?</p>
| -4 | 2016-08-09T14:10:53Z | 38,853,212 | <p>You need to extract percent first, and then filter the lines basing on that.</p>
<pre><code>import re
def extract_percent(line):
    # find the first percentage token and strip the trailing '%'
    try:
        return int(re.findall('[0-9]+%', line)[0][:-1])
    except (IndexError, ValueError):
        return 0
print [line for line in lines if extract_percent(line) > 85]
</code></pre>
<p>If nothing is found, 0 is returned.
Otherwise the number before <code>%</code> is returned.
If the string contains several percentages, the first one is returned.</p>
<p>It can become a little trickier if the percentage can be a float,
but it is not much harder either. Just adjust the regular expression <code>[0-9]+%</code>.</p>
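<p>For example, a float-capable variant might look like this (a sketch; the sample lines below are assumed input in the question's format):</p>

```python
import re

def extract_percent(line):
    # Accept "84%" as well as "84.5%"; return 0.0 when no percentage is present.
    m = re.search(r'(\d+(?:\.\d+)?)%', line)
    return float(m.group(1)) if m else 0.0

lines = [
    'workdata worka worka1 size 84.5% total',
    'workdata workb workb1 size 89% total',
]
print([line for line in lines if extract_percent(line) > 85])
# -> ['workdata workb workb1 size 89% total']
```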
<p>If the position is fixed (fifth column), you can rewrite the <code>extract_percent</code> function this way:</p>
<pre><code>def extract_percent(line):
    try:
        return int(line.split()[4][:-1])
    except (IndexError, ValueError):
        return 0
</code></pre>
| 1 | 2016-08-09T14:13:29Z | [
"python"
] |
Using greater than expression to filter text file lines? | 38,853,162 | <p>I have a text file with multiple lines and want to find which lines have values greater than 85%.</p>
<pre class="lang-none prettyprint-override"><code>'workdata worka worka1 size 84% total'
'workdata workb workb1 size 89% total'
'workdata workc workc1 size 63% total'
'workdata workd workd1 size 94% total'
</code></pre>
<p>Can someone please show how I can get just the sentences with 85% or more in the fifth column?</p>
| -4 | 2016-08-09T14:10:53Z | 38,853,349 | <p>If you know the percentage will always be in the 5th column, then just split each row on space, remove the percentage sign, and turn it into a float. Something like this:</p>
<pre><code>with open("fileName", "r") as f:
    lines = f.read().splitlines()
for row in lines:
    if float(row.split()[4].replace("%", "")) > 85:
        print(row)
</code></pre>
| 0 | 2016-08-09T14:19:42Z | [
"python"
] |
Write a CSV in Multiple rows - Python 2.7 | 38,853,343 | <p>Iâm working on a csv file containing data recorders of game players. sample of csv shows 4 players (10 rows) and 13 columns:</p>
<pre><code>Player_ID,Name,Age,DOB,Gender,Game1_result,Date_first_game,Game2_result,`Game3_result,Final_result,Team,Date_last_game,Finals_dates`
101,Ethan,16,1/15/2000,Male,won,3/20/2013,lost,won,lost,yellow,3/20/2013,3/20/2013
101,Ethan,16,1/16/2000,Male,won,12/6/2015,won,won,"won, full",yellow,12/6/2015,12/6/2015
101,Ethan,16,1/17/2000,Male,lost,1/6/2016,won,won,lost,yellow,1/6/2016,1/6/2016
102,Emma,19,6/17/1997,Female,won,1/9/2013,lost,lost,lost,green,1/9/2013,1/9/2013
...........
...........
</code></pre>
<p>I create a python script that converts dates to ages and then output the altered file. I use csv reader and writer to read and write the final output file (csv writes from a single list to which all data is appended). The final file should only include 12 columns (name not written) and all date columns converted to ages of players at that specific date.</p>
<pre><code>import csv
##open data containing file
file1=open('example.csv','rb')
reader=csv.reader(file1)
####code to print headers
####age_converter() definition
final_output=[]
for col in reader:
final_output.append(col[0])#appends Player_ID
final_output.append(col[2])#appends Age
final_output.append(age_converter(col[3]))#appends value of date converted to age
for r in range(4,6):
final_output.append(col[r])#appends gender and game1 results
final_output.append(age_converter(col[6]))#appends date of first game
for r in range(7,11):
final_output.append(col[r])#appends game results (5 columns)
for r in range(11,13):
final_output.append(age_converter(col[r]))#appends last and final dates
with open('output.csv','wb')as outfile:
csv_writer=csv.writer(outfile)
csv_writer.writerow(header)
csv_writer.writerow(final_output)
file1.close()
outfile.close()
</code></pre>
<p>Age converter works fine, however, the output file has all data in one row. I tried to append columns one by one into a list and write it into csv, which works, but it is not practical to type each one of the columns by its index, especially since the original file I am working on has more than 50 columns!
So my question is: How to write data into multiple rows instead of only one?</p>
<p>output sample:</p>
<pre><code>Player_ID,Age,DOB,Gender,Game1_result,Date_first_game,Game2_result,Game3_result,Final_result,Team,Date_last_game,Finals_dates
101,17,Male,won,20,lost,won,lost,yellow,20,20,101,16,16,Male,won,19,won,won,"won, full",yellow,.....................
</code></pre>
| 0 | 2016-08-09T14:19:26Z | 38,853,655 | <p>You are appending all the data to <code>final_output</code>. Instead, make it a list of lists, like so:</p>
<pre><code>for row in reader:
new_row = []
new_row.append(row[0])
...
final_output.append(new_row)
</code></pre>
<p>And then when writing to file:</p>
<pre><code>csv_writer.writerow(headers)
for row in final_output:
csv_writer.writerow(row)
</code></pre>
<p>Two notes:</p>
<ol>
<li>Always use <code>with open(...) as somefile</code>. When you do, you don't need to close the file.</li>
<li>Check out the csv <code>DictReader</code> class for easier manipulation.</li>
</ol>
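<p>A minimal sketch of that second note (Python 3 shown; the <code>age_converter</code> callable and the column names come from the question, so treat them as assumptions). With <code>DictReader</code>/<code>DictWriter</code>, columns are addressed by header name rather than index, and each input row produces exactly one output row:</p>

```python
import csv

# Assumed date columns, taken from the question's sample header.
DATE_COLUMNS = ('DOB', 'Date_first_game', 'Date_last_game', 'Finals_dates')

def convert_csv(in_path, out_path, age_converter, date_columns=DATE_COLUMNS):
    """Drop the Name column and run age_converter over every date column."""
    with open(in_path, newline='') as infile, \
         open(out_path, 'w', newline='') as outfile:
        reader = csv.DictReader(infile)
        fields = [f for f in reader.fieldnames if f != 'Name']
        # extrasaction='ignore' silently drops the Name value from each row.
        writer = csv.DictWriter(outfile, fieldnames=fields, extrasaction='ignore')
        writer.writeheader()
        for row in reader:
            for col in date_columns:
                row[col] = age_converter(row[col])
            writer.writerow(row)  # one output row per input row
```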
| 0 | 2016-08-09T14:32:42Z | [
"python",
"list",
"python-2.7",
"csv"
] |
perspective transformation in Python: from rectangular to trapezoid | 38,853,428 | <p>I am looking for a python function or library that can convert coordinates of rectangle to trapezoid coordinates. So far I found the relevant problem here: <a href="http://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle">http://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle</a> but no code is available. Is there any package or any function in Python that can do this?</p>
| -1 | 2016-08-09T14:23:01Z | 38,853,928 | <p>I wanted to do the same but from trapezoid to rectangular coordinated. The package that do the trick is:</p>
<pre><code>from skimage.transform import ProjectiveTransform
</code></pre>
<p>You can read the complete answer in:</p>
<p><a href="http://stackoverflow.com/questions/33283088/transform-irregular-quadrilateral-to-rectangle-in-python-matplotlib">Transform irregular quadrilateral to rectangle in python matplotlib</a></p>
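<p>If pulling in scikit-image is not an option, the underlying projective mapping (homography) can be estimated directly with numpy. This is a hedged sketch of the standard direct linear transform, not the skimage API; the rectangle and trapezoid corners below are example values:</p>

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 matrix H with dst ~ H @ src in homogeneous coords."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

rect = [(0, 0), (1, 0), (1, 1), (0, 1)]   # rectangle corners
trap = [(0, 0), (4, 0), (3, 2), (1, 2)]   # trapezoid corners
H = homography(rect, trap)
```

<p>Once <code>H</code> is estimated from the four corner pairs, <code>apply_h</code> maps any rectangle coordinate into the trapezoid.</p>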
| 1 | 2016-08-09T14:44:17Z | [
"python",
"geometry",
"computational-geometry",
"projection",
"geometry-surface"
] |
Common rows of two .txt files in python | 38,853,517 | <p>I have 2 big .txt files and each file has 10 columns and 21008 rows. I need to get the common rows of two files and create a new file. The first column of two files include the IDs. Some of the IDs in 2 files are similar but not all of them. The new files would contain the common IDs and of course the complete row. Here is a small example:</p>
<p><strong>input1:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842
ENSG00000116085.9 0,086826347 0,152694613 1,758620722
ENSG00000167578.12 0,052093968 0,096016347 1,843137535
ENSG00000167531.2 0,042553194 0,085106388 2
ENSG00000078237.4 0,016129032 0 0 0,031746034
</code></pre>
<p><strong>input2:</strong></p>
<pre><code>ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000116032.5 0,094311371 0,144461095 1,531746311
ENSG00000167578.12 0,062894806 0,101620428 1,615720507
ENSG00000103227.14 0,067720085 0,068472534 1,011111165
ENSG00000078241.8 0,016260162 0,040650405 2,5
</code></pre>
<p><strong>output file:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842 ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000167578.12 0,052093968 0,096016347 1,843137535 ENSG00000167578.12 0,062894806 0,101620428 1,615720507
</code></pre>
<p>Thanks</p>
| 1 | 2016-08-09T14:27:02Z | 38,854,023 | <p>Read the first file and iterate line by line, keeping the ID and the line for each. Then do the same for the second file, only this time look up the common lines by ID and append them to a list in the output format:</p>
<pre><code>ids = {}
found = []
with open(filepath1) as file1:
    for line in file1:
        id_ = line.split()[0]
        ids[id_] = line.strip()
with open(filepath2) as file2:
    for line in file2:
        id_ = line.split()[0]
        if id_ in ids:
            found.append("{} {}".format(ids[id_], line.strip()))
</code></pre>
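<p>A self-contained sketch of the same idea, indexing the second file by ID and streaming the first (the whitespace-separated layout with the ID in the first column is assumed, as in the question):</p>

```python
def common_rows(path1, path2):
    """Return rows whose first-column ID appears in both files, joined side by side."""
    index = {}
    with open(path2) as f2:
        for line in f2:
            if line.strip():
                index[line.split()[0]] = line.strip()
    matches = []
    with open(path1) as f1:
        for line in f1:
            if line.strip() and line.split()[0] in index:
                matches.append(line.strip() + " " + index[line.split()[0]])
    return matches
```

<p>Because only one file is held in memory as a dict, this also scales comfortably to the 21008-row files in the question.</p>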
| 0 | 2016-08-09T14:49:18Z | [
"python",
"text"
] |
Common rows of two .txt files in python | 38,853,517 | <p>I have 2 big .txt files and each file has 10 columns and 21008 rows. I need to get the common rows of two files and create a new file. The first column of two files include the IDs. Some of the IDs in 2 files are similar but not all of them. The new files would contain the common IDs and of course the complete row. Here is a small example:</p>
<p><strong>input1:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842
ENSG00000116085.9 0,086826347 0,152694613 1,758620722
ENSG00000167578.12 0,052093968 0,096016347 1,843137535
ENSG00000167531.2 0,042553194 0,085106388 2
ENSG00000078237.4 0,016129032 0 0 0,031746034
</code></pre>
<p><strong>input2:</strong></p>
<pre><code>ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000116032.5 0,094311371 0,144461095 1,531746311
ENSG00000167578.12 0,062894806 0,101620428 1,615720507
ENSG00000103227.14 0,067720085 0,068472534 1,011111165
ENSG00000078241.8 0,016260162 0,040650405 2,5
</code></pre>
<p><strong>output file:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842 ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000167578.12 0,052093968 0,096016347 1,843137535 ENSG00000167578.12 0,062894806 0,101620428 1,615720507
</code></pre>
<p>Thanks</p>
| 1 | 2016-08-09T14:27:02Z | 38,854,439 | <p>You can use a <code>dict</code> to keep track of seen lines from the second file to allow the first to set the order:</p>
<pre><code>d2={}
with open("f2.txt") as f2:
for line in f2:
k,_,v=line.partition(' ')
d2[k]=line.strip()
with open("f1.txt") as f1:
for line in f1:
k,_,v=line.partition(' ')
if k in d2:
print line.strip(), d2[k]
</code></pre>
<p>Prints:</p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842 ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000167578.12 0,052093968 0,096016347 1,843137535 ENSG00000167578.12 0,062894806 0,101620428 1,615720507
</code></pre>
| 0 | 2016-08-09T15:09:19Z | [
"python",
"text"
] |
Common rows of two .txt files in python | 38,853,517 | <p>I have 2 big .txt files and each file has 10 columns and 21008 rows. I need to get the common rows of two files and create a new file. The first column of two files include the IDs. Some of the IDs in 2 files are similar but not all of them. The new files would contain the common IDs and of course the complete row. Here is a small example:</p>
<p><strong>input1:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842
ENSG00000116085.9 0,086826347 0,152694613 1,758620722
ENSG00000167578.12 0,052093968 0,096016347 1,843137535
ENSG00000167531.2 0,042553194 0,085106388 2
ENSG00000078237.4 0,016129032 0 0 0,031746034
</code></pre>
<p><strong>input2:</strong></p>
<pre><code>ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000116032.5 0,094311371 0,144461095 1,531746311
ENSG00000167578.12 0,062894806 0,101620428 1,615720507
ENSG00000103227.14 0,067720085 0,068472534 1,011111165
ENSG00000078241.8 0,016260162 0,040650405 2,5
</code></pre>
<p><strong>output file:</strong></p>
<pre><code>ENSG00000137288.5 0,111111112 0,099415205 0,894736842 ENSG00000137288.5 0,167213112 0,134426236 0,803921621
ENSG00000167578.12 0,052093968 0,096016347 1,843137535 ENSG00000167578.12 0,062894806 0,101620428 1,615720507
</code></pre>
<p>Thanks</p>
| 1 | 2016-08-09T14:27:02Z | 38,855,494 | <p>Here is a working solution (using lists) for your problem. At the end, you're going to get all the rows with the same ID in a file named "res". Note that <code>zip</code> pairs rows by position, so this only matches IDs that sit on the same line number in both files (as in your sample).</p>
<pre><code>l1 = []
l2 = []
with open('file1', 'r') as f1:
for line in f1:
line = line.split()
l1.append(line)
with open('file2', 'r') as f2:
for line in f2:
line = line.split()
l2.append(line)
res = [i + j for i, j in zip(l1, l2) if i[0] == j[0]]
target = open('res', 'w')
for i in res:
for j in i:
target.write(j)
target.write(' ')
target.write('\n')
</code></pre>
| 0 | 2016-08-09T15:59:26Z | [
"python",
"text"
] |
python, stub randomness twice in the same function | 38,853,524 | <p>How would I stub the output of the <code>pickCard()</code> function, which is called twice in the <code>deal()</code> function? I want to test both losing and winning cases.<br>
For example, in the winning case I would like the first call to <code>pickCard()</code> to return <code>8</code> (assigned to <code>card1</code>) and the second call to return <code>10</code> (assigned to <code>card2</code>). </p>
<p>I have tried using @Mock.patch, but this works only for doing one call.</p>
<p>I have used <code>self.blackjack.pickCard = MagicMock(return_value=8)</code> but again if i use it twice it will overwrite the return value</p>
<p>Here is the code:</p>
<pre><code>import random
class Game:
def __init__(self):
self.cards = [1,2,3,4,5,6,7,8,9,10]
def deal(self):
card1 = self.pickCard()
self.removeCards(card1)
card2 = self.pickCard()
return card1 + card2 > 16
def pickCard(self):
return random.choice(self.cards)
def removeCards(self,card1):
return self.cards.remove(card1)
</code></pre>
<p>The test file is:</p>
<pre><code>import unittest
from mock import MagicMock
import mock
from lib.game import Game
class TestGame(unittest.TestCase):
def setUp(self):
self.game = Game()
def test_0(self):#passing
"""Only cards from 1 to 10 exist"""
self.assertListEqual(self.game.cards, [1,2,3,4,5,6,7,8,9,10])
#Here is where I am finding difficulty writing the test
def test_1(self):
"""Player dealt winning card"""
with mock.patch('lib.game.Game.pickCard') as mock_pick:
mock_pick.side_effect = (8, 10)
g = Game()
g.pickCard()
g.pickCard()
self.assertTrue(self.game.deal())
</code></pre>
<p>EDIT</p>
<p>I ran this test with above code, and I get this stack trace instead of passing</p>
<pre><code>Traceback (most recent call last):
tests/game_test.py line 26 in test_1
self.assertTrue(self.game.deal())
lib/game.py line 8 in deal
card1 = self.pickCard()
/usr/local/lib/python2.7/site-packages/mock/mock.py line 1062 in __call__
return _mock_self._mock_call(*args, **kwargs)
/usr/local/lib/python2.7/site-packages/mock/mock.py line 1121 in _mock_call
result = next(effect)
/usr/local/lib/python2.7/site-packages/mock/mock.py line 109 in next
return _next(obj)
</code></pre>
<p>Do I need to put the two <code>g.pickCard()</code> elsewhere in the test? Or do I need to need to access this in the <code>self.game.deal()</code> method somehow?</p>
| 2 | 2016-08-09T14:27:22Z | 38,853,823 | <p><code>mock.patch</code> is the way to go, but instead of <code>return_value</code> you should specify <code>side_effect=(8, 10)</code></p>
<pre><code>with mock.patch('lib.game.Game.pickCard') as mock_pick:
mock_pick.side_effect = (8, 10)
g = Game()
print(g.pickCard())
print(g.pickCard())
# 8
# 10
</code></pre>
<p><strong>EDIT #1</strong></p>
<p><code>pickCard</code> was called twice there only to demonstrate that different cards are returned.
In your test you consume both mocked cards and then call <code>game.deal</code>, which tries to pick another two cards; the <code>side_effect</code> iterator is already exhausted at that point, which raises <code>StopIteration</code>. Also, since your game object already exists (created in <code>setUp</code>), you should patch that object directly rather than create a new game object, hence your <code>test_1</code> should be:</p>
<pre><code>def test_1(self):
"""Player dealt winning card"""
with mock.patch.object(self.game, 'pickCard') as mock_pick:
mock_pick.side_effect = (8, 10)
self.assertTrue(self.game.deal())
</code></pre>
<p>You patch the object's property <code>pickCard</code> with a MagicMock and set its side effects to 8 and 10 respectively.</p>
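<p>The losing case works the same way. Here is a self-contained sketch; the <code>Game</code> class is re-declared inline to mirror the question's <code>lib.game.Game</code> so the snippet runs on its own (Python 3 imports shown):</p>

```python
import random
import unittest
from unittest import mock  # on Python 2, install and import the external "mock" package

class Game:
    def __init__(self):
        self.cards = list(range(1, 11))
    def deal(self):
        card1 = self.pickCard()
        self.cards.remove(card1)
        card2 = self.pickCard()
        return card1 + card2 > 16
    def pickCard(self):
        return random.choice(self.cards)

class TestGame(unittest.TestCase):
    def setUp(self):
        self.game = Game()
    def test_losing(self):
        """Player dealt losing cards: 3 + 5 is not greater than 16."""
        with mock.patch.object(self.game, 'pickCard') as mock_pick:
            mock_pick.side_effect = (3, 5)
            self.assertFalse(self.game.deal())
```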
| 1 | 2016-08-09T14:39:37Z | [
"python",
"unit-testing"
] |
What is the best way to filter the dictionaries value in the list and remove that dictionary from list | 38,853,553 | <p>For example I have dictionaries inside list,</p>
<pre><code>student_list = [{'name':'ABC', 'marks': 8}, {'name':'DEF', 'marks': 0}, {'name': 'GHI', 'marks': 0}, {'name': 'JKL', 'marks': 0}, {'name': 'JKL', 'marks': 9}]
</code></pre>
<p>Output should be like,</p>
<pre><code>[{'name':'ABC', 'marks': 8}, {'name': 'JKL', 'marks': 9}]
</code></pre>
<p>What is the best way to do this.</p>
<p>I can use a <code>for</code> loop to get this output, but I want to achieve it with a Python built-in function or otherwise without an explicit loop.</p>
| -2 | 2016-08-09T14:28:40Z | 38,853,612 | <p>You can use list comprehension.</p>
<pre><code>print [i for i in student_list if i['marks'] > 0]
</code></pre>
<p>As suggested in the comments, you can also use </p>
<pre><code>print [i for i in student_list if i.get('marks', 0)]
</code></pre>
| 2 | 2016-08-09T14:31:03Z | [
"python",
"python-2.7",
"dictionary"
] |
What is the best way to filter the dictionaries value in the list and remove that dictionary from list | 38,853,553 | <p>For example I have dictionaries inside list,</p>
<pre><code>student_list = [{'name':'ABC', 'marks': 8}, {'name':'DEF', 'marks': 0}, {'name': 'GHI', 'marks': 0}, {'name': 'JKL', 'marks': 0}, {'name': 'JKL', 'marks': 9}]
</code></pre>
<p>Output should be like,</p>
<pre><code>[{'name':'ABC', 'marks': 8}, {'name': 'JKL', 'marks': 9}]
</code></pre>
<p>What is the best way to do this.</p>
<p>I can use a <code>for</code> loop to get this output, but I want to achieve it with a Python built-in function or otherwise without an explicit loop.</p>
| -2 | 2016-08-09T14:28:40Z | 38,853,790 | <p>You're looking for something a bit advanced like:</p>
<pre><code>print filter(lambda x: x['marks'] > 0, student_list)
</code></pre>
<p>But this is really just an equivalent of the list comprehensions that were already suggested. See <a href="https://docs.python.org/2/library/functions.html#filter" rel="nofollow">this</a> relevant section of the docs.</p>
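<p>To make the equivalence concrete (note that on Python 3, <code>filter</code> returns an iterator, so it needs a <code>list()</code> call before comparing):</p>

```python
student_list = [{'name': 'ABC', 'marks': 8}, {'name': 'DEF', 'marks': 0},
                {'name': 'JKL', 'marks': 9}]

by_comprehension = [s for s in student_list if s['marks'] > 0]
by_filter = list(filter(lambda s: s['marks'] > 0, student_list))

print(by_comprehension == by_filter)  # -> True
```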
| 1 | 2016-08-09T14:37:47Z | [
"python",
"python-2.7",
"dictionary"
] |
How to compile a Python module to dll and call it in VBA | 38,853,633 | <p>I want to create a user defined function written in Python. Then I want to compile it to a dll, and distribute it and call in EXCEL vba on another computer which doesn't have python installed.</p>
<p>For example, I want to create a function in Python:</p>
<p>def add(a,b):
return a+b</p>
<p>Then, compile it and export it as a dll. On another computer without Python, I can to import this function in EXCEL vba and use it. How to do it?</p>
<p>Thanks,</p>
| -1 | 2016-08-09T14:31:43Z | 38,853,731 | <p>in excel go to the vbe editor Tools>References browse your dll and add a reference, this will make available your dll functions in vba.</p>
<p>as per making it a dll, look what jb suggest here <a href="http://stackoverflow.com/questions/10859369/how-to-compile-a-python-package-to-a-dll">How to compile a Python package to a dll</a> and follow the comments on that as well.</p>
<p>another option would be like stated here <a href="https://code.google.com/archive/p/shedskin/" rel="nofollow">https://code.google.com/archive/p/shedskin/</a></p>
| 0 | 2016-08-09T14:35:27Z | [
"python",
"vba",
"dll"
] |
Python XML: ParseError: junk after document element | 38,853,644 | <p>Trying to parse XML file into ElementTree:</p>
<pre><code>>>> import xml.etree.cElementTree as ET
>>> tree = ET.ElementTree(file='D:\Temp\Slikvideo\JPEG\SV_4_1_mask\index.xml')
</code></pre>
<p>I get following error:</p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
File "C:\Program
Files\Anaconda2\lib\xml\etree\ElementTree.py", line 611, in <strong>init</strong>
self.parse(file) File "", line 38, in parse ParseError: junk after document element: line 3, column 0</p>
</blockquote>
<p>XML file starts like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<Version Writer="E:\d\src\Modules\SceneSerialization\src\mitkSceneIO.cpp" Revision="$Revision: 17055 $" FileVersion="1" />
<node UID="OBJECT_2016080819041580480127">
<source UID="OBJECT_2016080819041550469454" />
<data type="LabelSetImage" file="hfbaaa_Bolus.nrrd" />
<properties file="sicaaa" />
</node>
<node UID="OBJECT_2016080819041512769572">
<source UID="OBJECT_2016080819041598947781" />
<data type="LabelSetImage" file="ifbaaa_Bolus.nrrd" />
<properties file="ticaaa" />
</node>
</code></pre>
<p>followed by many more nodes.</p>
<p>I do not see any junk in line 3, column 0? I assume there must be another reason for the error.</p>
<p>The .xml file is generated by external software <a href="http://mitk.org/wiki/MITK" rel="nofollow">MITK</a> so I assume that should be ok.</p>
<p>Working on Win 7, 64 bit, VS2015, Anaconda</p>
| 3 | 2016-08-09T14:32:15Z | 38,853,752 | <p>The root node of your document (<code>Version</code>) is opened <strong>and</strong> closed on line 2. The parser does not expect any nodes after the root node. Solution is to remove the closing forward slash.</p>
| 2 | 2016-08-09T14:36:21Z | [
"python",
"xml"
] |
Python XML: ParseError: junk after document element | 38,853,644 | <p>Trying to parse XML file into ElementTree:</p>
<pre><code>>>> import xml.etree.cElementTree as ET
>>> tree = ET.ElementTree(file='D:\Temp\Slikvideo\JPEG\SV_4_1_mask\index.xml')
</code></pre>
<p>I get following error:</p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
File "C:\Program
Files\Anaconda2\lib\xml\etree\ElementTree.py", line 611, in <strong>init</strong>
self.parse(file) File "", line 38, in parse ParseError: junk after document element: line 3, column 0</p>
</blockquote>
<p>XML file starts like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<Version Writer="E:\d\src\Modules\SceneSerialization\src\mitkSceneIO.cpp" Revision="$Revision: 17055 $" FileVersion="1" />
<node UID="OBJECT_2016080819041580480127">
<source UID="OBJECT_2016080819041550469454" />
<data type="LabelSetImage" file="hfbaaa_Bolus.nrrd" />
<properties file="sicaaa" />
</node>
<node UID="OBJECT_2016080819041512769572">
<source UID="OBJECT_2016080819041598947781" />
<data type="LabelSetImage" file="ifbaaa_Bolus.nrrd" />
<properties file="ticaaa" />
</node>
</code></pre>
<p>followed by many more nodes.</p>
<p>I do not see any junk in line 3, column 0? I assume there must be another reason for the error.</p>
<p>The .xml file is generated by external software <a href="http://mitk.org/wiki/MITK" rel="nofollow">MITK</a> so I assume that should be ok.</p>
<p>Working on Win 7, 64 bit, VS2015, Anaconda</p>
| 3 | 2016-08-09T14:32:15Z | 38,854,127 | <p>As @Matthias Wiehl said, ElementTree expects only a single root node, so the document is not well-formed XML and should ideally be fixed at its origin.
As a workaround you can simply add a fake root node to the document.</p>
<pre><code>import xml.etree.cElementTree as ET
import re
with open("index.xml") as f:
xml = f.read()
tree = ET.fromstring(re.sub(r"(<\?xml[^>]+\?>)", r"\1<root>", xml) + "</root>")
</code></pre>
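<p>Applied to a document shaped like the one in the question (a minimal stand-in below, not your real file), the wrapped tree can then be queried normally:</p>

```python
import re
import xml.etree.ElementTree as ET

xml = '''<?xml version="1.0" encoding="UTF-8" ?>
<Version FileVersion="1" />
<node UID="A"><data type="LabelSetImage" file="a.nrrd" /></node>
<node UID="B"><data type="LabelSetImage" file="b.nrrd" /></node>'''

# Insert <root> right after the XML declaration and close it at the end.
root = ET.fromstring(re.sub(r"(<\?xml[^>]+\?>)", r"\1<root>", xml) + "</root>")
print([n.get("UID") for n in root.findall("node")])  # -> ['A', 'B']
```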
| 1 | 2016-08-09T14:54:11Z | [
"python",
"xml"
] |
Python XML: ParseError: junk after document element | 38,853,644 | <p>Trying to parse XML file into ElementTree:</p>
<pre><code>>>> import xml.etree.cElementTree as ET
>>> tree = ET.ElementTree(file='D:\Temp\Slikvideo\JPEG\SV_4_1_mask\index.xml')
</code></pre>
<p>I get following error:</p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
File "C:\Program
Files\Anaconda2\lib\xml\etree\ElementTree.py", line 611, in <strong>init</strong>
self.parse(file) File "", line 38, in parse ParseError: junk after document element: line 3, column 0</p>
</blockquote>
<p>XML file starts like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<Version Writer="E:\d\src\Modules\SceneSerialization\src\mitkSceneIO.cpp" Revision="$Revision: 17055 $" FileVersion="1" />
<node UID="OBJECT_2016080819041580480127">
<source UID="OBJECT_2016080819041550469454" />
<data type="LabelSetImage" file="hfbaaa_Bolus.nrrd" />
<properties file="sicaaa" />
</node>
<node UID="OBJECT_2016080819041512769572">
<source UID="OBJECT_2016080819041598947781" />
<data type="LabelSetImage" file="ifbaaa_Bolus.nrrd" />
<properties file="ticaaa" />
</node>
</code></pre>
<p>followed by many more nodes.</p>
<p>I do not see any junk in line 3, column 0? I assume there must be another reason for the error.</p>
<p>The .xml file is generated by external software <a href="http://mitk.org/wiki/MITK" rel="nofollow">MITK</a> so I assume that should be ok.</p>
<p>Working on Win 7, 64 bit, VS2015, Anaconda</p>
| 3 | 2016-08-09T14:32:15Z | 38,854,205 | <p>Try repairing the document like this. Close the <code>version</code> element at the end</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<Version Writer="E:\d\src\Modules\SceneSerialization\src\mitkSceneIO.cpp" Revision="$Revision: 17055 $" FileVersion="1">
<node UID="OBJECT_2016080819041580480127">
<source UID="OBJECT_2016080819041550469454" />
<data type="LabelSetImage" file="hfbaaa_Bolus.nrrd" />
<properties file="sicaaa" />
</node>
<node UID="OBJECT_2016080819041512769572">
<source UID="OBJECT_2016080819041598947781" />
<data type="LabelSetImage" file="ifbaaa_Bolus.nrrd" />
<properties file="ticaaa" />
</node>
</Version>
</code></pre>
| 0 | 2016-08-09T14:57:55Z | [
"python",
"xml"
] |
SpaCy urllib.error.URLError during Installation | 38,853,776 | <p>I'm just getting started with spaCy under Python. Sadly, I'm already stuck at the installation process (<a href="https://spacy.io/docs/#getting-started" rel="nofollow">https://spacy.io/docs/#getting-started</a>).<br>
After <code>pip install spacy</code> i want to download the model with <code>python -m spacy.en.download</code>and i get the following Error: </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>`Traceback (most recent call last): File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1240, in do_open h.request(req.get_method(), req.selector, req.data, headers) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py",
line 1083, in request self._send_request(method, url, body, headers) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1128, in _send_request self.endheaders(body) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py",
line 1079, in endheaders self._send_output(message_body) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 911, in _send_output self.send(msg) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py",
line 854, in send self.connect() File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1229, in connect super().connect() File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py",
line 826, in connect (self.host,self.port), self.timeout, self.source_address) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 693, in create_connection for res in getaddrinfo(host, port, 0,
SOCK_STREAM): File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno 8] nodename
nor servname provided, or not known During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 170,
in _run_module_as_main "__main__", mod_spec) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.5/site-packages/spacy/en/download.py",
line 13, in
<module>
plac.call(main) File "/usr/local/lib/python3.5/site-packages/plac_core.py", line 328, in call cmd, result = parser.consume(arglist) File "/usr/local/lib/python3.5/site-packages/plac_core.py", line 207, in consume return cmd, self.func(*(args + varargs
+ extraopts), **kwargs) File "/usr/local/lib/python3.5/site-packages/spacy/en/download.py", line 9, in main download('en', force) File "/usr/local/lib/python3.5/site-packages/spacy/download.py", line 24, in download package = sputnik.install(about.__title__,
about.__version__, about.__models__[lang]) File "/usr/local/lib/python3.5/site-packages/sputnik/__init__.py", line 37, in install index.update() File "/usr/local/lib/python3.5/site-packages/sputnik/index.py", line 84, in update index = json.load(session.open(request,
'utf8')) File "/usr/local/lib/python3.5/site-packages/sputnik/session.py", line 43, in open r = self.opener.open(request) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 465, in open
response = self._open(req, data) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 483, in _open '_open', req) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py",
line 443, in _call_chain result = func(*args) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1283, in https_open context=self._context, check_hostname=self._check_hostname) File
"/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1242, in do_open raise URLError(err) urllib.error.URLError:
<urlopen error [Errno 8] nodename nor servname provided, or not known></code></pre>
</div>
</div>
</p>
<p>Has anybody seen a similar error?</p>
| 1 | 2016-08-09T14:37:26Z | 39,330,872 | <p>The problem was caused by a server problem on spaCy's side and has since been fixed.
(spacy.io/blog/announcement)</p>
| 0 | 2016-09-05T12:41:56Z | [
"python",
"python-3.x",
"pip",
"nltk",
"spacy"
] |
Determines the first integer that is evenly divisible by all other integers in a list of integers | 38,853,789 | <pre><code>def divisible(a):
d = 0
n = len(a)
i = 0
p = 0
while d == 0 and p < n and i < n:
if a[i] % a[p] != 0:
i = i + 1
p = 0
else:
p = p + 1
return d
a = [12, 4, 6]
r = divisible(a)
print(r)
</code></pre>
<p>Can anyone help me please? It is Python 3.0+. I can't solve this question: I don't know where to set d inside the function, i.e. let d = a[i] if a[i] is evenly divisible by all the other integers. The answer is 12 for this question. Can anyone improve my code, please? Thank you!!</p>
| 0 | 2016-08-09T14:37:46Z | 38,853,951 | <p>A short solution would be</p>
<pre><code>def divisible(a):
for i in a:
if all(i%j==0 for j in a):
return i
return None
</code></pre>
<p>or a bit longer</p>
<pre><code>def divisible(a):
for i in a:
found=True
for j in a:
if i%j: # everything that is not 0 is true
found=False
break
if found:
return i
return None
</code></pre>
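<p>A quick sanity check of the first version against the example from the question (the extra test list here is just illustrative):</p>

```python
def divisible(a):
    # return the first element that is evenly divisible by every element of a, or None
    for i in a:
        if all(i % j == 0 for j in a):
            return i
    return None

print(divisible([12, 4, 6]))  # prints 12
print(divisible([12, 5, 7]))  # prints None, no element is divisible by all the others
```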
| 2 | 2016-08-09T14:45:14Z | [
"python"
] |
Determines the first integer that is evenly divisible by all other integers in a list of integers | 38,853,789 | <pre><code>def divisible(a):
d = 0
n = len(a)
i = 0
p = 0
while d == 0 and p < n and i < n:
if a[i] % a[p] != 0:
i = i + 1
p = 0
else:
p = p + 1
return d
a = [12, 4, 6]
r = divisible(a)
print(r)
</code></pre>
<p>Can anyone help me please? It is Python 3.0+. I can't solve this question: I don't know where to set d inside the function, i.e. let d = a[i] if a[i] is evenly divisible by all the other integers. The answer is 12 for this question. Can anyone improve my code, please? Thank you!!</p>
| 0 | 2016-08-09T14:37:46Z | 38,853,961 | <p>I think you're looking for the least common multiple algorithm; in Python 3 you could code it like this:</p>
<pre><code>from fractions import gcd
from functools import reduce
def lcm(*args):
return reduce(lambda a, b: a * b // gcd(a, b), args)
print(lcm(4, 6, 12))
</code></pre>
<p>But it seems you can't use library functions or Python's built-in operators in your algorithm, for educational purposes. Then one possible simple solution could just be like this:</p>
<pre><code>def divisible(input_list):
result = None
if 0 in input_list:
return result
for i in input_list:
ok = True
for j in input_list:
if i!=j and i % j != 0:
ok = False
break
if ok:
return i
return result
</code></pre>
| 0 | 2016-08-09T14:45:45Z | [
"python"
] |
Determines the first integer that is evenly divisible by all other integers in a list of integers | 38,853,789 | <pre><code>def divisible(a):
d = 0
n = len(a)
i = 0
p = 0
while d == 0 and p < n and i < n:
if a[i] % a[p] != 0:
i = i + 1
p = 0
else:
p = p + 1
return d
a = [12, 4, 6]
r = divisible(a)
print(r)
</code></pre>
<p>Can anyone help me please? It is Python 3.0+. I can't solve this question: I don't know where to set d inside the function, i.e. let d = a[i] if a[i] is evenly divisible by all the other integers. The answer is 12 for this question. Can anyone improve my code, please? Thank you!!</p>
| 0 | 2016-08-09T14:37:46Z | 38,854,557 | <p>I have expanded on my previous comment. We don't need to actually compute any multiples, since we expect it to already be in the list. The trick is just to take the max (or min, if negative numbers are allowed), and then validate. </p>
<p>But first, figure out how you are going to handle 0. It is divisible by all other integers, and cannot itself divide any integer, so I just return 0 in this example.</p>
<p>Also decide what you will do if you determine there is no correct answer. I returned None, but an exception may be more appropriate depending on the application.</p>
<pre><code>def divisible(input_list):
# what to do with zero?
if 0 in input_list:
return 0
# get largest magnitude
candidate = max(map(abs, input_list))
# validate
if all([0 == candidate % x for x in input_list]):
return candidate
else:
# handle the case where there is no valid answer
return None
print(divisible([12, 4, 6]))
print(divisible([-12, 4, 6, -3]))
print(divisible([12, 5, 7]))
print(divisible([12, 0, 4]))
</code></pre>
<p>This has some similarity to janbrohl's answer, but that is an O(n**2) solution, checking every number against every other number. But we know the number we want will be the largest (in magnitude).</p>
<p>Proof by contradiction: Take two positive numbers [a, b] where a < b, and suppose that a is evenly divisible by b. But then a % b == 0. Since a < b, we know that a % b is a. Therefore a=0 or a=nb (for some n). But a < b, therefore a==0. (expand to signed integers on your own. The sign is largely irrelevant for determining divisibility.)</p>
| 0 | 2016-08-09T15:14:30Z | [
"python"
] |
How to use python argparse with args other than sys.argv? | 38,853,812 | <p>I've been all over the documentation and it seems like there's no way to do it, but:</p>
<p>Is there a way to use argparse with any list of strings, instead of only with sys.argv?</p>
<p>Here's my problem: I have a program which looks something like this:</p>
<pre><code># This file is program1.py
import argparse
import sys
def main(argv):
parser = argparse.ArgumentParser()
# Do some argument parsing
if __name__ == '__main__':
main(sys.argv)
</code></pre>
<p>This works fine when this program is called straight from the command line. However, I have another python script which runs batch versions of this script with different commandline arguments, which I'm using like this:</p>
<pre><code>import program1
arguments = ['arg1', 'arg2', 'arg3']
program1.main(arguments)
</code></pre>
<p>I still want to be able to parse the arguments, but argparse automatically defaults to using sys.argv instead of the arguments that I give it. Is there a way to pass in the argument list instead of using sys.argv?</p>
| 0 | 2016-08-09T14:38:54Z | 38,853,865 | <p>Just change the script to default to <code>sys.argv[1:]</code> and parse arguments omitting the first one (which is the name of the invoked command)</p>
<pre><code>import argparse,sys
def main(argv=sys.argv[1:]):
parser = argparse.ArgumentParser()
# Do some argument parsing
args = parser.parse_args(argv)
if __name__ == '__main__':
main()
</code></pre>
<p>Or, if you cannot omit the first argument:</p>
<pre><code>import argparse,sys
def main(args=None):
# if None passed, uses sys.argv[1:], else use custom args
parser = argparse.ArgumentParser()
parser.add_argument("--level", type=int)
args = parser.parse_args(args)
# Do some argument parsing
if __name__ == '__main__':
main()
</code></pre>
<p>Last one: if you cannot change the called program, you can still do something</p>
<p>Let's suppose the program you cannot change is called <code>argtest.py</code> (I added a call to print arguments)</p>
<p>Then just override <code>sys.argv</code> as seen by the <code>argtest</code> module:</p>
<pre><code>import argtest
argtest.sys.argv=["dummy","foo","bar"]
argtest.main()
</code></pre>
<p>output:</p>
<pre><code>['dummy', 'foo', 'bar']
</code></pre>
| 1 | 2016-08-09T14:41:22Z | [
"python",
"argparse",
"argv"
] |
How to use python argparse with args other than sys.argv? | 38,853,812 | <p>I've been all over the documentation and it seems like there's no way to do it, but:</p>
<p>Is there a way to use argparse with any list of strings, instead of only with sys.argv?</p>
<p>Here's my problem: I have a program which looks something like this:</p>
<pre><code># This file is program1.py
import argparse
import sys
def main(argv):
parser = argparse.ArgumentParser()
# Do some argument parsing
if __name__ == '__main__':
main(sys.argv)
</code></pre>
<p>This works fine when this program is called straight from the command line. However, I have another python script which runs batch versions of this script with different commandline arguments, which I'm using like this:</p>
<pre><code>import program1
arguments = ['arg1', 'arg2', 'arg3']
program1.main(arguments)
</code></pre>
<p>I still want to be able to parse the arguments, but argparse automatically defaults to using sys.argv instead of the arguments that I give it. Is there a way to pass in the argument list instead of using sys.argv?</p>
| 0 | 2016-08-09T14:38:54Z | 38,853,883 | <p>You can pass a list of strings to <code>parse_args</code>:</p>
<pre><code>parser.parse_args(['--foo', 'FOO'])
</code></pre>
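<p>For instance, combined with a <code>main</code> that falls back to <code>sys.argv</code> when no list is given (the <code>--foo</code> option here is just illustrative):</p>

```python
import argparse

def main(argv=None):
    # parse_args(None) falls back to sys.argv[1:] automatically
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo')
    args = parser.parse_args(argv)
    return args.foo

print(main(['--foo', 'FOO']))  # prints FOO
```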
| 4 | 2016-08-09T14:42:14Z | [
"python",
"argparse",
"argv"
] |
import json to csv in python | 38,853,819 | <p>When I use the following code, I keep getting an error even though it creates the csv file. I need help and am fairly new to python. </p>
<p>The error I receive is:</p>
<pre><code>Traceback (most recent call last):
  File "/home/ubuntu/workspace/parse-json.py", line 34, in
    values = [ x.encode('utf8') for x in item['fields'].values() ]
TypeError: string indices must be integers
</code></pre>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
#pprint(data)
# calc number of alert records in json file
x = len(data['alerts'])
count = 0
while (count < x):
#print 'COUNT = ', count
print data['alerts'][count]['message']
print data['alerts'][count]['tags']
print data['alerts'][count]['teams']
print data['alerts'][count]['id']
count = count + 1
import json
import csv
f = open('data.json')
data = json.load(f)
f.close()
f = csv.writer(open('yes.csv', 'wb+'))
for item in data:
values = [ x.encode('utf8') for x in item['fields'].values() ]
f.writerow([item['pk'], item['model']] + values)
</code></pre>
| -1 | 2016-08-09T14:39:25Z | 38,854,041 | <p>You need to check your data structures.
Basically what the error is saying is that <code>item</code> is not a dictionary.</p>
<p>From what I see in your code, you are iterating over <code>data</code>, but data is not a list of dictionaries but a dictionary itself.</p>
<p>If you want to iterate through the elements in a dict, you should do one of the following:</p>
<pre><code>for key, value in data.items():
</code></pre>
<p>or if you only want to iterate through the values:</p>
<pre><code>for value in data.values():
</code></pre>
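<p>Applied to the structure shown in the question (assuming each entry in the <code>alerts</code> list is a dict with these keys), the export could be sketched like this in Python 3 syntax; the sample data below is made up:</p>

```python
import csv
import io
import json

# made-up sample shaped like the question's file: a top-level "alerts" list of dicts
raw = '{"alerts": [{"message": "m1", "tags": "t1", "teams": "team1", "id": 1}]}'
data = json.loads(raw)

out = io.StringIO()  # stands in for open('yes.csv', 'w', newline='')
writer = csv.writer(out)
writer.writerow(['id', 'message', 'tags', 'teams'])  # header row
for alert in data['alerts']:  # iterate the list of dicts, not the outer dict
    writer.writerow([alert['id'], alert['message'], alert['tags'], alert['teams']])

print(out.getvalue())
```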
| 0 | 2016-08-09T14:50:08Z | [
"python",
"json",
"csv"
] |
Matching elements between lists in Python - keeping location | 38,853,828 | <p>I have two lists, both fairly long. List A contains a list of integers, some of which are repeated in list B. I can find which elements appear in both by using:</p>
<pre><code>idx = set(list_A).intersection(list_B)
</code></pre>
<p>This returns a set of all the elements appearing in both list A and list B.</p>
<p>However, I would like to find a way to find the matches between the two lists and also retain information about the elements' positions in both lists. Such a function might look like:</p>
<pre><code>def match_lists(list_A,list_B):
.
.
.
return match_A,match_B
</code></pre>
<p>where match_A would contain the positions of elements in list_A that had a match somewhere in list_B and vice-versa for match_B.
I can see how to construct such lists using a for-loop, however this feels like it would be prohibitively slow for long lists.</p>
<p>Regarding duplicates: list_B has no duplicates in it, if there is a duplicate in list_A then return all the matched positions as a list, so match_A would be a list of lists.</p>
| 0 | 2016-08-09T14:39:46Z | 38,854,019 | <p>Try this:</p>
<pre><code>def match_lists(list_A, list_B):
match_A = {}
match_B = {}
for elem in list_A:
if elem in list_B:
match_A[elem] = list_A.index(elem)
match_B[elem] = list_B.index(elem)
return match_A, match_B
</code></pre>
| -1 | 2016-08-09T14:49:03Z | [
"python",
"list",
"iterator",
"match"
] |
Matching elements between lists in Python - keeping location | 38,853,828 | <p>I have two lists, both fairly long. List A contains a list of integers, some of which are repeated in list B. I can find which elements appear in both by using:</p>
<pre><code>idx = set(list_A).intersection(list_B)
</code></pre>
<p>This returns a set of all the elements appearing in both list A and list B.</p>
<p>However, I would like to find a way to find the matches between the two lists and also retain information about the elements' positions in both lists. Such a function might look like:</p>
<pre><code>def match_lists(list_A,list_B):
.
.
.
return match_A,match_B
</code></pre>
<p>where match_A would contain the positions of elements in list_A that had a match somewhere in list_B and vice-versa for match_B.
I can see how to construct such lists using a for-loop, however this feels like it would be prohibitively slow for long lists.</p>
<p>Regarding duplicates: list_B has no duplicates in it, if there is a duplicate in list_A then return all the matched positions as a list, so match_A would be a list of lists.</p>
| 0 | 2016-08-09T14:39:46Z | 38,854,036 | <p>How about this:</p>
<pre><code>def match_lists(list_A, list_B):
idx = set(list_A).intersection(list_B)
A_indexes = []
for i, element in enumerate(list_A):
if element in idx:
A_indexes.append(i)
B_indexes = []
for i, element in enumerate(list_B):
if element in idx:
B_indexes.append(i)
return A_indexes, B_indexes
</code></pre>
| 1 | 2016-08-09T14:50:01Z | [
"python",
"list",
"iterator",
"match"
] |
Matching elements between lists in Python - keeping location | 38,853,828 | <p>I have two lists, both fairly long. List A contains a list of integers, some of which are repeated in list B. I can find which elements appear in both by using:</p>
<pre><code>idx = set(list_A).intersection(list_B)
</code></pre>
<p>This returns a set of all the elements appearing in both list A and list B.</p>
<p>However, I would like to find a way to find the matches between the two lists and also retain information about the elements' positions in both lists. Such a function might look like:</p>
<pre><code>def match_lists(list_A,list_B):
.
.
.
return match_A,match_B
</code></pre>
<p>where match_A would contain the positions of elements in list_A that had a match somewhere in list_B and vice-versa for match_B.
I can see how to construct such lists using a for-loop, however this feels like it would be prohibitively slow for long lists.</p>
<p>Regarding duplicates: list_B has no duplicates in it, if there is a duplicate in list_A then return all the matched positions as a list, so match_A would be a list of lists.</p>
| 0 | 2016-08-09T14:39:46Z | 38,854,050 | <p>That should do the job :)</p>
<pre><code>def match_list(list_A, list_B):
intersect = set(list_A).intersection(list_B)
interPosA = [[i for i, x in enumerate(list_A) if x == dup] for dup in intersect]
interPosB = [i for i, x in enumerate(list_B) if x in intersect]
return interPosA, interPosB
</code></pre>
<p>(Thanks to machine yearning for duplicate edit)</p>
| 3 | 2016-08-09T14:50:26Z | [
"python",
"list",
"iterator",
"match"
] |
Matching elements between lists in Python - keeping location | 38,853,828 | <p>I have two lists, both fairly long. List A contains a list of integers, some of which are repeated in list B. I can find which elements appear in both by using:</p>
<pre><code>idx = set(list_A).intersection(list_B)
</code></pre>
<p>This returns a set of all the elements appearing in both list A and list B.</p>
<p>However, I would like to find a way to find the matches between the two lists and also retain information about the elements' positions in both lists. Such a function might look like:</p>
<pre><code>def match_lists(list_A,list_B):
.
.
.
return match_A,match_B
</code></pre>
<p>where match_A would contain the positions of elements in list_A that had a match somewhere in list_B and vice-versa for match_B.
I can see how to construct such lists using a for-loop, however this feels like it would be prohibitively slow for long lists.</p>
<p>Regarding duplicates: list_B has no duplicates in it, if there is a duplicate in list_A then return all the matched positions as a list, so match_A would be a list of lists.</p>
| 0 | 2016-08-09T14:39:46Z | 38,854,144 | <p>Use <code>dict</code>s or <code>defaultdict</code>s to store the unique values as keys that map to the indices they appear at, then combine the <code>dicts</code>:</p>
<pre><code>from collections import defaultdict
def make_offset_dict(it):
ret = defaultdict(list) # Or set, the values are unique indices either way
for i, x in enumerate(it):
        ret[x].append(i)
    return ret
dictA = make_offset_dict(A)
dictB = make_offset_dict(B)
for k in dictA.viewkeys() & dictB.viewkeys(): # Plain .keys() on Py3
print(k, dictA[k], dictB[k])
</code></pre>
<p>This iterates <code>A</code> and <code>B</code> exactly once each so it works even if they're one-time use iterators, e.g. from a file-like object, and it works efficiently, storing no more data than needed and sticking to cheap hashing based operations instead of repeated iteration.</p>
<p>This isn't the solution to your specific problem, but it preserves all the information needed to solve your problem and then some (e.g. it's cheap to figure out where the matches are located for any given value in either <code>A</code> or <code>B</code>); you can trivially adapt it to your use case or more complicated ones.</p>
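<p>Adapting it to the match_A/match_B shape asked for in the question might look like this (the sorting of shared values is only there to make the output deterministic; Python 3 syntax):</p>

```python
from collections import defaultdict

def match_lists(list_A, list_B):
    offsets_A = defaultdict(list)
    for i, x in enumerate(list_A):
        offsets_A[x].append(i)  # keep all positions, so duplicates in list_A survive
    offsets_B = {x: i for i, x in enumerate(list_B)}  # list_B has no duplicates
    common = sorted(offsets_A.keys() & offsets_B.keys())
    match_A = [offsets_A[x] for x in common]
    match_B = [offsets_B[x] for x in common]
    return match_A, match_B

print(match_lists([1, 2, 2, 5], [2, 1, 9]))  # prints ([[0], [1, 2]], [1, 0])
```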
| 2 | 2016-08-09T14:54:44Z | [
"python",
"list",
"iterator",
"match"
] |
Matching elements between lists in Python - keeping location | 38,853,828 | <p>I have two lists, both fairly long. List A contains a list of integers, some of which are repeated in list B. I can find which elements appear in both by using:</p>
<pre><code>idx = set(list_A).intersection(list_B)
</code></pre>
<p>This returns a set of all the elements appearing in both list A and list B.</p>
<p>However, I would like to find a way to find the matches between the two lists and also retain information about the elements' positions in both lists. Such a function might look like:</p>
<pre><code>def match_lists(list_A,list_B):
.
.
.
return match_A,match_B
</code></pre>
<p>where match_A would contain the positions of elements in list_A that had a match somewhere in list_B and vice-versa for match_B.
I can see how to construct such lists using a for-loop, however this feels like it would be prohibitively slow for long lists.</p>
<p>Regarding duplicates: list_B has no duplicates in it, if there is a duplicate in list_A then return all the matched positions as a list, so match_A would be a list of lists.</p>
| 0 | 2016-08-09T14:39:46Z | 38,854,415 | <p>This only runs through each list once (requiring only one dict) and also works with duplicates in list_B</p>
<pre><code>def match_lists(list_A,list_B):
da=dict((e,i) for i,e in enumerate(list_A))
for bi,e in enumerate(list_B):
try:
ai=da[e]
yield (e,ai,bi) # element e is in position ai in list_A and bi in list_B
except KeyError:
pass
</code></pre>
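<p>For example, each yielded tuple pairs a shared element with its position in both lists (note that for duplicates in list_A only the last position is kept, since the dict entry is overwritten):</p>

```python
def match_lists(list_A, list_B):
    da = dict((e, i) for i, e in enumerate(list_A))
    for bi, e in enumerate(list_B):
        try:
            ai = da[e]
            yield (e, ai, bi)  # element e sits at index ai in list_A and bi in list_B
        except KeyError:
            pass

print(list(match_lists([12, 4, 6], [4, 7, 12])))  # prints [(4, 1, 0), (12, 0, 2)]
```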
| 0 | 2016-08-09T15:08:09Z | [
"python",
"list",
"iterator",
"match"
] |
Trying to get formset to work with two models in django | 38,853,841 | <p>I can't seem to figure out why this isn't working. What I want to do is have my inspection_vals formset show the Inspection_vals reading and the Dimension description for each row, but for some reason Django keeps yelling at me that <code>description</code> isn't specified for Inspection_vals. Any help would be greatly appreciated. Below I give more details on what exactly I would like to do :) </p>
<p>Here is my view.py </p>
<pre><code>def update_inspection_vals(request, dim_id=None):
dims = Dimension.objects.get(pk=dim_id)
inspection_inline_formset = inlineformset_factory(Dimension, Inspection_vals, fields=('reading', 'description',))
if request.method == "POST":
formset = inspection_inline_formset(request.POST, request.FILES, instance=dims)
if formset.is_valid():
formset.save()
return redirect('inspection_vals')
else:
formset = inspection_inline_formset(instance=dims)
return render(request, 'app/inspection_vals.html', {'formset': formset})
</code></pre>
<p>models.py (with the Dimension model and the Inspection_vals model;
the Inspection_vals model has a foreign key <code>dimension</code> which links to my Dimension model) </p>
<pre><code>class Inspection_vals(models.Model):
created_at = models.DateField()
updated_at = models.DateField()
reading = models.IntegerField(null=True)
reading2 = models.IntegerField(null=True)
reading3 = models.IntegerField(null=True)
reading4 = models.IntegerField(null=True)
state = models.CharField(max_length=255)
state2 = models.CharField(max_length=255)
state3 = models.CharField(max_length=255)
state4 = models.CharField(max_length=255)
approved_by = models.CharField(max_length=255)
approved_at = models.DateField(null=True, blank=True)
dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
serial_number = models.IntegerField(default=1)
#sample = models.ForeignKey(Sample, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
class Dimension(models.Model):
description = models.CharField(max_length=255)
style = models.CharField(max_length=255)
created_at = models.DateField()
updated_at = models.DateField()
target = models.IntegerField()
upper_limit = models.IntegerField()
lower_limit = models.IntegerField()
inspection_tool = models.CharField(max_length=255)
critical = models.IntegerField()
units = models.CharField(max_length=255)
metric = models.CharField(max_length=255)
target_strings = models.CharField(max_length=255)
ref_dim_id = models.IntegerField()
nested_number = models.IntegerField()
met_upper = models.IntegerField()
met_lower = models.IntegerField()
valc = models.CharField(max_length=255)
</code></pre>
<p>here is my inspection_vals.html </p>
<pre><code>{% extends "app/layout.html" %}
{% block content %}
<br />
<br />
<br />
<form method="post">
{% csrf_token %}
{% for x in formset %}
{{ x.as_p }}
{% endfor %}
</form>
{% endblock %}
</code></pre>
<p>Screen shot to demonstrate what I would like to see. </p>
<p><a href="http://i.stack.imgur.com/SDG0A.gif" rel="nofollow"><img src="http://i.stack.imgur.com/SDG0A.gif" alt="What I would like to see"></a></p>
| 0 | 2016-08-09T14:40:12Z | 38,856,744 | <p>You are missing the management form in the template:</p>
<pre><code><form method="post">
{% csrf_token %}
    {{ formset.management_form }} {# important #}
{% for x in formset %}
{{ x.as_p }}
{% endfor %}
</form>
</code></pre>
| 0 | 2016-08-09T17:10:17Z | [
"python",
"django",
"python-2.7"
] |
groupby/unstack on columns name | 38,853,916 | <p>I have a dataframe with the following structure</p>
<pre><code> idx value Formula_name
0 123456789 100 Frequency No4
1 123456789 150 Frequency No25
2 123456789 125 Frequency No27
3 123456789 0.2 Power Level No4
4 123456789 0.5 Power Level No25
5 123456789 -1.0 Power Level No27
6 123456789 32 SNR No4
7 123456789 35 SNR No25
8 123456789 37 SNR No27
9 111222333 ...
</code></pre>
<p>So the only way to relate a frequency to its corresponding metric is via the number of the frequency. I know the possible range (from 100 to 200 MHz in steps of 25 MHz), but not which frequencies (or how many) show up in the data, nor which "number" is used to relate the frequency to the metric. </p>
<p>I would like to arrive at a dataframe similar to that:</p>
<pre><code> SNR Power Level
idx 100 125 150 175 200 100 125 150 175 200
0 123456789 32 37 35 NaN NaN 0.2 -1.0 0.5 NaN NaN
1 111222333 ...
</code></pre>
<p>For only one metric, I created two dataframes, one with the frequencies, one with the metric, and merged them on the number:</p>
<pre><code> idx Formula_x value_x number Formula_y value_y
0 123456789 SNR 32 4 frequency 100
1 123456789 SNR 35 25 frequency 150
</code></pre>
<p>Then I would unstack the dataframe:</p>
<pre><code>df.groupby(['idx','value_y']).first()[['value_x']].unstack()
</code></pre>
<p>This works for one metric, but I don't really see how I can apply it to more metrics and access them with a multiindex in the columns. </p>
<p>Any ideas and suggestions would be very welcome. </p>
| 1 | 2016-08-09T14:43:47Z | 38,854,188 | <p>You can use:</p>
<pre><code>print (df)
idx value Formula_name
0 123456789 100.0 Frequency No4
1 123456789 150.0 Frequency No25
2 123456789 125.0 Frequency No27
3 123456789 0.2 Power Level No4
4 123456789 0.5 Power Level No25
5 123456789 -1.0 Power Level No27
6 123456789 32.0 SNR No4
7 123456789 35.0 SNR No25
8 123456789 37.0 SNR No27
#create new columns from Formula_name
df[['a','b']] = df.Formula_name.str.rsplit(n=1, expand=True)
# mapping by Series column b - from No4, No25 to numbers 100,150...
maps = df[df.a == 'Frequency'].set_index('b')['value'].astype(int)
df['b'] = df.b.map(maps)
# remove Frequency rows and drop the Formula_name column
df1 = df[df.a != 'Frequency'].drop('Formula_name', axis=1)
print (df1)
idx value a b
3 123456789 0.2 Power Level 100
4 123456789 0.5 Power Level 150
5 123456789 -1.0 Power Level 125
6 123456789 32.0 SNR 100
7 123456789 35.0 SNR 150
8 123456789 37.0 SNR 125
</code></pre>
<p>Two solutions - with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> and with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> (see <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1463/reshaping-and-pivoting/4771/pivoting-with-aggregating#t=201608091506151326964">SO documentation</a> about aggregating)</p>
<pre><code>df2 = df1.set_index(['idx','a', 'b']).unstack([1,2])
df2.columns = df2.columns.droplevel(0)
df2 = df2.rename_axis(None).rename_axis([None, None], axis=1)
print (df2)
Power Level SNR
100 150 125 100 150 125
123456789 0.2 0.5 -1.0 32.0 35.0 37.0
df3 = df1.pivot_table(index='idx', columns=['a','b'], values='value')
df3 = df3.rename_axis(None).rename_axis([None, None], axis=1)
print (df3)
Power Level SNR
100 125 150 100 125 150
123456789 0.2 -1.0 0.5 32.0 37.0 35.0
</code></pre>
| 2 | 2016-08-09T14:57:11Z | [
"python",
"pandas"
] |
Python 3.5 | Split List and convert to csv | 38,853,966 | <p>I have two lists saved in two values. Those look like:</p>
<pre><code>project_titles = ['T1', 'T2', 'T3']
project_loc = ['L1', 'L2', 'L3']
</code></pre>
<p>Currently I write the values into a csv with this code:</p>
<pre><code>with open('data.csv', 'w') as f:
csv.writer(f).writerow(project_titles)
</code></pre>
<p>When I turn the csv to an excel I get:</p>
<ul>
<li>Cell A1 = T1</li>
<li>Cell B1 = T2</li>
<li>Cell C1 = T3</li>
</ul>
<p>That's fine, but I need the following result after the csv export:</p>
<ul>
<li>Cell A1 = T1; Cell B1 = L1</li>
<li>Cell A2 = T2; Cell B2 = L2</li>
<li>Cell A3 = T3; Cell B3 = L3</li>
</ul>
<p>Do you have an idea?</p>
| 0 | 2016-08-09T14:46:15Z | 38,854,046 | <p>You could use <a href="https://docs.python.org/3.5/library/functions.html#zip" rel="nofollow"><code>zip()</code></a> to aggregate elements from two or more lists, then write the resulting rows to the file with <a href="https://docs.python.org/3.5/library/csv.html#csv.csvwriter.writerows" rel="nofollow"><code>csvwriter.writerows()</code></a>:</p>
<pre><code>with open('data.csv', 'w') as f:
writer = csv.writer(f)
writer.writerows(zip(project_titles, project_loc))
</code></pre>
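<p>One caveat for Python 3 (which the question targets): open the file in text mode with <code>newline=''</code>, otherwise you can get blank rows between records on Windows; a binary mode such as <code>'wb+'</code> would fail because <code>csv.writer</code> writes <code>str</code>, not bytes. A small sketch using an in-memory buffer:</p>

```python
import csv
import io

project_titles = ['T1', 'T2', 'T3']
project_loc = ['L1', 'L2', 'L3']

buf = io.StringIO()  # stands in for open('data.csv', 'w', newline='')
csv.writer(buf).writerows(zip(project_titles, project_loc))
print(buf.getvalue())
```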
| 3 | 2016-08-09T14:50:22Z | [
"python",
"excel",
"python-3.x",
"csv"
] |
Python client error 'Connection reset by peer' | 38,853,972 | <p>I have written a very small Python client to access the Confluence REST API. I am using HTTPS to connect to Confluence, and I am running into a <code>Connection reset by peer</code> error.
Here is the full stack trace.</p>
<pre><code>/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Traceback (most recent call last):
File "wpToConfluence.py", line 15, in <module>
main()
File "wpToConfluence.py", line 11, in main
headers={'content-type': 'application/json'})
File "/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/api.py", line 71, in get
return request('get', url, params=params, **kwargs)
File "/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/api.py", line 57, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/Users/rakesh.kumar/.virtualenvs/wpToConfluence.py/lib/python2.7/site-packages/requests/adapters.py", line 453, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', error(54, 'Connection reset by peer'))
</code></pre>
<p>Here is my client code:</p>
<pre><code>import requests
def main():
auth = open('/tmp/confluence', 'r').readline().strip()
username = 'rakesh.kumar'
response = requests.get("https://<HOST-NAME>/rest/api/content/",
auth=(username, auth),
headers={'content-type': 'application/json'})
print response
if __name__ == "__main__":
main()
</code></pre>
<p>I am running this script in a virtual environment, and the following packages are installed in that environment:</p>
<pre><code>(wpToConfluence.py)➜ Python pip list
You are using pip version 6.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
appnope (0.1.0)
backports.shutil-get-terminal-size (1.0.0)
decorator (4.0.10)
ipdb (0.10.1)
ipython (5.0.0)
ipython-genutils (0.1.0)
pathlib2 (2.1.0)
pexpect (4.2.0)
pickleshare (0.7.3)
pip (6.1.1)
prompt-toolkit (1.0.5)
ptyprocess (0.5.1)
Pygments (2.1.3)
requests (2.10.0)
setuptools (25.1.6)
simplegeneric (0.8.1)
six (1.10.0)
traitlets (4.2.2)
urllib3 (1.16)
wcwidth (0.1.7)
</code></pre>
<p>It does complain about the Python version number, but I am not sure how to update the Python of my Mac/virtual environment.</p>
<p>I have tried the curl command and Postman; both of them work fine for the given parameters.</p>
| 1 | 2016-08-09T14:46:33Z | 38,854,398 | <p>When installing the <code>requests</code> library, pip skips a few of the <a href="https://github.com/kennethreitz/requests/blob/5a799dd0f505e6c6c2ff67e227f6a3d25c086342/setup.py#L71" rel="nofollow">optional security packages</a> ('pyOpenSSL', 'ndg-httpsclient', and 'pyasn1') which are required for the SSL/HTTPS connection.
You can fix it by running either this command </p>
<pre><code>pip install "requests[security]"
</code></pre>
<p>or </p>
<pre><code>pip install pyopenssl ndg-httpsclient pyasn1
</code></pre>
| 0 | 2016-08-09T15:07:23Z | [
"python",
"python-requests",
"confluence-rest-api"
] |
python csv reader not reading all rows | 38,854,118 | <p>So I've got about 5008 rows in a CSV file, a total of 5009 with the headers. I'm creating and writing this file all within the same script. But when I read it at the end, with either pandas pd.read_csv, or python3's csv module, and print the len, it outputs 4967. I checked the file for any weird characters that may be confusing python but don't see any. All the data is delimited by commas.</p>
<p>I also opened it in sublime and it shows 5009 rows not 4967.</p>
<p>I could try other methods from pandas like merge or concat, but if Python won't read the csv correctly, that's no use.</p>
<p>This is one method I tried.</p>
<pre><code>df1=pd.read_csv('out.csv',quoting=csv.QUOTE_NONE, error_bad_lines=False)
df2=pd.read_excel(xlsfile)
print (len(df1))#4967
print (len(df2))#5008
df2['Location']=df1['Location']
df2['Sublocation']=df1['Sublocation']
df2['Zone']=df1['Zone']
df2['Subnet Type']=df1['Subnet Type']
df2['Description']=df1['Description']
newfile = input("Enter a name for the combined csv file: ")
print('Saving to new csv file...')
df2.to_csv(newfile, index=False)
print('Done.')
target.close()
</code></pre>
<p>Another way I tried is</p>
<pre><code>dfcsv = pd.read_csv('out.csv')
wb = xlrd.open_workbook(xlsfile)
ws = wb.sheet_by_index(0)
xlsdata = []
for rx in range(ws.nrows):
xlsdata.append(ws.row_values(rx))
print (len(dfcsv))#4967
print (len(xlsdata))#5009
df1 = pd.DataFrame(data=dfcsv)
df2 = pd.DataFrame(data=xlsdata)
df3 = pd.concat([df2,df1], axis=1)
newfile = input("Enter a name for the combined csv file: ")
print('Saving to new csv file...')
df3.to_csv(newfile, index=False)
print('Done.')
target.close()
</code></pre>
<p>But not matter what way I try the CSV file is the actual issue, python is writing it correctly but not reading it correctly.</p>
<p>Edit: The weirdest part is that I'm getting absolutely no encoding errors or any errors at all when running the code...</p>
<p>Edit2: Tried testing it with the nrows param in the first code example; it works up to 4000 rows. As soon as I specify 5000 rows, it reads only 4967.</p>
<p>Edit3: I manually saved a csv file with my data instead of using the one written by the program, and it read 5008 rows. Why is python not writing the csv file correctly?</p>
| 2 | 2016-08-09T14:53:50Z | 38,854,533 | <p>My best guess without seeing the file is that you have some lines with too many or not enough commas, maybe due to values like <code>foo,bar</code>. </p>
<p>Please try setting <code>error_bad_lines=True</code> to see if it catches lines with errors in them; my guess is that there will be 41 such lines. See the Pandas documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html</a></p>
<blockquote>
<p>error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned. (Only valid with C parser)</p>
</blockquote>
<p>The <code>csv.QUOTE_NONE</code> option seems to apply to writing (fields are never quoted, and occurrences of the delimiter are escaped with the escape character), but you didn't paste your writing code, and it's unclear what this option does on read. <a href="https://docs.python.org/3/library/csv.html#csv.Dialect" rel="nofollow">https://docs.python.org/3/library/csv.html#csv.Dialect</a></p>
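<p>If Pandas keeps silently dropping rows, a quick way to locate the offenders is to count fields per line with the standard-library <code>csv</code> module. A minimal sketch on made-up data (swap the <code>StringIO</code> for <code>open('out.csv')</code> to check the real file):</p>

```python
import csv
import io

# Made-up CSV where line 3 has too few fields.
data = "a,b,c\n1,2,3\nfoo,bar\n4,5,6\n"
bad_lines = []
reader = csv.reader(io.StringIO(data))
header = next(reader)                      # line 1
for lineno, row in enumerate(reader, start=2):
    if len(row) != len(header):            # field count disagrees with the header
        bad_lines.append((lineno, len(row)))
print(bad_lines)  # [(3, 2)]
```

<p>Any line numbers it prints are the ones Pandas is likely skipping or mangling.</p>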
| 0 | 2016-08-09T15:13:25Z | [
"python",
"python-3.x",
"csv",
"pandas"
] |
Processing form data with a Python CGI Script | 38,854,189 | <p>Can anyone point out where I am going wrong here? I have two scripts, one for the form and the other for the processing. It looks correct, but after two hours of staring at it I cannot see where I am going wrong.
Here are the two scripts; they are very short, so please take a look.</p>
<p>The Form:</p>
<pre><code>#!/usr/bin/python
import os
import cgi
import cgitb
print("Content-Type: text/html\n\n")
print("")
print'''<html>
<head>
<meta charset="utf-8">
<title>Marks Sonitus Practice</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Practice">
<meta name="author" content="CGI Practice">
</head>
<body>
<form action="process_data.py" method="post">
<html><span> First &nbsp;&nbsp;&nbsp;</span></label>
<input type="text" name="firstname"/>
<input type="submit" name ="submitname" value="Submit Name"></form>
</body>
</html>'''
</code></pre>
<p>The script to process the form:</p>
<pre><code>#!/usr/bin/python
import os
import cgi
import cgitb
cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs")
file_name = "/var/www/cgi-bin/practice/process_practice.py"
f = os.path.abspath(os.path.join(file_name))
try:
open(f)
except:
print"This file could not be found!"
form = cgi.FieldStorage(f)
firstname = form.getvalue('firstname')
print firstname
</code></pre>
<p>Can anyone show me where I am going wrong with this?</p>
| 0 | 2016-08-09T14:57:14Z | 38,976,264 | <p>That's simple: in your form, replace <code>action=process_data.py</code> with
<code>action=http://localhost/cgi-bin/practice/process_practice.py</code>, or maybe <code>action=http://localhost/cgi-bin/practice/process_data.py</code>, whatever the name of that script is.</p>
| 0 | 2016-08-16T13:28:15Z | [
"python",
"html",
"forms",
"cgi"
] |
Processing form data with a Python CGI Script | 38,854,189 | <p>Can anyone point out where I am going wrong here? I have two scripts, one for the form and the other for the processing. It looks correct, but after two hours of staring at it I cannot see where I am going wrong.
Here are the two scripts; they are very short, so please take a look.</p>
<p>The Form:</p>
<pre><code>#!/usr/bin/python
import os
import cgi
import cgitb
print("Content-Type: text/html\n\n")
print("")
print'''<html>
<head>
<meta charset="utf-8">
<title>Marks Sonitus Practice</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Practice">
<meta name="author" content="CGI Practice">
</head>
<body>
<form action="process_data.py" method="post">
<html><span> First &nbsp;&nbsp;&nbsp;</span></label>
<input type="text" name="firstname"/>
<input type="submit" name ="submitname" value="Submit Name"></form>
</body>
</html>'''
</code></pre>
<p>The script to process the form:</p>
<pre><code>#!/usr/bin/python
import os
import cgi
import cgitb
cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs")
file_name = "/var/www/cgi-bin/practice/process_practice.py"
f = os.path.abspath(os.path.join(file_name))
try:
open(f)
except:
print"This file could not be found!"
form = cgi.FieldStorage(f)
firstname = form.getvalue('firstname')
print firstname
</code></pre>
<p>Can anyone show me where I am going wrong with this?</p>
| 0 | 2016-08-09T14:57:14Z | 38,978,566 | <p>Changing it from:</p>
<pre><code>action=process_data.py
</code></pre>
<p>to:</p>
<pre><code>action=http://localhost/cgi-bin/practice/process_data.py
</code></pre>
<p>Worked.</p>
| 0 | 2016-08-16T15:14:28Z | [
"python",
"html",
"forms",
"cgi"
] |
How do I assign a group # to a set of rows in a pandas data frame? | 38,854,207 | <p>A dataframe has a time column with int values that start at zero. I want to group my data frame into 100 groups (for example) where the step is <code>ts = df['time'].max()/100</code>. One naive way to do it is to test whether each value of the 'time' column is greater than <code>t</code> and less than <code>t+ts</code>, where <code>t</code> is a <code>np.linspace</code> vector that starts at <code>0</code> and ends at <code>df['time'].max()</code>.</p>
<p>Here is what my data frame looks like:</p>
<pre><code>df.head()
0 1 2 3 time
0 1 1 1 1130165891 59559371
1 2 1 1 1158784502 88177982
2 2 1 1 1158838664 88232144
3 2 1 1 1158838931 88232411
4 2 1 1 1158839132 88232612
</code></pre>
| 0 | 2016-08-09T14:57:58Z | 38,854,369 | <p>You can use <code>pd.cut</code> to generate the groups:</p>
<pre><code>df.groupby(pd.cut(df['time'], 2)).mean()
Out:
0 1 2 3 time
time
(59530697.759, 73895991.5] 1 1 1 1130165891 59559371
(73895991.5, 88232612] 2 1 1 1158825307 88218787
</code></pre>
<p>This has only 2 groups and starts at the minimum because the dataset is very small. You can change the number of groups. Instead of passing the number of groups, you can pass the break points as well (with or without np.linspace).</p>
<pre><code>df.groupby(pd.cut(df['time'], [0, 6*10**7, np.inf], include_lowest=True)).mean()
Out:
0 1 2 3 time
time
[0, 60000000] 1 1 1 1130165891 59559371
(60000000, inf] 2 1 1 1158825307 88218787
</code></pre>
<p>I took the mean in both examples to show you how it works. You can use a different method on the groupby object.</p>
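<p>To get the fixed number of equal-width groups the question asks for (4 here instead of 100, on made-up data), you can feed an <code>np.linspace</code> break-point vector to <code>pd.cut</code>; with <code>labels=False</code> it returns the group number directly. A small sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"time": [0, 10, 25, 40, 55, 99]})
n_groups = 4
edges = np.linspace(0, df["time"].max(), n_groups + 1)   # 5 edges -> 4 bins
df["group"] = pd.cut(df["time"], edges, labels=False, include_lowest=True)
print(df["group"].tolist())  # [0, 0, 1, 1, 2, 3]
```

<p>The resulting group numbers can then be passed straight to <code>df.groupby</code>.</p>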
| 2 | 2016-08-09T15:06:07Z | [
"python",
"pandas",
"dataframe"
] |
Create multiplication table? | 38,854,223 | <p>I am a beginner in programming and am practicing how to use nested for loops to make a multiplication table in Python 2.7.5.
Here is my code</p>
<pre><code>x=range(1,11)
y=range(1,11)
for i in x:
for j in y:
print i*j
pass
</code></pre>
<p>Well, the result is correct, but it does not appear in a square matrix form as I wish. Please help me improve the code.</p>
| 1 | 2016-08-09T14:58:34Z | 38,854,295 | <p>You should print without a line break. </p>
<pre><code>x = range(1,11)
y = range(1,11)
for i in x:
for j in y:
print i*j, # will not break the line
print # will break the line
</code></pre>
| 8 | 2016-08-09T15:02:04Z | [
"python",
"python-2.7"
] |
Create multiplication table? | 38,854,223 | <p>I am a beginner in programming and am practicing how to use nested for loops to make a multiplication table in Python 2.7.5.
Here is my code</p>
<pre><code>x=range(1,11)
y=range(1,11)
for i in x:
for j in y:
print i*j
pass
</code></pre>
<p>Well, the result is correct, but it does not appear in a square matrix form as I wish. Please help me improve the code.</p>
| 1 | 2016-08-09T14:58:34Z | 38,854,317 | <p>you may add formatting to keep constant cell width</p>
<pre><code>x = range(1,11)
y = range(1,11)
for i in x:
for j in y:
# substitute value for brackets
# force 4 characters, n stands for number
print '{:4n}'.format(i*j), # comma prevents line break
print # print empty line
</code></pre>
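<p>In Python 3 the trailing comma no longer works because <code>print</code> is a function (you would use <code>print(value, end='')</code> instead). A sketch that sidesteps the difference by building each row into a string before printing, so it runs the same on 2.7 and 3:</p>

```python
table = []
for i in range(1, 11):
    row = ""
    for j in range(1, 11):
        row += "{:4d}".format(i * j)   # fixed 4-character cells keep columns aligned
    table.append(row)
    print(row)
```

<p>Each row string holds one line of the table, which also makes the output easy to test.</p>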
| 2 | 2016-08-09T15:03:16Z | [
"python",
"python-2.7"
] |
Create multiplication table? | 38,854,223 | <p>I am a beginner in programming and am practicing how to use nested for loops to make a multiplication table in Python 2.7.5.
Here is my code</p>
<pre><code>x=range(1,11)
y=range(1,11)
for i in x:
for j in y:
print i*j
pass
</code></pre>
<p>Well, the result is correct, but it does not appear in a square matrix form as I wish. Please help me improve the code.</p>
| 1 | 2016-08-09T14:58:34Z | 38,854,521 | <p>Python's print statement adds new line character by default to the numbers you wish to have in your output. I guess you would like to have just a trailing spaces for inner loop and a new line character at the end of the outer loop.</p>
<p>You can achieve this by using</p>
<pre><code>print i * j, # note the comma at the end (!)
</code></pre>
<p>and adding just a new line at the end of outer loop block:</p>
<pre><code>print ''
</code></pre>
<p>To learn more about the trailing comma, and why it works, look here: <a href="http://stackoverflow.com/questions/493386/how-to-print-in-python-without-newline-or-space">"How to print in Python without newline or space?"</a>. Mind that it works differently in Python 3.</p>
<p>The final code should look like:</p>
<pre><code>x=range(1,11)
y=range(1,11)
for i in x:
for j in y:
print i*j,
print ''
</code></pre>
<p>You can also look for '\t' special character which would allow you to get better formatting (even this old resource is good enough: <a href="https://docs.python.org/2.0/ref/strings.html" rel="nofollow">https://docs.python.org/2.0/ref/strings.html</a>) </p>
| 1 | 2016-08-09T15:12:55Z | [
"python",
"python-2.7"
] |
Do union types actually exist in python? | 38,854,282 | <p>Since python is dynamically typed, of course we can do something like this:</p>
<pre><code>def f(x):
return 2 if x else "s"
</code></pre>
<p>But is this the way Python was actually intended to be used? Or, in other words, do union types exist in the sense they do in Racket, for example? Or do we only use them like this:</p>
<pre><code>def f(x):
if x:
return "x"
</code></pre>
<p>where the only "union" we need is with None?</p>
| 2 | 2016-08-09T15:01:15Z | 38,854,368 | <p>Union typing is only needed when you have a statically typed language, as you need to declare that an object can return one of multiple types (in your case an <code>int</code> or <code>str</code>, or in the other example <code>str</code> or <code>NoneType</code>).</p>
<p>Python deals in <em>objects</em> only, so there is never a need to even consider 'union types'. Python functions return what they return, if the programmer wants to return different types for different results then that's their choice. The choice is then an architecture choice, and makes no difference to the Python interpreter (so there is nothing to 'benchmark' here).</p>
<p>Python 3.5 does introduce a standard for creating optional type <em>hints</em>, and that standard includes <a href="https://docs.python.org/3/library/typing.html#typing.Union"><code>Union[...]</code></a> and <a href="https://docs.python.org/3/library/typing.html#typing.Optional"><code>Optional[...]</code></a> annotations.</p>
| 6 | 2016-08-09T15:06:03Z | [
"python",
"types",
"unions",
"dynamic-typing"
] |
Do union types actually exist in python? | 38,854,282 | <p>Since python is dynamically typed, of course we can do something like this:</p>
<pre><code>def f(x):
return 2 if x else "s"
</code></pre>
<p>But is this the way Python was actually intended to be used? Or, in other words, do union types exist in the sense they do in Racket, for example? Or do we only use them like this:</p>
<pre><code>def f(x):
if x:
return "x"
</code></pre>
<p>where the only "union" we need is with None?</p>
| 2 | 2016-08-09T15:01:15Z | 38,854,474 | <p>Adding to @MartijnPieters answer:</p>
<blockquote>
<p>But is the way python was actually intended to be used?</p>
</blockquote>
<p>Returning different types depending on the parameters is never good practice in any language. It makes testing, maintaining and extending the code really difficult, and IMHO it is an anti-pattern (though of course sometimes a necessary evil). The return values should at least be related via a common interface.</p>
<p>The only reason <code>union</code> was introduced to C was the performance gain. But in Python you don't get this gain, due to the dynamic nature of the language (as Martijn noticed). Actually, introducing <code>union</code> would lower performance, since the size of a <code>union</code> is always the size of its biggest member. Thus Python will never have a C-like <code>union</code>.</p>
| 1 | 2016-08-09T15:10:59Z | [
"python",
"types",
"unions",
"dynamic-typing"
] |
Returning dict Value with a variable Key | 38,854,337 | <p>I am using the characters in a word to search the keys of a dictionary. The dictionary is SCRABBLE_LETTER_VALUES: { 'a' : 1, 'b' : 3, ...} and so on.</p>
<p>Here is my incomplete code:</p>
<pre><code>"""
Just a test example
word = 'pie'
n = 3
"""
def get_word_score(word, n):
"""
Returns the score for a word. Assumes the word is a
valid word.
The score for a word is the sum of the points for letters
in the word multiplied by the length of the word, plus 50
points if all n letters are used on the first go.
Letters are scored as in Scrabble; A is worth 1, B is
worth 3, C is worth 3, D is worth 2, E is worth 1, and so on.
word: string (lowercase letters)
returns: int >= 0
"""
score = 0
for c in word:
if SCRABBLE_LETTER_VALUES.has_key(c):
score += SCRABBLE_LETTER_VALUES.get("""value""")
</code></pre>
<p>Now this code is incomplete because I'm still learning python, so I'm still thinking through this problem, but I am stuck on the aspect of returning a value with a key that changes each iteration. </p>
<p>My thought was that maybe I could set c equal to the key it matches and then return the value, but I'm not sure how to do that. Also, I wanted to check whether I am indeed on the right track, so to speak.</p>
<p>Just FYI this code base does enter the loop successfully, I am simply not able to retrieve the value.</p>
<p>Thanks for the advice!</p>
| 1 | 2016-08-09T15:04:13Z | 38,854,481 | <p>You put zero in <code>score</code> in every iteration. You should initialize it before the <code>for</code> loop.</p>
<pre><code>score = 0
for c in word:
score += SCRABBLE_LETTER_VALUES.get(c, 0)
return score
</code></pre>
| -1 | 2016-08-09T15:11:19Z | [
"python",
"dictionary",
"key",
"value"
] |
Returning dict Value with a variable Key | 38,854,337 | <p>I am using the characters in a word to search the keys of a dictionary. The dictionary is SCRABBLE_LETTER_VALUES: { 'a' : 1, 'b' : 3, ...} and so on.</p>
<p>Here is my incomplete code:</p>
<pre><code>"""
Just a test example
word = 'pie'
n = 3
"""
def get_word_score(word, n):
"""
Returns the score for a word. Assumes the word is a
valid word.
The score for a word is the sum of the points for letters
in the word multiplied by the length of the word, plus 50
points if all n letters are used on the first go.
Letters are scored as in Scrabble; A is worth 1, B is
worth 3, C is worth 3, D is worth 2, E is worth 1, and so on.
word: string (lowercase letters)
returns: int >= 0
"""
score = 0
for c in word:
if SCRABBLE_LETTER_VALUES.has_key(c):
score += SCRABBLE_LETTER_VALUES.get("""value""")
</code></pre>
<p>Now this code is incomplete because I'm still learning python, so I'm still thinking through this problem, but I am stuck on the aspect of returning a value with a key that changes each iteration. </p>
<p>My thought was that maybe I could set c equal to the key it matches and then return the value, but I'm not sure how to do that. Also, I wanted to check whether I am indeed on the right track, so to speak.</p>
<p>Just FYI this code base does enter the loop successfully, I am simply not able to retrieve the value.</p>
<p>Thanks for the advice!</p>
| 1 | 2016-08-09T15:04:13Z | 38,854,483 | <p>You can do the following:</p>
<pre><code>score = 0
for c in word:
score += SCRABBLE_LETTER_VALUES.get(c, 0)
return score
</code></pre>
<p><code>get()</code> will return the value of the key if the dictionary contains it, otherwise it will return the default value passed as second argument (0 in the snippet).</p>
| 2 | 2016-08-09T15:11:24Z | [
"python",
"dictionary",
"key",
"value"
] |
How to draw a tiled triangle with python turtle | 38,854,370 | <p>I am trying to draw a tiled equilateral triangle that looks like this</p>
<p><a href="http://i.stack.imgur.com/X7ybH.png" rel="nofollow"><img src="http://i.stack.imgur.com/X7ybH.png" alt="enter image description here"></a></p>
<p>using python's turtle. I would like to be able to have either 16,25,36,49 or 64 triangles.</p>
<p>My initial attempts are clumsy because I havent figured out how to neatly move the turtle from one triangle to the next.</p>
<p>Here it is my (partially correct) code</p>
<pre><code>def draw_triangle(this_turtle, size,flip):
"""Draw a triangle by drawing a line and turning through 120 degrees 3 times"""
this_turtle.pendown()
this_turtle.fill(True)
for _ in range(3):
if flip:
this_turtle.left(120)
this_turtle.forward(size)
if not flip:
this_turtle.right(120)
this_turtle.penup()
myturtle.goto(250,0)
for i in range(4):
for j in range(4):
draw_triangle(myturtle, square_size,(j%2 ==0))
# move to start of next triangle
myturtle.left(120)
#myturtle.forward(square_size)
myturtle.goto(-250,(i+1)*square_size)
</code></pre>
<p>There must be an elegant way of doing this?</p>
| 2 | 2016-08-09T15:06:09Z | 38,856,185 | <p>I found this an interesting problem, if modified so that the turtle must draw the figure just by moving and without jumps.</p>
<p>The solution I found is ugly, but it can be a starting point...</p>
<pre><code>def n_tri(t, size, n):
for k in range(n):
for i in range(k-1):
t.left(60)
t.forward(size)
t.left(120)
t.forward(size)
t.right(180)
t.left(60)
t.forward(size)
t.right(120)
t.forward(k * size)
t.left(60)
t.right(180)
t.forward(n * size)
t.right(180)
</code></pre>
<p>You can see how the pattern looks <a href="https://youtu.be/GLBPceRyCE8" rel="nofollow">here</a></p>
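<p>A quick turtle-free sanity check on the counts the question lists (16, 25, 36, 49, 64): a tiling with n rows holds n*n small triangles, because row k (counting from the apex) contributes 2k-1 of them. A minimal sketch:</p>

```python
def triangles_in_rows(n):
    # Row k (from the apex) contains 2*k - 1 small triangles.
    return sum(2 * k - 1 for k in range(1, n + 1))

print([triangles_in_rows(n) for n in range(4, 9)])  # [16, 25, 36, 49, 64]
```

<p>So drawing 4 to 8 rows gives exactly the target triangle counts.</p>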
| 1 | 2016-08-09T16:36:45Z | [
"python",
"turtle-graphics",
"tiling"
] |
using Python list to set conditional pandas statement | 38,854,437 | <p>I have a calculation that I need to aggregate, but it will not work with GROUPBY in PANDAs. So, I'm stuck with iterating over the groups manually. The groups are defined by 2 fields of 'object' type values, which are essentially the categories.</p>
<p>I think an elegant solution may be to create 2 lists from the unique values in the 2 independent columns with categorical values.
Then create a 'for' loop, and using string values or something, iterate through my PANDAs conditional statement to create a DataFrame; which then eventually does my aggregate calc. This occurs over and over, with only the dataframe with aggregate calculation being kept in memory, with some append of a counter value like '1' to the end of 'df_'. In order to not overwrite each time through the loop. Here is my psuedo code.</p>
<pre><code>cats1=['blue','yellow','pink']
cats2=['dog','horse','cow','sheep']
lengths=list(itertools.product(cats1,cats2))
for x,y,z in zip(cats1,cats2,lengths):
df = main_df[ (main_df['col2']==x) & (main_df['col3']==y) ]
df['aggcalc'] = df['col1'].agg.mean()
locals()['df_{0}'.format(z)] = df
</code></pre>
<p>The last line will hopefully create the persistent dataframe based on the number of combinations of 'cats1' and 'cats2', i.e., "df_1", "df_2", etc. Then the "df" in the first two lines just gets overwritten each time in the 'for' loop. Is this correct thinking?</p>
<p>EDIT..............
Here is a simpler way to look at it.
I want to loop through all possible combinations of two independent, varying-length lists. Additionally, in each loop I want to have a counter, 'z'. This is the current way to write this, and its subsequent output:</p>
<pre><code> for x,y in list(itertools.product(cats1,cats2)):
print x,y
blue dog
blue horse
blue cow
blue sheep
yellow dog
yellow horse
yellow cow
yellow sheep
pink dog
pink horse
pink cow
pink sheep
</code></pre>
<p>How do I add to this output a 'z' variable which will make the output look like</p>
<pre><code> blue dog 0
blue horse 1
blue cow 2
blue sheep 3
yellow dog 4
</code></pre>
<p>...etc</p>
| -1 | 2016-08-09T15:09:18Z | 38,858,417 | <p>The simple answer to your edit is to just use <code>enumerate</code>:</p>
<pre><code>for z, (x, y) in enumerate(itertools.product(cats1, cats2)):
print x, y, z
blue dog 0
blue horse 1
blue cow 2
blue sheep 3
yellow dog 4
yellow horse 5
yellow cow 6
yellow sheep 7
pink dog 8
pink horse 9
pink cow 10
pink sheep 11
</code></pre>
<p>I strongly suspect that you're missing a simpler solution with <code>groupby</code> though, and so I'd recommend posting a new question with dummy data and details of what aggregation you're trying to perform.</p>
| 0 | 2016-08-09T18:56:09Z | [
"python",
"python-2.7",
"pandas"
] |
BitVector operations impossible | 38,854,440 | <p>I want to perform an xor operation on two BitVectors. While trying to turn one of the strings into a BitVector so I can then perform the xor operation, I get the following error:</p>
<pre><code>ValueError: invalid literal for int() with base 10: '\x91'
</code></pre>
<p>How can I bypass this problem? I just want to xor two expressions, but one of them is a string, and it needs to be turned into a bitvector first, right?</p>
<pre><code> to_be_xored = BitVector.BitVector(bitstring= variable)
</code></pre>
<p>where variable is the string, and to_be_xored is the desired Bitvector.</p>
| 0 | 2016-08-09T15:09:24Z | 38,854,661 | <p><code>bitstring</code> is for sequences of <code>'0'</code>s and <code>'1'</code>s. To use text, use <code>textstring</code> instead.</p>
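<p>If the end goal is simply to XOR two strings, a plain-Python alternative that avoids BitVector entirely (a sketch for Python 3, where strings are first encoded to bytes):</p>

```python
def xor_strings(a, b):
    # XOR corresponding bytes; zip() truncates to the shorter input.
    return bytes(x ^ y for x, y in zip(a.encode(), b.encode()))

result = xor_strings("hello", "world")
print(result.hex())  # 1f0a1e000b
```
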
| 2 | 2016-08-09T15:20:30Z | [
"python",
"python-3.x",
"bitvector"
] |
Conda/Python: Import Error - image not found in jupyter notebook only | 38,854,513 | <p>I'm getting an import failure of <code>scipy.io</code> in a <code>jupyter</code> notebook. The confusing part is that I'm not getting the same, or any, error in an iPython terminal but I am in a standard Python terminal. This leads me to think that somehow, my <code>jupyter</code> session is using a different linking path than my other sessions but I haven't been able to figure out how to approach/debug/fix that.</p>
<p>My questions:</p>
<ol>
<li>Has anyone else run into this or something similar?</li>
<li>Shouldn't jupyter be using the same library path in the terminal and notebook sessions?</li>
<li>I've included my path settings from <code>conda info</code> below. Does anything jump out at anyone regarding how/why this is happening?</li>
</ol>
<p><strong>In an IPython terminal</strong></p>
<pre><code>$ ipython
Python 3.5.2 |Anaconda custom (x86_64)| (default, Jul 2 2016, 17:52:12)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import sys
In [2]: print(sys.executable)
/Users/jhamman/anaconda/bin/python
In [3]: import scipy.io
In [4]:
</code></pre>
<p><strong>In the Standard Python Interpreter</strong></p>
<pre><code>$ python
Python 3.5.2 |Anaconda custom (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.executable)
/Users/jhamman/anaconda/bin/python
>>> import scipy.io
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/__init__.py", line 97, in <module>
from .matlab import loadmat, savemat, whosmat, byteordercodes
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/__init__.py", line 13, in <module>
from .mio import loadmat, savemat, whosmat
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 12, in <module>
from .miobase import get_matfile_version, docfiller
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/miobase.py", line 22, in <module>
from scipy.misc import doccer
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/misc/__init__.py", line 51, in <module>
from scipy.special import comb, factorial, factorial2, factorialk
File "/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/__init__.py", line 636, in <module>
from ._ufuncs import *
ImportError: dlopen(/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/_ufuncs.so, 2): Library not loaded: /usr/local/lib/libgcc_s.1.dylib
Referenced from: /Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/_ufuncs.so
Reason: image not found
</code></pre>
<p><strong>In Jupyter Notebook</strong></p>
<pre><code>import sys
print(sys.executable)
/Users/jhamman/anaconda/bin/python
import scipy.io
ImportError Traceback (most recent call last)
<ipython-input-8-05f698096e44> in <module>()
----> 1 import scipy.io
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/__init__.py in <module>()
95
96 # matfile read and write
---> 97 from .matlab import loadmat, savemat, whosmat, byteordercodes
98
99 # netCDF file support
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/__init__.py in <module>()
11
12 # Matlab file read and write utilities
---> 13 from .mio import loadmat, savemat, whosmat
14 from . import byteordercodes
15
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/mio.py in <module>()
10 from scipy._lib.six import string_types
11
---> 12 from .miobase import get_matfile_version, docfiller
13 from .mio4 import MatFile4Reader, MatFile4Writer
14 from .mio5 import MatFile5Reader, MatFile5Writer
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/io/matlab/miobase.py in <module>()
20 byteord = ord
21
---> 22 from scipy.misc import doccer
23
24 from . import byteordercodes as boc
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/misc/__init__.py in <module>()
49 from .common import *
50 from numpy import who, source, info as _info
---> 51 from scipy.special import comb, factorial, factorial2, factorialk
52
53 import sys
/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/__init__.py in <module>()
634 from __future__ import division, print_function, absolute_import
635
--> 636 from ._ufuncs import *
637
638 from .basic import *
ImportError: dlopen(/Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/_ufuncs.so, 2): Library not loaded: /usr/local/lib/libgcc_s.1.dylib
Referenced from: /Users/jhamman/anaconda/lib/python3.5/site-packages/scipy/special/_ufuncs.so
Reason: image not found
</code></pre>
<p>One last note, here's the dump from <code>conda info</code>:</p>
<pre><code>$ conda info -a
Current conda install:
platform : osx-64
conda version : 4.1.11
conda-env version : 2.5.2
conda-build version : 1.21.3
python version : 3.5.2.final.0
requests version : 2.10.0
root environment : /Users/jhamman/anaconda (writable)
default environment : /Users/jhamman/anaconda
envs directories : /Users/jhamman/anaconda/envs
package cache : /Users/jhamman/anaconda/pkgs
channel URLs : https://repo.continuum.io/pkgs/free/osx-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/osx-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : None
offline mode : False
is foreign system : False
# conda environments:
#
root * /Users/jhamman/anaconda
sys.version: 3.5.2 |Anaconda custom (x86_64)| (defaul...
sys.prefix: /Users/jhamman/anaconda
sys.executable: /Users/jhamman/anaconda/bin/python3
conda location: /Users/jhamman/anaconda/lib/python3.5/site-packages/conda
conda-build: /Users/jhamman/anaconda/bin/conda-build
conda-convert: /Users/jhamman/anaconda/bin/conda-convert
conda-develop: /Users/jhamman/anaconda/bin/conda-develop
conda-env: /Users/jhamman/anaconda/bin/conda-env
conda-index: /Users/jhamman/anaconda/bin/conda-index
conda-inspect: /Users/jhamman/anaconda/bin/conda-inspect
conda-metapackage: /Users/jhamman/anaconda/bin/conda-metapackage
conda-pipbuild: /Users/jhamman/anaconda/bin/conda-pipbuild
conda-render: /Users/jhamman/anaconda/bin/conda-render
conda-server: /Users/jhamman/anaconda/bin/conda-server
conda-sign: /Users/jhamman/anaconda/bin/conda-sign
conda-skeleton: /Users/jhamman/anaconda/bin/conda-skeleton
user site dirs:
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: <not set>
CONDA_ENVS_PATH: <not set>
DYLD_LIBRARY_PATH: <not set>
PATH: /Users/jhamman/anaconda/bin:/opt/local/bin:/opt/local/sbin
PYTHONHOME: <not set>
PYTHONPATH: <not set>
</code></pre>
| 1 | 2016-08-09T15:12:41Z | 38,861,038 | <p>This appears to be a bug with the conda build:</p>
<p><a href="https://github.com/ContinuumIO/anaconda-issues/issues/899" rel="nofollow">https://github.com/ContinuumIO/anaconda-issues/issues/899</a></p>
<p>One commenter (@stuarteberg) on the issue stated:</p>
<blockquote>
<p>The new latest scipy version (0.18.0) also has the same problem. In case it's somehow useful, the conda-forge package for scipy is not broken in this way.</p>
</blockquote>
<p>@andykitchen found that downgrading scipy to 0.17.0 fixed the problem:</p>
<blockquote>
<p>Yup, can confirm I had this problem as well; the fix for me was also to downgrade to 0.17.0</p>
</blockquote>
<pre><code>conda install --force scipy=0.17.0
</code></pre>
| 1 | 2016-08-09T21:53:55Z | [
"python",
"scipy",
"anaconda",
"jupyter-notebook",
"conda"
] |
how to convert a (possibly negative) Pandas TimeDelta in minutes (float)? | 38,854,582 | <p>I have a dataframe like this</p>
<pre><code>df[['timestamp_utc','minute_ts','delta']].head()
Out[47]:
timestamp_utc minute_ts delta
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000
</code></pre>
<p>Where <code>df['delta']=df.minute_ts-df.timestamp_utc</code></p>
<pre><code>timestamp_utc datetime64[ns]
minute_ts datetime64[ns]
delta timedelta64[ns]
</code></pre>
<p>Problem is, I would like to get the <strong>number of (possibly negative) minutes</strong> between <code>timestamp_utc</code> and <code>minutes_ts</code>, disregarding the seconds component. </p>
<p>So for the first row I would like to get <code>-120</code>. Indeed,<code>2015-05-21 12:06:00</code> is 120 minutes before <code>2015-05-21 14:06:33.414</code>.</p>
<p>What is the most pandaesque way to do it?</p>
<p>Many thanks!</p>
| 2 | 2016-08-09T15:16:06Z | 38,854,647 | <p>You can use:</p>
<pre><code>df['a'] = df['delta'] / np.timedelta64(1, 'm')
print (df)
timestamp_utc minute_ts delta \
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000
a
0 -120.5569
1 -119.5569
2 -118.5569
3 -117.5569
4 -116.5569
</code></pre>
<p>And then convert <code>float</code> to <code>int</code>:</p>
<pre><code>df['a'] = (df['delta'] / np.timedelta64(1, 'm')).astype(int)
print (df)
timestamp_utc minute_ts delta a
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000 -120
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000 -119
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000 -118
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000 -117
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000 -116
</code></pre>
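<p>For what it's worth, the same truncation-toward-zero result can be had without going through <code>numpy</code> directly, via the <code>.dt.total_seconds()</code> accessor on the timedelta column. A minimal self-contained sketch (two rows of the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp_utc": pd.to_datetime(["2015-05-21 14:06:33.414"] * 2),
    "minute_ts": pd.to_datetime(["2015-05-21 12:06:00", "2015-05-21 12:07:00"]),
})
df["delta"] = df.minute_ts - df.timestamp_utc
# float -> int casting truncates toward zero, so -120.55... becomes -120
df["a"] = (df["delta"].dt.total_seconds() / 60).astype(int)
print(df["a"].tolist())  # [-120, -119]
```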
| 1 | 2016-08-09T15:19:53Z | [
"python",
"datetime",
"pandas"
] |
how to convert a (possibly negative) Pandas TimeDelta in minutes (float)? | 38,854,582 | <p>I have a dataframe like this</p>
<pre><code>df[['timestamp_utc','minute_ts','delta']].head()
Out[47]:
timestamp_utc minute_ts delta
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000
</code></pre>
<p>Where <code>df['delta']=df.minute_ts-df.timestamp_utc</code></p>
<pre><code>timestamp_utc datetime64[ns]
minute_ts datetime64[ns]
delta timedelta64[ns]
</code></pre>
<p>Problem is, I would like to get the <strong>number of (possibly negative) minutes</strong> between <code>timestamp_utc</code> and <code>minutes_ts</code>, disregarding the seconds component. </p>
<p>So for the first row I would like to get <code>-120</code>. Indeed,<code>2015-05-21 12:06:00</code> is 120 minutes before <code>2015-05-21 14:06:33.414</code>.</p>
<p>What is the most pandaesque way to do it?</p>
<p>Many thanks!</p>
| 2 | 2016-08-09T15:16:06Z | 38,855,580 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/timedeltas.html" rel="nofollow">Timedelta object</a> in Pandas, and then use floor division in a list comprehension to calculate the minutes. Note that the seconds property of <code>Timedelta</code> returns the number of seconds (>= 0 and less than 1 day), so that you must explicitly convert days to the corresponding minutes.</p>
<pre><code>df = pd.DataFrame({'minute_ts': [pd.Timestamp('2015-05-21 12:06:00'),
pd.Timestamp('2015-05-21 12:07:00'),
pd.Timestamp('2015-05-21 12:08:00'),
pd.Timestamp('2015-05-21 12:09:00'),
pd.Timestamp('2015-05-21 12:10:00')],
'timestamp_utc': [pd.Timestamp('2015-05-21 14:06:33.414')] * 5})
df['minutes_neg'] = [td.days * 24 * 60 + td.seconds//60
for td in [pd.Timedelta(delta)
for delta in df.minute_ts - df.timestamp_utc]]
df['minutes_pos'] = [td.days * 24 * 60 + td.seconds//60
for td in [pd.Timedelta(delta)
for delta in df.timestamp_utc - df.minute_ts]]
>>> df
minute_ts timestamp_utc minutes_neg minutes_pos
0 2015-05-21 12:06:00 2015-05-21 14:06:33.414 -121 120
1 2015-05-21 12:07:00 2015-05-21 14:06:33.414 -120 119
2 2015-05-21 12:08:00 2015-05-21 14:06:33.414 -119 118
3 2015-05-21 12:09:00 2015-05-21 14:06:33.414 -118 117
4 2015-05-21 12:10:00 2015-05-21 14:06:33.414 -117 116
</code></pre>
<p>Note that the minutes are off by one because of floor division. For example, 90 // 60 = 1, but -90 // 60 = -2. You could add one to the result when it is negative, but then the edge case of an exact whole minute (measured at millisecond precision) would be off by one minute.</p>
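<p>If the off-by-one matters, a hedged alternative is to truncate toward zero with <code>total_seconds()</code> instead of floor-dividing the components; a small sketch using the question's first pair of timestamps:</p>

```python
import pandas as pd

# the first delta from the question: -1 days +21:59:26.586000
td = pd.Timestamp("2015-05-21 12:06:00") - pd.Timestamp("2015-05-21 14:06:33.414")
floor_minutes = td.days * 24 * 60 + td.seconds // 60   # floors toward -inf
trunc_minutes = int(td.total_seconds() / 60)           # truncates toward zero
print(floor_minutes, trunc_minutes)  # -121 -120
```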
| 1 | 2016-08-09T16:03:19Z | [
"python",
"datetime",
"pandas"
] |
How to select a drop down option based on text of a variable in python and selenium | 38,854,606 | <p>All, I'm just learning Python as well as Selenium, and I'm stuck on how to select from a dropdown menu based on a variable.</p>
<p>I am able to select it based on the text within the dropdown menu, like below:</p>
<pre><code>CreateJob = driver.find_element_by_partial_link_text('Create Activity')
time.sleep(5)
CreateJob.click()
time.sleep(5)
select = Select(driver.find_element_by_name('worktype'))
print ("select.options")
time.sleep(3)
select.select_by_visible_text("THE ITEM I WANT") # orig working
</code></pre>
<p>However, what I need is the string "THE ITEM I WANT" to be defined by a variable so I only have to change it once in the code. </p>
<p>I have tried the following but no luck... Any idea's?</p>
<pre><code>createjob1 = "THE ITEM I WANT"
#select.select_by_visible_text(.,'%s')]" % createjob1) # not working
#select.select_by_visible_text('%s') % "createjob1" # not working
#select.select_by_visible_text('%s') % "createjob1"
</code></pre>
| 0 | 2016-08-09T15:17:29Z | 38,854,781 | <p>Directly pass variable <code>createjob1</code> into <code>select_by_visible_text()</code> as below :</p>
<pre><code>createjob1 = "THE ITEM I WANT"
select.select_by_visible_text(createjob1)
</code></pre>
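<p>As an aside, if the option text really does need to be built dynamically, ordinary string formatting works too; the <code>%</code> operator applies to the string itself, not to the method call, which is where the commented-out attempts in the question went wrong. A small sketch:</p>

```python
item = "THE ITEM"
option_text = "%s I WANT" % item          # old-style formatting
option_text2 = "{} I WANT".format(item)   # str.format equivalent
# either string can then be passed to select.select_by_visible_text(...)
print(option_text)  # THE ITEM I WANT
```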
| 0 | 2016-08-09T15:26:18Z | [
"python",
"selenium"
] |
'try' for 'duration' in python 3 | 38,854,726 | <p>I am figuring out how to have my Python try to complete an action (one that may never be able to happen) until something equivalent to a timer runs out, in which case it runs a separate function.</p>
<p>The exact scenario is bypassing the "Warning" screen that outlook provides when something of an automation system tries accessing it. When the initial command is sent to retrieve data from or otherwise manipulate outlook the entire python script just stops and waits ( as best as I can tell ) waiting for the user to click "Allow" on the outlook program before it will continue. What I'd like to happen is that upon it trying to do the manipulation of outlook there be a timer that starts. If the timer reaches X value, skip that command that was sent to outlook and do a different set of actions.</p>
<p>I feel that this may lead into something called "Threading" in order to have simultaneous processes running but I also feel that I may be over complicating the concept. If I were to do a mockup of what I think may be written to accomplish this, this is what I'd come up with...</p>
<pre><code>time1 = time.clock()
try:
mail = inbox.Items[0].Sender
except if time1 > time1+10:
outlookWarningFunc()
</code></pre>
<p>I am 99.9% sure that "except" isn't ever used in such a manner hence why it isn't a functional piece of code but it was the closest thing I could think of to at least convey what I am trying to get to. </p>
<p>I appreciate your time. Thank you. </p>
| 3 | 2016-08-09T15:23:27Z | 38,854,986 | <p>One of the solutions is this:</p>
<pre><code>import threading
mail = None
def func():
global mail
mail = inbox.Items[0].Sender
thread = threading.Thread(target=func)
thread.start()
thread.join(timeout=10)
if thread.is_alive():
# operation not completed
outlookWarningFunc()
# you must do cleanup here and stop the thread
</code></pre>
<p>You start a new thread which performs the operation and wait 10 seconds for it until it completes or the time is out. Then, you check if job is done. If the thread is alive, it means that the task was not completed yet.</p>
<p>The pitfall of this solution is that the thread is still running in the background, so you must do cleanup actions which allows the thread to complete or raise an exception.</p>
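<p>For what it's worth, the same pattern can also be expressed with <code>concurrent.futures</code>, which puts the timeout on <code>result()</code>. The worker thread still keeps running after a timeout, so the cleanup caveat above applies here too. A minimal sketch, with a stand-in for the blocking Outlook call:</p>

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def fetch_sender():
    # stand-in for the blocking call: inbox.Items[0].Sender
    return "sender@example.com"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_sender)
    try:
        mail = future.result(timeout=10)
    except FutureTimeout:
        mail = None  # this is where outlookWarningFunc() would be called
print(mail)
```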
| 4 | 2016-08-09T15:35:56Z | [
"python",
"time",
"outlook"
] |
Zip File with Python Shutil like File Explorer | 38,854,745 | <h2>Background</h2>
<p>I am trying to zip a directory with python shutil like this:</p>
<pre><code>shutil.make_archive("~/Desktop/zipfile", 'zip', "~/Documents/foldertozip")
</code></pre>
<p>but the result only zips the files <em>inside</em> "foldertozip". So for instance,</p>
<pre><code>foldertozip
-- file1
-- file2
zipfile.zip
-- file1
-- file2
</code></pre>
<p>On the other hand, if I zip it from windows file explorer or mac finder, I get the following:</p>
<pre><code>foldertozip
-- file1
-- file2
zipfile.zip
-- foldertozip
-- file1
-- file2
</code></pre>
<h2>Question</h2>
<p>How can I use shutil to do the same thing that I could do from a file explorer and include the base directory? I know I could copy "foldertozip" to a folder with the same name and then zip that folder, but I would prefer a cleaner solution if at all possible.</p>
| 1 | 2016-08-09T15:24:21Z | 38,854,947 | <p><code>make_archive</code> will do what you want if you pass both <code>root_dir</code> and <code>base_dir</code>. See <a href="https://docs.python.org/2/library/shutil.html#shutil.make_archive" rel="nofollow">the docs</a>.</p>
<pre><code>import shutil
shutil.make_archive('~/Desktop/zipfile', 'zip', '~/Documents/', 'foldertozip')
</code></pre>
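<p>A self-contained way to check the <code>root_dir</code>/<code>base_dir</code> behaviour, using a throwaway directory (note that <code>shutil</code> does not expand <code>~</code>, so with real home-relative paths you may want <code>os.path.expanduser</code>):</p>

```python
import os
import shutil
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "foldertozip")
os.makedirs(src)
for name in ("file1", "file2"):
    open(os.path.join(src, name), "w").close()

# root_dir is the parent, base_dir is the folder name to keep in the archive
archive = shutil.make_archive(os.path.join(tmp, "zipfile"), "zip",
                              root_dir=tmp, base_dir="foldertozip")
# ignore directory entries, which some Python versions add to the zip
names = sorted(n for n in zipfile.ZipFile(archive).namelist()
               if not n.endswith("/"))
shutil.rmtree(tmp)
print(names)  # ['foldertozip/file1', 'foldertozip/file2']
```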
| 2 | 2016-08-09T15:33:55Z | [
"python",
"directory",
"zip",
"folder",
"shutil"
] |
Zip File with Python Shutil like File Explorer | 38,854,745 | <h2>Background</h2>
<p>I am trying to zip a directory with python shutil like this:</p>
<pre><code>shutil.make_archive("~/Desktop/zipfile", 'zip', "~/Documents/foldertozip")
</code></pre>
<p>but the result only zips the files <em>inside</em> "foldertozip". So for instance,</p>
<pre><code>foldertozip
-- file1
-- file2
zipfile.zip
-- file1
-- file2
</code></pre>
<p>On the other hand, if I zip it from windows file explorer or mac finder, I get the following:</p>
<pre><code>foldertozip
-- file1
-- file2
zipfile.zip
-- foldertozip
-- file1
-- file2
</code></pre>
<h2>Question</h2>
<p>How can I use shutil to do the same thing that I could do from a file explorer and include the base directory? I know I could copy "foldertozip" to a folder with the same name and then zip that folder, but I would prefer a cleaner solution if at all possible.</p>
| 1 | 2016-08-09T15:24:21Z | 38,855,180 | <p>From the documentation of make_archive: </p>
<pre><code>shutil.make_archive(base_name, format[, root_dir[, base_dir[, verbose[, dry_run[, owner[, group[, logger]]]]]]])
</code></pre>
<p>Create an archive file (eg. zip or tar) and returns its name.</p>
<p><strong>base_name</strong> is the name of the file to create, including the path, minus any format-specific extension. format is the archive format: one of 'zip', 'tar', 'bztar' or 'gztar'.</p>
<p><strong>root_dir</strong> is a directory that will be the root directory of the archive; ie. we typically chdir into root_dir before creating the archive.</p>
<p><strong>base_dir</strong> is the directory where we start archiving from; ie. base_dir will be the common prefix of all files and directories in the archive.</p>
<p>If I understand the question correctly, you need to have base_dir equal to "foldertozip" and root_dir equal to the parent directory of "foldertozip". </p>
<p>Suppose foldertozip is under "Documents"</p>
<p>So something like this should work: </p>
<pre><code>shutil.make_archive("~/Documents/zipfile", "zip", "~/Documents/", "foldertozip")
</code></pre>
<p>Let us know if this works as expected for you!</p>
| 1 | 2016-08-09T15:44:56Z | [
"python",
"directory",
"zip",
"folder",
"shutil"
] |
Python 3: how to test exceptions within with? | 38,854,796 | <p>I have problems testing exceptions that would be raised within a with-block in Python 3.4. I just can't get the tests to run for this piece of code:</p>
<pre><code>import logging
...
class Foo(object):
...
def foo(self, src, dst):
try:
with pysftp.Connection(self._host, username=self._username, password=self._password) as connection:
connection.put(src, dst)
connection.close()
except (
ConnectionException,
CredentialException,
SSHException,
AuthenticationException,
HostKeysException,
PasswordRequiredException
) as e:
self._log.error(e)
</code></pre>
<p>And this is how I want to test it:</p>
<pre><code>import logging
...
class TestFoo(TestCase):
@parameterized.expand([
('ConnectionException', ConnectionException),
('CredentialException', CredentialException),
('SSHException', SSHException),
('AuthenticationException', AuthenticationException),
('HostKeysException', HostKeysException),
('PasswordRequiredException', PasswordRequiredException),
])
@patch('pysftp.Connection', spec_set=pysftp.Connection)
def test_foo_exceptions(self, _, ex, sftp_mock):
"""
NOTE: take a look at:
http://stackoverflow.com/questions/37014904/mocking-python-class-in-unit-test-and-verifying-an-instance
to get an understanding of __enter__ and __exit__
"""
sftp_mock.return_value = Mock(
spec=pysftp.Connection,
side_effect=ex,
__enter__ = lambda self: self,
__exit__ = lambda *args: None
)
foo = Foo('host', 'user', 'pass', Mock(spec_set=logging.Logger))
foo.foo('src', 'dst')
self.assertEqual(foo._log.error.call_count, 1)
</code></pre>
<p>But it fails - output:</p>
<pre><code>Failure
...
AssertionError: 0 != 1
</code></pre>
| 1 | 2016-08-09T15:27:00Z | 38,856,121 | <p>Your <code>sftp_mock.return_value</code> object is never called, so the <code>side_effect</code> is never triggered and no exception is raised. It would only be called if the <em>return value</em> of <code>pysftp.Connection(...)</code> was itself called again.</p>
<p>Set the side effect <em>directly on the mock</em>:</p>
<pre><code>sftp_mock.side_effect = ex
</code></pre>
<p>Note that now the <code>pysftp.Connection(...)</code> expression raises the exception and it no longer matters that the return value of that expression would have been used as a context manager in a <code>with</code> statement.</p>
<p>Note that your exceptions will complain about not getting any arguments; pass in <em>instances</em> of your exceptions, not the type:</p>
<pre><code>@parameterized.expand([
('ConnectionException', ConnectionException('host', 1234)),
# ... etc.
])
</code></pre>
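<p>The core of the fix can be seen in isolation: a <code>side_effect</code> set on the mock itself fires when the mock is <em>called</em>, which is exactly what the <code>with pysftp.Connection(...)</code> line does. A tiny standard-library sketch:</p>

```python
from unittest.mock import Mock

connection_factory = Mock(side_effect=ValueError("boom"))
try:
    connection_factory("host")  # calling the mock raises immediately
    caught = None
except ValueError as exc:
    caught = str(exc)
print(caught)  # boom
```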
| 1 | 2016-08-09T16:32:45Z | [
"python",
"unit-testing",
"mocking",
"nose",
"python-mock"
] |
how can I make my django app automatically scrape in the background on heroku | 38,854,897 | <p>I am trying to figure out how to have my app use a function that scrapes sites in the background, because it takes a long time and causes an error if run in the foreground. So I followed the tutorial on Heroku's site that has a function that counts words and is run in the background. It works. So I was ready to put my function in there via import at first. So I imported it and created a function that uses it. I got this traceback:</p>
<pre><code> Traceback (most recent call last):
File "my_raddqueue.py", line 2, in <module>
from src.blog.my_task import conn, is_page_ok
File "/Users/ray/Desktop/myheroku/practice/src/blog/my_task.py", line 5, in <module>
from .my_scraps import p_panties
File "/Users/ray/Desktop/myheroku/practice/src/blog/my_scraps.py", line 3, in <module>
from .models import Post
File "/Users/ray/Desktop/myheroku/practice/src/blog/models.py", line 3, in <module>
from taggit.managers import TaggableManager
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/taggit/managers.py", line 7, in <module>
from django.contrib.contenttypes.models import ContentType
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 159, in <module>
class ContentType(models.Model):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 160, in ContentType
app_label = models.CharField(max_length=100)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 1072, in __init__
super(CharField, self).__init__(*args, **kwargs)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 166, in __init__
self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/conf/__init__.py", line 55, in __getattr__
self._setup(name)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/django/conf/__init__.py", line 41, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
<p>I even tried to create the function in my_task.py, ran it, and got the same traceback.</p>
<p>this is my file structure</p>
<p><a href="http://i.stack.imgur.com/dWu0y.png" rel="nofollow"><img src="http://i.stack.imgur.com/dWu0y.png" alt="enter image description here"></a></p>
<p>below are the files and the code I think is relevant to the reproduction of the problem</p>
<p>the function I want to use located in my_scraps.py</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from .models import Post
import random
import re
from django.contrib.auth.models import User
import os
def p_panties():
def swappo():
user_one = ' "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0" '
user_two = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)" '
user_thr = ' "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko" '
user_for = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0" '
agent_list = [user_one, user_two, user_thr, user_for]
a = random.choice(agent_list)
return a
headers = {
"user-agent": swappo(),
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
"accept-encoding": "gzip,deflate,sdch",
"accept-language": "en-US,en;q=0.8",
}
pan_url = 'http://www.example.org'
shtml = requests.get(pan_url, headers=headers)
soup = BeautifulSoup(shtml.text, 'html5lib')
video_row = soup.find_all('div', {'class': 'post-start'})
name = 'pan videos'
if os.getenv('_system_name') == 'OSX':
author = User.objects.get(id=2)
else:
author = User.objects.get(id=3)
def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
soupdata = BeautifulSoup(youtube_page.text, 'html5lib')
video_row = soupdata.find_all('p')[0]
entries = [{'text': div,
} for div in video_row]
tubby = str(entries[0]['text'])
urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', tubby)
cleaned_url = urls[0].replace('?&amp;autoplay=1', '')
return cleaned_url
def yt_id(code):
the_id = code
youtube_id = the_id.replace('https://www.youtube.com/embed/', '')
return youtube_id
def strip_hd(hd, move):
str = hd
new_hd = str.replace(move, '')
return new_hd
entries = [{'href': div.a.get('href'),
                'text': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
                'embed': youtube_link(div.a.get('href')),
                'comments': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(),
'src': 'https://i.ytimg.com/vi/' + yt_id(youtube_link(div.a.get('href'))) + '/maxresdefault.jpg',
'name': name,
'url': div.a.get('href'),
'author': author,
'video': True
} for div in video_row][:13]
for entry in entries:
post = Post()
post.title = entry['text']
title = post.title
if not Post.objects.filter(title=title):
post.title = entry['text']
post.name = entry['name']
post.url = entry['url']
post.body = entry['comments']
post.image_url = entry['src']
post.video_path = entry['embed']
post.author = entry['author']
post.video = entry['video']
post.status = 'draft'
post.save()
post.tags.add("video", "Musica")
return entries
</code></pre>
<p>my_task.py</p>
<pre><code> import os
import redis
from rq import Worker, Queue, Connection
from .my_scraps import p_panties
import requests
listen = ['high', 'default', 'low']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)
if __name__ == '__main__':
with Connection(conn):
worker = Worker(map(Queue, listen))
worker.work()
def is_page_ok(url):
response = requests.get(url)
if response.status_code == 200:
return "{0} is up".format(url)
else:
return "{0} is not OK. Status {1}".format(url, response.status_code)
def do_this():
a = p_panties()
return a
</code></pre>
<p>my_raddqueue.py</p>
<pre><code>from rq import Queue
from src.blog.my_task import conn, do_this
q = Queue('important', connection=conn)
result = q.enqueue(do_this)
print("noted")
</code></pre>
<p>this line</p>
<pre><code>from .my_scraps import p_panties
</code></pre>
<p>will cause that traceback as well, even if I'm not using it. After I gave up on that function and tried to see if the other ones worked, they didn't, and I couldn't figure out why until I started deleting or commenting things out one by one; when I commented out or deleted this line, it worked. What is my issue? All I want to do is have my app scrape at predesignated times of the day in my Heroku app. How can I achieve this? Is my approach here all wrong? I've seen something called APScheduler; should I be using that instead? Any input on improving my code would be appreciated; I haven't been coding that long. A lot of this came from my own head, so if it looks unprofessional that's why. Thank you in advance.</p>
| 0 | 2016-08-09T15:31:46Z | 38,858,992 | <p>I'm not sure about Heroku, but normally you can achieve such automated tasks in Django through Celery.</p>
<p>You have awesome documentation here. <a href="http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html" rel="nofollow">http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html</a></p>
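<p>Celery (or RQ, which the question already uses) is the robust way to do this on Heroku; just to illustrate the scheduling idea itself, here is a minimal standard-library sketch with <code>sched</code> (in Celery the equivalent is a beat schedule entry). The delay and the <code>job</code> callable are placeholders, not the app's real scraper:</p>

```python
import sched
import time

def run_later(scheduler, delay_seconds, job):
    # In a real app you would compute the seconds until the next
    # predesignated time of day; a short delay keeps this runnable.
    scheduler.enter(delay_seconds, 1, job)

results = []
s = sched.scheduler(time.time, time.sleep)
run_later(s, 0.1, lambda: results.append("scraped"))
s.run()  # blocks until the queued job has executed
print(results)  # ['scraped']
```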
| 1 | 2016-08-09T19:31:58Z | [
"python",
"django",
"heroku",
"redis"
] |
Pig//Spark jobs not seeing Python modules | 38,854,914 | <p>I have a <strong>recurring problem</strong> with my hadoop cluster in that occasionally a <strong>functioning code stops seeing python modules</strong> that are in the proper location. I'm looking for tips from someone who might have faced the same problem.</p>
<p>When I first started programming and a code stopped working I asked a question here on SO and someone told me to just go to bed and in the morning it should work, or some other "you're a dummy, you must have changed something" kind of comment. </p>
<p>I run the code several times, it works, I go to sleep, in the morning I try to run it again and it fails. Sometimes I kill jobs with CTRL+C, and sometimes I use CTRL+Z. But this just takes up resources and doesn't cause any other issue besides that - the code still runs. <strong>I have yet to have see this problem right after the code works. This usually happens the morning after, when I come into work after the code worked when I left 10 hours ago. Restarting the cluster typically solves the issue</strong></p>
<p>I'm currently checking to see if the cluster restarts itself for some reason, or if some part of it is failing, but so far the ambari screens show everything green. I'm not sure if there is some automated maintenance or something that is known to screw things up.</p>
<p>Still working my way through the elephant book, sorry if this topic is clearly addressed on page XXXX, I just haven't made it to that page yet.</p>
<p>I looked through all the error logs, but the only meaningful thing I see is in stderr:</p>
<pre><code> File "/data5/hadoop/yarn/local/usercache/melvyn/appcache/application_1470668235545_0029/container_e80_1470668235545_0029_01_000002/format_text.py", line 3, in <module>
from formatting_functions import *
ImportError: No module named formatting_functions
</code></pre>
| 2 | 2016-08-09T15:32:31Z | 39,107,892 | <p>So we solved the problem. The issue is particular to our set up. We have all of our datanodes nfs mounted. Occasionally a node fails, and someone has to bring it back up and remount it. </p>
<p>Our script specifies the path to libraries like:'</p>
<pre><code> pig -Dmapred.child.env="PYTHONPATH=$path_to_mnt$hdfs_library_path" ...
</code></pre>
<p>so pig couldn't find the libraries, because $path_to_mnt was invalid for one of the nodes.</p>
| 0 | 2016-08-23T18:00:23Z | [
"python",
"hadoop",
"apache-pig",
"pyspark"
] |
matplotlib - autosize of text according to shape size | 38,854,940 | <p>I'm adding a text inside a shape by:
<code>ax.text(x,y,'text', ha='center', va='center',bbox=dict(boxstyle='circle', fc="w", ec="k"),fontsize=10)</code> (ax is AxesSubplot)</p>
<p>The problem is that I couldn't make the circle size constant while changing the string length. I want the text size to adjust to the circle size and not the other way around.
The circle is even completely gone if the string is an empty one.</p>
<p>The only workaround I have found is to dynamically set the fontsize param according to the len of the string, but that's too ugly, and still the circle size is not completely constant.</p>
<p><strong>EDIT (adding a MVCE):</strong></p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.text(0.5,0.5,'long_text', ha='center', va='center',bbox=dict(boxstyle='circle', fc="w", ec="k"),fontsize=10)
ax.text(0.3,0.7,'short', ha='center', va='center',bbox=dict(boxstyle='circle', fc="w", ec="k"),fontsize=10)
plt.show()
</code></pre>
<p>Trying to make both circles the same size although the string len is different. Currently looks like this:</p>
<p><a href="http://i.stack.imgur.com/FMYaW.png" rel="nofollow"><img src="http://i.stack.imgur.com/FMYaW.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T15:33:34Z | 38,860,355 | <p>I have a very dirty and hard-core solution which requires quite deep knowledge of matplotlib. It is not perfect but might give you some ideas how to start.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Circle
import numpy as np
plt.close('all')
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
t1 = ax.text(0.5,0.5,'long_text', ha='center', va='center',fontsize=10)
t2 = ax.text(0.3,0.7,'short', ha='center', va='center', fontsize=10)
t3 = ax.text(0.1,0.7,'super-long-text-that-is-long', ha='center', va='center', fontsize=10)
fig.show()
def text_with_circle(text_obj, axis, color, border=1.5):
# Get the box containing the text
box1 = text_obj.get_window_extent()
# It turned out that what you get from the box is
# in screen pixels, so we need to transform them
# to "data"-coordinates. This is done with the
# transformer-function below
transformer = axis.transData.inverted().transform
# Now transform the corner coordinates of the box
# to data-coordinates
[x0, y0] = transformer([box1.x0, box1.y0])
[x1, y1] = transformer([box1.x1, box1.y1])
# Find the x and y center coordinate
x_center = (x0+x1)/2.
y_center = (y0+y1)/2.
# Find the radius, add some extra to make a nice border around it
r = np.max([(x1-x0)/2., (y1-y0)/2.])*border
# Plot the a circle at the center of the text, with radius r.
circle = Circle((x_center, y_center), r, color=color)
# Add the circle to the axis.
# Redraw the canvas.
return circle
circle1 = text_with_circle(t1, ax, 'g')
ax.add_artist(circle1)
circle2 = text_with_circle(t2, ax, 'r', 5)
ax.add_artist(circle2)
circle3 = text_with_circle(t3, ax, 'y', 1.1)
ax.add_artist(circle3)
fig.canvas.draw()
</code></pre>
<p>At the moment you have to run this in ipython, because the figure has to be drawn BEFORE you <code>get_window_extent()</code>. Therefore the <code>fig.show()</code> has to be called AFTER the text is added, but BEFORE the circle can be drawn! Then we can get the coordinates of the text, figure out where the middle is, and add a circle around the text with a certain radius. When this is done we redraw the canvas to update it with the new circle. Of course you can customize the circle a lot more (edge color, face color, line width, etc.); look into <a href="http://matplotlib.org/api/patches_api.html?highlight=circle#matplotlib.patches.Circle" rel="nofollow">the Circle class</a>.</p>
<p>Example of output plot:
<a href="http://i.stack.imgur.com/iZnrv.png" rel="nofollow"><img src="http://i.stack.imgur.com/iZnrv.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T21:01:31Z | [
"python",
"matplotlib"
] |
NoBrokersAvailable: NoBrokersAvailable-Kafka Error | 38,854,957 | <p>I have already started to learn Kafka and am trying basic operations on it. I am stuck on a point about the 'Brokers'.</p>
<p>My Kafka is running, but this is what happens when I want to create a partition:</p>
<pre><code> from kafka import TopicPartition
(ERROR THERE) consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
consumer.assign([TopicPartition('foobar', 2)])
msg = next(consumer)
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/group.py", line 284, in __init__
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 202, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 791, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable</p>
</blockquote>
| 1 | 2016-08-09T15:34:26Z | 38,859,436 | <p>It looks like you want to start consuming messages instead of creating partitions. Nevertheless: can you reach Kafka at port 1234? 9092 is Kafka's default port, so maybe you can try that one. If you found the right port but your application still produces errors, you can use a console producer to test your setup:</p>
<p><code>bin/kafka-console-producer.sh --broker-list localhost:<yourportnumber> --topic foobar</code></p>
<p>The console producer is part of the standard Kafka distribution. Maybe that gets you a little closer to the source of the problem.</p>
| 1 | 2016-08-09T20:01:04Z | [
"python",
"apache-kafka",
"kafka-consumer-api",
"kafka-producer-api",
"python-kafka"
] |
NoBrokersAvailable: NoBrokersAvailable-Kafka Error | 38,854,957 | <p>I have already started to learn Kafka and am trying basic operations on it. I am stuck on a point about the 'Brokers'.</p>
<p>My Kafka is running, but this is what happens when I want to create a partition:</p>
<pre><code> from kafka import TopicPartition
(ERROR THERE) consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
consumer.assign([TopicPartition('foobar', 2)])
msg = next(consumer)
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/group.py", line 284, in __init__
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 202, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 791, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable</p>
</blockquote>
| 1 | 2016-08-09T15:34:26Z | 38,867,470 | <p>You cannot create partitions within a consumer. Partitions are created when you create a topic. For example, using command line tool:</p>
<pre class="lang-sh prettyprint-override"><code>bin/kafka-topics.sh \
--zookeeper localhost:2181 \
--create --topic myNewTopic \
--partitions 10 \
--replication-factor 3
</code></pre>
<p>This creates a new topic "myNewTopic" with 10 partitions (numbered from 0 to 9) and replication factor 3. (see <a href="http://docs.confluent.io/3.0.0/kafka/post-deployment.html#admin-operations" rel="nofollow">http://docs.confluent.io/3.0.0/kafka/post-deployment.html#admin-operations</a> and <a href="https://kafka.apache.org/documentation.html#quickstart_createtopic" rel="nofollow">https://kafka.apache.org/documentation.html#quickstart_createtopic</a>)</p>
<p>Within your consumer, if you call <code>assign()</code>, it means you want to consume the corresponding partition and this partition must exist already.</p>
| 1 | 2016-08-10T08:01:03Z | [
"python",
"apache-kafka",
"kafka-consumer-api",
"kafka-producer-api",
"python-kafka"
] |
Python Console GUI Updating whilst getting keyboard input | 38,855,001 | <p>I was wondering if it is possible to have a command-line interface that in essence blinks the current system time every second but allows the user to keep entering data via raw_input or something?</p>
<p>Thanks,</p>
<p>Ryan</p>
 | 0 | 2016-08-09T15:36:34Z | 38,855,087 | <p>No - not at the same time, anyway. Here's an example for showing the time in the terminal.</p>
<pre><code>import time
while True:
#time.sleep(1) #Uncomment to have it print the time only once per second
x = time.strftime("%H:%M:%S")
print x
</code></pre>
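<p>If the clock really must keep ticking while the user types, the usual workaround is a background thread - a rough sketch (the names here are illustrative, and it still shares the terminal with the prompt rather than being a true GUI):</p>

```python
import sys
import threading
import time

def show_clock(stop):
    """Rewrite the current time on one line every second until stopped."""
    while not stop.is_set():
        sys.stdout.write('\r' + time.strftime('%H:%M:%S'))
        sys.stdout.flush()
        time.sleep(1)

def main():
    stop = threading.Event()
    clock = threading.Thread(target=show_clock, args=(stop,))
    clock.daemon = True
    clock.start()
    data = input('\nEnter data: ')   # use raw_input() on Python 2
    stop.set()
    return data

# Call main() from a terminal to try it interactively.
```

<p>The clock thread keeps redrawing while <code>input()</code> blocks; how cleanly the two interleave depends on the terminal.</p>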
| 2 | 2016-08-09T15:41:45Z | [
"python",
"python-2.7",
"date",
"input"
] |
Conversion from CMYK to RGB with Pillow is different from that of Photoshop | 38,855,022 | <p>I need to convert an image from CMYK to RGB in python. I used Pillow in this way:</p>
<pre><code>img = Image.open('in.jpg')
img = img.convert('RGB')
img.save('out.jpg')
</code></pre>
<p>The code works, but if I convert the same image with Photoshop I have a different result as shown below:-</p>
<p><a href="http://i.stack.imgur.com/S9MTH.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/S9MTH.jpg" alt="a"></a></p>
<p>The only operation done in Photoshop is to change the mode from CMYK to RGB.
Why is there this difference between the two RGB images? Could it be a color profile problem?</p>
| 2 | 2016-08-09T15:38:06Z | 38,855,893 | <p><strong>SOLVED</strong></p>
<p>The problem is that Pillow does not know the input ICC profile, while Photoshop has one set as a default.</p>
<p>Photoshop uses, by default:</p>
<p><strong>CMYK</strong>: U.S. Web Coated (SWOP) v2</p>
<p><strong>RGB</strong>: sRGB IEC61966-2.1</p>
<p>So I've solved it in this way:</p>
<pre><code>img = Image.open('in.jpg')
img = ImageCms.profileToProfile(img, 'USWebCoatedSWOP.icc', 'sRGB Color Space Profile.icm', renderingIntent=0, outputMode='RGB')
img.save('out.jpg', quality=100)
</code></pre>
| 2 | 2016-08-09T16:20:15Z | [
"python",
"rgb",
"photoshop",
"pillow",
"image-conversion"
] |
Inverting the "numpy.ma.compressed" operation | 38,855,058 | <p>I want to create an array from a compressed masked array and a corresponding mask. It's easier to explain this with an example:</p>
<pre><code>>>> x=np.ma.array(np.arange(4).reshape((2,2)), mask = [[True,True],[False,False]])
>>> y=x.compressed()
>>> y
array([ 2, 3])
</code></pre>
<p>Now I want to create an array in the same shape as x where the masked values get a standard value (for example -1) and the rest is filled up with a given array. It should work like this:</p>
<pre><code>>>> z = decompress(y, mask=[[True,True],[False,False]], default=-1)
>>> z
array([[-1, -1],
[ 2, 3]])
</code></pre>
<p>The question is: Is there any method like "decompress", or do I need to code it myself? In Fortran this is done by the methods "pack" and "unpack".
Thanks for any suggestions.</p>
 | 0 | 2016-08-09T15:39:58Z | 38,856,546 | <p>While I've answered a number of <code>ma</code> questions, I'm by no means an expert with it. But I'll explore the issue.</p>
<p>Let's generalize your array a bit:</p>
<pre><code>In [934]: x=np.ma.array(np.arange(6).reshape((2,3)), mask = [[True,True,False],[False,False,True]])
In [935]: x
Out[935]:
masked_array(data =
[[-- -- 2]
[3 4 --]],
mask =
[[ True True False]
[False False True]],
fill_value = 999999)
In [936]: y=x.compressed()
In [937]: y
Out[937]: array([2, 3, 4])
</code></pre>
<p><code>y</code> has no information about <code>x</code> except a subset of values. Note it is 1d</p>
<p><code>x</code> stores its values in 2 arrays (actually these are properties that access underlying <code>._data</code>, <code>._mask</code> attributes):</p>
<pre><code>In [938]: x.data
Out[938]:
array([[0, 1, 2],
[3, 4, 5]])
In [939]: x.mask
Out[939]:
array([[ True, True, False],
[False, False, True]], dtype=bool)
</code></pre>
<p>My guess is that to <code>de-compress</code> we need to make a <code>empty</code> masked array with the correct dtype, shape and mask, and copy the values of <code>y</code> into its <code>data</code>. But what values should be put into the masked elements of <code>data</code>? </p>
<p>Or another way to put the problem - is it possible to copy values from <code>y</code> back onto <code>x</code>?</p>
<p>A possible solution is to copy the new values to <code>x[~x.mask]</code>:</p>
<pre><code>In [957]: z=2*y
In [958]: z
Out[958]: array([4, 6, 8])
In [959]: x[~x.mask]=z
In [960]: x
Out[960]:
masked_array(data =
[[-- -- 4]
[6 8 --]],
mask =
[[ True True False]
[False False True]],
fill_value = 999999)
In [961]: x.data
Out[961]:
array([[0, 1, 4],
[6, 8, 5]])
</code></pre>
<p>Or to make a new array</p>
<pre><code>In [975]: w=np.zeros_like(x)
In [976]: w[~w.mask]=y
In [977]: w
Out[977]:
masked_array(data =
[[-- -- 2]
[3 4 --]],
mask =
[[ True True False]
[False False True]],
fill_value = 999999)
In [978]: w.data
Out[978]:
array([[0, 0, 2],
[3, 4, 0]])
</code></pre>
<p>Another approach is to make a regular array, <code>full</code> with the invalid values, copy <code>y</code> in like this, and turn the whole thing into a masked array. It's possible that there is a masked array constructor that lets you specify the valid values only along with the mask. But I'd have to dig into the docs for that.</p>
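<p>Putting that last idea together, a minimal <code>decompress</code> helper could look like this (a sketch along the lines above - there is no such built-in, and the name is taken from the question):</p>

```python
import numpy as np

def decompress(values, mask, default=-1):
    """Inverse of ma.compressed(): put values at unmasked cells, default elsewhere."""
    mask = np.asarray(mask, dtype=bool)
    values = np.asarray(values)
    out = np.full(mask.shape, default, dtype=values.dtype)
    out[~mask] = values           # fills the unmasked cells in row-major order
    return out

z = decompress([2, 3], [[True, True], [False, False]])
print(z)
# → [[-1 -1]
#    [ 2  3]]
```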
<p>===============</p>
<p>Another sequence of operations that will do this, using <code>np.place</code> to set values:</p>
<pre><code>In [1011]: w=np.empty_like(x)
In [1014]: np.place(w,w.mask,999)
In [1015]: np.place(w,~w.mask,[1,2,3])
In [1016]: w
Out[1016]:
masked_array(data =
[[-- -- 1]
[2 3 --]],
mask =
[[ True True False]
[False False True]],
fill_value = 999999)
In [1017]: w.data
Out[1017]:
array([[999, 999, 1],
[ 2, 3, 999]])
</code></pre>
<p>====================</p>
<p>Look at </p>
<pre><code>https://github.com/numpy/numpy/blob/master/numpy/ma/core.py
class _MaskedBinaryOperation:
</code></pre>
<p>This class is used to implement masked <code>ufunc</code>. It evaluates the <code>ufunc</code> at valid cells (non-masked) and returns a new masked array with the valid ones, leaving the masked values unchanged (from the original) </p>
<p>For example, with a simple masked array, <code>+1</code> does not change the masked value.</p>
<pre><code>In [1109]: z=np.ma.masked_equal([1,0,2],0)
In [1110]: z
Out[1110]:
masked_array(data = [1 -- 2],
mask = [False True False],
fill_value = 0)
In [1111]: z.data
Out[1111]: array([1, 0, 2])
In [1112]: z+1
Out[1112]:
masked_array(data = [2 -- 3],
mask = [False True False],
fill_value = 0)
In [1113]: _.data
Out[1113]: array([2, 0, 3])
In [1114]: z.compressed()+1
Out[1114]: array([2, 3])
</code></pre>
<p><code>_MaskedUnaryOperation</code> might be simpler to follow, since it only has to work with 1 masked array.</p>
<p>Example, regular log has problems with the masked 0 value:</p>
<pre><code>In [1115]: z.log()
...
/usr/local/bin/ipython3:1: RuntimeWarning: divide by zero encountered in log
#!/usr/bin/python3
Out[1116]:
masked_array(data = [0.0 -- 0.6931471805599453],
mask = [False True False],
fill_value = 0)
</code></pre>
<p>but the masked log skips the masked entry:</p>
<pre><code>In [1117]: np.ma.log(z)
Out[1117]:
masked_array(data = [0.0 -- 0.6931471805599453],
mask = [False True False],
fill_value = 0)
In [1118]: _.data
Out[1118]: array([ 0. , 0. , 0.69314718])
</code></pre>
<hr>
<p>oops - <code>_MaskedUnaryOperation</code> might not be that useful. It evaluates the <code>ufunc</code> at all values <code>np.ma.getdata(z)</code>, with an <code>errstate</code> context to block warnings. It then uses the mask to copy masked values onto the result (<code>np.copyto(result, d, where=m)</code>). </p>
| 1 | 2016-08-09T16:57:39Z | [
"python",
"numpy",
"masked-array"
] |
TypeError: Can't convert 'NoneType' object to str implicitly Error while running python script | 38,855,106 | <p>When trying to run a Python script for website scanning, the error <code>TypeError: Can't convert 'NoneType' object to str implicitly</code> keeps occurring. The Python code is:</p>
<p>main.py</p>
<pre><code>from general import *
from domain_name import *
from ip_address import *
from robots_txt import *
from whois import *
from nmap import *
ROOT_DIR = 'Targets'
create_dir(ROOT_DIR)
def gather_info(name, url):
domain_name = get_domain_name(url)
ip_address = get_ip_address(url)
robots_txt = get_robots_txt(url)
whois = get_whois(domain_name)
nmap = get_nmap(input('Nmap Options:'), ip_address)
create_report(name, url, domain_name, robots_txt, whois, nmap)
def create_report(name, full_url, domain_name, robots_txt, whois, nmap):
project_dir = ROOT_DIR + '/' + name
create_dir(project_dir)
write_file(project_dir + '/full_url.txt', full_url)
write_file(project_dir + '/domain_name.txt', domain_name)
write_file(project_dir + '/robots_txt.txt', robots_txt)
write_file(project_dir + '/whois.txt', whois)
write_file(project_dir + '/nmap.txt', nmap)
gather_info(input('Target Name:'), input('Target Domain:'))
</code></pre>
<p>nmap.py</p>
<pre><code>import os
def get_nmap(options, ip):
command = "nmap " + options + " " + ip
process = os.popen(command)
results = str(process.read())
return results
</code></pre>
<p>When I run it, it returns:</p>
<pre><code>python ./main.py
Target Name:test
Target Domain:https://www.google.ie
Nmap Options:-F
Traceback (most recent call last):
File "./main.py", line 31, in <module>
gather_info(input('Target Name:'), input('Target Domain:'))
File "./main.py", line 17, in gather_info
nmap = get_nmap(input('Nmap Options:'), ip_address)
File "/Projects/WebScanner/nmap.py", line 9, in get_nmap
command = "nmap " + options + " " + ip
TypeError: Can't convert 'NoneType' object to str implicitly
</code></pre>
<p>I have searched through other similar errors but cannot find a solution to fix it; any help would be great.</p>
 | -2 | 2016-08-09T15:42:22Z | 38,856,755 | <p>Python is telling you that it can't take an object of type <code>NoneType</code> and make it into a string for concatenation. Somewhere in your code you're assigning either the <code>options</code> variable or the <code>ip</code> variable to None. Like Lost said above, print out <code>ip</code> and <code>options</code> and see if they print out <code>None</code>. Or you could say <code>command = "nmap " + str(options) + " " + str(ip)</code>, to see if your <code>ip</code> variable or <code>options</code> variable is <code>None</code>.</p>
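<p>A minimal reproduction of the failure mode, plus the <code>str()</code> workaround (the exact wording of the message varies between Python 3 versions):</p>

```python
options = None
try:
    command = "nmap " + options + " " + "127.0.0.1"
except TypeError as e:
    # Python 3.5 words this as "Can't convert 'NoneType' object to str implicitly"
    print(e)

# str() silences the exception, but note the literal "None" in the result:
print("nmap " + str(options) + " " + "127.0.0.1")
# → nmap None 127.0.0.1
```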
| 0 | 2016-08-09T17:10:47Z | [
"python"
] |
How to cleanly ask user for input and allow several types? | 38,855,160 | <p>So this is a prompt for user input, and it works just fine. I was printing some names and associated (1 based) numbers to the console for the user to choose one. I am also giving the option to quit by entering q.
The condition for the number to be valid is a) it is a number and b) it is smaller than or equal to the number of names and greater than 0.</p>
<pre><code>while True:
number = str(input("Enter number, or q to quit. \n"))
if number == "q":
sys.exit()
try:
number = int(number)
except:
continue
if number <= len(list_of_names) and number > 0:
name = list_of_names[number-1]
break
</code></pre>
<p>There is no problem with this code, except I find it hard to read, and not very beautiful. Since I am new to python I would like to ask you guys, how would you code this prompt more cleanly? To be more specific: How do I ask the user for input that can be either a string, or an integer? </p>
| 1 | 2016-08-09T15:44:13Z | 38,855,298 | <p>Just downcase it.</p>
<pre><code>number = str(input("Enter number, or q to quit. \n"))
number = number.lower()
</code></pre>
<p>That will make the q lower case, so it doesn't matter if they press it with Shift. If they press something else, just make an if statement that sets a while-loop flag to true.</p>
| 1 | 2016-08-09T15:50:08Z | [
"python",
"python-3.x",
"coding-style",
"code-cleanup"
] |
How to cleanly ask user for input and allow several types? | 38,855,160 | <p>So this is a prompt for user input, and it works just fine. I was printing some names and associated (1 based) numbers to the console for the user to choose one. I am also giving the option to quit by entering q.
The condition for the number to be valid is a) it is a number and b) it is smaller than or equal to the number of names and greater than 0.</p>
<pre><code>while True:
number = str(input("Enter number, or q to quit. \n"))
if number == "q":
sys.exit()
try:
number = int(number)
except:
continue
if number <= len(list_of_names) and number > 0:
name = list_of_names[number-1]
break
</code></pre>
<p>There is no problem with this code, except I find it hard to read, and not very beautiful. Since I am new to python I would like to ask you guys, how would you code this prompt more cleanly? To be more specific: How do I ask the user for input that can be either a string, or an integer? </p>
| 1 | 2016-08-09T15:44:13Z | 38,855,416 | <p>A bit simpler:</p>
<pre><code>while True:
choice = str(input("Enter number, or q to quit. \n"))
if choice.lower() == "q":
sys.exit()
elif choice.isdigit() and (0 < int(choice) <= len(list_of_names)):
name = list_of_names[int(choice)-1]
break
</code></pre>
| 1 | 2016-08-09T15:55:29Z | [
"python",
"python-3.x",
"coding-style",
"code-cleanup"
] |
Check if a row in one data frame exist in another data frame | 38,855,204 | <p>I have a data frame A like this:</p>
<p><a href="http://i.stack.imgur.com/uu12A.png" rel="nofollow"><img src="http://i.stack.imgur.com/uu12A.png" alt="enter image description here"></a></p>
<p>And another data frame B which looks like this:</p>
<p><a href="http://i.stack.imgur.com/DI78z.png" rel="nofollow"><img src="http://i.stack.imgur.com/DI78z.png" alt="enter image description here"></a></p>
<p>I want to add a column 'Exist' to data frame A so that if User and Movie both exist in data frame B then 'Exist' is True, otherwise it is False.
So A should become like this:
<a href="http://i.stack.imgur.com/sPxfb.png" rel="nofollow"><img src="http://i.stack.imgur.com/sPxfb.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T15:46:06Z | 38,855,394 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with parameter <code>indicator</code>, then remove column <code>Rating</code> and use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>df = pd.merge(df1, df2, on=['User','Movie'], how='left', indicator='Exist')
df.drop('Rating', inplace=True, axis=1)
df['Exist'] = np.where(df.Exist == 'both', True, False)
print (df)
User Movie Exist
0 1 333 False
1 1 1193 True
2 1 3 False
3 2 433 False
4 3 54 True
5 3 343 False
6 3 76 True
</code></pre>
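<p>The boolean can also be produced straight from the <code>_merge</code> column that <code>indicator=True</code> adds (a sketch with toy frames, since the original data is only shown as images):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'User': [1, 1, 3], 'Movie': [333, 1193, 54]})
df2 = pd.DataFrame({'User': [1, 3], 'Movie': [1193, 54], 'Rating': [5, 4]})

out = df1.merge(df2.drop(columns='Rating'), on=['User', 'Movie'],
                how='left', indicator=True)
out['Exist'] = out.pop('_merge').eq('both')
print(out)
```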
| 1 | 2016-08-09T15:54:13Z | [
"python",
"pandas",
"dataframe"
] |
Pandas - Parse time data with and without miliseconds | 38,855,222 | <p>How do you parse time data if the time is in the format <code>2007-08-06T18:11:44.688Z</code>, but treats no milliseconds as <code>2007-08-06T18:11:44Z</code>? </p>
<p><code>pd.to_datetime(x.split('Z')[0], errors='coerce', format='%Y-%m-%dT%H:%M:%S.%f')</code> to remove the Zulu marker fails due to the <code>.</code> being missing sometimes.</p>
<p><code>pd.to_datetime(x.split('.')[0], errors='coerce', format='%Y-%m-%dT%H:%M:%S')</code> to remove the milliseconds fails due to the <code>.</code> being missing sometimes.</p>
<p><code>pd.to_datetime(x.split('.|Z')[0], errors='coerce', format='%Y-%m-%dT%H:%M:%S')</code> fails sometimes too, even though it looks like it should split on both cases witt the 0 member being the part we want and thus always give a valid time in seconds.</p>
 | 1 | 2016-08-09T15:46:59Z | 38,855,954 | <p>IIUC, simply using <code>pd.to_datetime(df_column_or_series)</code> without specifying the <code>format</code> parameter should properly parse both your datetime formats.</p>
<p>having or not having <code>Zulu</code> marker, doesn't change anything - you will have the same dtype after your string is converted to pandas datetime dtype:</p>
<pre><code>In [366]: pd.to_datetime(pd.Series(['2007-08-06T18:11:44.688Z'])).dtypes
Out[366]: dtype('<M8[ns]')
In [367]: pd.to_datetime(pd.Series(['2007-08-06T18:11:44.688'])).dtypes
Out[367]: dtype('<M8[ns]')
In [368]: pd.to_datetime(pd.Series(['2007-08-06T18:11:44'])).dtypes
Out[368]: dtype('<M8[ns]')
In [369]: pd.to_datetime(pd.Series(['2007-08-06'])).dtypes
Out[369]: dtype('<M8[ns]')
In [371]: pd.to_datetime(pd.Series(['2007-08-06T18:11:44.688']), format='%Y-%m-%dT%H:%M:%S.%f').dtypes
Out[371]: dtype('<M8[ns]')
</code></pre>
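<p>For comparison, the same with-and-without-milliseconds case can be handled in plain Python by trying the fractional format first (a stdlib sketch, independent of pandas):</p>

```python
from datetime import datetime

def parse_ts(s):
    s = s.rstrip('Z')                      # drop the Zulu marker if present
    for fmt in ('%Y-%m-%dT%H:%M:%S.%f', '%Y-%m-%dT%H:%M:%S'):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            pass
    raise ValueError('unrecognized timestamp: %r' % s)

print(parse_ts('2007-08-06T18:11:44.688Z'))  # → 2007-08-06 18:11:44.688000
print(parse_ts('2007-08-06T18:11:44Z'))      # → 2007-08-06 18:11:44
```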
| 1 | 2016-08-09T16:23:29Z | [
"python",
"pandas"
] |
Read .txt file and export selective data to .csv | 38,855,260 | <p>I'm looking for help: I have multipath output from a CentOS server in a .txt file, which looks like this.</p>
<pre class="lang-none prettyprint-override"><code>asm (393040300403de) dm-12 HITACHI
size=35G queue_if_no_path
|- 1:0:0:18 sda 65:48 active ready running
`- 3:0:0:18 sdbc 70:368 active ready running
3600300300a4c dm-120 HITACHI
size=50G queue_if_no_path
|- 1:0:0:98 sdc 70:48 active ready running
`- 3:0:0:98 sdca 131:368 active ready running
</code></pre>
<p>It should look like this when exported to a .csv file.</p>
<pre class="lang-none prettyprint-override"><code>DISKS_NAME LUN LUNID DM-NAME SIZE MULTPATH
asm 393040300403de 03de dm-12 35G sda sdbc
No_device 3600300300a4c 0a4c dm-120 50G sdc sdca
</code></pre>
<p>This is as far as I got, but it just reads every line and puts each piece into a different column every time it finds a space.</p>
<pre><code>import csv
readfile = 'multipath.txt'
writefile = 'data.csv'
with open(readfile,'r') as a, open(writefile, 'w') as b:
o=csv.writer(b)
for line in a:
o.writerow(line.split())
</code></pre>
 | 0 | 2016-08-09T15:48:32Z | 38,855,832 | <p>Assuming that you only have the two types of entry as described in your sample above, you can classify each line by the number of elements it splits into with <code>line.split()</code>. For example:</p>
<pre><code>disk_name = ""
... # other parameters you need to keep track of across lines. I'd suggest creating a class for each lun/disk_name.
for line in a:
line_data = line.split()
if len(line_data) == 4:
        # this will match 'asm (393040300403de) dm-12 HITACHI'
disk_name, lun, dm_name, _ = line_data
# process these variables accordingly (instantiate a new class member)
continue # to read the next line
    elif len(line_data) == 3:
# this will match '3600300300a4c dm-120 HITACHI'
lun, dm_name, _ = line_data
disk_name = "No_device"
# process these variables accordingly
continue
    elif len(line_data) == 2:
# this will match 'size=35G queue_if_no_path'
size, _ = line_data
# process the size accordingly, associate with the disk_name from earlier
continue
    elif len(line_data) == 7:
# this will match '|- 1:0:0:18 sda 65:48 active ready running' etc.
_, _, path, _, _, _, _ = line_data
# process the path accordingly, associate with the disk_name from earlier
continue
</code></pre>
<p>Of course, using a regex to check whether the line contains the <em>type</em> of data that you need, rather than just the right number of items, will be more flexible. But this should get you started.</p>
<p>By processing the lines in this order, you'll always pick up a new <code>disk_name</code>/<code>lun</code>, and then assign the following "data" lines to that disk. When you hit a new disk, the lines following that will be associated with the new disk, etc.</p>
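<p>For reference, here is the same length-based dispatch as a compact, runnable sketch against a few lines from the sample (the field names are illustrative):</p>

```python
lines = [
    "asm (393040300403de) dm-12 HITACHI",
    "size=35G queue_if_no_path",
    "|- 1:0:0:18 sda 65:48 active ready running",
]

disks = []
for line in lines:
    parts = line.split()
    if len(parts) == 4:          # "asm (393040300403de) dm-12 HITACHI"
        disks.append({"name": parts[0], "lun": parts[1].strip("()"),
                      "dm": parts[2], "paths": []})
    elif len(parts) == 3:        # nameless "3600300300a4c dm-120 HITACHI"
        disks.append({"name": "No_device", "lun": parts[0],
                      "dm": parts[1], "paths": []})
    elif len(parts) == 2:        # "size=35G queue_if_no_path"
        disks[-1]["size"] = parts[0].split("=", 1)[1]
    elif len(parts) == 7:        # "|- 1:0:0:18 sda 65:48 active ready running"
        disks[-1]["paths"].append(parts[2])

print(disks[0]["name"], disks[0]["lun"], disks[0]["size"], disks[0]["paths"])
# → asm 393040300403de 35G ['sda']
```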
| 0 | 2016-08-09T16:17:18Z | [
"python"
] |
Django urls and file structure for templates | 38,855,496 | <p>I'm using Django 1.9.8 and started learning by following <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">the official tutorial</a>. The official tutorial emphasised reusabilitiy and "plugability". From there I followed <a href="http://blog.narenarya.in/right-way-django-authentication.html" rel="nofollow">this</a> tutorial on authorization. Although I was able to get the authorization tutorial to work, one thing about it that I didn't like (or just don't understand) is why the project's <code>urls.py</code> file contains several app specific urls, rather than placing them in the app's <code>urls.py</code> file and just including that file in the project's <code>urls.py</code> file. That seems to go against what the official tutorial emphasizes. I understand each project may have different URL's for login/logout/register, etc... depending on the API and will still have to be edited, but I feel like changing them in one place makes more sense and keeps things neater.</p>
<p>The name of the project is authtest and the name of the app is log</p>
<pre><code>#log/urls.py
from django.conf.urls import url, include
from django.contrib import admin
from django.contrib.auth import views
from log.forms import LoginForm
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'', include('log.urls')),
url(r'^login/$', views.login, {'template_name': 'login.html', 'authentication_form': LoginForm }), #move this to authtest/urls.py
url(r'^logout/$', views.logout, {'next_page': '/login'}), #move this to authtest/urls.py
]
</code></pre>
<p>Now for the app's <code>urls.py</code> file</p>
<pre><code>#authtest/urls.py
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.home, name='home'),
]
</code></pre>
<p>This works 100%, so now for the first question. <strong>Is there any reason I shouldn't move the log app specific urls (login & logout) out of the project's urls.py file (log/urls.py) and put them into the app's urls.py file (authtest/urls.py)?</strong> Maybe there are reasons for authentication not to, but what about if I was making a different app?</p>
<p>Now for my second question, which I suppose depends on the answer to the first question. The authorization tutorial places the login.html, logout.html, and home.html templates in the project's root templates folder. The Django tutorial suggests putting them within an app's templates directory, and within that directory, another directory named whatever the app is called (for namespacing). <strong>What do I have to change if I move the app specific template files from the project's templates folder, to the log app's templates folder?</strong> </p>
<p>This is the current file structure from the authorization tutorial I followed</p>
<pre><code> authtest
|...authtest
|...|...settings.py
|...|...urls.py
|...log
|...|...settings.py
|...|...urls.py
|...|...views.py
|...manage.py
|...templates
|...|...base.html
|...|...home.html
|...|...login.html
|...static
</code></pre>
<p>This is how I assumed it should be based on how the official tutorial suggests to use templates.</p>
<pre><code>authtest
|...authtest
|...|...settings.py
|...|...urls.py
|...log
|...|...urls.py
|...|...views.py
|...|...templates
|...|...|...log #namespace of the log app
|...|...|...|...base.html
|...|...|...|...home.html
|...|...|...|...login.html
|...manage.py
|...templates
|...static
</code></pre>
<p>When I move the files I get the following error <code>TemplateDoesNotExist at /login/</code> when I visit <code>http://localhost:8080/login/</code>. I'm assuming it's solely the urls.py files, but I'm not sure exactly what I have to change. </p>
<p><strong>edited for settings.py templates directive</strong></p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ["templates"],
#'DIRS': [os.path.join(BASE_DIR, 'templates')], #I also tried this
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
 | 0 | 2016-08-09T15:59:26Z | 38,859,073 | <p>In your <code>settings.py</code> you need to add the <code>loaders</code> key in the <code>OPTIONS</code> section. This specifies how Django finds your template files. If you weren't specifying the <code>OPTIONS</code> key, the <code>APP_DIRS</code> setting would have been enough.</p>
<pre><code>TEMPLATES = [
{
# See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND
'BACKEND': 'django.template.backends.django.DjangoTemplates',
# See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs
'DIRS': [
str(APPS_DIR.path('templates')),
],
'OPTIONS': {
# See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug
'debug': DEBUG,
# See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders
# https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types
'loaders': [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
],
# See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors
},
},
]
</code></pre>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/templates/api/#loader-types" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/templates/api/#loader-types</a> for more info </p>
| 1 | 2016-08-09T19:36:18Z | [
"python",
"django",
"templates",
"django-urls"
] |
Update Database Using AJAX in Django | 38,855,543 | <p>So I have an AJAX command, which passes information to a views.py method (I have verified that the passing works from the HTML->urls.py->views.py, so that's all good), but once I have it in "views.py", I have no idea how to get it to update in the database itself.</p>
<p>I have tried to avoid using a forms.py file if possible, but if that is the only option I'll bend to it.</p>
<p>The AJAX function is as follows:</p>
<pre><code>$.ajax({
url : '/perform/acts/update/{{ act.id }}/',
type : "POST",
data : {
'csrfmiddlewaretoken' : "{{ csrf_token }}",
furtherData : furtherData
},
success : function(result) {}
});
</code></pre>
<p>The views.py function is...so far, lacking, to say the least, but this is where I'm sort of lost:</p>
<pre><code>def update_act(request, furtherData_id):
if request.method == 'POST':
?
return HttpResponse(?)
</code></pre>
<p>A big reason for doing it this way was performing updates without reloading and without having to add another module. I have been using Django for only a couple weeks, so it could be something easy that I'm missing...</p>
<p>Any help much appreciated!</p>
| 0 | 2016-08-09T16:01:46Z | 38,856,036 | <pre><code>def update_act(request,furtherData_id):
from django.http import JsonResponse
if request.method == 'POST':
obj=MyObj.objects.get(pk=furtherData_id)
obj.data=request.POST['furtherData']
obj.save()
return JsonResponse({'result':'ok'})
else:
        return JsonResponse({'result':'nok'})
</code></pre>
| 4 | 2016-08-09T16:27:55Z | [
"python",
"ajax",
"django",
"django-models",
"django-views"
] |
Update Database Using AJAX in Django | 38,855,543 | <p>So I have an AJAX command, which passes information to a views.py method (I have verified that the passing works from the HTML->urls.py->views.py, so that's all good), but once I have it in "views.py", I have no idea how to get it to update in the database itself.</p>
<p>I have tried to avoid using a forms.py file if possible, but if that is the only option I'll bend to it.</p>
<p>The AJAX function is as follows:</p>
<pre><code>$.ajax({
url : '/perform/acts/update/{{ act.id }}/',
type : "POST",
data : {
'csrfmiddlewaretoken' : "{{ csrf_token }}",
furtherData : furtherData
},
success : function(result) {}
});
</code></pre>
<p>The views.py function is...so far, lacking, to say the least, but this is where I'm sort of lost:</p>
<pre><code>def update_act(request, furtherData_id):
if request.method == 'POST':
?
return HttpResponse(?)
</code></pre>
<p>A big reason for doing it this way was performing updates without reloading and without having to add another module. I have been using Django for only a couple weeks, so it could be something easy that I'm missing...</p>
<p>Any help much appreciated!</p>
| 0 | 2016-08-09T16:01:46Z | 38,856,658 | <p>Your view function:<br></p>
<pre><code>def my_view_action(request, any_pk_id):
from django.http import JsonResponse
if request.method=='POST' and request.is_ajax():
try:
obj = MyModel.objects.get(pk=any_pk_id)
obj.data_attr = request.POST['attr_name']
obj.save()
return JsonResponse({'status':'Success', 'msg': 'save successfully'})
except MyModel.DoesNotExist:
return JsonResponse({'status':'Fail', 'msg': 'Object does not exist'})
else:
        return JsonResponse({'status':'Fail', 'msg':'Not a valid request'})
</code></pre>
<p><br> </p>
<blockquote>
<p><strong>This function saves data directly to your DB. To validate it first, use a form and then proceed to the save action.</strong>
<br></p>
<blockquote>
<p>---Steps---<br>
- Create a form for the model.<br>
- Fill data on this model via the request/object.<br>
- Run validation on the form, then save to the DB via the form or via the model.<br></p>
</blockquote>
</blockquote>
<p>For more info
<a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#the-save-method" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#the-save-method</a></p>
| 1 | 2016-08-09T17:05:18Z | [
"python",
"ajax",
"django",
"django-models",
"django-views"
] |
python csv module issues with embedded JSON strings (Python + Oracle + CSV + JSON) | 38,855,599 | <p>I am importing a small amount of baseline data into tables immediately after creating them. Only one table is giving me trouble, and this is because one of the fields is JSON.</p>
<p>I have not found a syntax engine capable of properly interpreting escaped quotes and commas within the JSON. I have not tried them all and am, of course, open to suggestions based on any experience with a similar problem.</p>
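<p>For what it's worth, the stdlib <code>csv</code> module can be told about backslash-escaped quotes by disabling <code>doublequote</code> and setting <code>escapechar</code> - a sketch against a tiny inline sample rather than the real export:</p>

```python
import csv
import io

# Tiny inline sample with backslash-escaped quotes, like the dump below.
sample = '"RULE_ID","RULE_META"\n"265.00","{ \\"handler\\" : \\"processReports\\" }"\n'

reader = csv.reader(io.StringIO(sample), delimiter=',', quotechar='"',
                    doublequote=False, escapechar='\\')
rows = list(reader)
print(rows[1])
# → ['265.00', '{ "handler" : "processReports" }']
```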
<p>I don't know if it matters, but I'm using Toad for Oracle to export the CSV files as a baseline for development rebuilds of data. Toad has no option to replace the delimiters in the CSV, and while it wouldn't be hard for me to manually change a single CSV file, doing so as a maintenance task would be a PITA.</p>
<p>Here is a sample of the CSV data causing the problem:</p>
<pre><code>"RULE_ID","NAME","DISPLAY_DESC","NOTES","RULE","SOURCE_ID","RULE_META","RULE_SCOPE","ACTIVE"
265.00,"RoadKill Report Processor","Report Processor","Loads a long-run-thread for each report matched by the handler method.","MvcsReportProcessManager",41.00,"{
\"handler\" : \"processReports\",
\"consumer_prototype\" :
\"RoadKill_report_processor.AssetDataReportHostConsumer\",
\"match_expression\" : \"^MVCS_.*\",
\"schedule\" : [
\"0:30-4:00|mon-sun|*|*\",
\"!*|*|1|jan\",
\"!*|*|25|dec\",
\"!*|thu|22-28|nov\"
],
\"wake_interval\" : \"30m\",
\"interval\" : \"24h\"
}","INST",0.00
321.00,"RoadKill AG Processor","Asset Group Reflection","Loads a long-run-thread to download Asset Groups daily.","MvcsAssetGroupDownloader",41.00,"{
\"handler\" : \"replicateAssetGroups\",
\"consumer_prototype\" :
\"RoadKill_report_processor.AssetGroupConsumer\",
\"schedule\" : [
\"00:30-17:00|mon-sun|*|*\",
\"!*|*|1|jan\",
\"!*|*|25|dec\",
\"!*|thu|22-28|nov\"
],
\"wake_interval\" : \"30m\",
\"interval\" : \"24h\"
}","INST",1.00
322.00,"RoadKill Asset Processor","Asset Reflection","Loads a long-run-thread to download Assets daily.","MvcsAssetAPIHostDownloader",41.00,"{
\"handler\" : \"replicateAssets\",
\"consumer_prototype\" :
\"RoadKill_report_processor.\",
\"schedule\" : [
\"00:30-17:00|mon-sun|*|*\",
\"!*|*|1|jan\",
\"!*|*|25|dec\",
\"!*|thu|22-28|nov\"
],
\"wake_interval\" : \"30m\",
\"interval\" : \"24h\"
}","INST",1.00
323.00,"RoadKill Vuln Processor","Vuln Reflection","Loads a long-run-thread to download Vulns daily.","MvcsAssetAPIVulnDownloader",41.00,"{
\"handler\" : \"replicateVulns\",
\"consumer_prototype\" :
\"RoadKill_report_processor.AssetAPIHostDetectionConsumer\",
\"schedule\" : [
\"00:30-17:00|mon-sun|*|*\",
\"!*|*|1|jan\",
\"!*|*|25|dec\",
\"!*|thu|22-28|nov\"
],
\"wake_interval\" : \"30m\",
\"interval\" : \"24h\"
}","INST",1.00
141.00,"RoadKill Manager","RoadKill Sync","Loads RoadKill instances and dispatches an entry point for that source + instance (one for each instance rule).","MvcsInstanceDispatchRule",41.00,"{
\"handler\" : \"startInstanceRules\",
\"schedule\" : [
\"0:00-23:59|mon-sun|*|*\"
],
\"wake_interval\" : \"30m\"
}","CORE",1.00
</code></pre>
<p>And here is what the python csv module returns as a row when it attempts to parse a row:</p>
<pre><code>>>> [(o,v) for o,v in enumerate(row)]
[(0, '265.00'), (1, 'RoadKill Report Processor'), (2, 'Report Processor'), (3, 'Loads a long-run-thread for each report matched by the handler method.'), (4, 'MvcsReportProcessManager'), (5, '41.00'), (6, '{\n \\handler\\" : \\"processReports\\"'), (7, '')]
</code></pre>
<p>Finally, here is the csv reader code:</p>
<pre><code>col_offsets = None
for f in os.listdir(testdatadir):
#split filename. get tablename.
fname = os.path.basename(f)
if fname and\
fname.startswith('mvcs_') and\
fname.endswith('.csv'):
tblname = fname.split('.')[0]
tobj = get_class_by_tablename(tblname)
with open(testdatadir+'/'+fname, 'r') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',',
quotechar='"')
for count,row in enumerate(csvreader):
if not count:
col_offsets = getColumnOffsets(row)
elif not col_offsets:
raise Exception('Missing column offsets.')
else:
tinst = tobj(
**{colname.lower() : row[offset] for
offset,colname in col_offsets})
try:
session.add(tinst)
except Exception as e:
logger.warn(str(e))
logger.warn('on adding:')
logger.warn(str(tinst))
</code></pre>
| 0 | 2016-08-09T16:04:27Z | 38,856,485 | <p>Per the <a href="https://docs.python.org/3.5/library/csv.html#dialects-and-formatting-parameters" rel="nofollow">Dialects and Formatting Parameters</a> section of the <code>csv</code> docs, <code>escapechar</code> is set to <code>None</code> by default.</p>
<p>Modified to backslash</p>
<pre><code>csvreader = csv.reader(csvfile, delimiter=',',
quotechar='"', dialect='unix', escapechar='\\')
</code></pre>
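<p>A self-contained sketch of the effect (the two-line sample below is made up to mimic the dump's backslash-escaped JSON column):</p>

```python
import csv
import io

# Hypothetical two-column sample mimicking the export: the JSON column's
# inner double quotes are backslash-escaped rather than doubled.
raw = 'id,payload\n265.00,"{\\"handler\\" : \\"processReports\\"}"\n'

# With escapechar set, the backslash strips the special meaning of the
# following quote, so the whole JSON blob stays in one field.
rows = list(csv.reader(io.StringIO(raw), delimiter=',',
                       quotechar='"', escapechar='\\'))
print(rows[1])  # ['265.00', '{"handler" : "processReports"}']
```

<p>Without <code>escapechar</code>, the first <code>\"</code> ends the quoted field early, which is exactly the truncated tuple shown in the question.</p>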
| 0 | 2016-08-09T16:53:47Z | [
"python",
"json",
"python-3.x",
"csv",
"parsing"
] |
Unable to convert strptime properly | 38,855,625 | <p>I was trying to take a string such as '2016-07-30T20:00:00' and convert it via </p>
<pre><code>(string, '%Y-%d-%mT%H:%M:%S')
</code></pre>
<p>but I keep getting the error...</p>
<pre><code>time data '2016-07-30T20:00:00' does not match format '%Y-%d-%mT%H:%M:%S'
</code></pre>
<p>when I had a string such as '2016-08-08T00:00:00' it would work, but I'm not sure what's causing the error for the example above.</p>
| 0 | 2016-08-09T16:05:43Z | 38,855,683 | <p>Clearly, you swapped the month and the day; use:</p>
<pre><code>(string, '%Y-%m-%dT%H:%M:%S')
# --^--^--
</code></pre>
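<p>A quick sanity check of the corrected directive order:</p>

```python
from datetime import datetime

# '%Y-%d-%mT...' only succeeded on '2016-08-08T00:00:00' because 8 is
# also a valid month; with 30 in the month slot, parsing fails.
dt = datetime.strptime('2016-07-30T20:00:00', '%Y-%m-%dT%H:%M:%S')
print(dt)  # 2016-07-30 20:00:00
```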
| 2 | 2016-08-09T16:09:20Z | [
"python",
"django"
] |
UnicodeDecodeError Loading with sqlalchemy | 38,855,631 | <p>I am querying a MySQL database with sqlalchemy and getting the following error:</p>
<pre><code>UnicodeDecodeError: 'utf8' codec can't decode bytes in position 498-499: unexpected end of data
</code></pre>
<p>A column in the table was defined as <code>Unicode(500)</code> so this error suggests to me that there is an entry that was truncated because it was longer than 500 characters. Is there a way to handle this error and still load the entry? Is there a way to find the errant entry and delete it other than trying to load every entry one by one (or in batches) until I get the error?</p>
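<p>That truncation suspicion is easy to reproduce in plain Python (a hypothetical stand-in: 498 ASCII bytes plus one 3-byte UTF-8 character, cut off at 500 bytes):</p>

```python
# 498 one-byte chars followed by U+2122 (3 bytes in UTF-8), truncated
# at a byte boundary in the middle of the multi-byte sequence.
blob = ('x' * 498 + '\u2122').encode('utf-8')[:500]
try:
    blob.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)
# 'utf-8' codec can't decode bytes in position 498-499: unexpected end of data
```

<p>This reproduces the same positions (498-499) as the error above, consistent with a column sized in bytes rather than characters.</p>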
| 7 | 2016-08-09T16:06:11Z | 38,910,412 | <p>Make the column you are storing into a <code>BLOB</code>. After loading the data, do various things such as </p>
<pre><code> SELECT MAX(LENGTH(col)) FROM ... -- to see what the longest is in _bytes_.
</code></pre>
<p>Copy the data into another <code>BLOB</code> column and do</p>
<pre><code> ALTER TABLE t MODIFY col2 TEXT CHARACTER SET utf8 ... -- to see if it converts correctly
</code></pre>
<p>If that succeeds, then do</p>
<pre><code> SELECT MAX(CHAR_LENGTH(col2)) ... -- to see if the longest is more than 500 _characters_.
</code></pre>
<p>After you have tried a few things like that, we can see what direction to take next.</p>
| 0 | 2016-08-12T05:17:05Z | [
"python",
"mysql",
"unicode",
"utf-8",
"sqlalchemy"
] |
UnicodeDecodeError Loading with sqlalchemy | 38,855,631 | <p>I am querying a MySQL database with sqlalchemy and getting the following error:</p>
<pre><code>UnicodeDecodeError: 'utf8' codec can't decode bytes in position 498-499: unexpected end of data
</code></pre>
<p>A column in the table was defined as <code>Unicode(500)</code> so this error suggests to me that there is an entry that was truncated because it was longer than 500 characters. Is there a way to handle this error and still load the entry? Is there a way to find the errant entry and delete it other than trying to load every entry one by one (or in batches) until I get the error?</p>
| 7 | 2016-08-09T16:06:11Z | 38,979,387 | <p>In short, you should change:</p>
<pre><code>Unicode(500)
</code></pre>
<p>to:</p>
<pre><code>Unicode(500, unicode_errors='ignore', convert_unicode='force')
</code></pre>
<p>(Python 2 code follows, but the principles hold in python 3; only some of the output will differ.)</p>
<p>What's going on is that when you decode a bytestring, it complains if the bytestring can't be decoded, with the error you saw. </p>
<pre><code>>>> u = u'ABCDEFGH\N{TRADE MARK SIGN}'
>>> u
u'ABCDEFGH\u2122'
>>> print(u)
ABCDEFGH™
>>> s = u.encode('utf-8')
>>> s
'ABCDEFGH\xe2\x84\xa2'
>>> truncated = s[:-1]
>>> truncated
'ABCDEFGH\xe2\x84'
>>> truncated.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cliffdyer/.virtualenvs/edx-platform/lib/python2.7/encodings/utf_8.py",
line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 8-9: unexpected
end of data
</code></pre>
<p>Python provides different optional modes of handling decode errors, though. Raising an exception is the default, but you can also truncate the text or convert the malformed part of the string to the official unicode replacement character.</p>
<pre><code>>>> trunc.decode('utf-8', errors='replace')
u'ABCDEFGH\ufffd'
>>> trunc.decode('utf-8', errors='ignore')
u'ABCDEFGH'
</code></pre>
<p>This is exactly what's happening within the column handling.</p>
<p>Looking at the Unicode and String classes in <a href="https://bitbucket.org/zzzeek/sqlalchemy/src/323e6e7f9f6a731103cfd19d774024f7f0f84377/lib/sqlalchemy/sql/sqltypes.py?at=master&fileviewer=file-view-default#sqltypes.py-110" rel="nofollow">sqlalchemy/sql/sqltypes.py</a>, it looks like there is a <code>unicode_errors</code> argument that you can pass to the constructor which passes its value through to the encoder's errors argument. There is also a note that you will need to set <code>convert_unicode='force'</code> to make it work.</p>
<p>Thus <code>Unicode(500, unicode_errors='ignore', convert_unicode='force')</code> should solve your problem, if you're okay with truncating the ends of your data.</p>
<p>If you have some control over the database, you should be able to prevent this issue in the future by defining your database to use the <a href="https://dev.mysql.com/doc/refman/5.6/en/charset-unicode-utf8mb4.html" rel="nofollow"><code>utf8mb4</code></a> character set. (Don't just use <code>utf8</code>, or it will fail on four byte utf8 characters, including most emojis). Then you will be guaranteed to have valid utf-8 stored in and returned from your database.</p>
| 2 | 2016-08-16T15:54:07Z | [
"python",
"mysql",
"unicode",
"utf-8",
"sqlalchemy"
] |
UnicodeDecodeError Loading with sqlalchemy | 38,855,631 | <p>I am querying a MySQL database with sqlalchemy and getting the following error:</p>
<pre><code>UnicodeDecodeError: 'utf8' codec can't decode bytes in position 498-499: unexpected end of data
</code></pre>
<p>A column in the table was defined as <code>Unicode(500)</code> so this error suggests to me that there is an entry that was truncated because it was longer than 500 characters. Is there a way to handle this error and still load the entry? Is there a way to find the errant entry and delete it other than trying to load every entry one by one (or in batches) until I get the error?</p>
| 7 | 2016-08-09T16:06:11Z | 39,161,038 | <p>In short, your MySQL setup is incorrect in that it truncates UTF-8 characters in mid-sequence. I would check twice that MySQL actually expects the character encoding of UTF-8 within the sessions and in the tables themselves.</p>
<hr>
<p>I would suggest switching to PostgreSQL (seriously) to avoid this kind of problem: not only does PostgreSQL understand UTF-8 properly in default configurations, but also it would not ever truncate a string to fit into the value, choosing to raise an error instead:</p>
<pre><code>psql (9.5.3, server 9.5.3)
Type "help" for help.
testdb=> create table foo(bar varchar(4));
CREATE TABLE
testdb=> insert into foo values ('aaaaa');
ERROR: value too long for type character varying(4)
</code></pre>
<p>This is also not unlike the Zen of Python:</p>
<blockquote>
<p>Explicit is better than implicit.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Errors should never pass silently.<br>
Unless explicitly silenced.<br>
In the face of ambiguity, refuse the temptation to guess.</p>
</blockquote>
| 0 | 2016-08-26T07:46:55Z | [
"python",
"mysql",
"unicode",
"utf-8",
"sqlalchemy"
] |
Writing API Results to CSV in Python | 38,855,641 | <p>I am looking for some assistance with writing API results to a .CSV file using Python. At this point, I'm successfully writing to .CSV, but I cannot seem to nail down the code behind the .CSV format I'm looking for, which is the standard one field = one column format.</p>
<p>Any help is appreciated! Details are below. Thanks!</p>
<p><strong>My code:</strong></p>
<pre><code>import requests
import json
import csv
urlcomp = 'http://url_ommitted/api/reports/completion?status=COMPLETED&from=2016-06-01&to=2016-08-06'
headers = {'authorization': "Basic API Key Ommitted", 'accept': "application/json", 'accept': "text/csv"}
## API Call to retrieve report
rcomp = requests.get(urlcomp, headers=headers)
## API Results
data = rcomp.text
## Write API Results to CSV
with open('C:\_Python\\testCompletionReport.csv', "wb") as csvFile:
writer = csv.writer(csvFile, delimiter=',')
for line in data:
writer.writerow(line)
</code></pre>
<p>The code above creates a .CSV containing the right data, but it writes each character from the API results into a new cell in Column A of the output file. <strong>Screenshot below:</strong></p>
<p><a href="http://i.stack.imgur.com/8XIaD.png" rel="nofollow"><img src="http://i.stack.imgur.com/8XIaD.png" alt="Screenshot below:"></a></p>
<p>I've also attempted the code below, which writes the entire API result set into a single cell in the .CSV output file.</p>
<p><strong>Code:</strong></p>
<pre><code>data = rcomp.text
with open('C:\_Python\\CompletionReportOutput.csv', 'wb') as csvFile:
writer = csv.writer(csvFile, delimiter = ',')
writer.writerow([data])
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="http://i.stack.imgur.com/O88Yx.png" rel="nofollow"><img src="http://i.stack.imgur.com/O88Yx.png" alt="enter image description here"></a></p>
<p><strong>Below is a screenshot of some sample API result data returned from my call:</strong>
<a href="http://i.stack.imgur.com/QcuUm.png" rel="nofollow"><img src="http://i.stack.imgur.com/QcuUm.png" alt="enter image description here"></a></p>
<p><strong>Example of what I'm looking for in the final .CSV output file:</strong>
<a href="http://i.stack.imgur.com/kfEtP.png" rel="nofollow"><img src="http://i.stack.imgur.com/kfEtP.png" alt="enter image description here"></a></p>
<p><strong>EDIT - Sample API Response:</strong></p>
<p>"PACKAGE CREATED","PACKAGE ID","PACKAGE NAME","PACKAGE STATUS","PACKAGE TRASHED","PACKAGE UPDATED","SENDER ID","SENDER NAME","SENDER COMPANY","SENDER CREATED","SENDER EMAIL","SENDER FIRSTNAME","SENDER LANGUAGE","SENDER LASTNAME","SENDER PHONE","SENDER TITLE","SENDER UPDATED","SENDER ACTIVATED","SENDER LOCKED","SENDER STATUS","SENDER TYPE"
"Thu Aug 04 14:52:57 CDT 2016","ulw5MTQo8WjBfoCTKqz9LNCFpV4=","TestOne to TestTwo - Flatten PDF Removed","COMPLETED","false","Thu Aug 04 14:53:30 CDT 2016","tKpohv2kZ2oU","","","2016-08-03 14:12:06.904","testaccount@test.com","John","en","Smith","","","2016-08-03 14:12:06.942118","null","null","INVITED","REGULAR"
"Thu Aug 04 09:39:22 CDT 2016","IJV3U_yjPlxS-TVQgMrNgVUUSss=","TestOne to TestTwo - Email Test","COMPLETED","false","Thu Aug 04 10:11:29 CDT 2016","tKpohv2kZ2oU","","","2016-08-03 14:12:06.904","testaccount@test.com","John","en","Smith","","","2016-08-03 14:12:06.942118","null","null","INVITED","REGULAR"</p>
<p><strong>SECOND EDIT - Output from Lee's suggestion:</strong></p>
<p><a href="http://i.stack.imgur.com/xiTTd.png" rel="nofollow"><img src="http://i.stack.imgur.com/xiTTd.png" alt="enter image description here"></a></p>
| 1 | 2016-08-09T16:06:36Z | 38,856,010 | <pre><code>csvFile = open('C:\_Python\\CompletionReportOutput.csv', 'w')
writer = csv.writer(csvFile, delimiter = ' ')
for row in data.split('\n'):
writer.writerow(row)
</code></pre>
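<p>A variation on that idea which keeps the API's comma-delimited fields intact — <code>writerow</code> expects a sequence of fields, so each line is re-parsed with <code>csv.reader</code> instead of being passed as a bare string (the sample data is a stand-in for <code>rcomp.text</code>):</p>

```python
import csv
import io

data = 'a,b,"c,d"\n1,2,3\n'  # stand-in for rcomp.text
out = io.StringIO()          # could equally be an open file
writer = csv.writer(out)
# csv.reader splits each API line into fields, honouring quotes, so a
# quoted comma such as "c,d" stays inside a single column.
for fields in csv.reader(io.StringIO(data)):
    writer.writerow(fields)  # one list of fields -> one CSV row
print(out.getvalue())
```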
| 1 | 2016-08-09T16:26:04Z | [
"python",
"python-2.7",
"python-3.x",
"csv",
"export-to-csv"
] |
Writing API Results to CSV in Python | 38,855,641 | <p>I am looking for some assistance with writing API results to a .CSV file using Python. At this point, I'm successfully writing to .CSV, but I cannot seem to nail down the code behind the .CSV format I'm looking for, which is the standard one field = one column format.</p>
<p>Any help is appreciated! Details are below. Thanks!</p>
<p><strong>My code:</strong></p>
<pre><code>import requests
import json
import csv
urlcomp = 'http://url_ommitted/api/reports/completion?status=COMPLETED&from=2016-06-01&to=2016-08-06'
headers = {'authorization': "Basic API Key Ommitted", 'accept': "application/json", 'accept': "text/csv"}
## API Call to retrieve report
rcomp = requests.get(urlcomp, headers=headers)
## API Results
data = rcomp.text
## Write API Results to CSV
with open('C:\_Python\\testCompletionReport.csv', "wb") as csvFile:
writer = csv.writer(csvFile, delimiter=',')
for line in data:
writer.writerow(line)
</code></pre>
<p>The code above creates a .CSV containing the right data, but it writes each character from the API results into a new cell in Column A of the output file. <strong>Screenshot below:</strong></p>
<p><a href="http://i.stack.imgur.com/8XIaD.png" rel="nofollow"><img src="http://i.stack.imgur.com/8XIaD.png" alt="Screenshot below:"></a></p>
<p>I've also attempted the code below, which writes the entire API result set into a single cell in the .CSV output file.</p>
<p><strong>Code:</strong></p>
<pre><code>data = rcomp.text
with open('C:\_Python\\CompletionReportOutput.csv', 'wb') as csvFile:
writer = csv.writer(csvFile, delimiter = ',')
writer.writerow([data])
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="http://i.stack.imgur.com/O88Yx.png" rel="nofollow"><img src="http://i.stack.imgur.com/O88Yx.png" alt="enter image description here"></a></p>
<p><strong>Below is a screenshot of some sample API result data returned from my call:</strong>
<a href="http://i.stack.imgur.com/QcuUm.png" rel="nofollow"><img src="http://i.stack.imgur.com/QcuUm.png" alt="enter image description here"></a></p>
<p><strong>Example of what I'm looking for in the final .CSV output file:</strong>
<a href="http://i.stack.imgur.com/kfEtP.png" rel="nofollow"><img src="http://i.stack.imgur.com/kfEtP.png" alt="enter image description here"></a></p>
<p><strong>EDIT - Sample API Response:</strong></p>
<pre><code>"PACKAGE CREATED","PACKAGE ID","PACKAGE NAME","PACKAGE STATUS","PACKAGE TRASHED","PACKAGE UPDATED","SENDER ID","SENDER NAME","SENDER COMPANY","SENDER CREATED","SENDER EMAIL","SENDER FIRSTNAME","SENDER LANGUAGE","SENDER LASTNAME","SENDER PHONE","SENDER TITLE","SENDER UPDATED","SENDER ACTIVATED","SENDER LOCKED","SENDER STATUS","SENDER TYPE"
"Thu Aug 04 14:52:57 CDT 2016","ulw5MTQo8WjBfoCTKqz9LNCFpV4=","TestOne to TestTwo - Flatten PDF Removed","COMPLETED","false","Thu Aug 04 14:53:30 CDT 2016","tKpohv2kZ2oU","","","2016-08-03 14:12:06.904","testaccount@test.com","John","en","Smith","","","2016-08-03 14:12:06.942118","null","null","INVITED","REGULAR"
"Thu Aug 04 09:39:22 CDT 2016","IJV3U_yjPlxS-TVQgMrNgVUUSss=","TestOne to TestTwo - Email Test","COMPLETED","false","Thu Aug 04 10:11:29 CDT 2016","tKpohv2kZ2oU","","","2016-08-03 14:12:06.904","testaccount@test.com","John","en","Smith","","","2016-08-03 14:12:06.942118","null","null","INVITED","REGULAR"
</code></pre>
<p><strong>SECOND EDIT - Output from Lee's suggestion:</strong></p>
<p><a href="http://i.stack.imgur.com/xiTTd.png" rel="nofollow"><img src="http://i.stack.imgur.com/xiTTd.png" alt="enter image description here"></a></p>
| 1 | 2016-08-09T16:06:36Z | 38,861,353 | <p>So, I eventually stumbled onto a solution. Not sure if this is the "correct" way of handling this, but the code below wrote the API results directly into a .CSV with the correct column formatting.</p>
<pre><code># Get JSON Data
rcomp = requests.get(urlcomp, headers=headers)
# Write to .CSV
f = open('C:\_Python\Two\\newfile.csv', "w")
f.write(rcomp.text)
f.close()
</code></pre>
| 0 | 2016-08-09T22:20:58Z | [
"python",
"python-2.7",
"python-3.x",
"csv",
"export-to-csv"
] |
Python - Split a row into columns - csv data | 38,855,648 | <p>I am trying to read data from a csv file and split each row into its respective columns. </p>
<p>But my regex is failing when a particular column contains <strong>commas within it</strong>. </p>
<p>eg: a,b,c,"d,e, g,",f </p>
<p>I want result like: </p>
<pre><code>a b c "d,e, g," f
</code></pre>
<p>which is 5 columns.</p>
<p>Here is the regex I am using to split the string by comma: </p>
<blockquote>
<p><code>,(?=(?:"[^"]*?(?:[^"]*)*))|,(?=[^"]+(?:,)|,+|$)</code></p>
</blockquote>
<p>but it fails for a few strings while working for others. </p>
<p>All I am looking for is this: when I read data from csv into a dataframe/rdd using pyspark, I want to load/preserve all of the columns without any mistakes. </p>
<p>Thank You</p>
| 2 | 2016-08-09T16:06:55Z | 38,855,714 | <p>Much easier with the help of the newer <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><strong><code>regex</code></strong></a> module:</p>
<pre><code>import regex as re
string = 'a,b,c,"d,e, g,",f'
rx = re.compile(r'"[^"]*"(*SKIP)(*FAIL)|,')
parts = rx.split(string)
print(parts)
# ['a', 'b', 'c', '"d,e, g,"', 'f']
</code></pre>
<p>It supports the <code>(*SKIP)(*FAIL)</code> mechanism, which ignores everything between double quotes in this example.</p>
<hr>
<p>If you have escaped double quotes, you could use:</p>
<pre><code>import regex as re
string = '''a,b,c,"d,e, g,",f, this, one, with "escaped \"double",quotes:""'''
rx = re.compile(r'".*?(?<!\\)"(*SKIP)(*FAIL)|,')
parts = rx.split(string)
print(parts)
# ['a', 'b', 'c', '"d,e, g,"', 'f', ' this', ' one', ' with "escaped "double",quotes:""']
</code></pre>
<p>See a demo for the latter on <a href="https://regex101.com/r/iN8rU3/1" rel="nofollow"><strong>regex101.com</strong></a>.</p>
<hr>
<p>For nearly 50 points, I feel obliged to provide the <code>csv</code> methods as well:</p>
<pre><code>import csv
string = '''a,b,c,"d,e, g,",f, this, one, with "escaped \"double",quotes:""'''
# just make up an iterable, normally a file would go here
for row in csv.reader([string]):
print(row)
# ['a', 'b', 'c', 'd,e, g,', 'f', ' this', ' one', ' with "escaped "double"', 'quotes:""']
</code></pre>
| 3 | 2016-08-09T16:10:38Z | [
"python",
"regex",
"csv",
"pyspark",
"rdd"
] |
Python - Split a row into columns - csv data | 38,855,648 | <p>I am trying to read data from a csv file and split each row into its respective columns. </p>
<p>But my regex is failing when a particular column contains <strong>commas within it</strong>. </p>
<p>eg: a,b,c,"d,e, g,",f </p>
<p>I want result like: </p>
<pre><code>a b c "d,e, g," f
</code></pre>
<p>which is 5 columns.</p>
<p>Here is the regex I am using to split the string by comma: </p>
<blockquote>
<p><code>,(?=(?:"[^"]*?(?:[^"]*)*))|,(?=[^"]+(?:,)|,+|$)</code></p>
</blockquote>
<p>but it fails for a few strings while working for others. </p>
<p>All I am looking for is this: when I read data from csv into a dataframe/rdd using pyspark, I want to load/preserve all of the columns without any mistakes. </p>
<p>Thank You</p>
| 2 | 2016-08-09T16:06:55Z | 38,855,739 | <p>Try <code>\,(?=([^"\\]*(\\.|"([^"\\]*\\.)*[^"\\]*"))*[^"]*$)</code>.</p>
<p>I used <a href="https://www.stackoverflow.com/questions/6462578/alternative-to-regex-match-all-instances-not-inside-quotes#6464500">this answer, which explains how to match everything that is not inside quotes while ignoring escaped quotes</a>, and <a href="http://regexr.com/" rel="nofollow">http://regexr.com/</a> to test.</p>
<p>Note that - as other answers to your question state - there are better ways to parse CSV than use a regex.</p>
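<p>A sketch of that lookahead with Python's built-in <code>re</code> — the groups are made non-capturing here, since <code>re.split</code> would otherwise interleave the captured groups into the result (the leading backslash before the comma is also dropped, as it is redundant):</p>

```python
import re

# Split on a comma only when the text ahead contains an even number of
# unescaped double quotes, i.e. the comma is not inside a quoted field.
pattern = r',(?=(?:[^"\\]*(?:\\.|"(?:[^"\\]*\\.)*[^"\\]*"))*[^"]*$)'
print(re.split(pattern, 'a,b,c,"d,e, g,",f'))
# ['a', 'b', 'c', '"d,e, g,"', 'f']
```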
| 2 | 2016-08-09T16:11:44Z | [
"python",
"regex",
"csv",
"pyspark",
"rdd"
] |