title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Error in QuickSort implementation | 38,727,749 | <p>The code is linked here : <a href="http://ideone.com/eba7CB" rel="nofollow">http://ideone.com/eba7CB</a> <br>I can't seem to find the error. Any help and criticism are appreciated.<br></p>
<pre><code>ar = []
def quick(l, r):
    if (r-l) <= 1:
        return
    pivot = ar[l]
    i = l+1
    for j in range(l+1,r):
        if ar[j] < pivot:
            ar[i],ar[j] = ar[j],ar[i]
            i+=1
    ar[i-1],ar[l] = ar[l],ar[i-1]
    # print i,j
    quick(l,i)
    quick(i+1,r)

def qSort():
    l = 0
    r = len(ar)
    quick(l,r)

ar = [4, 2, 13, 10, 7, 3]
qSort()
print ar
</code></pre>
<p>The output is [2, 3, 4, 10, 7, 13]</p>
| -6 | 2016-08-02T18:16:40Z | 38,728,459 | <p>Replace</p>
<pre><code>quick(i+1,r)
</code></pre>
<p>with:</p>
<pre><code>quick(i,r)
</code></pre>
<p>After the final swap the pivot sits at index <code>i-1</code>, so the right half of the array begins at index <code>i</code>; calling <code>quick(i+1,r)</code> skips the element at index <code>i</code>, which is why one element (the <code>10</code> in your output) is never placed correctly.</p>
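<p>For reference, here is a runnable Python 3 sketch of the corrected algorithm. Passing the list as an argument instead of using the question's global <code>ar</code> is my own change for clarity, and this version also excludes the pivot from the left recursion, which the one-character fix above does not require:</p>

```python
def quick(ar, l, r):
    # Sort ar[l:r] in place using the first element as the pivot.
    if r - l <= 1:
        return
    pivot = ar[l]
    i = l + 1
    for j in range(l + 1, r):
        if ar[j] < pivot:
            ar[i], ar[j] = ar[j], ar[i]
            i += 1
    # Move the pivot into its final position, index i - 1.
    ar[i - 1], ar[l] = ar[l], ar[i - 1]
    quick(ar, l, i - 1)  # elements smaller than the pivot
    quick(ar, i, r)      # right half starts at i, not i + 1

ar = [4, 2, 13, 10, 7, 3]
quick(ar, 0, len(ar))
print(ar)  # [2, 3, 4, 7, 10, 13]
```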
| 0 | 2016-08-02T18:58:05Z | [
"python",
"python-2.7",
"sorting",
"quicksort"
] |
Creating individual columns to write to new csv | 38,727,770 | <p>A CSV returns the following values</p>
<pre><code>"1,323104,564382"
"2,322889,564483"
"3,322888,564479"
"4,322920,564425"
"5,322942,564349"
"6,322983,564253"
"7,322954,564154"
"8,322978,564121"
</code></pre>
<p>How would I take the " marks off each end of the rows? It seems to make individual columns when I do this.</p>
<pre><code>reader=[[i[0].replace('\'','')] for i in reader]
</code></pre>
<p>does not change the file at all</p>
| 0 | 2016-08-02T18:17:32Z | 38,727,871 | <p>How do you open the file? If you are trying to treat a read-only file object like a string, it's obviously not going to work. If you're working with CSV files, instead of trying to reinvent the wheel you could probably skip all of this parsing and just use the <code>csv</code> package. </p>
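<p>A sketch of that suggestion (the in-memory sample stands in for the question's file, and the final <code>split</code> step is an assumption): each line is really one quoted field containing commas, so <code>csv.reader</code> strips the outer quotes for you, and a plain split then yields the columns.</p>

```python
import csv
import io

# Sample data standing in for the file (an assumption for the demo).
data = io.StringIO('"1,323104,564382"\n"2,322889,564483"\n')

rows = []
for record in csv.reader(data):
    # Each line is one quoted field; csv strips the outer quotes,
    # leaving e.g. 1,323104,564382, which we split into real columns.
    rows.append(record[0].split(','))

print(rows)  # [['1', '323104', '564382'], ['2', '322889', '564483']]
```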
| 0 | 2016-08-02T18:23:54Z | [
"python",
"csv"
] |
Creating individual columns to write to new csv | 38,727,770 | <p>A CSV returns the following values</p>
<pre><code>"1,323104,564382"
"2,322889,564483"
"3,322888,564479"
"4,322920,564425"
"5,322942,564349"
"6,322983,564253"
"7,322954,564154"
"8,322978,564121"
</code></pre>
<p>How would I take the " marks off each end of the rows? It seems to make individual columns when I do this.</p>
<pre><code>reader=[[i[0].replace('\'','')] for i in reader]
</code></pre>
<p>does not change the file at all</p>
| 0 | 2016-08-02T18:17:32Z | 38,727,913 | <p>It seems strictly easier to peel the quotes off first, and then feed it to the csv reader, which simply takes any iterable over lines as input.</p>
<pre><code>import csv
import sys

f = open(sys.argv[1])
contents = f.read().replace('"', '')
reader = csv.reader(contents.splitlines())
for x,y,z in reader:
    print x,y,z
</code></pre>
| 0 | 2016-08-02T18:26:26Z | [
"python",
"csv"
] |
Creating individual columns to write to new csv | 38,727,770 | <p>A CSV returns the following values</p>
<pre><code>"1,323104,564382"
"2,322889,564483"
"3,322888,564479"
"4,322920,564425"
"5,322942,564349"
"6,322983,564253"
"7,322954,564154"
"8,322978,564121"
</code></pre>
<p>How would I take the " marks off each end of the rows? It seems to make individual columns when I do this.</p>
<pre><code>reader=[[i[0].replace('\'','')] for i in reader]
</code></pre>
<p>does not change the file at all</p>
| 0 | 2016-08-02T18:17:32Z | 38,727,966 | <p>Assuming every line is wrapped by two double quotes, we can do this:</p>
<pre><code>f = open("filename.csv", "r")
newlines = []
for line in f:  # we could use a list comprehension, but for simplicity, we won't.
    # strip the newline before slicing, otherwise [1:-1] clips the newline instead of the closing quote
    newlines.append(line.rstrip('\n')[1:-1] + '\n')
f.close()

f2 = open("filename.csv", "w")
for index, line in enumerate(newlines):
    f2.write(line)
f2.close()
</code></pre>
<p><code>[1:-1]</code> is a slicing operation that takes everything from the second character of the string up to (but not including) the last one; the endpoints are the indexes <code>1</code> and <code>-1</code>.</p>
<p><a href="https://docs.python.org/3.5/library/functions.%E2%80%A6" rel="nofollow"><code>enumerate()</code></a> is a helper function that turns an iterable into <code>(0, first_element), (1, second_element), ...</code> pairs.</p>
<p>Iterating over a file gets you its lines.</p>
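<p>A quick illustration of those two pieces (note that lines read from a file keep their trailing newline, which is worth stripping before slicing off the quotes):</p>

```python
line = '"1,323104,564382"\n'  # a line as read from a file, newline included

# Drop the newline first, then the surrounding quotes with [1:-1].
stripped = line.rstrip('\n')[1:-1]
print(stripped)  # 1,323104,564382

# enumerate() pairs each element of an iterable with its index.
for index, value in enumerate(['a', 'b']):
    print(index, value)
```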
| 0 | 2016-08-02T18:29:15Z | [
"python",
"csv"
] |
Nosetests not seeing test classes that unittest discover does | 38,727,784 | <p>I have what I can only consider a very odd problem with the way nosetests identifies classes that are valid tests classes.</p>
<p>I'm initialising a test class as a generic <code>type</code> with specific inheritance from a base tests class by using:</p>
<pre><code>def random_test_class(n):
    print "Generating %d random test cases..." % n
    def make_test(genre, ss, ps):
        return lambda self: self.compose(genre, ss, ps)
    return type('TestEverything', (TestBase,),
                {'test_%d' % i: make_test(genre, ss, ps)
                 for (i, (genre, ss, ps)) in zip(xrange(n), generate_settings())})
</code></pre>
<p>and then initialising the class proper with</p>
<pre><code>class TestEverything(random_test_class(100)):
    pass
</code></pre>
<p>Now, when I call my standard testing framework with <code>python -m unittest discover</code>, the testing is all very happy and it sees the <code>TestEverything</code> class as a test class defining 100 test methods (<code>test_1</code>, <code>test_2</code>, etc...). However, if I use <code>nosetests ./ -m "test_*"</code> it refuses to see <code>TestEverything</code> as a valid test class, and doesn't run any of its test methods.</p>
<p>How can I solve this? I really need the <code>xunit</code> output framework that <code>nosetests</code> provide, but I would very much like to avoid going through all the metaclass faff that is required to properly initialise a class with a specific test metaclass.</p>
| 0 | 2016-08-02T18:18:17Z | 38,737,042 | <p>When deciding whether or not to include methods in tests, Nose checks the <em>name of the function</em> that implements the method rather than the attribute name. Change your <code>make_test</code> so that the test created has a name that Nose will pick up.</p>
<pre><code>def make_test(genre, ss, ps):
    def test(self):
        self.compose(genre, ss, ps)
    return test
</code></pre>
<p>This will match the regular expression you give in your question: <code>test_*</code> matches any name containing <code>test</code> followed by zero or more underscores. </p>
<p>Note that if you wanted the functions to have different names you could pass the index to <code>make_test</code> and set <code>test.__name__</code>:</p>
<pre><code>def make_test(i, genre, ss, ps):
    def test(self):
        self.compose(genre, ss, ps)
    test.__name__ = 'test_%d' % i
    return test
</code></pre>
| 1 | 2016-08-03T07:26:08Z | [
"python",
"python-2.7",
"unit-testing",
"nose",
"python-unittest"
] |
ImportError: No module named 'UserString' in pyspark | 38,727,861 | <p>When I run Spark using Python 3 on a cluster, this error keeps coming up:</p>
<pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in stage 0.0 failed 4 times, most recent failure: Lost task 10.3 in stage 0.0 (TID 24, us-lax-office-dev-03.vpc.supplyframe.com): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/glin/spark-1.6.0/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/home/glin/spark-1.6.0/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/home/glin/spark-1.6.0/python/lib/pyspark.zip/pyspark/serializers.py", line 419, in loads
    return pickle.loads(obj, encoding=encoding)
ImportError: No module named 'UserString'
</code></pre>
<p>I know that in Python 3, <code>UserString</code> is not a standalone module anymore; it is part of the <code>collections</code> module. But I cannot fix it. Is there anyone who can help?</p>
<p>If I change my master node to <code>local[*]</code>, this error goes away... I'm so confused.</p>
| 0 | 2016-08-02T18:23:19Z | 38,746,632 | <blockquote>
<p>I know that in Python 3, <code>UserString</code> is not a standalone module anymore; it is part of the <code>collections</code> module. But I cannot fix it.</p>
</blockquote>
<p>Why can't you fix it? Instead of</p>
<pre><code>import UserString
</code></pre>
<p>can you try</p>
<pre><code>from collections import UserString
</code></pre>
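<p>If the code has to run under both Python versions (for example, when the driver and the cluster workers disagree), a guarded import is one portable pattern (a general sketch, not specific to Spark):</p>

```python
# Try the Python 3 location first, fall back to the Python 2 module.
try:
    from collections import UserString   # Python 3
except ImportError:
    from UserString import UserString    # Python 2

s = UserString("hello")
print(s.upper())  # HELLO
```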
| 0 | 2016-08-03T14:36:05Z | [
"python",
"apache-spark",
"mapreduce",
"pyspark"
] |
How to access MultiIndex column after groupby in pandas? | 38,727,863 | <p>With a single-indexed dataframe, the columns are available in the group by object: </p>
<pre><code>df1 = pd.DataFrame({'a':[2,2,4,4], 'b': [5,6,7,8]})
df1.groupby('a')['b'].sum() ->
a
2    11
4    15
</code></pre>
<p>But in a MultiIndex dataframe when not grouping by level, the columns are no longer accessible in the group by object</p>
<pre><code>df = pd.concat([df1, df1], keys=['c', 'd'], axis=1)
df ->
   c     d
   a  b  a  b
0  2  5  2  5
1  2  6  2  6
2  4  7  4  7
3  4  8  4  8

df.groupby([('c','a')])[('c','b')].sum() ->
KeyError: "Columns not found: 'b', 'c'"
</code></pre>
<p>As a workaround, this works but it's not efficient since it doesn't use the cpythonized aggregator, not to mention it's awkward looking. </p>
<pre><code>df.groupby([('c','a')]).apply(lambda df: df[('c', 'b')].sum())
</code></pre>
<p>Is there a way to access MultiIndex column in groupby object that I missed?</p>
| 3 | 2016-08-02T18:23:27Z | 38,728,733 | <p>Adding a comma after your <code>('c','b')</code> tuple seems to work: </p>
<pre><code>df.groupby([('c','a')])[('c','b'),].sum()
</code></pre>
<p>I'm guessing that without the comma, pandas interprets <code>'c'</code> and <code>'b'</code> as two separate column keys, whereas the trailing comma turns the argument into a one-element tuple whose single item is the column <code>('c','b')</code>.</p>
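<p>The trailing comma changes what Python itself builds, before pandas ever sees the argument:</p>

```python
key = ('c', 'b')    # one tuple: a single MultiIndex column label
keys = ('c', 'b'),  # trailing comma: a 1-tuple WHOSE item is that label

print(key)   # ('c', 'b')
print(keys)  # (('c', 'b'),)
# An indexer iterating over `key` sees two items, 'c' and 'b';
# iterating over `keys` it sees one item, the column ('c', 'b').
```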
| 2 | 2016-08-02T19:14:56Z | [
"python",
"pandas",
"grouping",
"multi-index"
] |
How to access MultiIndex column after groupby in pandas? | 38,727,863 | <p>With a single-indexed dataframe, the columns are available in the group by object: </p>
<pre><code>df1 = pd.DataFrame({'a':[2,2,4,4], 'b': [5,6,7,8]})
df1.groupby('a')['b'].sum() ->
a
2    11
4    15
</code></pre>
<p>But in a MultiIndex dataframe when not grouping by level, the columns are no longer accessible in the group by object</p>
<pre><code>df = pd.concat([df1, df1], keys=['c', 'd'], axis=1)
df ->
   c     d
   a  b  a  b
0  2  5  2  5
1  2  6  2  6
2  4  7  4  7
3  4  8  4  8

df.groupby([('c','a')])[('c','b')].sum() ->
KeyError: "Columns not found: 'b', 'c'"
</code></pre>
<p>As a workaround, this works but it's not efficient since it doesn't use the cpythonized aggregator, not to mention it's awkward looking. </p>
<pre><code>df.groupby([('c','a')]).apply(lambda df: df[('c', 'b')].sum())
</code></pre>
<p>Is there a way to access MultiIndex column in groupby object that I missed?</p>
| 3 | 2016-08-02T18:23:27Z | 38,729,152 | <p>Maybe this helps explain the syntax:</p>
<pre><code>df.groupby([('c','a')]).sum()
         c   d
         b   a   b
(c, a)
2       11   4  11
4       15   8  15

df.groupby([('c','a')])[('c','b'),('d','b')].sum()
         c   d
         b   b
(c, a)
2       11  11
4       15  15
</code></pre>
| 0 | 2016-08-02T19:40:48Z | [
"python",
"pandas",
"grouping",
"multi-index"
] |
Split python string every nth character iterating over starting character | 38,727,914 | <p>I'm trying to find an elegant way to split a python string every nth character, iterating over which character to start with.</p>
<p>For example, suppose I have a string containing the following:</p>
<pre><code>ANDTLGY
</code></pre>
<p>I want to split the string into a set of 3 characters looking like this:</p>
<pre><code>['AND','NDT','DTL','TLG','LGY']
</code></pre>
| 1 | 2016-08-02T18:26:27Z | 38,727,978 | <pre><code>a='ANDTLGY'
def nlength_parts(a,n):
return map(''.join,zip(*[a[i:] for i in range(n)]))
print nlength_parts(a,3)
</code></pre>
<p>hopefully you can explain to the professor how it works ;) </p>
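<p>For what it's worth, on Python 3 <code>map</code> returns a lazy iterator, so the same trick reads a little more directly as a list comprehension (a Python 3 adaptation of the answer above, not the answerer's original):</p>

```python
def nlength_parts(a, n):
    # zip(*[a[0:], a[1:], ..., a[n-1:]]) pairs the shifted copies up
    # column-wise; zip stops at the shortest slice, trimming the tail.
    return [''.join(t) for t in zip(*[a[i:] for i in range(n)])]

print(nlength_parts('ANDTLGY', 3))  # ['AND', 'NDT', 'DTL', 'TLG', 'LGY']
```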
| 3 | 2016-08-02T18:30:03Z | [
"python",
"string",
"loops"
] |
Split python string every nth character iterating over starting character | 38,727,914 | <p>I'm trying to find an elegant way to split a python string every nth character, iterating over which character to start with.</p>
<p>For example, suppose I have a string containing the following:</p>
<pre><code>ANDTLGY
</code></pre>
<p>I want to split the string into a set of 3 characters looking like this:</p>
<pre><code>['AND','NDT','DTL','TLG','LGY']
</code></pre>
| 1 | 2016-08-02T18:26:27Z | 38,728,030 | <p>Simple way is to use <a class='doc-link' href="http://stackoverflow.com/documentation/python/1019/string-formatting/7681/string-slicing#t=201608021833540090269">string slicing</a> together with <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=201608021834453569628">list comprehensions</a>:</p>
<pre><code>s = 'ANDTLGY'
[s[i:i+3] for i in range(len(s)-2)]
#output:
['AND', 'NDT', 'DTL', 'TLG', 'LGY']
</code></pre>
| 5 | 2016-08-02T18:33:24Z | [
"python",
"string",
"loops"
] |
Split python string every nth character iterating over starting character | 38,727,914 | <p>I'm trying to find an elegant way to split a python string every nth character, iterating over which character to start with.</p>
<p>For example, suppose I have a string containing the following:</p>
<pre><code>ANDTLGY
</code></pre>
<p>I want to split the string into a set of 3 characters looking like this:</p>
<pre><code>['AND','NDT','DTL','TLG','LGY']
</code></pre>
| 1 | 2016-08-02T18:26:27Z | 38,728,072 | <p>how about</p>
<pre><code>a='ANDTLGY'

def chopper(s,chop=3):
    if len(s) < chop:
        return []
    return [s[0:chop]] + chopper(s[1:],chop)
</code></pre>
<p>this returns</p>
<pre><code>['AND', 'NDT', 'DTL', 'TLG', 'LGY']
</code></pre>
| 2 | 2016-08-02T18:35:47Z | [
"python",
"string",
"loops"
] |
Simple way to run sklearn function with n_jobs > 1 inside parallel pool | 38,727,931 | <p>Is there some way to run sklearn functions (those supporting the n_jobs argument) inside a parallel loop? When I try to run an sklearn function with n_jobs > 1 inside a multiprocessing.Pool, I get the warning</p>
<pre><code>UserWarning: Multiprocessing-backed parallel loops cannot be nested, setting n_jobs=1
  for s in split_list(seeds, n_jobs))
</code></pre>
<p>So does there exist some parallel library which allows nested parallelisation?</p>
| 0 | 2016-08-02T18:26:57Z | 38,744,445 | <p>This warning comes from <code>joblib</code>, the multiprocessing library used in <code>sklearn</code>. It occurs because its parallel mechanism relies on <code>multiprocessing.Pool</code>, which uses <code>daemonic</code> workers that cannot spawn subprocesses. </p>
<p>I don't see any simple way to bypass this restriction with <code>sklearn</code>.
You might want to create and manage your processes by hand.
If you know what you are doing, you could create your own <code>Process</code> objects and use them to run <code>sklearn</code> functions with <code>n_jobs > 1</code>.<br>
This implies a lot of care in managing the processes and not running them all at once.
It is also important not to make them <code>daemonic</code>.
For instance:</p>
<pre><code>import multiprocessing as mp
import numpy as np

def target(j):
    from sklearn.ensemble import RandomForestClassifier
    rf = RandomForestClassifier(n_jobs=2)
    rf.fit(np.random.random(size=(100, 100)), np.random.random(100) > .6)
    print(j, 'done')

pr = [mp.Process(target=target, args=(i,)) for i in range(10)]
[p.start() for p in pr]
[p.join() for p in pr]
</code></pre>
<p>Note that all the Processes run simultaneously, and this could lead to worse performance than the sequential implementation.</p>
<p>That being said, there are not many use cases where using nested parallelism is a good idea. All the cores should be used for the most time-consuming task, with the other tasks running sequentially.</p>
| 0 | 2016-08-03T13:01:34Z | [
"python",
"python-2.7",
"parallel-processing",
"multiprocessing",
"nested-loops"
] |
python dict in comma separated csv file | 38,728,058 | <p>Python dict is in a format like this: </p>
<pre><code>'{"a":1, "b":2, "c":3}'
</code></pre>
<p>Notice it use comma to separate different key:value pairs.</p>
<p>The problem is I have a CSV file, which is separate columns by comma too : </p>
<pre><code>'
"id", "gender", "age", "name"
"001", "male", "14", "{"first":"Mike", "last":"Green"}"
"002", "female", "15", "{"first":"Kate", "last":"Spear"}"
'
</code></pre>
<p>When I do<br>
<code>pandas.read_csv('csvfile.csv', sep = ',', names=["id", "gender", "age", "name"])</code></p>
<p>I got: </p>
<pre><code>'
"id", "gender", "age", "name"
"001", "male", "14", "{"first":"Mike"
"002", "female", "15", "{"first":"Kate"
'
</code></pre>
<p>The reason I guess is csv reader regards the comma follows first name in dict as a separator in csv files. Since I only specified 4 columns named " "id", "gender", "age", "name"", so it ignore last names. </p>
<p>Any thoughts or possible solution to this? Thanks!</p>
| 0 | 2016-08-02T18:34:55Z | 38,728,783 | <p>You can change the delimiter that <code>read_csv</code> uses. If you can change the csv files to use a semicolon for separating columns, you can then use <code>read_csv(file.csv, sep=';'...)</code> </p>
<p>Alternatively you can fix the quoting from </p>
<pre><code>"001", "male", "14", "{"first":"Mike", "last":"Green"}"
</code></pre>
<p>to </p>
<pre><code>"001", "male", "14", "{'first':'Mike', 'last':'Green'}"
</code></pre>
<p>Of course both methods mean editing the csv file. </p>
<p>The second looks sounder. The regular expression <code>(\{[^"]*)(")([^}]*\})</code> could be used to match quotes inside braces (untested).</p>
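<p>One way to implement that idea (a sketch using a <code>re.sub</code> callback over each <code>{...}</code> group, rather than the exact pattern above, which replaces only one quote per match and would need repeated application):</p>

```python
import re

line = '"001", "male", "14", "{"first":"Mike", "last":"Green"}"'

# Replace double quotes with single quotes, but only inside {...} groups.
fixed = re.sub(r'\{[^}]*\}',
               lambda m: m.group(0).replace('"', "'"),
               line)
print(fixed)
# "001", "male", "14", "{'first':'Mike', 'last':'Green'}"
```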
| 0 | 2016-08-02T19:17:46Z | [
"python",
"csv",
"pandas",
"dictionary"
] |
django - how to bind records of table with calculated count values from another table | 38,728,128 | <p>In my application I have a list of trainings. One field on this list should display the number of bookings for each training. To show what I mean, I prepared a SQL query:</p>
<pre><code>SELECT *
FROM club_training a
LEFT JOIN
    (SELECT training_id, count(*)
     FROM club_booking
     GROUP BY training_id) b
ON a.id = b.training_id
</code></pre>
<p>Could you give me some advice on how to do it in Django? I used <code>Booking.objects.all().values('training_id').annotate(booked_amount=Count('training_id'))</code> in my code, but the result is that all count values for all trainings are displayed for each training on the list. Only the one count value appropriate to each training should be displayed.</p>
<p><a href="http://i.stack.imgur.com/nfkLF.png" rel="nofollow"><img src="http://i.stack.imgur.com/nfkLF.png" alt="enter image description here"></a></p>
<p>views.py</p>
<pre><code>class HomePageView(TemplateView):
    """Home Page with list of trainings"""
    template_name = 'club/training_list.html'

    def get_context_data(self, **kwargs):
        now = datetime.datetime.now()
        context = super(HomePageView, self).get_context_data(**kwargs)
        context['trainings'] = Training.objects.filter(state="A", training_date__gte=now).order_by('training_date', 'start_time')
        for each_training in context['trainings']:
            each_training.diff = each_training.availability - each_training.counter
            each_training.counter = Booking.objects.all().values('training_id').annotate(booked_amount=Count('training_id'))
        return context
</code></pre>
<p>models.py</p>
<pre><code>class Training(models.Model):
    """Class for plan training"""
    STATE = (
        ('A', 'Active'),
        ('I', 'Inactive'),
    )
    name = models.ForeignKey('TrnDesc')
    instructor = models.ForeignKey('Instructor')
    start_time = models.TimeField(blank=True)
    end_time = models.TimeField(default='00:00:00')
    availability = models.PositiveIntegerField(default=15)
    state = models.CharField(max_length=1, choices=STATE, default='A')
    training_date = models.DateField(default=date.today)
    counter = models.PositiveIntegerField(default=0)

    def __str__(self):
        return self.name.name


class Booking(models.Model):
    """Data of people which book fitness classes"""
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    email = models.CharField(max_length=50)
    phone = models.CharField(max_length=10)
    training = models.ForeignKey('Training')

    def __str__(self):
        return self.training.name.name
</code></pre>
<p>training_list.html</p>
<pre><code>{% extends 'club/base.html' %}
{% block content %}
<ul class="nav nav-pills">
  <li role="presentation" class="active"><a href="#">Fitness Classes</a></li>
  <li role="presentation"><a href="#">Join Us</a></li>
  <li role="presentation"><a href="#">Contact Us</a></li>
</ul>
<br></br>
{% regroup trainings by training_date as date_list %}
{% for date in date_list %}
<div class="panel panel-default">
  <div class="panel-heading">{{date.grouper|date:"l, d F o"}}</div>
  <table class="table">
    <tr>
      <th style="width: 20%">Training name</th>
      <th style="width: 30%">Training description</th>
      <th style="width: 10%">Instructor</th>
      <th style="width: 10%">Start time</th>
      <th style="width: 10%">End time</th>
      <th style="width: 10%">Left</th>
      <th style="width: 10%">Test_counter</th>
      <th style="width: 10%">Actions</th>
    </tr>
    {% for training in date.list %}
    <tr>
      <td>{{training.name}}</td>
      <td>{{training.name.desc}}</td>
      <td>{{training.instructor}}</td>
      <td>{{training.start_time|time:"H:i"}}</td>
      <td>{{training.end_time|time:"H:i"}}</td>
      <td>{{training.diff}}</td>
      <td>{{training.counter}}</td>
      <td><a href="{% url 'book' training_id=training.pk%}"><button type="button" class="btn btn-primary">Book</button></a></td>
    </tr>
    {% endfor %}
  </table>
</div>
{% endfor %}
{% endblock %}
</code></pre>
| 2 | 2016-08-02T18:38:49Z | 38,734,881 | <p>I would approach it from the other direction: <code>each_training.booking_set.count()</code></p>
<p>Also, I'm not sure it fits your database goals, but if I understand what you are looking for, you could set it up like this:</p>
<p><strong>models.py</strong></p>
<pre><code>class Training(models.Model):
    ...
    <fields>
    ...

    def __str__(self):
        return self.name.name

    @property
    def counter(self):
        return self.booking_set.count()

    @property
    def diff(self):
        return self.availability - self.counter
</code></pre>
<p>Like this, the values can go directly from the model to the template.</p>
<p>The only problem that I can see with this is that I think it prevents these fields from being part of a queryset. For example, <code>Training.objects.filter(counter__gt=0)</code> won't work. If you need that, I think you'll still have to find an opportunity to save and update the value in the database, <a href="https://docs.djangoproject.com/en/1.9/topics/signals/" rel="nofollow">possibly using a signal</a>, that way you don't have to save the value over and over again every time the view is called.</p>
<p>It also looks like you could use a manager to handle some of the logic you are doing in the view:</p>
<p><strong>managers.py</strong></p>
<pre><code>from django.db.models import Manager

class Active(Manager):
    def by_date(self, date):
        return Training.objects.filter(state="A", training_date__gte=date).order_by('training_date', 'start_time')
</code></pre>
<p>Then you can add the manager to your model (being sure to preserve your vanilla manager):</p>
<p><strong>models.py</strong></p>
<pre><code>from .managers import Active

class Training(models.Model):
    ...
    <fields>

    objects = models.Manager()
    active = Active()
    ...

    def __str__(self):
        return self.name.name
</code></pre>
<p>And now you can distribute all the information with a slim ListView. </p>
<pre><code>from django.views.generic import ListView

class HomePageView(ListView):
    """Home Page with list of trainings"""
    template_name = 'club/training_list.html'
    context_object_name = "trainings"

    def get_queryset(self):
        now = datetime.datetime.now()
        return Training.active.by_date(now)
</code></pre>
<p>I'm not familiar with some of the things you are doing in the template, but everything should work the same way it does now.</p>
<p>Or maybe I'm way off, but hopefully it's some good food for thought :)</p>
| 0 | 2016-08-03T05:13:30Z | [
"python",
"sql",
"django"
] |
django - how to bind records of table with calculated count values from another table | 38,728,128 | <p>In my application I have a list of trainings. One field on this list should display the number of bookings for each training. To show what I mean, I prepared a SQL query:</p>
<pre><code>SELECT *
FROM club_training a
LEFT JOIN
    (SELECT training_id, count(*)
     FROM club_booking
     GROUP BY training_id) b
ON a.id = b.training_id
</code></pre>
<p>Could you give me some advice on how to do it in Django? I used <code>Booking.objects.all().values('training_id').annotate(booked_amount=Count('training_id'))</code> in my code, but the result is that all count values for all trainings are displayed for each training on the list. Only the one count value appropriate to each training should be displayed.</p>
<p><a href="http://i.stack.imgur.com/nfkLF.png" rel="nofollow"><img src="http://i.stack.imgur.com/nfkLF.png" alt="enter image description here"></a></p>
<p>views.py</p>
<pre><code>class HomePageView(TemplateView):
    """Home Page with list of trainings"""
    template_name = 'club/training_list.html'

    def get_context_data(self, **kwargs):
        now = datetime.datetime.now()
        context = super(HomePageView, self).get_context_data(**kwargs)
        context['trainings'] = Training.objects.filter(state="A", training_date__gte=now).order_by('training_date', 'start_time')
        for each_training in context['trainings']:
            each_training.diff = each_training.availability - each_training.counter
            each_training.counter = Booking.objects.all().values('training_id').annotate(booked_amount=Count('training_id'))
        return context
</code></pre>
<p>models.py</p>
<pre><code>class Training(models.Model):
    """Class for plan training"""
    STATE = (
        ('A', 'Active'),
        ('I', 'Inactive'),
    )
    name = models.ForeignKey('TrnDesc')
    instructor = models.ForeignKey('Instructor')
    start_time = models.TimeField(blank=True)
    end_time = models.TimeField(default='00:00:00')
    availability = models.PositiveIntegerField(default=15)
    state = models.CharField(max_length=1, choices=STATE, default='A')
    training_date = models.DateField(default=date.today)
    counter = models.PositiveIntegerField(default=0)

    def __str__(self):
        return self.name.name


class Booking(models.Model):
    """Data of people which book fitness classes"""
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    email = models.CharField(max_length=50)
    phone = models.CharField(max_length=10)
    training = models.ForeignKey('Training')

    def __str__(self):
        return self.training.name.name
</code></pre>
<p>training_list.html</p>
<pre><code>{% extends 'club/base.html' %}
{% block content %}
<ul class="nav nav-pills">
  <li role="presentation" class="active"><a href="#">Fitness Classes</a></li>
  <li role="presentation"><a href="#">Join Us</a></li>
  <li role="presentation"><a href="#">Contact Us</a></li>
</ul>
<br></br>
{% regroup trainings by training_date as date_list %}
{% for date in date_list %}
<div class="panel panel-default">
  <div class="panel-heading">{{date.grouper|date:"l, d F o"}}</div>
  <table class="table">
    <tr>
      <th style="width: 20%">Training name</th>
      <th style="width: 30%">Training description</th>
      <th style="width: 10%">Instructor</th>
      <th style="width: 10%">Start time</th>
      <th style="width: 10%">End time</th>
      <th style="width: 10%">Left</th>
      <th style="width: 10%">Test_counter</th>
      <th style="width: 10%">Actions</th>
    </tr>
    {% for training in date.list %}
    <tr>
      <td>{{training.name}}</td>
      <td>{{training.name.desc}}</td>
      <td>{{training.instructor}}</td>
      <td>{{training.start_time|time:"H:i"}}</td>
      <td>{{training.end_time|time:"H:i"}}</td>
      <td>{{training.diff}}</td>
      <td>{{training.counter}}</td>
      <td><a href="{% url 'book' training_id=training.pk%}"><button type="button" class="btn btn-primary">Book</button></a></td>
    </tr>
    {% endfor %}
  </table>
</div>
{% endfor %}
{% endblock %}
</code></pre>
| 2 | 2016-08-02T18:38:49Z | 38,734,992 | <pre><code> for each_training in context['trainings']:
     each_training.diff = each_training.availability - each_training.counter
     each_training.counter = Booking.objects.filter(training_id=each_training.id).count()  # just modify this line
 return context
</code></pre>
| 1 | 2016-08-03T05:21:16Z | [
"python",
"sql",
"django"
] |
Recursive algorithm - extend vs creating new list with '+' | 38,728,182 | <p>So I have a recursive function below that keeps replacing <code>'_'</code> characters with each lowercase letter in the alphabet, until all combinations of those possible lowercase letters are substituted for the <code>'_'</code> characters.</p>
<p><strong>Simple Example</strong>: </p>
<blockquote>
<p><code>repl_underscores('__A')</code> </p>
<p><code>>>>[a_A,b_A,c_A......aaA,abA,acA....zzA]</code></p>
</blockquote>
<p>I had this function working with extend to build up the list, which as the comment below mentions, modifies the same existing list in-place repeatedly and accomplishes the job.</p>
<p>For the sake of practice, I wanted to re-write to build a new list on each call and pass that result to the successive recursive calls, with the goal of getting the same result. </p>
<p>It's not working and I know it has to do with the fact that I'm building up a new list on each call, but I thought that since I was passing in the built-up version on each recursive call that I would be OK since those calls would then be informed of changes.</p>
<p>I'm having trouble finding out where it is breaking. I know I can get it to work by modifying the same list (either through mutable default, global variable, or extend), but I would like to build up a new clean list each time I recurse.</p>
<pre><code>def repl_underscores(letters,res=None):
    if res is None: res = list()
    if '_' not in letters: return res
    repl = [letters.replace('_',letter,1) for letter in string.ascii_lowercase]
    res = res + repl  # using += works, due to extending being a mutation (same list referenced at each call)
    for each in repl:
        repl_underscores(each,res)  # trying to pass modified list to keep building up
    return res

print(repl_underscores('__DER'))
</code></pre>
| 1 | 2016-08-02T18:42:24Z | 38,728,256 | <pre><code>res = res + repl #using += works, due to extending being a mutation (same list referenced at each call)
</code></pre>
<p>This line is the issue, as you seem to have guessed. Each time, in the recursive call, it assigns a <em>new local list</em>, which doesn't keep the reference to the old one, so the caller isn't notified of changes.</p>
<p>Luckily your function already returns the list, so let's just capture that:</p>
<pre><code> res = repl_underscores(each,res) #trying to pass modified list to keep building up
</code></pre>
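<p>The same pattern in miniature (a toy sketch, not the question's code): when the accumulator is rebound to a new list instead of mutated, each caller has to capture the return value:</p>

```python
def collect(n, res=None):
    # Rebinds res to a NEW list each call, so mutation-style
    # call-and-forget would lose the results.
    if res is None:
        res = []
    if n == 0:
        return res
    res = res + [n]            # new list; the caller's reference is unchanged
    res = collect(n - 1, res)  # so capture the return value
    return res

print(collect(3))  # [3, 2, 1]
```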
| 0 | 2016-08-02T18:47:12Z | [
"python",
"algorithm",
"recursion",
"combinations"
] |
Recursive algorithm - extend vs creating new list with '+' | 38,728,182 | <p>So I have a recursive function below that keeps replacing <code>'_'</code> characters with each lowercase letter in the alphabet, until all combinations of those possible lowercase letters are substituted for the <code>'_'</code> characters.</p>
<p><strong>Simple Example</strong>: </p>
<blockquote>
<p><code>repl_underscores('__A')</code> </p>
<p><code>>>>[a_A,b_A,c_A......aaA,abA,acA....zzA]</code></p>
</blockquote>
<p>I had this function working with extend to build up the list, which as the comment below mentions, modifies the same existing list in-place repeatedly and accomplishes the job.</p>
<p>For the sake of practice, I wanted to re-write to build a new list on each call and pass that result to the successive recursive calls, with the goal of getting the same result. </p>
<p>It's not working and I know it has to do with the fact that I'm building up a new list on each call, but I thought that since I was passing in the built-up version on each recursive call that I would be OK since those calls would then be informed of changes.</p>
<p>I'm having trouble finding out where it is breaking. I know I can get it to work by modifying the same list (either through mutable default, global variable, or extend), but I would like to build up a new clean list each time I recurse.</p>
<pre><code>def repl_underscores(letters,res=None):
if res is None: res = list()
if '_' not in letters: return res
repl = [letters.replace('_',letter,1) for letter in string.ascii_lowercase]
res = res + repl #using += works, due to extending being a mutation (same list referenced at each call)
for each in repl:
repl_underscores(each,res) #trying to pass modified list to keep building up
return res
print(repl_underscores('__DER'))
</code></pre>
 | 1 | 2016-08-02T18:42:24Z | 38,728,322 | <p>It's better not to modify the function arguments but to build the result from the returned values (more functional style). With a slight modification your code works as intended.</p>
<pre><code>import string

def repl_underscores(letters):
res = list()
if '_' not in letters: return res
repl = [letters.replace('_',letter,1) for letter in string.ascii_lowercase]
res += repl
for each in repl:
res += repl_underscores(each)
return res
print(repl_underscores('__DER'))
</code></pre>
| 1 | 2016-08-02T18:51:24Z | [
"python",
"algorithm",
"recursion",
"combinations"
] |
Recursive algorithm - extend vs creating new list with '+' | 38,728,182 | <p>So I have a recursive function below that keeps replacing <code>'_'</code> characters with each lowercase letter in the alphabet, until all combinations of those possible lowercase letters are substituted for the <code>'_'</code> characters.</p>
<p><strong>Simple Example</strong>: </p>
<blockquote>
<p><code>repl_underscores('__A')</code> </p>
<p><code>>>>[a_A,b_A,c_A......aaA,abA,acA....zzA]</code></p>
</blockquote>
<p>I had this function working with extend to build up the list, which as the comment below mentions, modifies the same existing list in-place repeatedly and accomplishes the job.</p>
<p>For the sake of practice, I wanted to re-write to build a new list on each call and pass that result to the successive recursive calls, with the goal of getting the same result. </p>
<p>It's not working and I know it has to do with the fact that I'm building up a new list on each call, but I thought that since I was passing in the built-up version on each recursive call that I would be OK since those calls would then be informed of changes.</p>
<p>I'm having trouble finding out where it is breaking. I know I can get it to work by modifying the same list (either through mutable default, global variable, or extend), but I would like to build up a new clean list each time I recurse.</p>
<pre><code>def repl_underscores(letters,res=None):
if res is None: res = list()
if '_' not in letters: return res
repl = [letters.replace('_',letter,1) for letter in string.ascii_lowercase]
res = res + repl #using += works, due to extending being a mutation (same list referenced at each call)
for each in repl:
repl_underscores(each,res) #trying to pass modified list to keep building up
return res
print(repl_underscores('__DER'))
</code></pre>
| 1 | 2016-08-02T18:42:24Z | 38,728,358 | <p>Just use a function without recursion:</p>
<pre><code>import string
from itertools import product, chain

def repl_underscores(letters):
    result = []
    for chars in product(string.ascii_lowercase, repeat=letters.count('_')):
chars = chain(chars,[''])
result.append(''.join(a+next(chars) for a in letters.split('_')))
return result
</code></pre>
| -1 | 2016-08-02T18:53:31Z | [
"python",
"algorithm",
"recursion",
"combinations"
] |
Returning string matches between two lists for a given number of elements in a third list | 38,728,204 | <p>I've got a feeling that I will be told to go to the 'beginner's guide' or what have you but I have this code here that goes</p>
<pre><code>does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
while 5 > len(work):
for nope in it:
if nope in does:
work.append(nope)
print (work)
</code></pre>
<p>And I get</p>
<pre><code>['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
</code></pre>
<p>Why is this? And how do I convince it to return</p>
<pre><code>['my', 'mother', 'told', 'me']
</code></pre>
| 11 | 2016-08-02T18:43:57Z | 38,728,248 | <p>You could try something like this:</p>
<pre><code>for nope in it:
if len(work) < 5 and nope in does:
work.append(nope)
else:
break
</code></pre>
<p>The problem with your code is that it only checks <code>work</code>'s length after the inner <code>for</code> loop has already run through all the items of <code>it</code> and appended every one that is in <code>does</code>.</p>
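<p>For the four-element result asked for in the question, the same early-exit idea can also be written lazily with <code>itertools.islice</code> (a side note, not part of the original answer):</p>

```python
from itertools import islice

does = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
it = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']

# take at most 4 matches, stopping as soon as the 4th is found
work = list(islice((nope for nope in it if nope in does), 4))
print(work)  # ['my', 'mother', 'told', 'me']
```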
| 8 | 2016-08-02T18:46:38Z | [
"python",
"list",
"set-intersection"
] |
Returning string matches between two lists for a given number of elements in a third list | 38,728,204 | <p>I've got a feeling that I will be told to go to the 'beginner's guide' or what have you but I have this code here that goes</p>
<pre><code>does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
while 5 > len(work):
for nope in it:
if nope in does:
work.append(nope)
print (work)
</code></pre>
<p>And I get</p>
<pre><code>['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
</code></pre>
<p>Why is this? And how do I convince it to return</p>
<pre><code>['my', 'mother', 'told', 'me']
</code></pre>
| 11 | 2016-08-02T18:43:57Z | 38,728,337 | <p>You can do:</p>
<pre><code>does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
for nope in it:
if nope in does:
work.append(nope)
work = work[:4]
print (work)
</code></pre>
<p>It just builds the whole list without checking the length, then slices it to keep only the first 4 elements.</p>
| 1 | 2016-08-02T18:52:22Z | [
"python",
"list",
"set-intersection"
] |
Returning string matches between two lists for a given number of elements in a third list | 38,728,204 | <p>I've got a feeling that I will be told to go to the 'beginner's guide' or what have you but I have this code here that goes</p>
<pre><code>does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
while 5 > len(work):
for nope in it:
if nope in does:
work.append(nope)
print (work)
</code></pre>
<p>And I get</p>
<pre><code>['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
</code></pre>
<p>Why is this? And how do I convince it to return</p>
<pre><code>['my', 'mother', 'told', 'me']
</code></pre>
| 11 | 2016-08-02T18:43:57Z | 38,728,359 | <p>Alternatively, to stay a little closer to your original logic:</p>
<pre><code>i = 0
while 4 > len(work) and i < len(it):
nope = it[i]
if nope in does:
work.append(nope)
i += 1
# ['my', 'mother', 'told', 'me']
</code></pre>
| 1 | 2016-08-02T18:53:41Z | [
"python",
"list",
"set-intersection"
] |
Returning string matches between two lists for a given number of elements in a third list | 38,728,204 | <p>I've got a feeling that I will be told to go to the 'beginner's guide' or what have you but I have this code here that goes</p>
<pre><code>does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
while 5 > len(work):
for nope in it:
if nope in does:
work.append(nope)
print (work)
</code></pre>
<p>And I get</p>
<pre><code>['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
</code></pre>
<p>Why is this? And how do I convince it to return</p>
<pre><code>['my', 'mother', 'told', 'me']
</code></pre>
 | 11 | 2016-08-02T18:43:57Z | 39,804,847 | <p>Just for fun, here's a one-liner with no imports:</p>
<pre><code>does = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
it = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
work = [match for match, _ in zip((nope for nope in does if nope in it), range(4))]
</code></pre>
| 0 | 2016-10-01T09:08:30Z | [
"python",
"list",
"set-intersection"
] |
RarFile Python Module | 38,728,243 | <p>I am trying to make a simple bruteforcer for rar files. My code is...</p>
<pre><code>import rarfile
file = input("Password List Directory: ")
rarFile = input("Rar File: ")
passwordList = open(file,"r")
for i in passwordList:
try :
rarfile.read(rarFile, psw=i)
print('[+] Password Found: '+i)
except Exception as e:
print('[-] '+i+' is not a password ')
passwordList.close()
</code></pre>
<p>I think this has to do with my use of the module, because when I input a password list that I am 10000% sure contains the password to the rarFile, it prints the exception.</p>
 | 0 | 2016-08-02T18:46:28Z | 38,728,449 | <p>The real problem here is that you are catching all exceptions, not just the one you want. Use <code>except rarfile.PasswordRequired:</code> instead; that will show you the error is not about a missing password at all, because there is no function <code>read</code> in the rarfile module.</p>
<p>Have a look at some <a href="https://rarfile.readthedocs.io/en/latest/api.html#module-rarfile" rel="nofollow">Documentation</a>. Rar encryption is per file, not per archive.</p>
<p>You need to create a object from the RarFile class and try the password on each file in the archive. (or just the first if you know that is encrypted)</p>
<pre><code>import rarfile
file = input("Password List Directory: ")
rarFilename = input("Rar File: ")
rf = rarfile.RarFile(rarFilename)
passwordList = open(file,"r")
first_file = rf.infolist()[0]  # infolist() returns a list of members; take the first
for i in passwordList:
password = i.rstrip()
try:
rf.open(first_file, psw=password)
print(password, "found")
except rarfile.PasswordRequired:
print(password,"is not a password")
</code></pre>
<p>When you open and read lines from a file, the "new line" character is kept
at the end of the line. This needs to be stripped from each line.</p>
<pre><code>for i in passwordList:
password = i.rstrip()
try :
rarfile.read(rarFile, psw=password)
print('[+] Password Found: '+password)
</code></pre>
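<p>The trailing-newline issue is easy to see in isolation (a quick illustration, separate from the rarfile code):</p>

```python
line = "secret123\n"          # iterating over a file yields lines like this
print(repr(line))             # 'secret123\n'
print(repr(line.rstrip()))    # 'secret123' -- trailing newline removed
```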
| 1 | 2016-08-02T18:57:34Z | [
"python",
"python-3.x",
"zipfile",
"rar"
] |
Pandas cannot load data, csv encoding mystery | 38,728,366 | <p>I am trying to load a dataset into pandas and cannot get seem to get past step 1. I am new so please forgive if this is obvious, I have searched previous topics and not found an answer. The data is mostly in Chinese characters, which may be the issue.</p>
<p>The .csv is very large, and can be found here: <a href="http://weiboscope.jmsc.hku.hk/datazip/" rel="nofollow">http://weiboscope.jmsc.hku.hk/datazip/</a>
I am trying on week 1.</p>
<p>In my code below, I identify 3 types of decoding I attempted, including an attempt to see what encoding was used </p>
<pre><code>import pandas
import chardet
import os
#this is what I tried to start
data = pandas.read_csv('week1.csv', encoding="utf-8")
#spits out error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9a in position 69: invalid start byte
#Code to check encoding -- this spits out ascii
bytes = min(32, os.path.getsize('week1.csv'))
raw = open('week1.csv', 'rb').read(bytes)
chardet.detect(raw)
#so i tried this! it also fails, which isn't that surprising since i don't know how you'd do chinese chars in ascii anyway
data = pandas.read_csv('week1.csv', encoding="ascii")
#spits out error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 0: ordinal not in range(128)
#for god knows what reason this allows me to load data into pandas, but definitely not correct encoding because when I print out first 5 lines its gibberish instead of Chinese chars
data = pandas.read_csv('week1.csv', encoding="latin1")
</code></pre>
<p>Any help would be greatly appreciated!</p>
<p>EDIT: The answer provided by @Kristof does in fact work, as does the program a colleague of mine put together yesterday:</p>
<pre><code>import csv
import pandas as pd
def clean_weiboscope(file, nrows=0):
res = []
with open(file, 'r', encoding='utf-8', errors='ignore') as f:
reader = csv.reader(f)
for i, row in enumerate(f):
row = row.replace('\n', '')
if nrows > 0 and i > nrows:
break
if i == 0:
headers = row.split(',')
else:
res.append(tuple(row.split(',')))
df = pd.DataFrame(res)
return df
my_df = clean_weiboscope('week1.csv', nrows=0)
</code></pre>
<p>I also wanted to add for future searchers that this is the Weiboscope open data for 2012.</p>
| 1 | 2016-08-02T18:53:51Z | 38,744,675 | <p>It seems that there's something very wrong with the input file. There are encoding errors throughout.</p>
<p>One thing you <em>could</em> do, is to read the CSV file as a binary, decode the binary string and replace the erroneous characters.</p>
<p>Example (<a href="http://stackoverflow.com/a/20014805/3165737">source</a> for the chunk-reading code):</p>
<pre><code>in_filename = 'week1.csv'
out_filename = 'repaired.csv'
from functools import partial
chunksize = 100*1024*1024 # read 100MB at a time
# Decode with UTF-8 and replace undecodable bytes with the U+FFFD replacement character
with open(in_filename, 'rb') as in_file:
with open(out_filename, 'w') as out_file:
for byte_fragment in iter(partial(in_file.read, chunksize), b''):
out_file.write(byte_fragment.decode(encoding='utf_8', errors='replace'))
# Now read the repaired file into a dataframe
import pandas as pd
df = pd.read_csv(out_filename)
df.shape
>> (4790108, 11)
df.head()
</code></pre>
<p><a href="http://i.stack.imgur.com/9ZFfT.png" rel="nofollow"><img src="http://i.stack.imgur.com/9ZFfT.png" alt="sample output"></a></p>
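<p>What <code>errors='replace'</code> actually does can be demonstrated on a tiny byte string (an aside; the repair code is unchanged):</p>

```python
raw = b'caf\xe9'                               # not valid UTF-8 (lone 0xE9 lead byte)
text = raw.decode('utf-8', errors='replace')
print(text)                                    # 'caf\ufffd': the bad byte becomes U+FFFD
```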
| 0 | 2016-08-03T13:11:37Z | [
"python",
"pandas",
"chardet"
] |
Not being able to import modules using Python | 38,728,379 | <p>I've tried importing modules and I've only had bad luck until now. None of my imports work as Python doesn't seem to be able to find them. If I do paste the import directory on the same folder as my script it'll run but otherwise it won't.</p>
<p>I ran:</p>
<pre><code>sys.path
</code></pre>
<p>And got this awkward result:</p>
<pre><code>['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']
</code></pre>
<p>And I highly believe my Python installation wasn't that well performed. What's the best turnaround on this ?</p>
<p>Just seems so confusing. Thank you!</p>
| -7 | 2016-08-02T18:54:27Z | 38,728,456 | <p>I would guess that you are trying to import a non-standard library. If that is what you want to do, you have to first run </p>
<pre><code>pip install foo
</code></pre>
<p>in your system's appropriate console. On windows that would be either cmd or powershell.</p>
| 1 | 2016-08-02T18:57:59Z | [
"python"
] |
Not being able to import modules using Python | 38,728,379 | <p>I've tried importing modules and I've only had bad luck until now. None of my imports work as Python doesn't seem to be able to find them. If I do paste the import directory on the same folder as my script it'll run but otherwise it won't.</p>
<p>I ran:</p>
<pre><code>sys.path
</code></pre>
<p>And got this awkward result:</p>
<pre><code>['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']
</code></pre>
<p>And I highly believe my Python installation wasn't that well performed. What's the best turnaround on this ?</p>
<p>Just seems so confusing. Thank you!</p>
| -7 | 2016-08-02T18:54:27Z | 38,728,489 | <p>This question is poorly constructed and there is not enough information provided. But....if you are importing a module for use in your code, you should be writing something like...</p>
<pre><code>import module
module.usemodule(whatever)
</code></pre>
| -1 | 2016-08-02T18:59:42Z | [
"python"
] |
How to Check if a Value in a Column is Present in the Following Row? | 38,728,425 | <p>I have the following dataframe that I want to do some manipulation over it:</p>
<pre><code> AutoStudyID DiagDate DiagName
0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
2 48 2008-01-01 Brain
</code></pre>
<p>How can I use the power of pandas to check for the case where an <code>AutoStudyID</code> is followed directly by the same <code>AutoStudyID</code> in the next row? </p>
<p>For example like the following two rows:</p>
<pre><code>0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
</code></pre>
<p>My ultimate goal is to make the dataframe have only one unique AutoStudyID per row. The data of the duplicate AutoStudyID rows should be merged into that single row by creating new columns; the output should be something like this:</p>
<pre><code> AutoStudyID DiagDate DiagName DiagDate2 DiageName2
0 34 2010-09-23 Lung 2001-01-01 Skin
1 48 2008-01-01 Brain
</code></pre>
<p>Any idea how to tackle this problem?</p>
| -1 | 2016-08-02T18:56:37Z | 38,728,795 | <p>the following will check whether the value in the next row (for numeric and datetime dtypes) is the same?</p>
<pre><code>In [203]: df.AutoStudyID.diff() == 0
Out[203]:
0 False
1 True
2 False
Name: AutoStudyID, dtype: bool
In [204]: df[df.AutoStudyID.diff() == 0]
Out[204]:
AutoStudyID DiagDate DiagName
1 34 2001-01-01 Skin
</code></pre>
<p>or a bit more generic way (it'll work also for <code>strings</code>):</p>
<pre><code>In [206]: df.AutoStudyID.shift() == df.AutoStudyID
Out[206]:
0 False
1 True
2 False
Name: AutoStudyID, dtype: bool
</code></pre>
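<p>Building on the duplicate detection, one possible way to reach the one-row-per-id layout asked for in the question is to number the duplicates with <code>cumcount</code> and unstack them into extra columns. This is my sketch, not part of the original answer:</p>

```python
import pandas as pd

df = pd.DataFrame({'AutoStudyID': [34, 34, 48],
                   'DiagDate': ['2010-09-23', '2001-01-01', '2008-01-01'],
                   'DiagName': ['Lung', 'Skin', 'Brain']})

df['n'] = df.groupby('AutoStudyID').cumcount()          # 0 for an id's first row, 1 for its duplicate
wide = df.set_index(['AutoStudyID', 'n']).unstack('n')  # spread duplicates into extra columns
wide.columns = ['%s%s' % (col, n + 1 if n else '') for col, n in wide.columns]
print(wide.reset_index())
```

Ids without a duplicate simply get NaN in the extra columns.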
| 1 | 2016-08-02T19:18:55Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
How to Check if a Value in a Column is Present in the Following Row? | 38,728,425 | <p>I have the following dataframe that I want to do some manipulation over it:</p>
<pre><code> AutoStudyID DiagDate DiagName
0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
2 48 2008-01-01 Brain
</code></pre>
<p>How can I use the power of pandas to check for the case where an <code>AutoStudyID</code> is followed directly by the same <code>AutoStudyID</code> in the next row? </p>
<p>For example like the following two rows:</p>
<pre><code>0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
</code></pre>
<p>My ultimate goal is to make the dataframe have only one unique AutoStudyID per row. The data of the duplicate AutoStudyID rows should be merged into that single row by creating new columns; the output should be something like this:</p>
<pre><code> AutoStudyID DiagDate DiagName DiagDate2 DiageName2
0 34 2010-09-23 Lung 2001-01-01 Skin
1 48 2008-01-01 Brain
</code></pre>
<p>Any idea how to tackle this problem?</p>
| -1 | 2016-08-02T18:56:37Z | 38,729,006 | <p>Iterate over the rows with <code>iterrows()</code> and compare the field AutoStudyID with the last value found.</p>
<pre><code>last = None
for i, row in df.iterrows():
if last == df['AutoStudyID'][i]:
print('I found it in position: %s' % i)
else:
last = df['AutoStudyID'][i]
</code></pre>
| 0 | 2016-08-02T19:31:45Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
How to Check if a Value in a Column is Present in the Following Row? | 38,728,425 | <p>I have the following dataframe that I want to do some manipulation over it:</p>
<pre><code> AutoStudyID DiagDate DiagName
0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
2 48 2008-01-01 Brain
</code></pre>
<p>How can I use the power of pandas to check for the case where an <code>AutoStudyID</code> is followed directly by the same <code>AutoStudyID</code> in the next row? </p>
<p>For example like the following two rows:</p>
<pre><code>0 34 2010-09-23 Lung
1 34 2001-01-01 Skin
</code></pre>
<p>My ultimate goal is to make the dataframe have only one unique AutoStudyID per row. The data of the duplicate AutoStudyID rows should be merged into that single row by creating new columns; the output should be something like this:</p>
<pre><code> AutoStudyID DiagDate DiagName DiagDate2 DiageName2
0 34 2010-09-23 Lung 2001-01-01 Skin
1 48 2008-01-01 Brain
</code></pre>
<p>Any idea how to tackle this problem?</p>
| -1 | 2016-08-02T18:56:37Z | 38,729,094 | <p>Try adding a new column with the following AutoStudyID:</p>
<pre><code>df['next'] = df.AutoStudyID.shift(-1)
df
AutoStudyID DiagDate DiagName next
0 34 2010-09-23 Lung 34
1 34 2001-01-01 Skin 48
2 48 2008-01-01 Brain NaN
</code></pre>
<p>Each row will have the next's id also. The rows should be sorted by AutoStudyID.</p>
<p>You can also try to group by AutoStudyID:</p>
<pre><code>df.groupby('AutoStudyID')
</code></pre>
<p>For Example:</p>
<pre><code>for group in df.groupby('AutoStudyID'):
print(group)
</code></pre>
<p>You get these groups, you can do what you need:</p>
<pre><code>('34', AutoStudyID DiagDate DiagName next
0 34 2010-09-23 Lung 34
1 34 2001-01-01 Skin 48)
('48', AutoStudyID DiagDate DiagName next
2 48 2008-01-01 Brain NaN)
</code></pre>
| 1 | 2016-08-02T19:37:14Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
Python Method not returning string | 38,728,462 | <p>I am trying to return a string from one of my methods. The method is called in my <code>__init__</code> method. The command I have in my <code>__init__</code> is:</p>
<pre><code>self.downloadHTTP()
</code></pre>
<p>I then in the method do this, the print is there for testing to show that the string is valid before I return it:</p>
<pre><code>print "workspace/data/formatter/input/" + self.filename
return "workspace/data/formatter/input/" + self.filename
</code></pre>
<p>But the string returned is not that, instead I get:</p>
<pre><code><__main__.downloader object at 0x10be6e890>
</code></pre>
<p>But in the console you can see my string is valid as the print works:</p>
<pre><code>workspace/data/formatter/input/30009_2-8-2016_unprocessed.csv
<__main__.downloader object at 0x10be6e890>
</code></pre>
<p>EDIT: </p>
<p>Included the whole class.</p>
<pre><code>class downloader(object):
def __init__(self,settings): #Configures settings and executes appropriate fucntion.
self.fetch_type = settings['item']['import_settings']['fetch_type']
#Setting filename of local copy
i = datetime.datetime.now()
self.filename = settings['item']['partner_id'] + "_%s-%s-%s_unprocessed." % (i.day, i.month, i.year) + settings['item']['import_settings']['feed_format']
if self.fetch_type == "http":
# downloadHTTP
self.fetch_url = settings['item']['import_settings']['fetch_url']
self.downloadHTTP()
def downloadHTTP(self): #TESTED
urllib.urlretrieve(self.fetch_url,"workspace/data/formatter/input/"+self.filename)
return "workspace/data/formatter/input/" + self.filename
</code></pre>
<p>And this is how I instantiate:</p>
<pre><code>settings_path = "workspace/data/configs/feed/FTP.json"
settings_json = json.loads(open(settings_path, 'r').read())
print downloader(settings_json)
</code></pre>
<p>The reason why I'm doing this is I want to pass it to another function.</p>
 | -1 | 2016-08-02T18:58:28Z | 38,942,684 | <p>Thank you for your input in the comments. The problem was that I was essentially printing the object itself, and that I was performing all my tasks in the <strong>__init__</strong> method. I solved this by separating the download into its own method and returning the string from that method.</p>
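<p>A stripped-down sketch of that separation (hypothetical names; the real class keeps its settings and urllib handling):</p>

```python
class Downloader(object):
    def __init__(self, filename):
        self.filename = filename            # __init__ only stores state

    def download_http(self):
        # the actual urllib download would happen here
        return "workspace/data/formatter/input/" + self.filename

d = Downloader("30009_2-8-2016_unprocessed.csv")
print(d.download_http())  # the path string, not the object repr
```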
| 0 | 2016-08-14T13:24:12Z | [
"python"
] |
Inputs not a sequence wth RNNs and TensorFlow | 38,728,501 | <p>I have some very basic lstm code with tensorflow and python, where my code is </p>
<p><code>output = tf.nn.rnn(tf.nn.rnn_cell.BasicLSTMCell(10), input_flattened, initial_state=tf.placeholder("float", [None, 20]))</code></p>
<p>where my input flattened is shape <code>[?, 5, 22501]</code></p>
<p>I'm getting the error <code>TypeError: inputs must be a sequence</code> on the <code>state</code> parameter of the lstm, and I'm ripping my hair out trying to find out why it is giving me this error. Any help would be greatly appreciated.</p>
| 0 | 2016-08-02T19:00:59Z | 38,740,100 | <p>I think when you use the tf.nn.rnn function it is expecting a list of tensors and not just a single tensor. You should unpack input in the time direction so that it is a list of tensors of shape [?, 22501]. You could also use tf.nn.dynamic_rnn which I think can handle this unpack for you.</p>
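<p>The "sequence" being asked for is a Python list with one [batch, features] tensor per time step. In plain-Python terms the unpacking is just a slice per time step (an illustration of the shape change only, no TensorFlow involved):</p>

```python
# a toy batch shaped [batch=2, time=3, features=2], as nested lists
batch = [[[1, 2], [3, 4], [5, 6]],
         [[7, 8], [9, 10], [11, 12]]]

# unpack along the time axis: one [batch, features] slice per time step
steps = [[sample[t] for sample in batch] for t in range(3)]
print(len(steps))   # 3
print(steps[0])     # [[1, 2], [7, 8]]
```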
| 1 | 2016-08-03T09:48:43Z | [
"python",
"neural-network",
"tensorflow",
"recurrent-neural-network"
] |
python office365 create meeting event | 38,728,602 | <p>I have this python code below to create a meeting event, and it is working. I plan to incorporate this script with the Web Form submision where user enters some basic information such as Subject, attendees, and meeting date/time, and then the Python script will create a meeting event based on submitted info from the web form. That I have no problem to accomplish, but the problem is the timezone of the meeting.</p>
<p>As you can see the Python script requires Start/End time as this ""2016-08-03T15:00:00-07:00" (the -07:00 is for PDT time). However, the web form does not know what timezone of current user (users could be in West, Mountain, Central, or East timezone). It is too complicate to figure out the timezone is -7(PDT), -8(PST), -6(CT)....</p>
<p>Is there a way to query the current time zone setting of the person who creates the meeting based on user login? Then convert that timezone to number (-7 for PDT, -8 for PST, -6 CT...)... so the "StartTimeZone" and "EndTimeZone" have the correct time?</p>
<pre><code># Set the request parameters
url = 'https://outlook.office365.com/api/v1.0/me/events?$Select=Start,End'
user = 'user1@domain.com'
pwd = getpass.getpass('Please enter your AD password: ')
# Create JSON payload
data = {
"Subject": "Testing Outlock Event",
"Body": {
"ContentType": "HTML",
"Content": "Test Content"
},
"Start": "2016-08-03T15:00:00-07:00",
"StartTimeZone": "Pacific Standard Time",
"End": "2016-08-03T16:00:00-07:00",
"EndTimeZone": "Pacific Standard Time",
"Attendees": [
{
"EmailAddress": {
"Address": "attendee1@domain.com",
"Name": "User2"
},
"Type": "Required" },
{
"EmailAddress": {
"Address": "attendee2@domain.com",
"Name": "User3"
},
"Type": "Optional" }
]
}
json_payload = json.dumps(data)
# Build the HTTP request
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(url, data=json_payload)
auth = base64.encodestring('%s:%s' % (user, pwd)).replace('\n', '')
request.add_header('Authorization', 'Basic %s' % auth)
request.add_header('Content-Type', 'application/json')
request.add_header('Accept', 'application/json')
request.get_method = lambda: 'POST'
# Perform the request
result = opener.open(request)
</code></pre>
| 1 | 2016-08-02T19:06:15Z | 38,730,234 | <p>You can get user's timezone offset from UTC with JavaScript on the client and then pass this value to the server during form submission. See <a href="http://stackoverflow.com/a/1809974/1438906">this</a> answer.</p>
<p>UPDATE:</p>
<p>Assuming your user's locale is set up correctly, <code>getTimezoneOffset()</code> will return timezone offset from UTC, in minutes, for the current locale (<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTimezoneOffset" rel="nofollow">source</a>). You don't have to worry about daylight saving time. Use UTC time as the base for conversion. When you get the offset, convert the meeting time specified by the user to UTC using this offset, then convert UTC time to the timezone used on the server.</p>
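<p>Once the client-side offset (in minutes) reaches the server, turning it into the ±HH:MM suffix used in the Start/End strings is straightforward. A sketch; note that getTimezoneOffset() returns UTC minus local time, so its sign must be flipped:</p>

```python
def offset_suffix(js_offset_minutes):
    # getTimezoneOffset() returns UTC - local, e.g. PDT (UTC-7) gives +420
    total = -js_offset_minutes
    sign = '+' if total >= 0 else '-'
    hours, minutes = divmod(abs(total), 60)
    return '%s%02d:%02d' % (sign, hours, minutes)

print(offset_suffix(420))   # '-07:00' (PDT)
print(offset_suffix(-330))  # '+05:30' (IST)
```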
<p>UPDATE 2:</p>
<p>I realized that in fact you should care about daylight saving time, because <code>getTimezoneOffset()</code> returns the <strong>current</strong> offset while the user can create an event somewhere in the future, after the daylight saving time will change this offset and the conversion will become inaccurate. So the right solution is not to use the current UTC offset directly, but use it instead to get the real timezone that you will then use for conversion. <a href="http://pellepim.bitbucket.org/jstz/" rel="nofollow">jsTimezoneDetect</a> library can do this for you.</p>
| 1 | 2016-08-02T20:52:23Z | [
"python",
"office365"
] |
Django how to relate a foreign key object on form save | 38,728,669 | <p>How can I create a relationship between my object and foreign key from a form submit? You can skip down to the very last line of code to see my issue.</p>
<h2>edit</h2>
<p>My user will not have an option to select the related model object. The related model object will be exclusively the one identified by the <code>id</code> which is determined from the <code>URL</code>, i.e. <code>domain.com/myview/100</code></p>
<p><code>models.py</code></p>
<pre><code>class Activity(models.Model):
id = models.AutoField(primary_key=True)
class Contact(models.Model):
id = models.AutoField(primary_key=True)
firstname = models.CharField(max_length=100, null=False, blank=False)
activity = models.ManyToManyField(Activity, blank=True)
</code></pre>
<p><code>forms.py</code></p>
<pre><code>class ContactForm(forms.ModelForm):
firstname = forms.CharField(max_length=100,
widget=forms.TextInput(attrs={'placeholder':'John'}))
class Meta:
model = Contact
</code></pre>
<p><code>views.py</code></p>
<pre><code>def index(request, id=None):
if id:
if request.method == 'POST':
contact_form = ContactForm(request.POST)
if contact_form.is_valid():
contact = contact_form.save()
# link contact to activity here, activity pk is 'id'
</code></pre>
| 0 | 2016-08-02T19:10:31Z | 38,728,880 | <p>I finally see what are you trying to do, it's pretty easy:</p>
<pre><code>contact = contact_form.save()
# link contact to activity here, activity pk is 'id'
activity = Activity.objects.get(id=id)
contact.activity.add(activity)
</code></pre>
<p>I was confused before because you have <code>id</code> as view function parameter, which people usually use to update a <code>contact</code> because you also have a <code>ContactForm</code> in views.py method. You might make it more explicit using <code>activity_id</code> instead, and make the function name more explicit as well.</p>
| 2 | 2016-08-02T19:24:24Z | [
"python",
"django"
] |
Combine pandas dataframe cells in case of identical values | 38,728,705 | <p>I'm trying to make a new dataframe where, if a 'type' occurs more than once, the contents of the 'country' cells and the 'year' cells of those rows are combined in one row (the 'how' column behaves like the 'type' column: if the types are similar, the hows are as well).</p>
<p>My pd dataframe looks as follows, df:</p>
<pre><code> type country year how
0 't1' 'UK' '2009' 'S'
1 't2' 'GER' '2010' 'D'
2 't2' 'USA' '2011' 'D'
3 't3' 'AUS' '2012' 'F'
4 't4' 'CAN' '2013' 'R'
5 't5' 'SA' '2014' 'L'
6 't5' 'RU' '2015' 'L'
</code></pre>
<p>df2 should look like this:</p>
<pre><code> type country year how
0 't1' 'UK' '2009' 'S'
1 't2' 'GER, USA' '2010, 2011' 'D'
2 't3' 'AUS' '2012' 'F'
3 't4' 'CAN' '2013' 'R'
4 't5' 'SA, RU' '2014, 2015' 'L'
</code></pre>
<p>I'm pretty sure a group by on 'type' (or type and how) is necessary. Using first() for example removes the second of the similar type rows. Is there some handy way to instead combine the cells (strings)? Thanks in advance.</p>
| 2 | 2016-08-02T19:12:30Z | 38,728,741 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby/agg</code></a> with <code>', '.join</code> as the aggregator:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'country': ['UK', 'GER', 'USA', 'AUS', 'CAN', 'SA', 'RU'],
'how': ['S', 'D', 'D', 'F', 'R', 'L', 'L'],
'type': ['t1', 't2', 't2', 't3', 't4', 't5', 't5'],
'year': ['2009', '2010', '2011', '2012', '2013', '2014', '2015']})
result = df.groupby(['type','how']).agg(', '.join).reset_index()
</code></pre>
<p>yields</p>
<pre><code> type how country year
0 t1 S UK 2009
1 t2 D GER, USA 2010, 2011
2 t3 F AUS 2012
3 t4 R CAN 2013
4 t5 L SA, RU 2014, 2015
</code></pre>
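<p>One caveat worth noting (the numeric <code>year</code> values below are invented for illustration): <code>', '.join</code> only accepts strings, so any numeric column has to be cast before aggregating:</p>

```python
import pandas as pd

# Same shape of data, but 'year' is numeric this time.
df = pd.DataFrame({'type': ['t2', 't2', 't5'],
                   'how':  ['D', 'D', 'L'],
                   'country': ['GER', 'USA', 'SA'],
                   'year': [2010, 2011, 2014]})

# Cast the numeric column to str first, then aggregate as before.
result = (df.astype({'year': str})
            .groupby(['type', 'how'], as_index=False)
            .agg(', '.join))
```

<p>Without the cast, <code>', '.join</code> raises a <code>TypeError</code> on the integer column.</p>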
| 3 | 2016-08-02T19:15:23Z | [
"python",
"pandas"
] |
Combine pandas dataframe cells in case of identical values | 38,728,705 | <p>I'm trying to make a new dataframe where, if a 'type' occurs more than once, the contents of the 'country' cells and the 'year' cells of those rows are combined in one row (the 'how' column behaves like the 'type' column: if the types are similar, the hows are as well).</p>
<p>My pd dataframe looks as follows, df:</p>
<pre><code> type country year how
0 't1' 'UK' '2009' 'S'
1 't2' 'GER' '2010' 'D'
2 't2' 'USA' '2011' 'D'
3 't3' 'AUS' '2012' 'F'
4 't4' 'CAN' '2013' 'R'
5 't5' 'SA' '2014' 'L'
6 't5' 'RU' '2015' 'L'
</code></pre>
<p>df2 should look like this:</p>
<pre><code> type country year how
0 't1' 'UK' '2009' 'S'
1 't2' 'GER, USA' '2010, 2011' 'D'
2 't3' 'AUS' '2012' 'F'
3 't4' 'CAN' '2013' 'R'
4 't5' 'SA, RU' '2014, 2015' 'L'
</code></pre>
<p>I'm pretty sure a group by on 'type' (or type and how) is necessary. Using first() for example removes the second of the similar type rows. Is there some handy way to instead combine the cells (strings)? Thanks in advance.</p>
| 2 | 2016-08-02T19:12:30Z | 38,728,859 | <p>To get a list in each cell, as opposed to a string:</p>
<pre><code>def proc_df(df):
    df = df[['country', 'year']]
    # transpose so that each column's values come out as one list per column
    return pd.Series(df.T.values.tolist(), index=df.columns)

df.groupby(['how', 'type']).apply(proc_df)
<p><a href="http://i.stack.imgur.com/5JNXm.png" rel="nofollow"><img src="http://i.stack.imgur.com/5JNXm.png" alt="enter image description here"></a></p>
| 1 | 2016-08-02T19:23:06Z | [
"python",
"pandas"
] |
pandas install issues in gitlab and docker | 38,728,722 | <pre><code>Collecting numpy (from -r requirements.txt (line 21))
Downloading numpy-1.11.1.zip (4.7MB)
Collecting pandas (from -r requirements.txt (line 22))
Downloading pandas-0.18.1.tar.gz (7.3MB)
Complete output from command python setup.py egg_info:
Download error on https://pypi.python.org/simple/numpy/: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) -- Some packages may not be found!
Couldn't find index page for 'numpy' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) -- Some packages may not be found!
No local packages or download links found for numpy>=1.7.0
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-8puw9oba/pandas/setup.py", line 631, in <module>
**setuptools_kwargs)
File "/usr/local/lib/python3.5/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 269, in __init__
self.fetch_build_eggs(attrs['setup_requires'])
File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 313, in fetch_build_eggs
replace_conflicting=True,
File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 826, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1092, in best_match
return self.obtain(req, installer)
File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1104, in obtain
return installer(requirement)
File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 380, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/local/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 634, in easy_install
raise DistutilsError(msg)
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('numpy>=1.7.0')
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8puw9oba/pandas/
ERROR: Build failed: exit code 1
</code></pre>
<p>Trying continuous Integration with gitlab and am running into an issue after pandas has been added as a requirement. when running the pytest the error above happens. the yaml for the gitlab-ci looks like this:</p>
<pre><code>pytest:
image: python:3-alpine
script:
- pip install -r requirements.txt
- python -m pytest tests --ignore=tests/test_routes.py
eslint:
image: node:4.4.7
cache:
paths:
- src/static/node_modules/
script:
- cd src/static
- npm --loglevel=silent install
- npm --loglevel=silent install gulp -g
- gulp lint
</code></pre>
<p>pytest is the one that is failing, before it even gets to running the tests.</p>
<p>The contents of our requirements.txt are as follows:</p>
<pre><code>astroid==1.4.5
blinker==1.4
click==6.3
colorama==0.3.7
Flask==0.10.1
Flask-DebugToolbar==0.10.0
Flask-Login==0.3.2
Flask-Mail==0.9.1
Flask-Principal==0.4.0
Flask-WTF==0.12
Jinja2==2.8
lazy-object-proxy==1.2.1
MarkupSafe==0.23
passlib==1.6.5
pylint==1.5.5
requests==2.9.1
six==1.10.0
Werkzeug==0.11.4
wrapt==1.10.6
WTForms==2.1
pandas
pyaml
rtyaml
webtest
hypothesis
beautifulsoup4
pytest
</code></pre>
<p>I attempted manually adding numpy before pandas but got the same result. Since it complained about numpy >=1.7.0, I also attempted explicitly pinning that version, but that did not resolve the issue either. Is there anything I am missing in this configuration that would be causing this problem?</p>
| 0 | 2016-08-02T19:14:19Z | 38,741,547 | <p><code>pip</code> is unable to verify the server's certificate. You need to tell it explicitly which CA certificate bundle to use for verification.</p>
<p>This should work:</p>
<pre><code>pip --cert /etc/ssl/certs/DigiCert_High_Assurance_EV_Root_CA.pem install -r requirements.txt
</code></pre>
| 0 | 2016-08-03T10:53:16Z | [
"python",
"pandas",
"numpy",
"continuous-integration",
"gitlab"
] |
How do I figure out the variance explained in tfidf matrix with kmeans? | 38,728,828 | <p>I am fairly new with working with text data.</p>
<p>I have a data frame of about 300,000 unique product names and I am trying to use k means to cluster similar names together. I used sklearn's tfidfvectorizer to vectorize the names and convert to a tf-idf matrix.</p>
<p>Next I ran k means on the tf-idf matrix with number of clusters ranging from 5 to 10. </p>
<p>I am stuck on an error when trying to calculate the variance explained for <code>D_k</code>: <code>ValueError: setting an array element with a sequence.</code></p>
<p>I want to plot the variance explained v. number of clusters plot so I can distinguish where the elbow is.</p>
<p>I am referencing <a href="http://datascience.stackexchange.com/questions/6508/k-means-incoherent-behaviour-choosing-k-with-elbow-method-bic-variance-explain">http://datascience.stackexchange.com/questions/6508/k-means-incoherent-behaviour-choosing-k-with-elbow-method-bic-variance-explain</a></p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
#define vectorizer parameters
tfidf_vectorizer = TfidfVectorizer(use_idf=True,
stop_words = 'english',
ngram_range=(2,4))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(unique_names)
# clustering with kmeans
from sklearn.cluster import KMeans
num_clusters = range(5,10)
%time KM = [KMeans(n_clusters=k).fit(tfidf_matrix) for k in num_clusters]
from scipy.spatial.distance import cdist, pdist
centroids = [k.cluster_centers_ for k in KM]
D_k = [cdist(tfidf_matrix, cent) for cent in centroids]
</code></pre>
| 1 | 2016-08-02T19:21:05Z | 39,806,161 | <p>You should convert your <code>tfidf_matrix</code> (which is sparse) into a proper array.</p>
<pre><code>D_k = [cdist(tfidf_matrix.toarray(), cent) for cent in centroids]
</code></pre>
<p>This worked for me.</p>
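<p>A side note on the elbow computation itself: every fitted <code>KMeans</code> already stores the within-cluster sum of squares as <code>inertia_</code>, so for the elbow plot you can skip <code>cdist</code> (and the dense conversion) entirely. A sketch on synthetic data standing in for the tf-idf matrix:</p>

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the tf-idf matrix: three well-separated blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 5) + offset for offset in (0, 5, 10)])

ks = range(1, 7)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]
# inertia_ should only shrink as k grows; the elbow is where the drop flattens.
```

<p>Plotting <code>inertias</code> against <code>ks</code> gives the elbow curve directly; <code>KMeans</code> also accepts the sparse matrix as input, so no <code>toarray()</code> is needed for the fit itself.</p>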
| 1 | 2016-10-01T11:38:49Z | [
"python",
"scikit-learn",
"k-means",
"tf-idf"
] |
Why is it possible to have low loss, but also very low accuracy, in a convolutional neural network? | 38,728,895 | <p>I am new to machine learning and am currently trying to train a convolutional neural net with 3 convolutional layers and 1 fully connected layer. I am using a dropout probability of 25% and a learning rate of 0.0001. I have 6000 150x200 training images and 13 output classes. I am using tensorflow. I am noticing a trend where my loss steadily decreases, but my accuracy increases only slightly and then drops back down again. My training images are the blue lines and my validation images are the orange lines. The x axis is steps. <a href="http://i.stack.imgur.com/qxICm.png" rel="nofollow"><img src="http://i.stack.imgur.com/qxICm.png" alt="enter image description here"></a></p>
<p>Is there something I am not understanding, or what are possible causes of this phenomenon? From the material I have read, I assumed low loss meant high accuracy.
Here is my loss function:</p>
<pre><code>cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
</code></pre>
| 1 | 2016-08-02T19:25:16Z | 38,736,628 | <p>That is because <strong>Loss</strong> and <strong>Accuracy</strong> are two totally different things (well at least logically)!</p>
<p>Consider an example where you have defined <code>loss</code> as:</p>
<pre><code>loss = (1-accuracy)
</code></pre>
<p>In this case when you try to minimize <code>loss</code>, <code>accuracy</code> increases automatically.</p>
<p>Now consider another example where you define <code>loss</code> as:</p>
<pre><code>loss = average(prediction_probabilities)
</code></pre>
<p>Though it does not make any sense, it technically is still a valid loss function and your <code>weights</code> are still tuned in order to minimize such <code>loss</code>.</p>
<p>But as you can see, in this case, there is no relation between <code>loss</code> and <code>accuracy</code> so you cannot expect both to increase/decrease at the same time.</p>
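<p>A toy numeric illustration of that decoupling (the probabilities below are made up, and plain binary cross-entropy stands in for the softmax version):</p>

```python
import math

# True class is 1 for both samples; p is the predicted probability of class 1.
def avg_loss(ps):                 # average cross-entropy against label 1
    return sum(-math.log(p) for p in ps) / len(ps)

def accuracy(ps):                 # predict class 1 whenever p > 0.5
    return sum(p > 0.5 for p in ps) / len(ps)

model_a = [0.55, 0.55]   # barely right on both samples
model_b = [0.95, 0.45]   # very confident on one, wrong on the other

assert avg_loss(model_b) < avg_loss(model_a)    # lower loss...
assert accuracy(model_b) < accuracy(model_a)    # ...yet lower accuracy
```

<p>So a model can trade a little accuracy for a lot of confidence on the remaining samples and still come out ahead on <code>loss</code>.</p>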
<p>Note: <code>Loss</code> will always be minimized (thus your <code>loss</code> decreases after each iteration)!</p>
<p>PS: Your softmax cross-entropy <code>loss</code> is a reasonable choice; just keep in mind that it minimizes the negative log-probability of the correct class, which only indirectly tracks accuracy.</p>
| 2 | 2016-08-03T07:05:01Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning"
] |
seaborn to replace a matplotlib | 38,728,896 | <p>I am trying to make a plot with <code>seaborn</code> of a simple plot that I have done with <code>matplotlib</code>:</p>
<pre><code> import matplotlib.pyplot as plt
radius = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
area = [3.14159, 12.56636, 28.27431, 50.26544, 78.53975, 113.09724]
square = [1.0, 4.0, 9.0, 16.0, 25.0, 36.0]
plt.plot(radius, area, label='Circle')
plt.plot(radius, square, marker='o', linestyle='--', color='r', label='Square')
plt.xlabel('Radius/Side')
plt.ylabel('Area')
plt.title('Area of Shapes')
plt.legend()
plt.show()
</code></pre>
<p>Any idea please?</p>
| -3 | 2016-08-02T19:25:14Z | 38,729,316 | <p>Use it like this; your plots will look nicer compared to the default matplotlib styling:</p>
<pre><code>import seaborn as sb
import matplotlib.pyplot as plt
sb.set_style("darkgrid")
radius = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
area = [3.14159, 12.56636, 28.27431, 50.26544, 78.53975, 113.09724]
square = [1.0, 4.0, 9.0, 16.0, 25.0, 36.0]
plt.plot(radius, area, label='Circle')
plt.plot(radius, square, marker='o', linestyle='--', color='r', label='Square')
plt.xlabel('Radius/Side')
plt.ylabel('Area')
plt.title('Area of Shapes')
plt.legend()
plt.show()
</code></pre>
<p>Hope this works for you. Please keep the indentation in check though. A good resource for seaborn can be found <a href="https://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.html" rel="nofollow">here</a></p>
| -1 | 2016-08-02T19:51:26Z | [
"python",
"matplotlib",
"seaborn"
] |
Make an image background transparent | 38,728,907 | <p>I have an image with an orange and a white background. I want to make the white background transparent. The code below uses grabcut to make a mask. I then split the image into rgb channels and apply the mask on the alpha channel. You'll see from images below that post-grabcut and mask images are OK. I haven't been able to figure out how to apply the mask to the alpha channel. Suggestions appreciated.</p>
<pre><code> im = cv2.imread(sourceimagefile)
cv2.imshow('original',im)
mask = np.zeros(im.shape[:2],np.uint8)
rect = (box[0][0], box[0][1], box[0][2]-box[0][0], box[0][3]-box[0][1])
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
cv2.grabCut(im,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
if len(np.where((mask==3)|(mask==1))[0])>0:
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
mask2 = np.repeat(mask2[:,:,np.newaxis],3,axis=2)
else:
mask2 = np.zeros_like(im)
mask2[box[0][1]:box[0][3],box[0][0]:box[0][2],:] = 1
im2 = im*mask2
cv2.imshow('post-grabcut',im2)
minVal, maxVal, minLoc, maxLoc = cv2.minMaxLoc(mask)
flag, mask = cv2.threshold(mask, maxVal-1, 255, cv2.cv.CV_THRESH_BINARY)
cv2.imshow("mask", mask)
b, g, r = cv2.split(im2)
img_RGBA = cv2.merge((b, g, r, mask))
cv2.imshow("final",img_RGBA)
</code></pre>
<p><a href="http://i.stack.imgur.com/rnQOv.png" rel="nofollow"><img src="http://i.stack.imgur.com/rnQOv.png" alt="original"></a><a href="http://i.stack.imgur.com/D3oRa.png" rel="nofollow"><img src="http://i.stack.imgur.com/D3oRa.png" alt="post-grabcut"></a><a href="http://i.stack.imgur.com/4n9u3.png" rel="nofollow"><img src="http://i.stack.imgur.com/4n9u3.png" alt="mask"></a><a href="http://i.stack.imgur.com/nL5Og.png" rel="nofollow"><img src="http://i.stack.imgur.com/nL5Og.png" alt="final"></a></p>
| 1 | 2016-08-02T19:26:00Z | 38,732,889 | <p>According to an older post, <code>imshow</code> doesn't actually support alpha channels: <a href="http://jepsonsblog.blogspot.com/2012/10/overlay-transparent-image-in-opencv.html" rel="nofollow">http://jepsonsblog.blogspot.com/2012/10/overlay-transparent-image-in-opencv.html</a>. That post is from 2012, though, so support <strong>may</strong> have been added since; I do not know for sure.</p>
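<p>Since <code>imshow</code> ignores the alpha plane either way, the reliable check is to write the RGBA image to a format that stores alpha. A NumPy-only sketch of the merge step (toy 4x4 arrays; the <code>cv2.imwrite</code> line is left commented because it needs OpenCV and a real output path):</p>

```python
import numpy as np

# Tiny stand-ins for the split channels and the grabcut mask (0 = background).
h, w = 4, 4
b = g = r = np.full((h, w), 128, dtype=np.uint8)
mask = np.zeros((h, w), dtype=np.uint8)
mask[1:3, 1:3] = 255               # foreground pixels get full opacity

# Same effect as cv2.merge((b, g, r, mask)): the mask becomes the alpha plane.
img_rgba = np.dstack((b, g, r, mask))

# cv2.imshow drops the 4th channel; write a PNG to actually see transparency:
# cv2.imwrite("result.png", img_rgba)   # PNG preserves the alpha channel
```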
| 1 | 2016-08-03T01:24:22Z | [
"python",
"opencv",
"image-processing",
"computer-vision",
"transparency"
] |
Python parse txt file, PEST output, jacobian.txt | 38,728,942 | <p>We are stuck trying to find a way to parse a tricky text file that is produced by a PEST analysis using Python. It shows measurements of 63 different variables for over 30,000 observations. Here's an example of the output (3/>30,000 shown)</p>
<pre><code> cmfa cmfb cmfc cmfd cmla cmlb cmlc cmld
cmle cgfa cgfb cgfc cgfd cgfe dgfa dgfb
dgfc dgfd icfa icfb icfc icfd vawa vawb
vawc vawd vawe vawf vswa vswb vswc vswd
vswe chfa chfb chfc chfd chfe cgwa cgwb
cgwc cgwd cgwe crta crtb crtc crtd crte
icha ichb ichc ichd iche csea cseb csec
csed csee csef caqa caqb crsa crsb
0 -1.900000E-03 1.080000E-02 3.150000E-02 0.00000 0.00000 0.00000 0.00000 -3.020000E-02
0.00000 -1.870000E-02 0.00000 4.600000E-03 0.00000 0.00000 0.00000 4.510000E-02
0.00000 0.00000 3.650000E-02 -7.000000E-03 -2.100000E-03 -2.000000E-04 3.200000E-03 8.000000E-03
-7.000000E-04 -1.500000E-02 0.00000 4.800000E-03 1.900000E-03 4.000000E-04 2.500000E-03 2.500000E-03
-1.400000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.200000E-03 -8.060000E-02
-0.126500 0.298400 0.00000 0.00000 0.00000 0.00000 0.00000 8.000000E-04
-1.900000E-03 1.400000E-03 0.00000 0.00000 -3.200000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.200000E-02 1.930000E-02
1 -1.800000E-03 1.140000E-02 1.850000E-02 0.00000 0.00000 0.00000 0.00000 -2.600000E-02
0.00000 -8.200000E-03 0.00000 1.200000E-03 0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 2.560000E-02 -6.100000E-03 -1.100000E-03 0.00000 3.000000E-03 7.400000E-03
-7.000000E-04 -1.410000E-02 0.00000 5.000000E-03 1.900000E-03 3.000000E-04 2.300000E-03 2.300000E-03
-1.330000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.400000E-03 -8.410000E-02
-0.123500 0.301900 0.00000 0.00000 0.00000 0.00000 0.00000 1.200000E-03
-2.000000E-03 1.400000E-03 0.00000 0.00000 -3.200000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.280000E-02 2.050000E-02
2 -3.300000E-03 6.500000E-03 4.040000E-02 0.00000 0.00000 0.00000 0.00000 -7.060000E-02
4.840000E-02 -0.112500 0.110300 0.00000 0.00000 0.00000 1.10330 0.00000
0.00000 0.00000 3.940000E-02 -8.500000E-03 -1.120000E-02 6.600000E-03 5.700000E-03 1.430000E-02
-1.300000E-03 -2.470000E-02 0.00000 3.700000E-03 2.200000E-03 5.000000E-04 4.300000E-03 4.500000E-03
-2.250000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -2.000000E-03 -5.840000E-02
-0.157300 0.292400 0.00000 0.00000 0.00000 0.00000 0.00000 -3.600000E-03
-1.700000E-03 1.200000E-03 0.00000 0.00000 -3.400000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -7.400000E-03 1.180000E-02
3 -2.200000E-03 1.040000E-02 3.500000E-02 0.00000 0.00000 0.00000 0.00000 -4.390000E-02
0.00000 -3.170000E-02 2.590000E-02 0.00000 0.00000 0.00000 0.259400 0.00000
0.00000 0.00000 3.920000E-02 -1.030000E-02 -3.500000E-03 1.500000E-03 3.600000E-03 9.000000E-03
-9.000000E-04 -1.680000E-02 0.00000 4.700000E-03 2.000000E-03 3.000000E-04 2.700000E-03 2.800000E-03
-1.560000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.200000E-03 -7.920000E-02
-0.131600 0.302200 0.00000 0.00000 0.00000 0.00000 0.00000 3.000000E-04
-2.000000E-03 1.300000E-03 0.00000 0.00000 -3.300000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.180000E-02 1.880000E-02
</code></pre>
<p>The letter codes (cmfa, cmfb, etc.) are the names of the 63 variables. Each of the letter-code variables relate to the number in the same position for each of the following text blocks. </p>
<p>The first block of numbers is for observation 0, the next block for observation 1 and so on for more than 30,000 observations.</p>
<p>We want to find a way to turn this into a text file (preferably .csv). In the case of my text example, it would have 63 columns and 3 rows (+1 for identifier). Each column would be titled with the appropriate letter code (cmfa, etc)</p>
<p>If possible, we would like this to run on a file with any number of columns and any number of observations</p>
| 0 | 2016-08-02T19:28:28Z | 38,730,870 | <p>A way to parse the file that you have provided(independent of number of rows in file) using simple python, better implementations can be done using regular expressions but i would leave it for you to try further:</p>
<pre><code>#Importing required libraries
import numpy as np
import csv
#Open input file
with open('input.txt','rb') as f:
line = f.read().splitlines()
#Read file and do some parsing
line2 = []
for l in line:
z = l.split(" ")
l2 = []
for val in z:
if not(val==''):
l2.append(val)
if len(l2)==9:
line2.append(l2[1:9])
elif len(l2)==7 or len(l2)==8:
line2.append(l2)
#Remove unnecessary rows and do type conversion to float
pl = np.arange(0,len(line2)+1,8)
line3 = []
for i in np.arange(0,len(pl)-1):
z = line2[pl[i]:pl[i+1]]
z2 = [item for sublist in z for item in sublist]
if i==0:
line3.append(z2)
else:
line3.append([float(i) for i in z2])
#Write to output file
with open('output.csv','wb') as f:
wr = csv.writer(f)
for row in line3:
wr.writerow(row)
</code></pre>
<p>In case you want to keep the indexes:</p>
<pre><code>#Importing required libraries
import numpy as np
import csv
#Open input file
with open('input.txt','rb') as f:
line = f.read().splitlines()
#Read file and do some parsing
line2 = []
for l in line:
z = l.split(" ")
l2 = []
for val in z:
if not(val==''):
l2.append(val)
if not(len(l2)==0):
line2.append(l2)
#Remove unnecessary rows and do type conversion to float
pl = np.arange(0,len(line2)+1,8)
line3 = []
for i in np.arange(0,len(pl)-1):
if i==0:
z = line2[pl[i]:pl[i+1]]
z2 = [item for sublist in z for item in sublist]
line3.append(['']+z2)
else:
z = line2[pl[i]:pl[i+1]]
z2 = [item for sublist in z for item in sublist]
line3.append([float(i) for i in z2])
#Write to output file
with open('output.csv','wb') as f:
wr = csv.writer(f)
for row in line3:
wr.writerow(row)
</code></pre>
| 1 | 2016-08-02T21:35:45Z | [
"python",
"parsing"
] |
Python parse txt file, PEST output, jacobian.txt | 38,728,942 | <p>We are stuck trying to find a way to parse a tricky text file that is produced by a PEST analysis using Python. It shows measurements of 63 different variables for over 30,000 observations. Here's an example of the output (3/>30,000 shown)</p>
<pre><code> cmfa cmfb cmfc cmfd cmla cmlb cmlc cmld
cmle cgfa cgfb cgfc cgfd cgfe dgfa dgfb
dgfc dgfd icfa icfb icfc icfd vawa vawb
vawc vawd vawe vawf vswa vswb vswc vswd
vswe chfa chfb chfc chfd chfe cgwa cgwb
cgwc cgwd cgwe crta crtb crtc crtd crte
icha ichb ichc ichd iche csea cseb csec
csed csee csef caqa caqb crsa crsb
0 -1.900000E-03 1.080000E-02 3.150000E-02 0.00000 0.00000 0.00000 0.00000 -3.020000E-02
0.00000 -1.870000E-02 0.00000 4.600000E-03 0.00000 0.00000 0.00000 4.510000E-02
0.00000 0.00000 3.650000E-02 -7.000000E-03 -2.100000E-03 -2.000000E-04 3.200000E-03 8.000000E-03
-7.000000E-04 -1.500000E-02 0.00000 4.800000E-03 1.900000E-03 4.000000E-04 2.500000E-03 2.500000E-03
-1.400000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.200000E-03 -8.060000E-02
-0.126500 0.298400 0.00000 0.00000 0.00000 0.00000 0.00000 8.000000E-04
-1.900000E-03 1.400000E-03 0.00000 0.00000 -3.200000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.200000E-02 1.930000E-02
1 -1.800000E-03 1.140000E-02 1.850000E-02 0.00000 0.00000 0.00000 0.00000 -2.600000E-02
0.00000 -8.200000E-03 0.00000 1.200000E-03 0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 2.560000E-02 -6.100000E-03 -1.100000E-03 0.00000 3.000000E-03 7.400000E-03
-7.000000E-04 -1.410000E-02 0.00000 5.000000E-03 1.900000E-03 3.000000E-04 2.300000E-03 2.300000E-03
-1.330000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.400000E-03 -8.410000E-02
-0.123500 0.301900 0.00000 0.00000 0.00000 0.00000 0.00000 1.200000E-03
-2.000000E-03 1.400000E-03 0.00000 0.00000 -3.200000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.280000E-02 2.050000E-02
2 -3.300000E-03 6.500000E-03 4.040000E-02 0.00000 0.00000 0.00000 0.00000 -7.060000E-02
4.840000E-02 -0.112500 0.110300 0.00000 0.00000 0.00000 1.10330 0.00000
0.00000 0.00000 3.940000E-02 -8.500000E-03 -1.120000E-02 6.600000E-03 5.700000E-03 1.430000E-02
-1.300000E-03 -2.470000E-02 0.00000 3.700000E-03 2.200000E-03 5.000000E-04 4.300000E-03 4.500000E-03
-2.250000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -2.000000E-03 -5.840000E-02
-0.157300 0.292400 0.00000 0.00000 0.00000 0.00000 0.00000 -3.600000E-03
-1.700000E-03 1.200000E-03 0.00000 0.00000 -3.400000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -7.400000E-03 1.180000E-02
3 -2.200000E-03 1.040000E-02 3.500000E-02 0.00000 0.00000 0.00000 0.00000 -4.390000E-02
0.00000 -3.170000E-02 2.590000E-02 0.00000 0.00000 0.00000 0.259400 0.00000
0.00000 0.00000 3.920000E-02 -1.030000E-02 -3.500000E-03 1.500000E-03 3.600000E-03 9.000000E-03
-9.000000E-04 -1.680000E-02 0.00000 4.700000E-03 2.000000E-03 3.000000E-04 2.700000E-03 2.800000E-03
-1.560000E-02 0.00000 0.00000 0.00000 0.00000 0.00000 -3.200000E-03 -7.920000E-02
-0.131600 0.302200 0.00000 0.00000 0.00000 0.00000 0.00000 3.000000E-04
-2.000000E-03 1.300000E-03 0.00000 0.00000 -3.300000E-03 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 -1.180000E-02 1.880000E-02
</code></pre>
<p>The letter codes (cmfa, cmfb, etc.) are the names of the 63 variables. Each letter-code variable relates to the number in the same position in each of the following text blocks.</p>
<p>The first block of numbers is for observation 0, the next block for observation 1 and so on for more than 30,000 observations.</p>
<p>We want to find a way to turn this into a text file (preferably .csv). In the case of my text example, it would have 63 columns and 3 rows (+1 for identifier). Each column would be titled with the appropriate letter code (cmfa, etc)</p>
<p>If possible, we would like this to run on a file with any number of columns and any number of observations.</p>
| 0 | 2016-08-02T19:28:28Z | 38,756,941 | <p>You can use a <code>mmap</code> and a regex to parse the file without having to read the entire file into memory.</p>
<p>Something like:</p>
<pre><code>import re
import mmap
import os
size=os.stat(fn_in).st_size
with open(fn_in, "r") as fin, open(fn_out, "w") as fout:
data = mmap.mmap(fin.fileno(), size, access=mmap.ACCESS_READ)
for idx, m in enumerate(re.finditer(r"(.*?)(?:(?:^\s*$)|\Z)", data, re.M | re.S)):
block=m.group(0).strip()
if not block:
continue
if idx==0:
fout.write("O_N,"+",".join(block.split())+"\n")
else:
fout.write(",".join(block.split())+"\n")
</code></pre>
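<p>Since the line-wrapping inside each block carries no information once you split on whitespace, the file can also be treated as a flat token stream. A sketch (tested only on the small inline sample; it assumes header names are non-numeric and that every observation is its index followed by exactly one value per header):</p>

```python
import csv
import io

# Tiny stand-in for the PEST file: 4 headers, then index + 4 values per block.
sample = """
 cmfa   cmfb   cmfc   cmfd
   0  -1.900000E-03   1.080000E-02   3.150000E-02    0.00000
   1  -1.800000E-03   1.140000E-02   1.850000E-02    0.00000
"""

def parse_pest(text):
    tokens = text.split()
    # Header names run until the first token that parses as a number.
    n = 0
    while n < len(tokens):
        try:
            float(tokens[n])
            break
        except ValueError:
            n += 1
    headers, rest = tokens[:n], tokens[n:]
    step = n + 1                          # each observation = index + n values
    rows = [rest[i:i + step] for i in range(0, len(rest), step)]
    return headers, rows

headers, rows = parse_pest(sample)
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(['obs'] + headers)
writer.writerows(rows)
```

<p>For the real file, pass <code>open(fn).read()</code> to <code>parse_pest</code> and write <code>out.getvalue()</code> to disk; nothing in it is tied to 63 columns or any particular row count.</p>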
| 0 | 2016-08-04T01:03:42Z | [
"python",
"parsing"
] |
Passing xPath as argument to Scrapy | 38,728,973 | <p>I'm trying to write a generic crawler for a single webpage which is called with the following arguments:</p>
<ul>
<li>Allowed Domains</li>
<li>URL to be crawled</li>
<li>xPath to extract the price within the webpage</li>
</ul>
<p>The URL and allowed domains arguments seem to be working correctly but I can't get the xPath argument to work.</p>
<p>I'm guessing I need to declare a variable to hold it correct as the other two arguments are assigned to existing class elements.</p>
<p>Here is my spider:</p>
<pre><code>import scrapy
from Spotlite.items import SpotliteItem
class GenericSpider(scrapy.Spider):
name = "generic"
def __init__(self, start_url=None, allowed_domains=None, xpath_string=None, *args, **kwargs):
super(GenericSpider, self).__init__(*args, **kwargs)
self.start_urls = ['%s' % start_url]
self.allowed_domains = ['%s' % allowed_domains]
xpath_string = ['%s' % xpath_string]
def parse(self, response):
self.logger.info('Hi, this is an item page! %s', response.url)
item = SpotliteItem()
item['url'] = response.url
item['price'] = response.xpath(xpath_string).extract()
return item
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/home/ubuntu/spotlite/spotlite/spiders/generic.py", line 23, in parse
item['price'] = response.xpath(xpath_string).extract()
</code></pre>
<p>NameError: global name 'xpath_string' is not defined</p>
<p>Any assistance will be appreciated!</p>
<p>Thanks,</p>
<p>Michael</p>
| 0 | 2016-08-02T19:29:52Z | 38,729,003 | <p>Have <code>xpath_string</code> as an <em>instance variable</em> instead:</p>
<pre><code>import scrapy
from Spotlite.items import SpotliteItem
class GenericSpider(scrapy.Spider):
name = "generic"
def __init__(self, start_url=None, allowed_domains=None, xpath_string=None, *args, **kwargs):
super(GenericSpider, self).__init__(*args, **kwargs)
self.start_urls = ['%s' % start_url]
self.allowed_domains = ['%s' % allowed_domains]
self.xpath_string = xpath_string
def parse(self, response):
self.logger.info('Hi, this is an item page! %s', response.url)
item = SpotliteItem()
item['url'] = response.url
item['price'] = response.xpath(self.xpath_string).extract()
return item
</code></pre>
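<p>The underlying Python rule, stripped of the Scrapy machinery (the class and names below are invented): a bare assignment inside <code>__init__</code> creates a local variable that disappears when the method returns, so other methods can only see what was bound to <code>self</code>:</p>

```python
class Demo:
    def __init__(self, xpath):
        path_local = xpath    # plain name: discarded when __init__ returns
        self.xpath = xpath    # attribute: stored on the instance

    def parse(self):
        # Referring to path_local here would raise a NameError,
        # exactly like the one in the question.
        return self.xpath

d = Demo("//span[@class='price']/text()")
result = d.parse()
```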
| 0 | 2016-08-02T19:31:32Z | [
"python",
"scrapy"
] |
Passing xPath as argument to Scrapy | 38,728,973 | <p>I'm trying to write a generic crawler for a single webpage which is called with the following arguments:</p>
<ul>
<li>Allowed Domains</li>
<li>URL to be crawled</li>
<li>xPath to extract the price within the webpage</li>
</ul>
<p>The URL and allowed domains arguments seem to be working correctly but I can't get the xPath argument to work.</p>
<p>I'm guessing I need to declare a variable to hold it correct as the other two arguments are assigned to existing class elements.</p>
<p>Here is my spider:</p>
<pre><code>import scrapy
from Spotlite.items import SpotliteItem
class GenericSpider(scrapy.Spider):
name = "generic"
def __init__(self, start_url=None, allowed_domains=None, xpath_string=None, *args, **kwargs):
super(GenericSpider, self).__init__(*args, **kwargs)
self.start_urls = ['%s' % start_url]
self.allowed_domains = ['%s' % allowed_domains]
xpath_string = ['%s' % xpath_string]
def parse(self, response):
self.logger.info('Hi, this is an item page! %s', response.url)
item = SpotliteItem()
item['url'] = response.url
item['price'] = response.xpath(xpath_string).extract()
return item
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/home/ubuntu/spotlite/spotlite/spiders/generic.py", line 23, in parse
item['price'] = response.xpath(xpath_string).extract()
</code></pre>
<p>NameError: global name 'xpath_string' is not defined</p>
<p>Any assistance will be appreciated!</p>
<p>Thanks,</p>
<p>Michael</p>
| 0 | 2016-08-02T19:29:52Z | 38,731,907 | <p>Adding the variable to the initial class declaration, and assigning it to <code>self</code> in <code>__init__</code>, fixed the problem.</p>
<pre><code>import scrapy
from spotlite.items import SpotliteItem
class GenericSpider(scrapy.Spider):
name = "generic"
xpath_string = ""
def __init__(self, start_url, allowed_domains, xpath_string, *args, **kwargs):
super(GenericSpider, self).__init__(*args, **kwargs)
self.start_urls = ['%s' % start_url]
self.allowed_domains = ['%s' % allowed_domains]
self.xpath_string = xpath_string
def parse(self, response):
self.logger.info('URL is %s', response.url)
self.logger.info('xPath is %s', self.xpath_string)
item = SpotliteItem()
item['url'] = response.url
item['price'] = response.xpath(self.xpath_string).extract()
return item
</code></pre>
| 0 | 2016-08-02T23:15:28Z | [
"python",
"scrapy"
] |
Pytest parametrize class tests | 38,729,007 | <p>I have an array of objects that I need to run for each tests inside my test class. I want to parametrize every test function in a TestClass. The end goal is to have something resembling:</p>
<pre><code>@pytest.mark.parametrize('test_input', [1, 2, 3, 4])
class TestClass:
def test_something1(self, test_input):
# test code here, runs each time for the parametrize
</code></pre>
<p>But from my understanding you can't pass input parameters there, or at least can't call <code>@pytest.mark.parametrize</code> on a class; those markers are meant for <code>def</code>s, not <code>class</code>es.</p>
<p>What I have now:</p>
<pre><code>class TestClass:
def test_something1(self):
for i in stuff:
# test code here
def test_something2(self):
for i in stuff:
# test code here
...
</code></pre>
<p><strong>Is there a way to pass the parametrize a class itself or every function inside the TestClass?</strong> Maybe a <code>@pytest.mark.parametrize</code> inside a <code>@pytest.fixture...(autouse=True).</code></p>
<p>I want to keep my tests organized as a class because it mirrors the file that I'm testing. Because I loop through these objects in at least a dozen different tests, it would be easier to call a loop of the class than in each <code>def</code>.</p>
| 0 | 2016-08-02T19:31:46Z | 38,752,047 | <p>I have solved it. I was overcomplicating it; instead of using a mark I can use a fixture function that passes in parameters.</p>
<p>Before I found the answer (Without parametrize):</p>
<pre><code>class TestClass:
def test_something(self):
for i in example_params:
print(i)
</code></pre>
<p>The answer, using a pytest fixture. It does the same thing, but takes the parameter as an input instead of looping:</p>
<pre><code>import pytest
example_params = [1, 2, 3]
@pytest.fixture(params=example_params)
def param_loop(request):
return request.param
class TestClass:
def test_something(self, param_loop):
print(param_loop)
</code></pre>
<p><strong>So, to parametrize all <code>def</code>s:</strong></p>
<ol>
<li>Use the <code>@pytest.fixture(params=[])</code> decorator on a <code>def my_function(request)</code></li>
<li>Inside <code>my_function</code>, <code>return request.param</code></li>
<li>Add the <code>my_function</code> to the inputs of any function that you want to parametrize</li>
</ol>
| 0 | 2016-08-03T19:24:06Z | [
"python",
"python-3.x",
"py.test"
] |
What does an empty parenthesis "()" in logging config dictionary mean? | 38,729,099 | <p>I am playing around with the <a href="https://github.com/exoscale/python-logstash-formatter" rel="nofollow">Python Logstash Formatter</a> and <a href="https://github.com/exoscale/python-logstash-formatter/wiki/Configuration-and-making-it-work-in-Python" rel="nofollow">in its wiki</a> it recommended setting the following option for the formatter:</p>
<pre><code>"formatters": { "logstash":{ "()": "logstash_formatter.LogstashFormatter" } }
</code></pre>
<p>This is working for me, but I'm unsure of what the empty parentheses are for, or what exactly <code>logstash_formatter.LogstashFormatter</code> is being set to in this example. </p>
<p>Can someone explain to me what the empty parentheses mean here in relation to the Python logger? It almost seems like it would be an empty tuple, except I can't fathom how setting an empty tuple to a class would work.</p>
| 1 | 2016-08-02T19:37:37Z | 38,729,253 | <p>If you check out the <a href="https://docs.python.org/3.4/library/logging.config.html#user-defined-objects" rel="nofollow">python docs for logging</a>, you'll see this:</p>
<blockquote>
<p>Objects to be configured are described by dictionaries which detail their configuration. In some places, the logging system will be able to infer from the context how an object is to be instantiated, but when a user-defined object is to be instantiated, the system will not know how to do this. In order to provide complete flexibility for user-defined object instantiation, the user needs to provide a 'factory' - a callable which is called with a configuration dictionary and which returns the instantiated object. This is signalled by an absolute import path to the factory being made available under the special key <code>'()'</code>. </p>
</blockquote>
<p>Basically what it means is that <code>logstash_formatter.LogstashFormatter</code> is the factory that is going to create a new formatter. So when the logging framework would like to create a formatter, it's going to make sure to <code>import logstash_formatter</code> and then do something like <code>logstash_formatter.LogstashFormatter(*args, **kwargs)</code>.</p>
<p>Indeed, if you <a href="https://github.com/python/cpython/blob/3.5/Lib/logging/config.py#L646" rel="nofollow">use the source, Luke</a>, you can see that</p>
<ul>
<li><a href="https://github.com/python/cpython/blob/3.5/Lib/logging/config.py#L459" rel="nofollow">The value is extracted</a></li>
<li><p><a href="https://github.com/python/cpython/blob/3.5/Lib/logging/config.py#L370" rel="nofollow">Then resolved/imported</a></p></li>
<li><p>And <a href="https://github.com/python/cpython/blob/3.5/Lib/logging/config.py#L370" rel="nofollow">the created factory is used here</a></p>
<pre><code>if '()' in config:
factory = config['()'] # for use in exception handler
</code></pre></li>
<li><p><a href="https://github.com/python/cpython/blob/3.5/Lib/logging/config.py#L731" rel="nofollow">And later the factory is called with kwargs</a>:</p>
<pre><code> result = factory(**kwargs)
</code></pre></li>
</ul>
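<p>A runnable sketch of the whole mechanism with a toy user-defined formatter (the class name and format string below are invented for illustration; with a string value such as <code>"logstash_formatter.LogstashFormatter"</code> the key behaves the same way, the path is just imported first):</p>

```python
import logging
import logging.config

class UpperFormatter(logging.Formatter):
    """Toy user-defined formatter: upper-cases the message text."""
    def format(self, record):
        record.msg = str(record.msg).upper()
        return super().format(record)

LOGGING = {
    "version": 1,
    "formatters": {
        # "()" marks this dict as a factory call; the remaining keys
        # ("fmt" here) are passed to the factory as keyword arguments.
        "upper": {"()": UpperFormatter, "fmt": "%(levelname)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "upper"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING)
logging.getLogger().info("hello")  # emits 'INFO HELLO' to stderr
```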
| 3 | 2016-08-02T19:47:21Z | [
"python",
"logging",
"logstash"
] |
Python code to find the sum of common elements in two sequences of integers in a range | 38,729,109 | <p>"""A better Python code to find find the sum of common elements in two sequences #of integers in a range ??"""</p>
<pre><code>#F() constructs a sequence:
def F():
bot=int(input("Enter start value:"))
#start value
top = int(input("Enter stop value:"))
#stop value
L=range(bot,top+1)
return(L)
# Let L1 and L2 two sequences
L1=F()
L2=F()
print(L1, L2)
#G()returns the sum of the common elements in L1 and L2:
def G(L1, L2):
res = []
for x in L1:
if x in L2:
res.append(x)
return sum(res)
print(G(L1, L2))
# Example: L1=range(1,11), L2=range(5,21): 45(=5+6+7+8+9+10)
</code></pre>
| 0 | 2016-08-02T19:38:10Z | 38,729,943 | <p>If your solution is working, why look for "better Python code"? Your code is good enough. The only change I would make is to the list <code>res</code>. You don't really need it:</p>
<pre><code>def G(L1, L2):
total = 0
for x in L1:
if x in L2:
total += x
return total
</code></pre>
<p>The solution using <code>set</code> is good if you are sure that all the elements in L1 and L2 are unique. In this case, because you generated them with a <code>range</code>, they are unique, and you could use:</p>
<pre><code>sum(set(L1).intersection(set(L2)))
</code></pre>
<p>If there are duplicates, you could filter the elements:</p>
<pre><code>sum(filter(lambda x: x in L2, L1))
</code></pre>
<p>Or you could also use list comprehension:</p>
<pre><code>sum([x for x in L1 if x in L2])
</code></pre>
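<p>All three variants agree on the example from the question:</p>

```python
L1, L2 = range(1, 11), range(5, 21)

print(sum(set(L1).intersection(set(L2))))  # -> 45 (5+6+7+8+9+10)
print(sum(filter(lambda x: x in L2, L1)))  # -> 45
print(sum([x for x in L1 if x in L2]))     # -> 45
```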
<p>But I repeat: I think your solution is good Python code.</p>
| 1 | 2016-08-02T20:32:02Z | [
"python"
] |
Anaconda using Using both Python 2.x and Python 3.x in IPython Notebook | 38,729,113 | <p>using the instructions <a href="http://stackoverflow.com/questions/30492623/using-both-python-2-x-and-python-3-x-in-ipython-notebook">here</a> Using nb_conda_kernels I almost have it going,, but I seem to have only the very basic Python Interpreter in my environments. What I want is a full Anaconda environment for each version of Python, with all the libraries? How can I do that?</p>
| 0 | 2016-08-02T19:38:49Z | 38,741,109 | <p>What you probably want to do is to create an environment with some predefined set of packages. From the command line, and possibly outside any conda environment, try something like this (<code>py2</code> can indeed be any name you want):</p>
<pre><code>conda create --name py2 python=2 anaconda
</code></pre>
<p>See <a href="http://conda.pydata.org/docs/help/help.html" rel="nofollow">http://conda.pydata.org/docs/help/help.html</a> for more details, or for a quick look: <a href="http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf" rel="nofollow">http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf</a></p>
| 0 | 2016-08-03T10:33:17Z | [
"python",
"anaconda",
"conda"
] |
Python function deletes my var? | 38,729,213 | <p>I have the following code:</p>
<pre><code>def moveServo(x, y):
print x
print y
s1.ChangeDutyCycle(x)
s2.ChangeDutyCycle(y)
print "Successfull"
print x
print y
@app.route('/cameramove/', methods=['GET'])
def cameramove():
ret_data = True
x = request.args.get('x')
y = request.args.get('y')
moveServo(x, y)
return jsonify(ret_data)
</code></pre>
<p>The output is:</p>
<pre><code>192.168.178.23 - - [02/Aug/2016 19:36:24] "GET /cameramove/?x=7.8&y=9.3 HTTP/1.1" 500 -
7.8
9.4
192.168.178.23 - - [02/Aug/2016 19:36:24] "GET /cameramove/?x=7.8&y=9.4 HTTP/1.1" 500 -
7.8
9.4
</code></pre>
<p>You see that the function resets the variables. But when i change the definition of the var's:</p>
<pre><code>def moveServo(x, y):
print x
print y
s1.ChangeDutyCycle(x)
s2.ChangeDutyCycle(y)
print "Successfull"
print x
print y
@app.route('/cameramove/', methods=['GET'])
def cameramove():
ret_data = True
x = 5.6
y = 3.9
moveServo(x, y)
return jsonify(ret_data)
</code></pre>
<p>The output:</p>
<pre><code>192.168.178.23 - - [02/Aug/2016 19:40:44] "GET /cameramove/?x=6.8&y=9.1 HTTP/1.1" 500 -
5.6
3.9
Successfull
5.6
3.9
</code></pre>
<p>Does it work at once :O</p>
<p>Can anybody help me? I have no idea why the function would not accept the variables.</p>
| 0 | 2016-08-02T19:44:28Z | 38,738,726 | <p><code>request.args.get</code> returns the query-string values as strings, so <code>ChangeDutyCycle</code> was being called with <code>str</code> arguments (hence the 500 responses in the log). I ended up using:</p>
<pre><code>float(request.args.get('x'))
</code></pre>
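<p>A minimal, Flask-free illustration of the underlying issue (<code>request.args</code> holds the query parameters as strings, even when they look numeric):</p>

```python
# what request.args effectively contains for /cameramove/?x=7.8&y=9.3
args = {'x': '7.8', 'y': '9.3'}

x = args.get('x')
print(type(x).__name__)  # -> str

x, y = float(args.get('x')), float(args.get('y'))
print(x, y)              # -> 7.8 9.3
```

<p>Flask's <code>request.args</code> behaves like this dict, which is why the explicit <code>float(...)</code> conversion is needed before the values are used as duty cycles.</p>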
| 0 | 2016-08-03T08:47:42Z | [
"python",
"function",
"variables"
] |
For loop supposed to print 4 rows. it only prints one | 38,729,225 | <p>I have a list called <strong>deck</strong>, which has 104 elements. I want to create a for loop, that displays images on a canvas in a simple GUI (that can only be run on CodeSkulptor, the link to my program's here:
<a href="http://www.codeskulptor.org/#user41_kgywoL4h56_1.py" rel="nofollow">http://www.codeskulptor.org/#user41_kgywoL4h56_1.py</a> )</p>
<p>The loop only prints the first row, I think the way I update coordinates of the centre of the image is what's wrong with my code.</p>
<pre><code> if center_d[0] >= WIDTH:
center_s[1] += height
center_d[1] += height
</code></pre>
<p>The entire loop's below and if you need more context, visit the link to my program, that i provided above. Thanks!</p>
<pre><code>def draw(canvas):
global deck, cards, WIDTH, HEIGHT
width = 70
height = 106
center_s = [41, 59]
center_d = [41, 59]
for card in deck:
canvas.draw_image(deck_img, center_s, (width, height), center_d, (width, height))
center_s[0] += 70
center_d[0] += 70
if center_d[0] >= WIDTH:
center_s[1] += height
center_d[1] += height
</code></pre>
| 0 | 2016-08-02T19:45:08Z | 38,729,508 | <p>You forgot about your <code>center_s[0]</code> and <code>center_d[0]</code> coordinates. They are growing constantly.
You need to reset them to their starting value, e.g. like this:</p>
<pre><code>if center_d[0] >= WIDTH:
center_s[0] = 41
center_d[0] = 41
center_s[1] += height
center_d[1] += height
</code></pre>
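<p>The wrap-around logic can be checked outside CodeSkulptor with a small helper (hypothetical, not part of the question's program) that only computes the card centres; the numbers mirror the 70x106 cards starting at (41, 59):</p>

```python
def card_centers(n, width=70, height=106, start=(41, 59), canvas_width=300):
    """Yield the centre of each of n cards, starting a new row once
    the x coordinate reaches canvas_width."""
    x, y = start
    for _ in range(n):
        yield (x, y)
        x += width
        if x >= canvas_width:
            x, y = start[0], y + height  # reset x, move down one row

centers = list(card_centers(8, canvas_width=300))
print(centers[:4])  # -> [(41, 59), (111, 59), (181, 59), (251, 59)]
```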
| 2 | 2016-08-02T20:04:40Z | [
"python",
"python-2.7",
"for-loop"
] |
Age Classifier python program | 38,729,247 | <pre class="lang-python prettyprint-override"><code>print("Welcome to the Age Classifier program")
person_age=(float(input("Enter the person's Age"))
if person_age<=1 or person_age>0:
print("Person is an infant")
elif person_age>1 or person_age<13:
print("Person is a child")
elif person_age>=13 or person_age<20:
print("Person is a teenager")
elif person_age>=20 :
print("Person is an adult")
else:
print("Person has not been conceived or is developing in the womb")
</code></pre>
<p>When I execute this code, the interpreter reports that there is an error on the 1st line of the body of the <code>if</code> statements, with a message reporting that the syntax is invalid. I tried adding parenthesis and the same syntax error is encountered.</p>
| 1 | 2016-08-02T19:47:01Z | 38,729,267 | <p>You have unbalanced parentheses.</p>
<pre><code>person_age=float(input("Enter the person's Age"))
</code></pre>
<p>It would probably be a better idea, though, to make this an integer:</p>
<pre><code>person_age=int(input("Enter the person's Age"))
</code></pre>
| 1 | 2016-08-02T19:48:21Z | [
"python",
"syntax"
] |
Age Classifier python program | 38,729,247 | <pre class="lang-python prettyprint-override"><code>print("Welcome to the Age Classifier program")
person_age=(float(input("Enter the person's Age"))
if person_age<=1 or person_age>0:
print("Person is an infant")
elif person_age>1 or person_age<13:
print("Person is a child")
elif person_age>=13 or person_age<20:
print("Person is a teenager")
elif person_age>=20 :
print("Person is an adult")
else:
print("Person has not been conceived or is developing in the womb")
</code></pre>
<p>When I execute this code, the interpreter reports that there is an error on the 1st line of the body of the <code>if</code> statements, with a message reporting that the syntax is invalid. I tried adding parenthesis and the same syntax error is encountered.</p>
| 1 | 2016-08-02T19:47:01Z | 38,729,331 | <p>The error in your first line is primarily due to the parenthesis:</p>
<pre><code>person_age=(float(input("Enter the person's Age")) # 3 opening, 2 closing.
</code></pre>
<p>Change this to:</p>
<pre><code>person_age=(float(input("Enter the person's Age")))
</code></pre>
<p>Also, you have a logical error. The <code>or</code> operator returns <code>True</code> if either of the conditions is True. I doubt that suits your use case. You should do something like:</p>
<pre><code>if person_age<=1 and person_age>0:
print("Person is an infant")
elif person_age>1 and person_age<13:
print("Person is a child")
elif person_age>=13 and person_age<20:
print("Person is a teenager")
elif person_age>=20 :
print("Person is an adult")
else:
print("Person has not been conceived or is developing in the womb")
</code></pre>
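<p>Wrapped in a function that returns instead of prints, the corrected chain is easy to sanity-check; the boundary handling below is one reasonable reading of the question's intent:</p>

```python
def classify(person_age):
    if 0 < person_age <= 1:
        return "infant"
    elif 1 < person_age < 13:
        return "child"
    elif 13 <= person_age < 20:
        return "teenager"
    elif person_age >= 20:
        return "adult"
    return "not been conceived or is developing in the womb"

print(classify(0.5), classify(7), classify(15), classify(42))
# -> infant child teenager adult
```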
| 2 | 2016-08-02T19:52:17Z | [
"python",
"syntax"
] |
I want to randomly select items from a list and add them to another list without replacement | 38,729,272 | <p>I'm trying to randomly select items from a list and add them to another list.</p>
<p>The list of elements I'm choosing from looks like this:</p>
<pre><code>data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
</code></pre>
<p>I want to randomly take an element from this list and add it to one of four lists until each list has the same number of elements.</p>
<pre><code>lists=[[1,'x','x','x','x','x'],[3,'x','x','x','x','x'],[5,'x','x','x','x','x'],[7,'x','x','x','x','x']]
</code></pre>
<p>I have tried using random.choice but this gives me duplicates:</p>
<pre><code>def fill_lists(data):
for list in lists:
for n,i in enumerate(list):
if i=='x':
list[n]= random.choice(data)
</code></pre>
<p>I want my function to return a list that contains 4 lists each containing a random sample of the data list with no duplicates. I also want the first element of each list to be a value that I have already placed into the list.</p>
| 3 | 2016-08-02T19:48:38Z | 38,729,341 | <p>You can use <code>random.sample</code>:</p>
<pre><code> data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
random.sample(data, 5)
# [27, 12, 33, 24, 17]
</code></pre>
<p>To get a nested list of it, use a list comprehension</p>
<pre><code> [random.sample(data, 5) for _ in range(5)]
# [[40, 35, 24, 54, 17],
# [17, 54, 35, 43, 37],
# [40, 4, 43, 33, 44],
# [51, 37, 35, 33, 8],
# [54, 4, 44, 27, 50]]
</code></pre>
<p>Edit: The above won't give you unique values; you should accept the above answer for the unique values. I interpreted the question wrong!</p>
| 0 | 2016-08-02T19:52:48Z | [
"python",
"list",
"random",
"sample"
] |
I want to randomly select items from a list and add them to another list without replacement | 38,729,272 | <p>I'm trying to randomly select items from a list and add them to another list.</p>
<p>The list of elements I'm choosing from looks like this:</p>
<pre><code>data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
</code></pre>
<p>I want to randomly take an element from this list and add it to one of four lists until each list has the same number of elements.</p>
<pre><code>lists=[[1,'x','x','x','x','x'],[3,'x','x','x','x','x'],[5,'x','x','x','x','x'],[7,'x','x','x','x','x']]
</code></pre>
<p>I have tried using random.choice but this gives me duplicates:</p>
<pre><code>def fill_lists(data):
for list in lists:
for n,i in enumerate(list):
if i=='x':
list[n]= random.choice(data)
</code></pre>
<p>I want my function to return a list that contains 4 lists each containing a random sample of the data list with no duplicates. I also want the first element of each list to be a value that I have already placed into the list.</p>
| 3 | 2016-08-02T19:48:38Z | 38,729,376 | <pre><code>import random
data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
random.shuffle(data)
size = len(data) // 4  # integer division, so the slicing also works on Python 3
lists = [data[i:i+size] for i in range(0, len(data), size)]
print(lists)
</code></pre>
<p>Randomly pulling from your initial list will have the same effect as shuffling then pulling in order. Splitting into sublists can then be done. If you need the sublists sorted, just map sort over the list afterwards.</p>
<p>You can change the number of groups by altering the divisor in <code>len(data) // 4</code>.</p>
<p>Edit: I missed this part of your question:</p>
<pre><code>heads = [1,3,5,7]
for p, q in zip(heads, lists):
    q.insert(0, p)
</code></pre>
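<p>Putting the two snippets together (seeded only so the run is reproducible, and using integer division so it also runs on Python 3):</p>

```python
import random

data = [2, 3, 4, 7, 8, 12, 17, 24, 27, 33, 35, 36, 37, 38, 40, 43, 44, 50, 51, 54]
heads = [1, 3, 5, 7]

random.seed(0)  # illustration only: makes the shuffle repeatable
random.shuffle(data)
size = len(data) // len(heads)
lists = [data[i:i + size] for i in range(0, len(data), size)]
for p, q in zip(heads, lists):
    q.insert(0, p)

print([q[0] for q in lists])  # -> [1, 3, 5, 7]
```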
| 3 | 2016-08-02T19:56:04Z | [
"python",
"list",
"random",
"sample"
] |
I want to randomly select items from a list and add them to another list without replacement | 38,729,272 | <p>I'm trying to randomly select items from a list and add them to another list.</p>
<p>The list of elements I'm choosing from looks like this:</p>
<pre><code>data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
</code></pre>
<p>I want to randomly take an element from this list and add it to one of four lists until each list has the same number of elements.</p>
<pre><code>lists=[[1,'x','x','x','x','x'],[3,'x','x','x','x','x'],[5,'x','x','x','x','x'],[7,'x','x','x','x','x']]
</code></pre>
<p>I have tried using random.choice but this gives me duplicates:</p>
<pre><code>def fill_lists(data):
for list in lists:
for n,i in enumerate(list):
if i=='x':
list[n]= random.choice(data)
</code></pre>
<p>I want my function to return a list that contains 4 lists each containing a random sample of the data list with no duplicates. I also want the first element of each list to be a value that I have already placed into the list.</p>
| 3 | 2016-08-02T19:48:38Z | 38,729,502 | <p>You could try this, modifying the ranges inside the <code>d</code> function to tune the number of elements you want.</p>
<pre><code>import random
def f(data):
    val = random.choice(data)
    ix = data.index(val)
    data.pop(ix)
    return val, data

def d(data):
    topholder = []
    m = len(data) // 4  # integer division, so range() gets an int on Python 3
    for i in range(4):
        holder = []
        for n in range(m):
            holder.append(f(data)[0])
        topholder.append(holder)
    return topholder

d(data)
</code></pre>
<p>This will always give you 4 lists of randomly sampled values without duplication.</p>
| 0 | 2016-08-02T20:03:57Z | [
"python",
"list",
"random",
"sample"
] |
I want to randomly select items from a list and add them to another list without replacement | 38,729,272 | <p>I'm trying to randomly select items from a list and add them to another list.</p>
<p>The list of elements I'm choosing from looks like this:</p>
<pre><code>data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
</code></pre>
<p>I want to randomly take an element from this list and add it to one of four lists until each list has the same number of elements.</p>
<pre><code>lists=[[1,'x','x','x','x','x'],[3,'x','x','x','x','x'],[5,'x','x','x','x','x'],[7,'x','x','x','x','x']]
</code></pre>
<p>I have tried using random.choice but this gives me duplicates:</p>
<pre><code>def fill_lists(data):
for list in lists:
for n,i in enumerate(list):
if i=='x':
list[n]= random.choice(data)
</code></pre>
<p>I want my function to return a list that contains 4 lists each containing a random sample of the data list with no duplicates. I also want the first element of each list to be a value that I have already placed into the list.</p>
| 3 | 2016-08-02T19:48:38Z | 38,729,520 | <p>Another shuffle-based approach, but one that ensures all sub-lists have the same size in case the number of elements is not divisible by the number of lists (try 7, for example).</p>
<pre><code>from random import shuffle
def split(data, n):
    size = len(data) // n
    for i in range(0, n * size, size):
        yield data[i:i+size]
data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
shuffle(data)
list(split(data, 5))
</code></pre>
| 0 | 2016-08-02T20:05:29Z | [
"python",
"list",
"random",
"sample"
] |
I want to randomly select items from a list and add them to another list without replacement | 38,729,272 | <p>I'm trying to randomly select items from a list and add them to another list.</p>
<p>The list of elements I'm choosing from looks like this:</p>
<pre><code>data=[2,3,4,7,8,12,17,24,27,33,35,36,37,38,40,43,44,50,51,54]
</code></pre>
<p>I want to randomly take an element from this list and add it to one of four lists until each list has the same number of elements.</p>
<pre><code>lists=[[1,'x','x','x','x','x'],[3,'x','x','x','x','x'],[5,'x','x','x','x','x'],[7,'x','x','x','x','x']]
</code></pre>
<p>I have tried using random.choice but this gives me duplicates:</p>
<pre><code>def fill_lists(data):
for list in lists:
for n,i in enumerate(list):
if i=='x':
list[n]= random.choice(data)
</code></pre>
<p>I want my function to return a list that contains 4 lists each containing a random sample of the data list with no duplicates. I also want the first element of each list to be a value that I have already placed into the list.</p>
| 3 | 2016-08-02T19:48:38Z | 38,730,541 | <p>This is a dynamic function that returns a list of list where each list starts with a specified value. The amount of nested lists is determined by the amount of <em>starting_values</em>.</p>
<pre><code>import random
def get_random_list(element_list, starting_values, size_per_group):
num_of_groups = len(starting_values)
size_per_group -= 1
total_elements = num_of_groups * size_per_group
random_data = random.sample(element_list, total_elements)
return [[starting_values[x]] + random_data[x * size_per_group:(x + 1) * size_per_group] for x in range(num_of_groups)]
data = [2, 3, 4, 7, 8, 12, 17, 24, 27, 33, 35, 36, 37, 38, 40, 43, 44, 50, 51, 54]
print(get_random_list(data, starting_values=[1, 2, 3, 4, 5, 6], size_per_group=2))
# OUTPUT: [[1, 36], [2, 54], [3, 17], [4, 7], [5, 35], [6, 33]]
print(get_random_list(data, starting_values=[9, 3, 5], size_per_group=6))
# OUTPUT: [[9, 54, 2, 7, 38, 24], [3, 35, 8, 37, 40, 17], [5, 44, 4, 27, 50, 3]]
</code></pre>
<p>It works for Python2.x and Python3.x but for Python2.x you should change <code>range()</code> to <code>xrange()</code> for better use of memory.</p>
| 0 | 2016-08-02T21:12:16Z | [
"python",
"list",
"random",
"sample"
] |
How to run a Python script in Node.js synchronously? | 38,729,300 | <p>I am running the following Python script in Node.js through <a href="https://github.com/extrabacon/python-shell" rel="nofollow">python-shell</a>:</p>
<pre><code>import sys
import time
x=0
completeData = "";
while x<800:
crgb = ""+x;
print crgb
completeData = completeData + crgb + "@";
time.sleep(.0001)
x = x+1
file = open("sensorData.txt", "w")
file.write(completeData)
file.close()
sys.stdout.flush()
else:
print "Device not found\n"
</code></pre>
<p>And my corresponding Node.js code is:</p>
<pre><code>var PythonShell = require('python-shell');
PythonShell.run('sensor.py', function (err) {
if (err) throw err;
console.log('finished');
});
console.log ("Now reading data");
</code></pre>
<p>Output is:</p>
<pre><code>Now reading data
finished
</code></pre>
<p>But expected output is:</p>
<pre><code>finished
Now reading data
</code></pre>
<p>Node.js can not execute my Python script <em>synchronously</em>, it executes first all the code following the <code>PythonShell.run</code> function then executes <code>PythonShell.run</code>. How can I execute first <code>PythonShell.run</code> then the following code? Any help will be mostly appreciated... It is an emergency please!</p>
| 0 | 2016-08-02T19:50:53Z | 38,729,571 | <p>As this is asynchronous, move the follow-up work into a callback instead of leaving it at top level. For example, use the <code>end</code> callback from the documentation (this assumes a <code>PythonShell</code> instance, e.g. <code>var pyshell = new PythonShell('sensor.py');</code>); with <code>PythonShell.run</code> you can equally put the <code>console.log</code> inside its completion callback:</p>
<pre><code>// end the input stream and allow the process to exit
pyshell.end(function (err) {
if (err) throw err;
console.log ("Now reading data");
});
</code></pre>
| 0 | 2016-08-02T20:08:48Z | [
"python",
"node.js"
] |
Conditionally Select and Set Column Values | 38,729,313 | <p>I have two dataframes. I need to copy the values of df2.faults column to df1.faults column based on the values of unit and date.</p>
<p>The two dataframes have different lengths. df1 has possible duplicates of (unit,date) contrary to df2.
An example that mimics my dataset:</p>
<pre><code> df1 = pd.DataFrame({'unit': ['x']*5+['y']*6 + ['z']*5,
'date': ['2016-06-14', '2016-06-14', '2016-06-15', '2016-06-16', '2016-06-16',
'2016-06-14', '2016-06-14', '2016-06-15', '2016-06-15', '2016-06-16', '2016-06-16',
'2016-06-15', '2016-06-16', '2016-06-16', '2016-06-17', '2016-06-17'],
'faults': None})
df1.date = pd.to_datetime(df1.date)
print(df1)
date faults unit
0 2016-06-14 None x
1 2016-06-14 None x
2 2016-06-15 None x
3 2016-06-16 None x
4 2016-06-16 None x
5 2016-06-14 None y
6 2016-06-14 None y
7 2016-06-15 None y
8 2016-06-15 None y
9 2016-06-16 None y
10 2016-06-16 None y
11 2016-06-15 None z
12 2016-06-16 None z
13 2016-06-16 None z
14 2016-06-17 None z
15 2016-06-17 None z
df2 = pd.DataFrame({'unit': ['x']*3+['y']*3 + ['z']*3,
'date': ['2016-06-14', '2016-06-15', '2016-06-16',
'2016-06-14', '2016-06-15', '2016-06-16',
'2016-06-15', '2016-06-16', '2016-06-17'],
'faults': [76, 12, 30, 45, 23, 25, 10, 26, 43]})
df2.date = pd.to_datetime(df2.date)
print(df2)
date faults unit
0 2016-06-14 76 x
1 2016-06-15 12 x
2 2016-06-16 30 x
3 2016-06-14 45 y
4 2016-06-15 23 y
5 2016-06-16 25 y
6 2016-06-15 10 z
7 2016-06-16 26 z
8 2016-06-17 43 z
</code></pre>
<p>The required output using nested loops:</p>
<pre><code> for u in pd.unique(df2.unit):
for d in pd.unique(df2[df2.unit == u].date):
df1.ix[(df1.unit == u)&(df1.date == d) ,'faults'] = int(df2[(df2.unit == u)&(df2.date == d)]['faults'])
print(df1)
date faults unit
0 2016-06-14 76 x
1 2016-06-14 76 x
2 2016-06-15 12 x
3 2016-06-16 30 x
4 2016-06-16 30 x
5 2016-06-14 45 y
6 2016-06-14 45 y
7 2016-06-15 23 y
8 2016-06-15 23 y
9 2016-06-16 25 y
10 2016-06-16 25 y
11 2016-06-15 10 z
12 2016-06-16 26 z
13 2016-06-16 26 z
14 2016-06-17 43 z
15 2016-06-17 43 z
</code></pre>
<p>I can't think of an efficient approach! List comprehension, conditional indexing, ...? Am I missing something?</p>
<p>Thanks!</p>
<h2>Update</h2>
<p>One-loop solution is </p>
<pre><code>for index, row in df2.iterrows():
df1.ix[(df1.unit == row['unit'])&(df1.date == row['date']) ,'faults'] = row['faults']
</code></pre>
<p>Any more efficient solution? My dataset is relatively large that I want to avoid loops at all.</p>
| 2 | 2016-08-02T19:51:19Z | 38,729,595 | <p>Simple, use a left merge :</p>
<pre><code>df1 = pd.merge(df1,df2,how='left',on=['date','unit'])
df1 =
date faults_x unit faults_y
0 2016-06-14 None x 76
1 2016-06-14 None x 76
2 2016-06-15 None x 12
3 2016-06-16 None x 30
4 2016-06-16 None x 30
5 2016-06-14 None y 45
6 2016-06-14 None y 45
7 2016-06-15 None y 23
8 2016-06-15 None y 23
9 2016-06-16 None y 25
10 2016-06-16 None y 25
11 2016-06-15 None z 10
12 2016-06-16 None z 26
13 2016-06-16 None z 26
14 2016-06-17 None z 43
15 2016-06-17 None z 43
# Some Bookkeeping
df1 = df1.drop('faults_x', axis=1)
df1 = df1.rename(columns={'faults_y':'faults'})
# Final Output
df1 =
date unit faults
0 2016-06-14 x 76
1 2016-06-14 x 76
2 2016-06-15 x 12
3 2016-06-16 x 30
4 2016-06-16 x 30
5 2016-06-14 y 45
6 2016-06-14 y 45
7 2016-06-15 y 23
8 2016-06-15 y 23
9 2016-06-16 y 25
10 2016-06-16 y 25
11 2016-06-15 z 10
12 2016-06-16 z 26
13 2016-06-16 z 26
14 2016-06-17 z 43
15 2016-06-17 z 43
</code></pre>
<p>Remember your joins and you will be fine!! :)</p>
<p>In case you want to do it in one go then:</p>
<pre><code>df1 = pd.merge(df1.drop('faults', axis=1), df2, how='left', on=['date','unit'])
</code></pre>
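<p>A reduced, self-contained version of the same left merge (toy frames instead of the question's full data):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'unit': ['x', 'x', 'y'],
                    'date': ['2016-06-14', '2016-06-14', '2016-06-15'],
                    'faults': [None, None, None]})
df2 = pd.DataFrame({'unit': ['x', 'y'],
                    'date': ['2016-06-14', '2016-06-15'],
                    'faults': [76, 23]})

# left merge keeps every df1 row (including the duplicated (unit, date) pairs)
out = pd.merge(df1.drop(columns='faults'), df2, how='left', on=['date', 'unit'])
print(out['faults'].tolist())  # -> [76, 76, 23]
```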
| 4 | 2016-08-02T20:10:27Z | [
"python",
"python-3.x",
"pandas"
] |
Sort by one column's values, keeping rows grouped by another column's value | 38,729,321 | <p>I have two (hundreds) df's that are generated and then concatenated, which I would then like to sort while keeping the rows with identical column <code>D</code> names in the original order:</p>
<pre><code>In [120]: df_list[0]
Out[120]:
A B C D
0 0.564678 0.598355 0.606693 MA0835
1 0.066291 0.063587 0.662292 MA0835
2 0.000000 0.000000 0.010758 MA0835
3 0.000000 0.000000 0.097895 MA0835
4 0.000000 0.000000 0.136468 MA0835
In [121]: df_list[1]
Out[121]:
A B C D
0 0.628844 0.614492 0.570333 MA1002
1 0.317790 0.293189 0.239368 MA1002
2 0.000000 0.000000 0.000000 MA1002
3 0.000000 0.000000 0.000000 MA1002
4 0.000000 0.000000 0.000000 MA1002
In [122]: df = pd.concat(df_list[0:2])
In [122]: df
Out[122]:
A B C D
0 0.564678 0.598355 0.606693 MA0835
1 0.066291 0.063587 0.662292 MA0835
2 0.000000 0.000000 0.010758 MA0835
3 0.000000 0.000000 0.097895 MA0835
4 0.000000 0.000000 0.136468 MA0835
0 0.628844 0.614492 0.570333 MA1002
1 0.317790 0.293189 0.239368 MA1002
2 0.000000 0.000000 0.000000 MA1002
3 0.000000 0.000000 0.000000 MA1002
4 0.000000 0.000000 0.000000 MA1002
</code></pre>
<p>Standard sorting produces:</p>
<pre><code>In [125]: df.sort_values('A',ascending=False)
Out[125]:
A B C D
0 0.628844 0.614492 0.570333 MA1002
0 0.564678 0.598355 0.606693 MA0835
1 0.317790 0.293189 0.239368 MA1002
1 0.066291 0.063587 0.662292 MA0835
2 0.000000 0.000000 0.010758 MA0835
3 0.000000 0.000000 0.097895 MA0835
4 0.000000 0.000000 0.136468 MA0835
2 0.000000 0.000000 0.000000 MA1002
3 0.000000 0.000000 0.000000 MA1002
4 0.000000 0.000000 0.000000 MA1002
</code></pre>
<p>However, I would like to sort on <code>A</code> and keep the row-groupings as specified by <code>D</code>. This is the desired output:</p>
<pre><code> A B C D
0 0.628844 0.614492 0.570333 MA1002
1 0.317790 0.293189 0.239368 MA1002
2 0.000000 0.000000 0.000000 MA1002
3 0.000000 0.000000 0.000000 MA1002
4 0.000000 0.000000 0.000000 MA1002
0 0.564678 0.598355 0.606693 MA0835
1 0.066291 0.063587 0.662292 MA0835
2 0.000000 0.000000 0.010758 MA0835
3 0.000000 0.000000 0.097895 MA0835
4 0.000000 0.000000 0.136468 MA0835
</code></pre>
<p>Do I need to work with <code>groupby</code>, or is there another sorting/grouping technique I am unfamiliar with?</p>
| 4 | 2016-08-02T19:51:43Z | 38,729,920 | <p>Use the <code>keys</code> argument in <code>pd.concat</code></p>
<pre><code>keys = [(df.A.iloc[0], i) for i, df in enumerate(list_of_dfs)]
pd.concat(list_of_dfs, keys=keys) \
.sort_index(ascending=[False, True, True]) \
.reset_index(drop=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/5j6Yu.png" rel="nofollow"><img src="http://i.stack.imgur.com/5j6Yu.png" alt="enter image description here"></a></p>
| 3 | 2016-08-02T20:30:28Z | [
"python",
"sorting",
"pandas"
] |
How to pass an extra parameter to authenticate function when using django.views.login? | 38,729,330 | <p>I have an authentication back-end which looks like this</p>
<pre><code>class Backend(object):
def authenticate(self, username=None, password=None):
# Do stuff
</code></pre>
<p>My url handler looks like this</p>
<pre><code>url(r'^login/$', 'django.contrib.auth.views.login', {'template_name': 'login.html'}),
</code></pre>
<p>Login form looks something like this (standard form, you can skip this)</p>
<pre><code>{% if not form.username.errors %}
<input id="id_username" name="username" type="text" class="form-control" placeholder="Username (admin)" autofocus>
{% else %}
<div class="form-group has-error">
{% for error in form.username.errors %}
<label class="control-label" for="id_username">{{ error }}</label>
{% endfor %}
<input id="id_username" name="username" type="text" class="form-control" placeholder="Username (admin)" autofocus>
</div>
{% endif %}
{% if not form.password.errors %}
<input id="id_password" name="password" type="text" class="form-control" placeholder="Password (admin)" autofocus>
{% else %}
<div class="form-group has-error">
{% for error in form.password.errors %}
<label class="control-label" for="id_password">{{ error }}</label>
{% endfor %}
<input id="id_password" name="password" type="text" class="form-control" placeholder="Password (admin)">
</div>
{% endif %}
</code></pre>
<p>I need to pass an extra parameter(eg country) to the authenticate function possibly from the form.
The new authenticate function should be like this </p>
<pre><code>def authenticate(self, username=None, password=None, country=None):
</code></pre>
<p>How to do this?</p>
| 1 | 2016-08-02T19:52:16Z | 38,729,617 | <p>I believe that you are asking about writing custom authentication backend and defining your own <code>authenticate</code> method in it.</p>
<p>If so, check the <a href="https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#writing-an-authentication-backend" rel="nofollow">Django documentation</a>. As said there, the <code>authenticate</code> method is run against <strong>authenticate(**credentials)</strong>.
This means that you can pass any required keyword parameters, including 'country'.</p>
<p>In your case:</p>
<ol>
<li>add 'country' to method declaration</li>
<li>pass country as an additional parameter when you 'authenticate' the user.</li>
</ol>
<p>That is it.
Hope it helps.</p>
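<p>The mechanism is plain keyword forwarding. A framework-free sketch (the names and the backend loop are illustrative, not Django's real internals):</p>

```python
class Backend(object):
    def authenticate(self, username=None, password=None, country=None):
        # toy check standing in for "Do stuff"
        if username == "admin" and password == "secret" and country == "SE":
            return {"username": username, "country": country}
        return None

def authenticate(**credentials):
    """Mimics django.contrib.auth.authenticate: each configured backend
    is tried with the supplied credentials until one returns a user."""
    for backend in (Backend(),):
        user = backend.authenticate(**credentials)
        if user is not None:
            return user
    return None

print(authenticate(username="admin", password="secret", country="SE"))
# -> {'username': 'admin', 'country': 'SE'}
```

<p>In the actual Django view, the extra value would come from the form's cleaned data and be passed the same way: <code>authenticate(username=..., password=..., country=...)</code>.</p>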
| 1 | 2016-08-02T20:11:32Z | [
"python",
"django"
] |
How to pass an extra parameter to authenticate function when using django.views.login? | 38,729,330 | <p>I have an authentication back-end which looks like this</p>
<pre><code>class Backend(object):
def authenticate(self, username=None, password=None):
# Do stuff
</code></pre>
<p>My url handler looks like this</p>
<pre><code>url(r'^login/$', 'django.contrib.auth.views.login', {'template_name': 'login.html'}),
</code></pre>
<p>Login form looks something like this (standard form, you can skip this)</p>
<pre><code>{% if not form.username.errors %}
<input id="id_username" name="username" type="text" class="form-control" placeholder="Username (admin)" autofocus>
{% else %}
<div class="form-group has-error">
{% for error in form.username.errors %}
<label class="control-label" for="id_username">{{ error }}</label>
{% endfor %}
<input id="id_username" name="username" type="text" class="form-control" placeholder="Username (admin)" autofocus>
</div>
{% endif %}
{% if not form.password.errors %}
<input id="id_password" name="password" type="text" class="form-control" placeholder="Password (admin)" autofocus>
{% else %}
<div class="form-group has-error">
{% for error in form.password.errors %}
<label class="control-label" for="id_password">{{ error }}</label>
{% endfor %}
<input id="id_password" name="password" type="text" class="form-control" placeholder="Password (admin)">
</div>
{% endif %}
</code></pre>
<p>I need to pass an extra parameter (e.g. country) to the authenticate function, possibly from the form.
The new authenticate function should be like this </p>
<pre><code>def authenticate(self, username=None, password=None, country=None):
</code></pre>
<p>How to do this?</p>
| 1 | 2016-08-02T19:52:16Z | 38,729,914 | <p>You need to subclass the <code>AuthenticationForm</code>, and override the <code>clean</code> method so that it calls <code>authenticate</code> with your extra parameter.</p>
<p>Then, in your urls.py, pass your <code>authentication_form</code> to the login view.</p>
| 0 | 2016-08-02T20:30:15Z | [
"python",
"django"
] |
Executing a local shell function on a remote host over ssh using Python | 38,729,374 | <p>My <code>.profile</code> defines a function</p>
<pre><code>myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
</code></pre>
<p>I'd like to execute it from my python script</p>
<pre><code>import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
</code></pre>
<p>Getting an error back</p>
<pre><code>bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
</code></pre>
<p>Escaping ; results in </p>
<pre><code>bash: ;: command not found
</code></pre>
| 2 | 2016-08-02T19:55:58Z | 38,729,761 | <p>The original command was not interpreting the <code>;</code> before <code>myps</code> properly. Using <code>sh -c</code> fixes that, but... (please see Charles Duffy's comments below).</p>
<p>Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in <code>.profile</code> are actually accessible in the shell started by the subprocess.Popen object):</p>
<pre><code>subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
</code></pre>
<p>An alternative (less safe) method would be to use <code>sh -c</code> for the subshell command:</p>
<pre><code>subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
</code></pre>
<p>This seemingly returned the same result:</p>
<pre><code>subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
</code></pre>
<p>There are definitely alternative methods for accomplishing these type of tasks, however, this might give you an idea of what the issue was with the original command.</p>
| 0 | 2016-08-02T20:21:19Z | [
"python",
"ksh"
] |
Executing a local shell function on a remote host over ssh using Python | 38,729,374 | <p>My <code>.profile</code> defines a function</p>
<pre><code>myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
</code></pre>
<p>I'd like to execute it from my python script</p>
<pre><code>import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
</code></pre>
<p>Getting an error back</p>
<pre><code>bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
</code></pre>
<p>Escaping ; results in </p>
<pre><code>bash: ;: command not found
</code></pre>
| 2 | 2016-08-02T19:55:58Z | 38,730,287 | <pre><code>script='''
. ~/.profile # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script]) # no shell=True
</code></pre>
<hr>
<p>There are a few pertinent items here:</p>
<ul>
<li><p>The dotfile defining this function needs to be locally invoked <em>before</em> you run <code>typeset -f</code> to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the <code>ENV</code> environment variable is an exception).</p>
<p>In the given example, this is served by the <code>. ~/.profile</code> command within the script.</p></li>
<li><p>The shell needs to be one supporting <code>typeset</code>, so it has to be <code>bash</code> or <code>ksh</code>, not <code>sh</code> (as used by <code>shell=True</code> by default), which may be provided by <code>ash</code> or <code>dash</code>, lacking this feature.</p>
<p>In the given example, this is served by passing <code>['ksh', '-c']</code> as the first two arguments of the argv array.</p></li>
<li><p><code>typeset</code> needs to be run locally, so it can't be in an argv position other than the first with <code>shell=True</code>. (To provide an example: <code>subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True)</code> evaluates only <code>printf '%s\n' "$@"</code> as a shell script; <code>This is just literal data!</code> and <code>$(touch /tmp/this-is-not-executed)</code> are passed as literal data, so no file named <code>/tmp/this-is-not-executed</code> is created).</p>
<p>In the given example, this is mooted by <em>not using</em> <code>shell=True</code>.</p></li>
<li><p>Explicitly invoking <code>ksh -s</code> (or <code>bash -s</code>, as appropriate) ensures that the shell evaluating your function definitions matches the shell you <em>wrote</em> those functions against, rather than passing them to <code>sh -c</code>, as would happen otherwise.</p>
<p>In the given example, this is served by <code>ssh user@box ksh -s</code> inside the script.</p></li>
</ul>
| 1 | 2016-08-02T20:56:10Z | [
"python",
"ksh"
] |
Executing a local shell function on a remote host over ssh using Python | 38,729,374 | <p>My <code>.profile</code> defines a function</p>
<pre><code>myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
</code></pre>
<p>I'd like to execute it from my python script</p>
<pre><code>import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
</code></pre>
<p>Getting an error back</p>
<pre><code>bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
</code></pre>
<p>Escaping ; results in </p>
<pre><code>bash: ;: command not found
</code></pre>
| 2 | 2016-08-02T19:55:58Z | 38,744,502 | <p>I ended up using this.</p>
<pre><code>import subprocess
import sys
import re
HOST = "user@" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
</code></pre>
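<p>The same pattern can be tried locally (no ssh needed) by substituting a throwaway command; here a Python one-liner stands in for the remote command, and <code>communicate()</code> is used instead of reading the pipe directly, which avoids deadlocks when output is large:</p>

```python
import subprocess
import sys

# sys.executable stands in for the ssh command so the pattern is testable
proc = subprocess.Popen(
    [sys.executable, '-c', "print('line1'); print('line2')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()   # waits for exit and drains both pipes
lines = out.decode().splitlines()
print(lines)
```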
| 1 | 2016-08-03T13:03:44Z | [
"python",
"ksh"
] |
Should Python modules import the modules they semantically depend on? | 38,729,393 | <p>Should Python modules <code>import</code> the modules they semantically depend on?</p>
<p>For example:</p>
<p>module <code>a</code>:</p>
<pre><code>class A(object):
...
def foo(self):
...
</code></pre>
<p>module <code>b</code>:</p>
<pre><code>import a
def f(a_instance):
a_instance.foo()
...
</code></pre>
<p>The first line of module <code>b</code> is unnecessary, strictly speaking, but I wonder if it's considered good form in Python?</p>
| 1 | 2016-08-02T19:57:47Z | 38,729,523 | <p><code>b</code> semantically depends on <em>nothing</em>.</p>
<p>Quite literally, the only thing that <code>def f</code> depends on is that <code>a_instance</code> produces an attribute <code>.foo</code> that is a callable. Full stop.</p>
<p>It doesn't matter if you pass in <code>A()</code> or <code>AChild()</code> or even a <code>MagicMock</code>.</p>
<p>This is what the phrase "duck typing" means. Consider:</p>
<pre><code>def is_a_duck(duck_candidate):
duck_candidate.looks_like_a_duck()
duck_candidate.walks_like_a_duck()
duck_candidate.quacks_like_a_duck()
print('This is a duck')
return True
</code></pre>
<p>If you create something that <code>.looks_like_a_duck()</code>, and <code>.walks_like_a_duck()</code> and <code>.quacks_like_a_duck()</code>, then as far as we're concerned, it's a duck!</p>
<pre><code>class Person:
def looks_like_a_duck(self): pass
def walks_like_a_duck(self): pass
def quacks_like_a_duck(self): pass
class FakeDuck:
def looks_like_a_duck(self): pass
def walks_like_a_duck(self): pass
def quacks_like_a_duck(self): print('Quack quack quack')
def funcy_duck():
funcy_duck.looks_like_a_duck = lambda: None
funcy_duck.walks_like_a_duck = lambda: None
funcy_duck.quacks_like_a_duck = lambda: None
return funcy_duck
print(is_a_duck(Person()))
print(is_a_duck(FakeDuck()))
try:
print(is_a_duck(funcy_duck))
except AttributeError:
print('not a duck yet')
funcy_duck()
print(is_a_duck(funcy_duck))
</code></pre>
<p>These are all ducks - it doesn't matter if you define them in <code>ducks.py</code>, or different files, or even dump them as pickles and load them up later. They're all ducks as far as our function is concerned. There's no semantic dependencies on anything but what attributes and behavior our argument has.</p>
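<p>The <code>MagicMock</code> remark above can be made concrete with the standard library (Python 3.3+; on Python 2 the same class lives in the third-party <code>mock</code> package):</p>

```python
from unittest.mock import MagicMock

def f(a_instance):
    # module b's function: the only requirement is a callable .foo
    return a_instance.foo()

mock = MagicMock()
mock.foo.return_value = 'quack'
print(f(mock))   # the mock satisfies the implicit interface
```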
| 5 | 2016-08-02T20:05:39Z | [
"python"
] |
Pivot with Pandas Python to get booleans | 38,729,448 | <p>I have the following csv: <a href="https://github.com/antonio1695/Python/blob/master/nearBPO/facturas.csv" rel="nofollow">https://github.com/antonio1695/Python/blob/master/nearBPO/facturas.csv</a></p>
<p>From which I created a dataframe with the following code:</p>
<pre><code>import pandas as pd
df = pd.read_csv("C:/Users/Antonio/Desktop/nearBPO/facturas.csv", encoding = "ISO-8859-1")
df_du = df.iloc[:,[0,5]]
dfv = df_du.groupby('UUID')['Desc'].apply(list)
df2 = dfv.reset_index()
</code></pre>
<p>*Note: I'm taking the csv locally.</p>
<p>Which after the code looks like this: </p>
<pre><code> UUID Desc
0 0019A60D-78F8-E341-8D3E-9786201FE017 [TRANSPORTACION DE PASAJEROS]
1 003B8B8F-7017-E441-8C84-8C0EA577E29D [SERVICIO POR HORA]
2 00536BC1-1B10-4146-A59B-36613090EF10 [CONSUMO Y RENTA DE SALA DE JUNTAS]
3 005BBAEE-ABEC-E341-8CED-15DA22D11F65 [VERIFICACION HOLOGRAMA DOBLE CERO]
4 006C5F2E-CAE0-4498-9288-0241C1949D8A [C Meg XT Clas CH, Com Whop Q CH, C Meg XT Cla...
5 0075D1FC-996D-4784-9755-2F4598D16163 [Consumo]
</code></pre>
<p>I would like to make a dataframe which had each element of the 'Desc' column as a column and each UUID as a row where i would have a 1 (or True) if the UUID had the corresponding 'Desc' in it. </p>
<p>Example of what I want:</p>
<pre><code>UUID Transportacion de pasajeros Servicio por hora
0019A60D-78F8-E341-8D3E-9786201FE017 1 0
003B8B8F-7017-E441-8C84-8C0EA577E29D 0 1
</code></pre>
<p>What I was trying to make was a matrix of 0s, with an if to set the 1s. Afterwards I would merge it and pivot it. However, since some of the 'Desc' values are the same, I didn't know how big to make it, and it seems to come with many other flaws in the merge part.</p>
| 2 | 2016-08-02T20:01:07Z | 38,729,800 | <p>You can use</p>
<pre><code>pd.concat([df2['UUID'], df2['Desc'].str.join('___').str.get_dummies('___')], axis=1)
</code></pre>
<p>It returns something like this:</p>
<pre><code>Out:
UUID SERVICIO POR HORA \
0 0019A60D-78F8-E341-8D3E-9786201FE017 0
1 003B8B8F-7017-E441-8C84-8C0EA577E29D 1
TRANSPORTACION DE PASAJEROS
0 1
1 0
</code></pre>
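<p>A small self-contained run of that one-liner on toy data (assuming pandas is installed) shows the reshaping:</p>

```python
import pandas as pd

df2 = pd.DataFrame({
    'UUID': ['u1', 'u2'],
    'Desc': [['TRANSPORTACION DE PASAJEROS'], ['SERVICIO POR HORA']],
})

# join each list on a separator that cannot occur in the labels,
# then split it back into 0/1 indicator columns
out = pd.concat(
    [df2['UUID'], df2['Desc'].str.join('___').str.get_dummies('___')],
    axis=1,
)
print(out)
```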
| 3 | 2016-08-02T20:23:18Z | [
"python",
"pandas",
"dataframe",
"merge",
"pivot"
] |
Apply condition on pandas columns to create a boolean indexing array | 38,729,550 | <p>I want to drop specific rows from a pandas dataframe. Usually you can do that using something like</p>
<pre><code>df[df['some_column'] != 1234]
</code></pre>
<p>What <code>df['some_column'] != 1234</code> does is creating an indexing array that is indexing the new df, thus letting only rows with value <code>True</code> to be present.</p>
<p>But in some cases, like mine, I don't see how I can express the condition in such a way, and iterating over pandas rows is way too slow to be considered a viable option.</p>
<p>To be more specific, I want to drop all rows where the value of a column is also a key in a dictionary, in a similar manner with the example above.</p>
<p>In a perfect world I would consider something like</p>
<pre><code>df[df['some_column'] not in my_dict.keys()]
</code></pre>
<p>Which is obviously not working. Any suggestions?</p>
| 4 | 2016-08-02T20:07:38Z | 38,729,819 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#special-use-of-the-operator-with-list-objects" rel="nofollow"><code>query</code></a> for this purpose:</p>
<pre><code>keys = list(my_dict.keys())
df.query('some_column not in @keys')
</code></pre>
| 1 | 2016-08-02T20:24:12Z | [
"python",
"pandas"
] |
Apply condition on pandas columns to create a boolean indexing array | 38,729,550 | <p>I want to drop specific rows from a pandas dataframe. Usually you can do that using something like</p>
<pre><code>df[df['some_column'] != 1234]
</code></pre>
<p>What <code>df['some_column'] != 1234</code> does is creating an indexing array that is indexing the new df, thus letting only rows with value <code>True</code> to be present.</p>
<p>But in some cases, like mine, I don't see how I can express the condition in such a way, and iterating over pandas rows is way too slow to be considered a viable option.</p>
<p>To be more specific, I want to drop all rows where the value of a column is also a key in a dictionary, in a similar manner with the example above.</p>
<p>In a perfect world I would consider something like</p>
<pre><code>df[df['some_column'] not in my_dict.keys()]
</code></pre>
<p>Which is obviously not working. Any suggestions?</p>
| 4 | 2016-08-02T20:07:38Z | 38,729,821 | <p>What you're looking for is <code>isin()</code></p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 2], [1, 3], [4, 6],[5,7],[8,9]], columns=['A', 'B'])
In[9]: df
Out[9]: df
A B
0 1 2
1 1 3
2 4 6
3 5 7
4 8 9
mydict = {1:'A',8:'B'}
df[df['A'].isin(mydict.keys())]
Out[11]:
A B
0 1 2
1 1 3
4 8 9
</code></pre>
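<p>Since the question asks to <em>drop</em> the matching rows, the boolean mask can be inverted with <code>~</code>; a complete run (assuming pandas is installed):</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [1, 3], [4, 6], [5, 7], [8, 9]], columns=['A', 'B'])
mydict = {1: 'A', 8: 'B'}

# ~ negates the mask, keeping only rows whose 'A' is NOT a dict key
kept = df[~df['A'].isin(list(mydict))]
print(kept)
```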
| 2 | 2016-08-02T20:24:19Z | [
"python",
"pandas"
] |
Apply condition on pandas columns to create a boolean indexing array | 38,729,550 | <p>I want to drop specific rows from a pandas dataframe. Usually you can do that using something like</p>
<pre><code>df[df['some_column'] != 1234]
</code></pre>
<p>What <code>df['some_column'] != 1234</code> does is creating an indexing array that is indexing the new df, thus letting only rows with value <code>True</code> to be present.</p>
<p>But in some cases, like mine, I don't see how I can express the condition in such a way, and iterating over pandas rows is way too slow to be considered a viable option.</p>
<p>To be more specific, I want to drop all rows where the value of a column is also a key in a dictionary, in a similar manner with the example above.</p>
<p>In a perfect world I would consider something like</p>
<pre><code>df[df['some_column'] not in my_dict.keys()]
</code></pre>
<p>Which is obviously not working. Any suggestions?</p>
| 4 | 2016-08-02T20:07:38Z | 38,729,844 | <p>You can use the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin()</code></a> to select rows whose column value is in an iterable.</p>
<h3>Using lists:</h3>
<pre><code>my_list = ['my', 'own', 'data']
df.loc[df['column'].isin(my_list)]
</code></pre>
<h3>Using dicts:</h3>
<pre><code>my_dict = {'key1':'Some value'}
df.loc[df['column'].isin(my_dict.keys())]
</code></pre>
| 1 | 2016-08-02T20:25:57Z | [
"python",
"pandas"
] |
File paths within my Python script fail when launching via double-click (Windows 10) | 38,729,569 | <p>I have a python script that reads and writes to files that are located relative to it, in directories above and beside it. When I run my script via Cygwin using</p>
<pre><code>python script.py
</code></pre>
<p>The program works perfectly. However, when I run it by navigating through the windows GUI to my file and double clicking, I get a blank cmd prompt and then my program runs fine until I reach the point where I need to access the other files, at which point it fails and gives me this message in the cmd prompt that opens itself:</p>
<pre><code>../FFPRM.TXT
../2025510296/FFPRM_000.TXT
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:\Users\rbanks\Desktop\TSAC\EXECUTABLE\T-SAC_GUI.py", line 705, in run_exe
invalid_entry, output_text = self.apply()
File "C:\Users\rbanks\Desktop\TSAC\EXECUTABLE\T-SAC_GUI.py", line 694, in apply
p = subprocess.Popen(['cp', output_file_path, output_file_path_id])
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 950, in __init__
restore_signals, start_new_session)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 1220, in _execute_child startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>I am deploying this script as well as the directory structure as a zip for users to be able to unzip and use anywhere on their PC, so it is important for me to be able to run it with a simple double click and my relative file paths.</p>
<p>My first thought was the cmd prompt that was opening and executing my script was in a different environment, but when I run:</p>
<pre><code>cd
pause
</code></pre>
<p>in a .cmd script, I get:</p>
<pre><code>C:\Users\rbanks\Desktop\TSAC\EXECUTABLE>pause
</code></pre>
<p>Which is the correct location. </p>
<p>I am not having any luck with Google, I assume because I can't seem to construct a sufficient search query. Could someone point me in the right direction please?</p>
| 0 | 2016-08-02T20:08:36Z | 38,729,615 | <p><strong>[edit]</strong> The other answer is correct (at least I suspect), but I will leave this here in the hope that it helps the OP in the future with path problems; doing something like this is just generally good practice.</p>
<hr>
<p>use</p>
<pre><code>BASEPATH = os.path.abspath(os.path.dirname(__file__))
</code></pre>
<p>at the top of your script</p>
<p>then later</p>
<pre><code>txt_file = os.path.join(BASEPATH,"my_file.txt")
</code></pre>
<p>or even</p>
<pre><code>txt_file = os.path.join(BASEPATH,"..","my_file.txt")
</code></pre>
<p>this gives you the benefit of being able to do things like</p>
<pre><code>if not os.path.exists(txt_file):
print "Cannot find file: %r"%txt_file
</code></pre>
<p>which will likely give you a better idea about what your problem actually is (if it's simply path-related, at least)</p>
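<p>The point of anchoring on <code>BASEPATH</code> is that <code>..</code> components resolve against the script's own directory rather than whatever the current working directory happens to be; a quick illustration with the standard library (the directory here is made up):</p>

```python
import os

# a made-up script directory standing in for BASEPATH
base = os.path.join(os.sep, 'home', 'user', 'app')
# '..' climbs one level above the script's directory, then normpath
# collapses it, regardless of the process's current working directory
target = os.path.normpath(os.path.join(base, '..', 'my_file.txt'))
print(target)   # e.g. /home/user/my_file.txt on POSIX
```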
| 4 | 2016-08-02T20:11:30Z | [
"python",
"windows",
"cmd",
"cygwin"
] |
File paths within my Python script fail when launching via double-click (Windows 10) | 38,729,569 | <p>I have a python script that reads and writes to files that are located relative to it, in directories above and beside it. When I run my script via Cygwin using</p>
<pre><code>python script.py
</code></pre>
<p>The program works perfectly. However, when I run it by navigating through the windows GUI to my file and double clicking, I get a blank cmd prompt and then my program runs fine until I reach the point where I need to access the other files, at which point it fails and gives me this message in the cmd prompt that opens itself:</p>
<pre><code>../FFPRM.TXT
../2025510296/FFPRM_000.TXT
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:\Users\rbanks\Desktop\TSAC\EXECUTABLE\T-SAC_GUI.py", line 705, in run_exe
invalid_entry, output_text = self.apply()
File "C:\Users\rbanks\Desktop\TSAC\EXECUTABLE\T-SAC_GUI.py", line 694, in apply
p = subprocess.Popen(['cp', output_file_path, output_file_path_id])
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 950, in __init__
restore_signals, start_new_session)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 1220, in _execute_child startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>I am deploying this script as well as the directory structure as a zip for users to be able to unzip and use anywhere on their PC, so it is important for me to be able to run it with a simple double click and my relative file paths.</p>
<p>My first thought was the cmd prompt that was opening and executing my script was in a different environment, but when I run:</p>
<pre><code>cd
pause
</code></pre>
<p>in a .cmd script, I get:</p>
<pre><code>C:\Users\rbanks\Desktop\TSAC\EXECUTABLE>pause
</code></pre>
<p>Which is the correct location. </p>
<p>I am not having any luck with Google, I assume because I can't seem to construct a sufficient search query. Could someone point me in the right direction please?</p>
| 0 | 2016-08-02T20:08:36Z | 38,729,661 | <p>The problem is not the current directory. It is correct when double clicking on the icon</p>
<p>The problem is: Cygwin commands are not in the windows path</p>
<p>You are using python, so don't run simple copy commands like this, which make your script non-portable and subject to variations, requiring installation of cygwin, etc...</p>
<pre><code>p = subprocess.Popen(['cp', output_file_path, output_file_path_id])
</code></pre>
<p>can be replaced by</p>
<pre><code>import shutil
shutil.copyfile(output_file_path, output_file_path_id)
</code></pre>
<p>Now you have a 100% Pythonic, native solution, which will throw exceptions if it cannot read/write the files, so it is fully integrated with the rest of your program.</p>
<p>Before running an external command from Python make sure that no python way exists. There are so many useful modules out there.</p>
<p>Other examples of how to avoid running basic commands from python (of course if you need to run a C compilation it's different!):</p>
<pre><code>- zipfile package: much better than running `zip.exe`
- gzip package: can open gzipped files natively from python
- `os.listdir()` instead of running `cmd /c dir /B`
</code></pre>
<p>etc... python rules!</p>
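<p>A self-contained demonstration of the <code>shutil</code> approach using a temporary file (so it can run anywhere, no external <code>cp</code> binary needed):</p>

```python
import os
import shutil
import tempfile

# create a throwaway source file
src = tempfile.NamedTemporaryFile(delete=False, suffix='.txt')
src.write(b'payload')
src.close()

dst = src.name + '.copy'
shutil.copyfile(src.name, dst)     # pure-Python copy; raises OSError on failure

with open(dst, 'rb') as fh:
    data = fh.read()
print(data)   # b'payload'

# clean up the temporary files
os.unlink(src.name)
os.unlink(dst)
```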
| 2 | 2016-08-02T20:15:20Z | [
"python",
"windows",
"cmd",
"cygwin"
] |
How to show a Django auth user field in a custom admin list display | 38,729,587 | <p>I am using the Django auth user model along with a custom user profile model. The user profile admin looks like this:</p>
<pre><code>class UserProfileAdmin(admin.ModelAdmin):
list_display = ['user', 'first_login', 'project', 'type']
class Meta:
model = UserProfile
</code></pre>
<p>The user profile model looks like this:</p>
<pre><code>class UserProfile(models.Model):
user = models.OneToOneField(User)
first_login = models.BooleanField(default = True)
TYPE_CHOICES = (('R', 'Reader'), ('A', 'Author'))
type = models.CharField(max_length = 9, choices = TYPE_CHOICES, blank = True)
project = models.ForeignKey('Project', on_delete=models.CASCADE)
</code></pre>
<p>What I like to do is to display the <code>is_active</code> property of the user in the list display of the UserProfileAdmin. Is this possible, and if yes, how?</p>
| 1 | 2016-08-02T20:09:58Z | 38,729,740 | <p>It is possible if you define, say, a <code>wrapped_is_active</code> method in your custom admin class with a signature like:</p>
<pre><code>def wrapped_is_active(self, item):
if item:
return item.user.is_active
wrapped_is_active.boolean = True
</code></pre>
<p>You should specify that method in your list_display, so that it becomes:</p>
<pre><code>list_display=['user', 'first_login', 'project', 'type', 'wrapped_is_active']
</code></pre>
<p>for more information see <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow">Django admin site documentation</a></p>
| 1 | 2016-08-02T20:19:47Z | [
"python",
"django"
] |
How to show a Django auth user field in a custom admin list display | 38,729,587 | <p>I am using the Django auth user model along with a custom user profile model. The user profile admin looks like this:</p>
<pre><code>class UserProfileAdmin(admin.ModelAdmin):
list_display = ['user', 'first_login', 'project', 'type']
class Meta:
model = UserProfile
</code></pre>
<p>The user profile model looks like this:</p>
<pre><code>class UserProfile(models.Model):
user = models.OneToOneField(User)
first_login = models.BooleanField(default = True)
TYPE_CHOICES = (('R', 'Reader'), ('A', 'Author'))
type = models.CharField(max_length = 9, choices = TYPE_CHOICES, blank = True)
project = models.ForeignKey('Project', on_delete=models.CASCADE)
</code></pre>
<p>What I like to do is to display the <code>is_active</code> property of the user in the list display of the UserProfileAdmin. Is this possible, and if yes, how?</p>
| 1 | 2016-08-02T20:09:58Z | 38,729,877 | <p>It's possible; I made changes to your code, have a look:</p>
<pre><code>class UserProfileAdmin(admin.ModelAdmin):
list_display = ['user', 'first_login', 'project', 'type','is_active']
class Meta:
model = UserProfile
class UserProfile(models.Model):
user = models.OneToOneField(User)
first_login = models.BooleanField(default = True)
TYPE_CHOICES = (('R', 'Reader'), ('A', 'Author'))
type = models.CharField(max_length = 9, choices = TYP_CHOICES, blank = True)
project = models.ForeignKey('Project', on_delete=models.CASCADE)
is_active = models.BooleanField(default=True)
</code></pre>
| 0 | 2016-08-02T20:28:00Z | [
"python",
"django"
] |
How to execute code asynchronously in Twisted Klein? | 38,729,711 | <p>I have two functions in my python Twisted Klein web service:</p>
<pre><code>@inlineCallbacks
def logging(data):
ofile = open("file", "w")
ofile.write(data)
yield os.system("command to upload the written file")
@APP.route('/dostuff')
@inlineCallbacks
def dostuff():
yield logging(data)
print "check!"
returnValue("42")
</code></pre>
<p>When <code>os.system("command to upload the written file")</code> runs, it will show message saying "start uploading" then "upload complete". I want to make the logging function asynchronous so that processing in <code>logging</code> handler happens after <code>dostuff</code> handler prints out "check!". (I actually want processing to happen after returnValue("42"), but both of those are making the logging function async I think?)</p>
<p>I thought the yield statement will make it non-blocking but it seems not the case, the "check!" always got printed after "start uploading" and "upload complete". I'll appreciate if anyone can give me some feedback on it since I'm new to async coding and got blocked on this for a while...</p>
| 6 | 2016-08-02T20:18:16Z | 38,846,221 | <p>The 'yield' statement doesn't make things happen asynchronously. It merely turns the containing function into a generator function: calling it just returns a generator object, and the body only runs when that generator is later iterated.</p>
<p>So dostuff() is going to return a generator object. Nothing will happen until that generator object is iterated sometime later. But there's nothing in your code to make this happen. I expect your dostuff routine will produce a syntax error because it contains both a yield and a non-empty return. The logging routine won't do anything because it contains a yield and the generator it returns is never used.</p>
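<p>The "nothing happens until iteration" point is easy to verify with a plain generator, independent of Twisted:</p>

```python
def gen():
    print('body running')   # only happens once iteration starts
    yield 42

g = gen()                   # no output yet: this just builds a generator object
print(type(g).__name__)     # generator
print(next(g))              # now the body runs, then 42 is printed
```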
<p>Finally, the logging routine is going to truncate its output file every time it's called because it opens the log file with mode 'w' on every invocation.</p>
<p>For asynchronous execution, you need some form of multiprocessing. But I don't think that's needed in this context. Your logging function is fairly light weight and should run quickly and not interfere with dostuff's work.</p>
<p>I would suggest trying something like this:</p>
<pre><code>def logging(data):
try:
logging._ofile.write(data + '\n')
except AttributeError:
logging._ofile = open("file", 'w')
logging._ofile.write(data + '\n')
@APP.route('/dostuff')
def dostuff():
logging("before!")
os.system("command to upload the written file")
logging("after!")
return("42")
</code></pre>
<p>Here we open the logging file only once, the first time logging is called when _ofile is not defined as an attribute of logging. On subsequent calls, logging._ofile will already be open and the write statement in the try block will be successful.</p>
<p>Routine dostuff() calls logging to indicate we're about to do the work, actually does the work, then calls logging to indicate the work has been done, and finally returns the desired value.</p>
| 1 | 2016-08-09T08:55:23Z | [
"python",
"web-services",
"asynchronous",
"twisted",
"klein-mvc"
] |
How to execute code asynchronously in Twisted Klein? | 38,729,711 | <p>I have two functions in my python Twisted Klein web service:</p>
<pre><code>@inlineCallbacks
def logging(data):
ofile = open("file", "w")
ofile.write(data)
yield os.system("command to upload the written file")
@APP.route('/dostuff')
@inlineCallbacks
def dostuff():
yield logging(data)
print "check!"
returnValue("42")
</code></pre>
<p>When <code>os.system("command to upload the written file")</code> runs, it will show message saying "start uploading" then "upload complete". I want to make the logging function asynchronous so that processing in <code>logging</code> handler happens after <code>dostuff</code> handler prints out "check!". (I actually want processing to happen after returnValue("42"), but both of those are making the logging function async I think?)</p>
<p>I thought the yield statement will make it non-blocking but it seems not the case, the "check!" always got printed after "start uploading" and "upload complete". I'll appreciate if anyone can give me some feedback on it since I'm new to async coding and got blocked on this for a while...</p>
| 6 | 2016-08-02T20:18:16Z | 38,889,392 | <p>To make your code async you need to use <a href="https://twistedmatrix.com/documents/current/core/howto/defer.html" rel="nofollow">Twisted Deferreds</a> as <a href="http://klein.readthedocs.io/en/latest/examples/deferreds.html" rel="nofollow">described here</a>. Deferreds give you an API for asynchronous code execution: they allow you to attach callbacks to your functions, and they execute code in the Twisted event loop managed by the reactor object.</p>
<p>I see two potential ways to use Deferreds in your case.</p>
<p><strong>1) Execute task in background with <code>reactor.callLater()</code></strong></p>
<p>This is OK if the <code>dostuff</code> handler doesn't care about the result. You can use <a href="http://twistedmatrix.com/documents/current/core/howto/time.html" rel="nofollow">reactor.callLater()</a>. This way your async function will execute after you return the value from <code>doStuff</code>.</p>
<p>So something like this:</p>
<pre><code>from klein import run, route, Klein
from twisted.internet import defer, task, reactor
import os
app = Klein()
def logging(data):
ofile = open("file", "w")
ofile.write(data)
result = os.system("ls")
print(result)
@route('/')
def dostuff(request):
reactor.callLater(0, logging, "some data")
print("check!")
return b'Hello, world!'
run("localhost", 8080)
</code></pre>
<p>The order of events with this code is following, first "check" is printed, then "hello world" response is returned and in the end async call suceeds and prints results of running <code>os.system()</code>.</p>
<pre><code>2016-08-11 08:52:33+0200 [-] check!
2016-08-11 08:52:33+0200 [-] "127.0.0.1" - - [11/Aug/2016:06:52:32 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.35.0"
a.py file
</code></pre>
<p><strong>2) Execute task in background and get result with <code>task.deferLater()</code></strong></p>
<p>If you care about the result of your <code>logging</code> function, you can also attach a callback to the returned Deferred and use the <a href="http://twistedmatrix.com/documents/current/api/twisted.internet.task.html" rel="nofollow">twisted.internet.task</a> API. If you want to go this way you need to refactor your handler to work like this:</p>
<pre><code>@route('/')
def dostuff(request):
def the_end(result):
print("executed at the end with result: {}".format(result))
dfd = task.deferLater(reactor, 0, logging, "some data")
dfd.addCallback(the_end)
print("check!")
return b'Hello, world!'
</code></pre>
<p>This way order of events will be same as above, but <code>the_end</code> function will be executed at the end after your <code>logging</code> function finishes.</p>
<pre><code>2016-08-11 08:59:24+0200 [-] check!
2016-08-11 08:59:24+0200 [-] "127.0.0.1" - - [11/Aug/2016:06:59:23 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.35.0"
a.py file
2016-08-11 08:59:24+0200 [-] executed at the end with result: some result
</code></pre>
| 3 | 2016-08-11T07:00:31Z | [
"python",
"web-services",
"asynchronous",
"twisted",
"klein-mvc"
] |
Chrome Web Push Notification with Encrypted Payload in Python | 38,729,729 | <p>Can someone please explain how to send Web Push Notification with Payload for Chrome using Curl/Python ?</p>
<p>This is the code I am trying: </p>
<p>Code: </p>
<pre><code>fcm_url = "https://fcm.googleapis.com/fcm/send"
headers = {'Content-Type': 'application/json'}
headers.update({'Authorization': 'key=' + fcm_key})
encoded = {'body': "i\x87\xb7W\xee\xd6QzL\xb6Q\xfe\xf5=t\xc4D[\xfe\xe3+g\xb7\x86\xdd\x81\xb9I\xffX\x99\x9b\x85$x\x80\xc6\x88\xe3\xbcm\x91\xff\x17a\x87C\x81\xf0\xbd\xb3}Y\xc8\xdb:\x14\x02\xf2R\xe7\x12\xcb\x1c\x0f\x13\xca'\xec7B\xc3\x9e\xb2\x17\xa0\xf0\xcd\xed3\xff\x1e\xc9k'A\xfb\x84\xf0\x17\xd4+I\xe0\xe0\x92i%\x00\xf0\xe0\xdb[\xa5\xc9/'\xc3L\xf6a\x183\xc1x7\xa6\x04!\xcctH\xf3\xf3\xcf\xf5\x12r\xd3\xf4\xd4\x8b\x10pc\x84Dd\xee\xe5'\x82\xa8\x81\x9a\xf4\x94\xd3\x12\x166\x91G3t\x08\xf9\xbe\x1b\x90\x02\xc6\x17\x17\xc3\xe9\x08qQK\xd4\xce\xc2\x88\x8f\xcbA\xc9\xfd]\x99\x13\xfd\xa6v2\xd65\x1b\xd5\x82)FUX\x92c\xed\xecF\x91rk\xba\x04\xd2\x90\x93a\n\x96|M}\x10\xc4\xb0\xe4\xdd\x1dd4\xf1\xcb\x06<\xf7\x06\xf9\xfe\xce\x19W\xaa\xc4", 'crypto_key': 'BJUZv_0v3lqIkptS5R4r-DKbVXl4Cpd4YN4ASO4dcNzEGPgtNW7EDF2HBSNGy0fI8kBJj3bnSuCh0bmR46ICKpc', 'salt': 'bjNe7m6zIQeDcsh4TF_wqg'}
crypto_key = "dh=" + encoded["crypto_key"]
salt = "salt=" + encoded['salt']
headers.update({'crypto-key': crypto_key, 'content-encoding': 'aesgcm', 'encryption': salt})
fcm_data = {"data": {"message": base64.urlsafe_b64encode(encoded['body'])}, "registration_ids": google_ids_array}
resp = requests.post(fcm_url, data=json.dumps(fcm_data), headers=headers)
</code></pre>
<p>Response:</p>
<pre><code>(200, 'OK')
{"multicast_id":66796434737498,"success":1,"failure":0,"canonical_ids":0,"results":[{"message_id":"0:14768189819%063a1cbcf9fd7ecd"}]}
</code></pre>
<p>but the notification doesn't contain the above specified Data.</p>
<p>Cases Tried, But None Useful: </p>
<pre><code>1) fcm_data = {"data": {"message": "Hello"}, "to": google_ids_array[0]} # Wrong Notification
2) fcm_data = {"data": {"message": "Hello"}, "registration_ids": google_ids_array} # Wrong Notification
3) fcm_data = {"data": {"message": base64.urlsafe_b64encode(encoded['body'])}, "to": google_ids_array[0]} # Wrong Notification
4) fcm_data = {"data": {"message": base64.urlsafe_b64encode(encoded['body'])}, "registrations_ids": google_ids_array} # Bad Request, to
5) fcm_data = {"raw_data": "He", "registration_ids": google_ids_array} # success, No Notification
6) fcm_data = {"raw_data": str({"message": "Hello"}), "registration_ids": google_ids_array} # Failure, MessageTooBig
7) fcm_data = {"raw_data": base64.urlsafe_b64encode(encoded['body']), "registration_ids": google_ids_array} # Failure, MessageTooBig
</code></pre>
<p>Can someone please suggest how to send correct Web Notification with Payload on Chrome Browser ? </p>
<p>Any Suggestion/Hint would be helpful...</p>
<p>Thanks,</p>
| 0 | 2016-08-02T20:19:22Z | 38,741,821 | <p><strong>Working Code</strong></p>
<pre><code>fcm_url = "https://fcm.googleapis.com/fcm/send"
encoded = {'body': "i\x87\xb7W\xee\xd6QzL\xb6Q\xfe\xf5=t\xc4D[\xfe\xe3+g\xb7\x86\xdd\x81\xb9I\xffX\x99\x9b\x85$x\x80\xc6\x88\xe3\xbcm\x91\xff\x17a\x87C\x81\xf0\xbd\xb3}Y\xc8\xdb:\x14\x02\xf2R\xe7\x12\xcb\x1c\x0f\x13\xca'\xec7B\xc3\x9e\xb2\x17\xa0\xf0\xcd\xed3\xff\x1e\xc9k'A\xfb\x84\xf0\x17\xd4+I\xe0\xe0\x92i%\x00\xf0\xe0\xdb[\xa5\xc9/'\xc3L\xf6a\x183\xc1x7\xa6\x04!\xcctH\xf3\xf3\xcf\xf5\x12r\xd3\xf4\xd4\x8b\x10pc\x84Dd\xee\xe5'\x82\xa8\x81\x9a\xf4\x94\xd3\x12\x166\x91G3t\x08\xf9\xbe\x1b\x90\x02\xc6\x17\x17\xc3\xe9\x08qQK\xd4\xce\xc2\x88\x8f\xcbA\xc9\xfd]\x99\x13\xfd\xa6v2\xd65\x1b\xd5\x82)FUX\x92c\xed\xecF\x91rk\xba\x04\xd2\x90\x93a\n\x96|M}\x10\xc4\xb0\xe4\xdd\x1dd4\xf1\xcb\x06<\xf7\x06\xf9\xfe\xce\x19W\xaa\xc4", 'crypto_key': 'BJUZv_0v3lqIkptS5R4r-DKbVXl4Cpd4YN4ASO4dcNzEGPgtNW7EDF2HBSNGy0fI8kBJj3bnSuCh0bmR46ICKpc', 'salt': 'bjNe7m6zIQeDcsh4TF_wqg'}
crypto_key = "dh=" + encoded["crypto_key"]
salt = "salt=" + encoded['salt']
headers = {'Authorization': 'key=' + fcm_key, 'Content-Type': 'application/json', }
headers.update({'crypto-key': crypto_key, 'content-encoding': 'aesgcm', 'encryption': salt})
fcm_data = {"raw_data": base64.b64encode(encoded.get('body')), "registration_ids": google_ids_array}
resp = requests.post(fcm_url, data=json.dumps(fcm_data), headers=headers)
</code></pre>
<p>For Encoding The Payload, you can refer to this Python Library: <a href="https://github.com/web-push-libs/pywebpush" rel="nofollow">https://github.com/web-push-libs/pywebpush</a></p>
<p>Thanks</p>
| 1 | 2016-08-03T11:05:11Z | [
"python",
"django",
"google-chrome",
"push-notification",
"google-cloud-messaging"
] |
Writing numbers into a file- check code | 38,729,810 | <p>I'm taking my first ever CS class and I have an assignment due Friday.
I just wanted someone to check my code and make sure it works/follows the directions.</p>
<p>Instructions:</p>
<p><strong>Write a program that:</strong></p>
<p>1) gets the name of a text file of numbers from the user. Each number in the file is on its own line.</p>
<p>2) reads in those numbers one at a time</p>
<p>3) writes the even numbers to a file named even.txt</p>
<p>4) writes the odd numbers to a file named odd.txt</p>
<p>5) displays to the user the sum of the positive numbers and the count of the negative numbers.</p>
<p><strong>HERE IS WHAT I HAVE</strong></p>
<pre><code>def main():
#Open text file for reading
numberFile = open(r'numberFile.txt', 'r')
#Priming read
number = numberFile.readline()
#Setting up loop to continue reading until
#an empty line is reached
total = 0
count = 0
while number != '':
number = float(number) #convert from string to number
if number%2 == 0:
evenNumber = open('even.txt', 'w') #writes even numbers into a file
evenNumber.write(number + '\n')
else:
oddNumber = open('odd.txt', 'w') #writes odd numbers into a file
oddNumber.write(number + '\n')
for number in numberFile:
number = float(number) #convert from string to number
if number <= 0: #identify negative numbers
count +=1 #count negative numbers
if number >= 0: #identify positive numbers
total += number #sum of positive numbers
number = numberFile.readline()
numberFile.close() #close file after program is complete
main()
</code></pre>
| -1 | 2016-08-02T20:23:37Z | 38,729,915 | <p>Although this isn't a code review site, I'll give you some pointers.</p>
<ol>
<li>You never get a filename from the user - you should probably add that. It will be something like <code>filename = input('Enter filename: ')</code></li>
<li>You overwrite the <code>even.txt</code> and <code>odd.txt</code> each time you open it with <code>'w'</code>. Consider using <code>'a+'</code></li>
<li>You never output the <code>total</code> or the <code>count</code>. Try using <code>print</code> on those.</li>
</ol>
<p>On top of all of that, there are better ways to open files and do those kinds of operations, but I'll let you learn those in a future class.</p>
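<p>Putting those pointers together, here is one minimal sketch of what the whole program could look like (Python 3; treating zero as neither positive nor negative is my assumption, since the assignment text doesn't say):</p>

```python
def process_numbers(in_path):
    total = 0.0  # running sum of the positive numbers
    count = 0    # how many negative numbers were seen
    # open each output file exactly once, instead of re-opening it inside the loop
    with open(in_path) as src, open('even.txt', 'w') as evens, open('odd.txt', 'w') as odds:
        for line in src:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            number = float(line)
            if number % 2 == 0:
                evens.write(line + '\n')
            else:
                odds.write(line + '\n')
            if number < 0:
                count += 1
            elif number > 0:
                total += number
    print('Sum of positive numbers:', total)
    print('Count of negative numbers:', count)
```

<p>You would call it with something like <code>process_numbers(input('Enter filename: '))</code> to cover requirement 1.</p>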
| 0 | 2016-08-02T20:30:16Z | [
"python",
"python-3.x",
"file-writing"
] |
pandas pivot_table keep index | 38,729,856 | <p>I have a dataframe:</p>
<pre><code> day_bucket label numeric_value
0 2011-01-21 birds 4
1 2011-01-22 birds 0
2 2011-01-23 birds 7
3 2011-01-24 birds 3
</code></pre>
<p>I want to pivot this dataframe so that I have a column <code>birds</code> with the values below it. </p>
<pre><code>pd.pivot_table(df, values='numeric_value', index='day_bucket',columns='label')
</code></pre>
<p>gives:</p>
<pre><code>label birds
day_bucket
2011-01-21 4
2011-01-22 0
2011-01-23 7
2011-01-24 3
</code></pre>
<p>What should I do to keep the index? The result should look like:</p>
<pre><code> day_bucket birds
0 2011-01-21 4
1 2011-01-22 0
2 2011-01-23 7
3 2011-01-24 3
</code></pre>
| 2 | 2016-08-02T20:26:50Z | 38,729,983 | <p><code>set_index</code> with <code>append</code></p>
<pre><code>df.set_index(['day_bucket', 'label'], append=True) \
.rename_axis([None, None, None]).squeeze().unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/zq7zb.png" rel="nofollow"><img src="http://i.stack.imgur.com/zq7zb.png" alt="enter image description here"></a></p>
| 3 | 2016-08-02T20:35:21Z | [
"python",
"pandas",
"pivot",
"pivot-table"
] |
pandas pivot_table keep index | 38,729,856 | <p>I have a dataframe:</p>
<pre><code> day_bucket label numeric_value
0 2011-01-21 birds 4
1 2011-01-22 birds 0
2 2011-01-23 birds 7
3 2011-01-24 birds 3
</code></pre>
<p>I want to pivot this dataframe so that I have a column <code>birds</code> with the values below it. </p>
<pre><code>pd.pivot_table(df, values='numeric_value', index='day_bucket',columns='label')
</code></pre>
<p>gives:</p>
<pre><code>label birds
day_bucket
2011-01-21 4
2011-01-22 0
2011-01-23 7
2011-01-24 3
</code></pre>
<p>What should I do to keep the index? The result should look like:</p>
<pre><code> day_bucket birds
0 2011-01-21 4
1 2011-01-22 0
2 2011-01-23 7
3 2011-01-24 3
</code></pre>
| 2 | 2016-08-02T20:26:50Z | 38,730,306 | <p>In the meantime i also came up with a result</p>
<pre><code>pd.pivot_table(df, values='numeric_value', index=[df.index.values,'day_bucket'],columns='label').reset_index('day_bucket')
label day_bucket mortality_birds
0 2011-01-21 4
1 2011-01-22 0
2 2011-01-23 7
3 2011-01-24 3
</code></pre>
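<p>A simpler alternative, if the default 0..n integer index shown in the expected output is acceptable, is to pivot and then call <code>reset_index()</code>, clearing the leftover <code>label</code> columns name afterwards. This is a sketch built on the sample frame from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'day_bucket': ['2011-01-21', '2011-01-22', '2011-01-23', '2011-01-24'],
    'label': ['birds'] * 4,
    'numeric_value': [4, 0, 7, 3],
})

result = pd.pivot_table(df, values='numeric_value',
                        index='day_bucket', columns='label').reset_index()
result.columns.name = None  # drop the leftover "label" axis name
# result now has a plain 0..3 index and columns ['day_bucket', 'birds']
```
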
| 0 | 2016-08-02T20:57:35Z | [
"python",
"pandas",
"pivot",
"pivot-table"
] |
Converting dictionary with known indices to a multidimensional array | 38,729,912 | <p>I have a dictionary with entries labelled as <code>{(k,i): value, ...}</code>. I now want to convert this dictionary into a 2d array where the value given for an element of the array at position <code>[k,i]</code> is the value from the dictionary with label <code>(k,i)</code>. The length of the rows will not necessarily be of the same size (e.g. row <code>k = 4</code> may go up to index <code>i = 60</code> while row <code>k = 24</code> may go up to index <code>i = 31</code>). Due to the asymmetry, it is fine to make all additional entries in a particular row equal to 0 in order to have a rectangular matrix. </p>
| 4 | 2016-08-02T20:30:08Z | 38,730,099 | <p>Here's an approach -</p>
<pre><code># Get keys (as indices for output) and values as arrays
idx = np.array(d.keys())
vals = np.array(d.values())
# Get dimensions of output array based on max extents of indices
dims = idx.max(0)+1
# Setup output array and assign values into it indexed by those indices
out = np.zeros(dims,dtype=vals.dtype)
out[idx[:,0],idx[:,1]] = vals
</code></pre>
<p>We could also use sparse matrices to get the final output. e.g. with <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html" rel="nofollow"><code>coordinate format sparse matrices</code></a>. This would be memory efficient when kept as sparse matrices. So, the last step could be replaced by something like this -</p>
<pre><code>from scipy.sparse import coo_matrix
out = coo_matrix((vals, (idx[:,0], idx[:,1])), dims).toarray()
</code></pre>
<p>Sample run -</p>
<pre><code>In [70]: d
Out[70]: {(1, 4): 120, (2, 2): 72, (2, 3): 100, (5, 2): 88}
In [71]: out
Out[71]:
array([[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 120],
[ 0, 0, 72, 100, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 88, 0, 0]])
</code></pre>
<hr>
<p>To make it generic for ndarrays of any number of dimensions, we can use linear-indexing and use <code>np.put</code> to assign values into the output array. Thus, in our first approach, just replace the last step of assigning values with something like this -</p>
<pre><code>np.put(out,np.ravel_multi_index(idx.T,dims),vals)
</code></pre>
<p>Sample run -</p>
<pre><code>In [106]: d
Out[106]: {(1,0,0): 99, (1,0,4): 120, (2,0,2): 72, (2,1,3): 100, (3,0,2): 88}
In [107]: out
Out[107]:
array([[[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0]],
[[ 99, 0, 0, 0, 120],
[ 0, 0, 0, 0, 0]],
[[ 0, 0, 72, 0, 0],
[ 0, 0, 0, 100, 0]],
[[ 0, 0, 88, 0, 0],
[ 0, 0, 0, 0, 0]]])
</code></pre>
| 3 | 2016-08-02T20:43:03Z | [
"python",
"numpy",
"dictionary"
] |
Converting dictionary with known indices to a multidimensional array | 38,729,912 | <p>I have a dictionary with entries labelled as <code>{(k,i): value, ...}</code>. I now want to convert this dictionary into a 2d array where the value given for an element of the array at position <code>[k,i]</code> is the value from the dictionary with label <code>(k,i)</code>. The length of the rows will not necessarily be of the same size (e.g. row <code>k = 4</code> may go up to index <code>i = 60</code> while row <code>k = 24</code> may go up to index <code>i = 31</code>). Due to the asymmetry, it is fine to make all additional entries in a particular row equal to 0 in order to have a rectangular matrix. </p>
| 4 | 2016-08-02T20:30:08Z | 38,730,781 | <p>There is a dictionary-of-keys sparse format that can be built from a dictionary like this.</p>
<p>Starting with <code>Divakar's</code> <code>d</code> sample:</p>
<pre><code>In [1189]: d={(1, 4): 120, (2, 2): 72, (2, 3): 100, (5, 2): 88}
</code></pre>
<p>Make an empty sparse matrix of the right shape and dtype:</p>
<pre><code>In [1190]: M=sparse.dok_matrix((6,5),dtype=int)
In [1191]: M
Out[1191]:
<6x5 sparse matrix of type '<class 'numpy.int32'>'
with 0 stored elements in Dictionary Of Keys format>
</code></pre>
<p>Add the <code>d</code> values via a dictionary <code>update</code>. This works because this particular sparse format is a <code>dict</code> subclass. Beware, though, that this trick is not documented (at least not that I'm aware of):</p>
<pre><code>In [1192]: M.update(d)
In [1193]: M
Out[1193]:
<6x5 sparse matrix of type '<class 'numpy.int32'>'
with 4 stored elements in Dictionary Of Keys format>
In [1194]: M.A # convert M to numpy array (handy display trick)
Out[1194]:
array([[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 120],
[ 0, 0, 72, 100, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 88, 0, 0]])
</code></pre>
<p><code>M</code> can be converted to the other sparse formats, <code>coo</code>, <code>csr</code>. In fact <code>sparse</code> does this kind of conversion by itself, depending on the use (display, calculation, etc).</p>
<pre><code>In [1196]: print(M)
(2, 3) 100
(5, 2) 88
(1, 4) 120
(2, 2) 72
</code></pre>
| 0 | 2016-08-02T21:29:29Z | [
"python",
"numpy",
"dictionary"
] |
How to write multiprocessing python codes with dictionary and dataframe | 38,730,023 | <p>I spent a couple of hours on multiprocessing in Python. After reading the <a href="https://docs.python.org/2.7/library/multiprocessing.html" rel="nofollow">documentation</a>, I wrote the code below. My plan is to add the values of two global dataframes together and assign the result to a dictionary. </p>
<pre><code>from multiprocessing import Process, Manager
import pandas as pd
import numpy as np
import time
def f(d):
for i in C:
d[i] = A.loc[i].sum() + B.loc[i].sum()
C = [10,20,30]
A = pd.DataFrame(np.matrix('1,2;3,4;5,6'), index = C, columns = ['A','B'])
B = pd.DataFrame(np.matrix('3,4;5,4;5,2'), index = C, columns = ['A','B'])
if __name__ == '__main__':
manager = Manager()
d = manager.dict()
d = dict([(c, 0) for c in C])
t0 = time.clock()
p = Process(target=f, args=(d,))
p.start()
p.join()
print time.clock()-t0, 'seconds processing time'
print d
d = dict([(c, 0) for c in C])
t0 = time.clock()
f(d)
print time.clock()-t0, 'seconds processing time'
print d
</code></pre>
<p>The result on my Linux server is shown below, which is not what I expected:</p>
<blockquote>
<p>0.0 seconds processing time</p>
<p>{10: 0, 20: 0, 30: 0}</p>
<p>0.0 seconds processing time</p>
<p>{10: 10, 20: 16, 30: 18}</p>
</blockquote>
<p>It seems the multiprocessing part didn't add the two dataframes' values together. Could you give me some hints?</p>
<p>Thanks in advance.</p>
| 0 | 2016-08-02T20:37:40Z | 38,730,125 | <p>Here is an example that you could adapt and which works:</p>
<p><a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">https://docs.python.org/2/library/multiprocessing.html</a></p>
<p>You have to use a manager object to be able to share memory between processes.</p>
<p>In your example you create a dictionary using the manager, but then you overwrite it with a normal dictionary on the next line:</p>
<pre><code>manager = Manager()
d = manager.dict() # correct
d = dict([(c, 0) for c in C]) # d is not a manager.dict: no shared memory
</code></pre>
<p>Instead do this (tested, compiles)</p>
<pre><code>d = manager.dict([(c, 0) for c in C])
</code></pre>
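<p>To see the shared-dict pattern end to end, here is a minimal self-contained sketch (plain integers instead of the question's DataFrames, purely to avoid the pandas dependency); the worker writes into the managed dict, and the parent sees those writes after <code>join()</code>:</p>

```python
from multiprocessing import Process, Manager

def f(d, items):
    # runs in the child process; writes land in the shared dict
    for key, a, b in items:
        d[key] = a + b

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict([(c, 0) for c in (10, 20, 30)])  # keep the manager dict, don't rebind it
    items = [(10, 3, 7), (20, 6, 10), (30, 11, 7)]
    p = Process(target=f, args=(d, items))
    p.start()
    p.join()
    print(dict(d))  # -> {10: 10, 20: 16, 30: 18}
```
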
| 0 | 2016-08-02T20:45:16Z | [
"python",
"dictionary",
"dataframe",
"python-multiprocessing"
] |
How to allocate memory to a C callback function with ctypes and threading | 38,730,072 | <p>I'm currently writing python bindings for a camera software. The commercial SDK only contains the documentation and DLL. I do not have the C code but I'll try to be as clear as possible, so please bear with me. </p>
<p>I have all the library functions working but one: a callback function that acquires the pictures. The documentation only provides the function prototypes: </p>
<blockquote>
<p>foo: Starts image acquisition on specified port. Return error code.</p>
</blockquote>
<pre><code>int foo(HANDLE dev, char port, int queueMode, callback_type fnCallback, void * userInfo );
</code></pre>
<blockquote>
<p>callback_type: Application-defined callback function that recieves
information about acquired frame.</p>
</blockquote>
<pre><code>void callback_type(void * userInfo, photoStruct* photo);
</code></pre>
<p>This is my python code after loading the library and wrapping the functions and structures.</p>
<pre><code>callback_type = WINFUNCTYPE(None,c_void_p,POINTER(photoStruct))
def my_callback(userInfo, photoInfo):
if(photoInfo.status == 0):
#Process incoming photo
print "I'm here" #for debugging purposes
q.put(photoInfo.rawBuffer[:photoInfo.bufferSize])
q.task_done()
_my_callback = callback_type(my_callback)
</code></pre>
<p>Whenever I try to call foo for the first time:</p>
<pre><code> >>>lib.foo(dev, port, c_int(0), _my_callback, None)
</code></pre>
<p>I get Error code 4: Not enough memory to perform operation (according to the documentation).
When I call it again (and again), it always returns 0, which is success, but the queue stays empty and it never prints "I'm here".</p>
<p>I then tried adding a thread:</p>
<pre><code>ty = threading.Thread(target=lib.foo, args=(dev,port, c_int(0), _my_callback, None))
>>> ty.start()
>>> ty.run()
line 758, in run
del self.__target, self.__args, self.__kwargs
AttributeError: _Thread__target
</code></pre>
<p>I also tried to just not use run() and call the function directly after starting the thread. It doesn't work still.
I don't really know what is going on here. I am very new to threading and ctypes but my guess is that the library uses its own thread to return the photos and I somehow should allocate memory for this callback function. But how?
Any solutions/comments would be greatly appreciated. Thank you!</p>
<p><strong>_ Edit: sample code from the documentation _</strong></p>
<pre><code>* \sample
* \code
* void __stdcall callback_type(void* userInfo, photoStruct* photo)
* {
* // Process incoming photo
* }
* ...
* foo(dev, port, 0, &callback_type, NULL);
* ...
* \endcode
*/
</code></pre>
<p>I tried to call foo in Python with</p>
<pre><code>>>>lib.foo(dev, port, c_int(0), byref(_my_callback), None)
</code></pre>
<p>It stopped the program. </p>
| 0 | 2016-08-02T20:41:17Z | 38,761,040 | <p>I'm not sure where you went wrong without a complete example. Did you declare <code>foo.argtypes</code>? Are you sure the functions are <code>__stdcall</code> for use with <code>WinDLL</code> and <code>WINFUNCTYPE</code>, or are they <code>__cdecl</code> and need <code>CDLL</code> and <code>CFUNCTYPE</code>?</p>
<p>Here's a working example using the following test code for the DLL:</p>
<pre><code>typedef void* HANDLE;
typedef struct {
int value;
} photoStruct;
typedef void (__stdcall * callback_type)(void * userInfo, photoStruct* photo);
__declspec(dllexport) int __stdcall foo(HANDLE dev, char port, int queueMode, callback_type fnCallback, void * userInfo)
{
photoStruct p;
p.value = 123;
fnCallback(userInfo, &p);
return 0;
}
</code></pre>
<p>Python code:</p>
<pre><code>#!python2
from ctypes import *
HANDLE = c_void_p
class photoStruct(Structure):
_fields_ = [('value',c_int)]
def __repr__(self):
return 'photoStruct(value={})'.format(self.value)
callback_type = WINFUNCTYPE(None,c_void_p,POINTER(photoStruct))
@callback_type
def my_callback(userInfo, photo):
print userInfo,photo.contents
foo = WinDLL('test').foo
foo.argtypes = (HANDLE,c_char,c_int,callback_type,c_void_p)
foo.restype = c_int
foo(1,'2',3,my_callback,None)
</code></pre>
<p>Output:</p>
<pre><code>None photoStruct(value=123)
</code></pre>
| 1 | 2016-08-04T07:28:52Z | [
"python",
"c",
"multithreading",
"ctypes"
] |
Parsing CSVs for only one value | 38,730,174 | <p>I am trying to parse data from CSV files. The files are in a folder and I want to extract data and write them to the db. However the csvs are not set up in a table format. I know how to import csvs into the db with the for each loop container, adding data flow tasks, and importing with OLE DB Destination. </p>
<p>The problem is just getting one value out of these csvs. The format of the file is as followed:</p>
<pre><code>Title Title 2
Date saved ##/##/#### ##:## AM
Comment
[ Main ]
No. Measure Output Unit of measure
1 Name 8 µm
Count 0 pcs
[ XY Measure ]
X
Y
D
[ Area ]
No. Area Unit Perimeter Unit
</code></pre>
<p>All I want is the output value, which is "8"; I also want to grab the name of the file to use as the name of the result (or add it to a column), and to put the date and time into their own columns.
I am not sure which direction to head in, and I hope someone has some things for me to look into. Originally, I wasn't sure if I should do the parsing externally (Python) before using SQL Server. If anyone knows another way to get this done, please let me know. Sorry for the unclear post earlier.</p>
<p>The expect outcome:</p>
<pre><code>Filename Date Time Outcome
jnnnnnnn ##/##/#### ##:## 8
</code></pre>
| -1 | 2016-08-02T20:49:04Z | 39,028,356 | <p>I'd try this:</p>
<pre><code>filename = # from the path of the file you're parsing
# define appropriate vars
for row in csv_file:
    if row.find('Date saved') != -1:   # find() returns -1 when not found, and 0 when the match is at the start of the line
        row = row.replace('Date saved ', '')
        date_saved = row[0:row.find(' ')]
        row = row.replace(date_saved + ' ', '')
        time = row[0:row.find(' ')]
    elif row.find(u"\u03BC") != -1:    # compare against -1: a bare find() result is truthy even when nothing is found
        split_row = row.split(' ')
        outcome = split_row[2]
        # add filename, date_saved, time, outcome to data that will go in DB
</code></pre>
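<p>For reference, here is a more defensive, self-contained variant of the same line-based idea (Python 3; it assumes the whitespace-separated layout shown in the question, and accepts the micro sign as either U+00B5 or the Greek mu U+03BC, since either may appear depending on how the file was encoded):</p>

```python
import os
import re

def parse_measure_file(path):
    """Return (filename, date, time, outcome) parsed from one measurement file."""
    date_saved = time_saved = outcome = None
    with open(path, encoding='utf-8') as fh:
        for row in fh:
            m = re.search(r'Date saved\s+(\S+)\s+(\S+\s*[AP]M)', row)
            if m:
                date_saved, time_saved = m.group(1), m.group(2)
            elif '\u00b5m' in row or '\u03bcm' in row:  # the row holding the Output value
                parts = row.split()  # e.g. ['1', 'Name', '8', 'µm']
                if outcome is None and len(parts) >= 3:
                    outcome = parts[2]
    return os.path.basename(path), date_saved, time_saved, outcome
```

<p>The returned tuple maps directly onto the <code>Filename / Date / Time / Outcome</code> columns from the expected outcome in the question.</p>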
| 0 | 2016-08-18T22:18:56Z | [
"python",
"csv",
"parsing",
"ssis"
] |
Problems with parsing JSON for D3 | 38,730,227 | <p>I am building an app with Flask and Python, and I want to pass some of my Python-generated results into JSON, so that they can be visualized with D3 under a container fluid. To do this, I am trying to use the Jinja method <code>var myjson = {{jsonDict|tojson }};</code>. Here <code>jsonDict</code> is a variable in my Python code that is a string of a dict, where single quotes have been replaced with double quotes with a regular expression, so that it looks like proper JSON. I am also using the JS method <code>root = JSON.parse( myjson );</code>. I believe that the combination of these two should solve my problem, but when I run the code, however I am getting the error:</p>
<pre><code>(index):2222
Uncaught SyntaxError: Unexpected token y in JSON at position 0
uo @ d3.v3.min.js:3
i @ d3.v3.min.js:1
</code></pre>
<p>Here is the D3 template that I am trying to use:
<a href="http://bl.ocks.org/mbostock/7607535" rel="nofollow">http://bl.ocks.org/mbostock/7607535</a></p>
<p>Here is my implementation of this D3 (just the relevant script):</p>
<pre><code><script>
var margin = 20,
diameter = 960;
var color = d3.scale.linear()
.domain([-1, 5])
.range(["hsl(152,80%,80%)", "hsl(228,30%,40%)"])
.interpolate(d3.interpolateHcl);
var pack = d3.layout.pack()
.padding(2)
.size([diameter - margin, diameter - margin])
.value(function(d) { return d.size; })
var svg = d3.select("#dan").append("svg") //#dan is name of my container fluid
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(" + diameter / 2 + "," + diameter / 2 + ")");
//this is the part of the code I have added //
var myjson = {{ jsonDict|tojson }};
root = JSON.parse( myjson );
//this is the part of the code I have added //
d3.json("root", function(error, root) {
if (error) throw error; //this is index 2222
var focus = root,
nodes = pack.nodes(root),
view;
var circle = svg.selectAll("circle")
.data(nodes)
.enter().append("circle")
.attr("class", function(d) { return d.parent ? d.children ? "node" : "node node--leaf" : "node node--root"; })
.style("fill", function(d) { return d.children ? color(d.depth) : null; })
.on("click", function(d) { if (focus !== d) zoom(d), d3.event.stopPropagation(); });
var text = svg.selectAll("text")
.data(nodes)
.enter().append("text")
.attr("class", "label")
.style("fill-opacity", function(d) { return d.parent === root ? 1 : 0; })
.style("display", function(d) { return d.parent === root ? "inline" : "none"; })
.text(function(d) { return d.name; });
var node = svg.selectAll("circle,text");
d3.select("#dan")
.style("background", color(-1))
.on("click", function() { zoom(root); });
zoomTo([root.x, root.y, root.r * 2 + margin]);
function zoom(d) {
var focus0 = focus; focus = d;
var transition = d3.transition()
.duration(d3.event.altKey ? 7500 : 750)
.tween("zoom", function(d) {
var i = d3.interpolateZoom(view, [focus.x, focus.y, focus.r * 2 + margin]);
return function(t) { zoomTo(i(t)); };
});
transition.selectAll("text")
.filter(function(d) { return d.parent === focus || this.style.display === "inline"; })
.style("fill-opacity", function(d) { return d.parent === focus ? 1 : 0; })
.each("start", function(d) { if (d.parent === focus) this.style.display = "inline"; })
.each("end", function(d) { if (d.parent !== focus) this.style.display = "none"; });
}
function zoomTo(v) {
var k = diameter / v[2]; view = v;
node.attr("transform", function(d) { return "translate(" + (d.x - v[0]) * k + "," + (d.y - v[1]) * k + ")"; });
circle.attr("r", function(d) { return d.r * k; });
}
});
d3.select(self.frameElement).style("height", diameter + "px");
</script>
</code></pre>
<p>As you can see, I replaced the original JS lines from the D3 code:</p>
<pre><code>d3.json("flare.json", function(error, root) {
if (error) throw error;
</code></pre>
<p>with:</p>
<pre><code>var myjson = {{ jsonDict|tojson }};
root = JSON.parse( myjson );
d3.json("root", function(error, root) {
</code></pre>
<p>As I have my code now, if I inspect the page, the <code>svg</code> shows up on the webpage in the correct place, but it is blank.</p>
<p>I am new to D3 and Javascript. Any help would be much appreciated! Thank you! </p>
<p>EDIT - console logs</p>
<p>if I do console.log(myjson), the console prints the string of the JSON properly (see below)</p>
<p>if I do console.log(root), the console prints</p>
<pre><code>Object {children: Array[2], name: "flare"}
children:Array[2]
name:"flare"
__proto__:Object
__defineSetter__: __defineSetter__()
__lookupGetter__:__lookupGetter__()
__lookupSetter__:__lookupSetter__()
constructor:Object()
hasOwnProperty:hasOwnProperty()
isPrototypeOf:isPrototypeOf()
propertyIsEnumerable:propertyIsEnumerable()
toLocaleString:toLocaleString()
toString:toString()
valueOf:valueOf()
get __proto__:__proto__()
set __proto__:__proto__()
</code></pre>
<p>So it seems that the <code>JSON.parse</code> method is failing me somehow.</p>
<p>EDIT -- my JSON string that is being passed from Python into <code>var myjson</code></p>
<pre><code>{"name": "flare", "children": [{"name": "concept0", "children": [{"name": "intermediate host", "size": 700}, {"name": "abstrusus brevior", "size": 700}, {"name": "stage larva", "size": 700}, {"name": "anterior extremity", "size": 700}, {"name": "crenosoma vulpi", "size": 700}]}, {"name": "concept1", "children": [{"name": "infected cat", "size": 700}, {"name": "abstrusus infection", "size": 700}, {"name": "domestic cat", "size": 700}, {"name": "feline aelurostrongylosis", "size": 700}, {"name": "cat infect", "size": 700}]}]}
</code></pre>
<p>pagesource for <code>console.log(myjson)</code></p>
<pre><code>{"name": "flare", "children": [{"name": "concept0", "children": [{"size": 700, "name": "intermediate host"}, {"size": 700, "name": "abstrusus brevior"}, {"size": 700, "name": "stage larva"}, {"size": 700, "name": "anterior extremity"}, {"size": 700, "name": "crenosoma vulpi"}]}, {"name": "concept1", "children": [{"size": 700, "name": "infected cat"}, {"size": 700, "name": "abstrusus infection"}, {"size": 700, "name": "domestic cat"}, {"size": 700, "name": "feline aelurostrongylosis"}, {"size": 700, "name": "cat infect"}]}]}
</code></pre>
<p>EDIT - I think the problem can be seen in <code>console.log(root)</code>. I've checked other D3 visualizations and the log should typically look like this:</p>
<pre><code>Object {name: "flare", children: Array[5]}
children: Array[5]
depth:0
name:"flare"
r:470
value:21000
x:470
y:470
__proto__:Object
</code></pre>
| 1 | 2016-08-02T20:52:03Z | 38,749,565 | <p>Because you are already passing your JSON to your client via your Flask application, there is no need to use D3's <code>d3.json()</code> method. <code>d3.json()</code> is essentially an Ajax <code>GET</code> request which requests the file from the server. </p>
<blockquote>
<h1>d3.json(url[, callback])</h1>
<p>Creates a request for the JSON file at the specified url with the mime
type "application/json". If a callback is specified, the request is
immediately issued with the GET method, and the callback will be
invoked asynchronously when the file is loaded or the request fails;
the callback is invoked with two arguments: the error, if any, and the
parsed JSON. The parsed JSON is undefined if an error occurs. If no
callback is specified, the returned request can be issued using
xhr.get or similar, and handled using xhr.on. </p>
</blockquote>
<p>You already had your data stored in a variable in your JavaScript, so there was no need to request it from the server.<br>
Also, when you tried to request it you were passing in an invalid URL.<br>
Removing the <code>d3.json()</code> function and just running the rest of the code should work. </p>
| 1 | 2016-08-03T16:55:26Z | [
"javascript",
"python",
"json",
"d3.js",
"flask"
] |
Django admin: timezone display | 38,730,230 | <p>So I am making an app where you can find activities which happen at locations.</p>
<p>On the django-admin page I want to be able to modify Activities (which works).</p>
<p>However, an activity has a starting time - I want this starting time to be in the same timezone as the location.</p>
<p>So I want it to display the start time, on the activity admin page, in the same timezone as the location is in, but then when saved it should be converted to UTC time.</p>
<p>The start time is in an inline formset, as an activity can have multiple start times.</p>
<p>I found a way to change the datetime when saving the objects, but I can't find a way to modify it when it's rendered in the inline formset.</p>
<p>How do I modify data as it's rendered on the admin page?</p>
 | 0 | 2016-08-02T20:52:03Z | 38,730,431 | <p>"<em>So I want it to display the start time, on the activity admin page, in the same timezone as the location is in, but then when saved it should be converted to UTC time.</em>"</p>
<p>According to Django's documentation on <strong>Time zone aware input in forms</strong> (<a href="https://docs.djangoproject.com/en/1.10/topics/i18n/timezones/#time-zone-aware-input-in-forms" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/i18n/timezones/#time-zone-aware-input-in-forms</a>):</p>
<blockquote>
<p>When you enable time zone support, Django interprets datetimes entered
in forms in the current time zone and returns aware datetime objects
in cleaned_data.</p>
</blockquote>
<p>Which from what I understood is what you want. This leads us to <strong>Default time zone and current time zone</strong> (<a href="https://docs.djangoproject.com/en/1.10/topics/i18n/timezones/#default-current-time-zone" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/i18n/timezones/#default-current-time-zone</a>), which states:</p>
<blockquote>
<p>The <strong>current time zone</strong> is the time zone that's used for rendering.</p>
<p>You should set the current time zone to the end user's actual time
zone with <strong>activate()</strong>. Otherwise, the default time zone is used.</p>
</blockquote>
<p>So, use <strong>activate()</strong> (<a href="https://docs.djangoproject.com/en/1.10/ref/utils/#django.utils.timezone.activate" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/utils/#django.utils.timezone.activate</a>) to set <strong>timezone</strong> argument and you're good to go.</p>
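<p>For intuition, here is a minimal, non-Django sketch of the underlying conversion using the standard library's <code>zoneinfo</code> (Python 3.9+). The zone name and the helper functions are invented for illustration; in Django itself, <code>timezone.activate()</code> arranges the display-side conversion for you:</p>

```python
# Illustration only (not Django code): store UTC, convert to the location's
# IANA zone for display, and convert back to UTC before saving.
# "America/New_York" and the helper names are hypothetical examples.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def for_display(start_utc, tz_name):
    """UTC datetime -> the location's local time, for rendering."""
    return start_utc.astimezone(ZoneInfo(tz_name))

def for_storage(local_dt):
    """Aware local datetime -> UTC, before saving."""
    return local_dt.astimezone(timezone.utc)

start = datetime(2016, 8, 2, 21, 0, tzinfo=timezone.utc)
local = for_display(start, "America/New_York")
print(local.hour)                    # 17 (EDT is UTC-4 in August)
print(for_storage(local) == start)   # True
```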
| 1 | 2016-08-02T21:05:04Z | [
"python",
"django",
"django-admin"
] |
Django admin: timezone display | 38,730,230 | <p>So I am making an app where you can find activities which happen at locations.</p>
<p>On the django-admin page I want to be able to modify Activities (which works).</p>
<p>However, an activity has a starting time - I want this starting time to be in the same timezone as the location.</p>
<p>So I want it to display the start time, on the activity admin page, in the same timezone as the location is in, but then when saved it should be converted to UTC time.</p>
<p>The start time is in an inline formset, as an activity can have multiple start times.</p>
<p>I found a way to change the datetime when saving the objects, but I can't find a way to modify it when it's rendered in the inline.</p>
<p>How do I modify data as it's rendered on the admin page?</p>
| 0 | 2016-08-02T20:52:03Z | 38,730,489 | <p>Try to set <br>
<code>USE_L10N = False</code>
in settings.py:<br>
<br>
<a href="https://docs.djangoproject.com/en/1.9/ref/settings/#use-l10n" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/settings/#use-l10n</a></p>
| 0 | 2016-08-02T21:08:25Z | [
"python",
"django",
"django-admin"
] |
How to put generator expressions as input to Python's join method? | 38,730,265 | <p>How do I put several generator expressions as input to Python's <code>join</code> method? I tried the following, which doesn't work. Is there a difference when using Python 2.x and 3.x? My exact version is Python 2.7.12.</p>
<pre><code>def productList(self, obj):
    return ", ".join([w.name for w in obj.someProducts.all()],
                     [w.code for w in obj.someProducts.all()])
</code></pre>
<p>and without the <code>[]</code></p>
<pre><code>def productList(self, obj):
    return ", ".join(w.name for w in obj.someProducts.all(),
                     w.code for w in obj.someProducts.all())
</code></pre>
<p>input:</p>
<pre><code>Product table
Name: Char;
Code: Char;
</code></pre>
<p>output:</p>
<pre><code>name1code1, name2code2
</code></pre>
| 1 | 2016-08-02T20:54:49Z | 38,730,374 | <p>If you want to display name and code, you just need to combine them, you don't need two generators.</p>
<pre><code>', '.join(w.name + ':' + w.code for w in obj.someProducts.all())
</code></pre>
<p>Or string formatting:</p>
<pre><code>', '.join('{name}: {code}'.format(name=w.name, code=w.code) for w in obj.someProducts.all())
</code></pre>
<p>Or another join (not recommended, unless you can give it a generator - creating another list is a waste, but this demonstrates how you can nest joins)</p>
<pre><code>', '.join(':'.join([w.name, w.code]) for w in obj.someProducts.all())
</code></pre>
<p>On a side note, Python 3.6 introduces <a href="https://www.python.org/dev/peps/pep-0498/" rel="nofollow">Literal String Interpolation</a>, which means you should be able to do something like this (I'll test this at home, since I don't have 3.6 at work; could someone verify this actually works. Let me know if it doesn't, I'll remove it):</p>
<pre><code>', '.join(f'{w.name}: {w.code}' for w in obj.someProducts.all())
</code></pre>
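<p>As a runnable illustration of the single-generator pattern above (a <code>namedtuple</code> stands in for the Django model, since <code>someProducts.all()</code> is not available outside the project), producing exactly the output format the question asks for:</p>

```python
# Stand-in for the Django queryset: plain objects with .name and .code.
from collections import namedtuple

Product = namedtuple('Product', ['name', 'code'])
products = [Product('name1', 'code1'), Product('name2', 'code2')]

result = ", ".join('{}{}'.format(w.name, w.code) for w in products)
print(result)  # name1code1, name2code2
```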
| 2 | 2016-08-02T21:01:22Z | [
"python"
] |
How to put generator expressions as input to Python's join method? | 38,730,265 | <p>How do I put several generator expressions as input to Python's <code>join</code> method? I tried the following, which doesn't work. Is there a difference when using Python 2.x and 3.x? My exact version is Python 2.7.12.</p>
<pre><code>def productList(self, obj):
    return ", ".join([w.name for w in obj.someProducts.all()],
                     [w.code for w in obj.someProducts.all()])
</code></pre>
<p>and without the <code>[]</code></p>
<pre><code>def productList(self, obj):
    return ", ".join(w.name for w in obj.someProducts.all(),
                     w.code for w in obj.someProducts.all())
</code></pre>
<p>input:</p>
<pre><code>Product table
Name: Char;
Code: Char;
</code></pre>
<p>output:</p>
<pre><code>name1code1, name2code2
</code></pre>
| 1 | 2016-08-02T20:54:49Z | 38,730,401 | <p>Give it a try:</p>
<pre><code>', '.join('{}{}'.format(w.name, w.code) for w in obj.someProducts.all())
</code></pre>
<p>I just made the w.name and w.code one string, that way you don't need 2 lists. Change the format as you wish.</p>
| 2 | 2016-08-02T21:02:55Z | [
"python"
] |
Need help using M2Crypto and USB Token | 38,730,319 | <p>I am using M2Crypto (0.22.6rc4). I want to use <code>engine_pkcs11</code> from the OpenSC project and the Aladdin PKI client for token based authentication to encrypt and decrypt data.</p>
<pre><code>from M2Crypto import Engine, m2, RSA, BIO
slot_id = "slot_01"
pin = "password"
dynamic = Engine.load_dynamic_engine("pkcs11", "/usr/lib/ssl/engines/libpkcs11.so")
pkcs11 = Engine.Engine("pkcs11")
pkcs11.ctrl_cmd_string("MODULE_PATH", "/usr/lib/watchdata/ICP/lib/libwdpkcs_icp.so")
pkcs11.init()
r = pkcs11.ctrl_cmd_string("PIN", pin)
pubkey = pkcs11.load_public_key(slot_id, pin)
priv = pkcs11.load_private_key(slot_id, pin)
enc = pubkey.get_rsa().public_encrypt("teste", RSA.pkcs1_oaep_padding)
dec = priv.get_rsa().private_decrypt(enc, RSA.pkcs1_oaep_padding)
print dec
</code></pre>
<p>For some reason I can encrypt data, but when I try to decrypt I get an instance of RSA_pub and this error:</p>
<pre><code>  File "pkcs11.py", line 14, in <module>
    dec = priv.get_rsa().private_decrypt(enc, RSA.pkcs1_oaep_padding)
  File "/usr/lib/python2.7/dist-packages/M2Crypto/RSA.py", line 279, in private_decrypt
    raise RSAError, 'RSA_pub object has no private key'
M2Crypto.RSA.RSAError: RSA_pub object has no private key
</code></pre>
<p>Any help would be appreciated!</p>
 | 0 | 2016-08-02T20:58:26Z | 39,989,300 | <p>There is a bug in the M2Crypto wrapping of RSA private keys. A workaround is to use the low-level M2Crypto API to access the private key object directly.</p>
<pre><code>def decrypt(cipher_text):
    # Load the key using the high-level API
    engine = Engine.Engine('pkcs11')
    engine.init()
    key_slot = 'slot_1-id_01'
    privKey = engine.load_private_key(key_slot)
    # Get a pointer to the low-level API object
    rsa_ptr = m2.pkey_get1_rsa(privKey.pkey)
    rsaWrapper = RSA.RSA(rsa_ptr, 1)
    # Decrypt with the low-level API
    results = m2.rsa_private_decrypt(rsaWrapper.rsa, cipher_text, 1)
    return results
</code></pre>
| 0 | 2016-10-12T01:49:32Z | [
"python",
"encryption",
"pkcs#11",
"m2crypto",
"opensc"
] |
Two function one after the other when clicked pushbutton - Pyside | 38,730,411 | <p>I have a little problem with my PySide script. I am making a setup wizard and I want to change the current widget in my stacked widget, then run the whole installation of libraries, etc.</p>
<p>I've tried two solutions:</p>
<p>The first is this one:</p>
<pre><code>self.pushButton.clicked.connect(lambda: changepage(self, MainWindow))
self.pushButton.clicked.connect(lambda: makeinstall(self, MainWindow))
</code></pre>
<p>and it doesn't work: the window doesn't change, and my installation is launched.</p>
<p>The second is:</p>
<pre><code>def changepage(self, MainWindow):
    self.stackedWidget.setCurrentIndex(4)
    makeinstall(self, MainWindow)
</code></pre>
<p>and it doesn't work either. In both solutions, the page changes only after the installation (after the end of the function, I think).</p>
<p>Does anyone have a solution for running two functions one after the other in PySide?</p>
<p>Regards,</p>
| 0 | 2016-08-02T21:03:43Z | 38,731,348 | <p>The slot connected to the signal is called <em>synchronously</em>, so the GUI will not be updated until it returns. There are lots of different ways to solve this, but you can try forcing an update like this:</p>
<pre><code>def changepage(self, MainWindow):
    self.stackedWidget.setCurrentIndex(4)
    QtGui.qApp.processEvents()
</code></pre>
<p>Or if that doesn't work, try using a single-shot timer to run the installer:</p>
<pre><code> QtCore.QTimer.singleShot(0, lambda: makeinstall(self, MainWindow))
</code></pre>
| 0 | 2016-08-02T22:15:56Z | [
"python",
"qt",
"pyside",
"qstackedwidget"
] |
OpenCV with Python - reading images in a loop | 38,730,473 | <p>I'm using OpenCV's <code>imread</code> function to read my images into Python, for further processing as a NumPy array later in the pipeline. I know that OpenCV uses BGR instead of RGB, and have accounted for it, where required. But one thing that stumps me is why I get these differing outputs for the following scenarios?</p>
<p>Reading an image directly into a single array works fine. The plotted image (using <code>matplotlib.pyplot</code>) reproduces my .tiff/.png input correctly. </p>
<pre><code>img_train = cv2.imread('image.png')
plt.imshow(img_train)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/Kpmg4.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Kpmg4.jpg" alt="enter image description here"></a></p>
<p>When I use <code>cv2.imread</code> in a loop (for reading from a directory of such images - which is my ultimate goal here), I create an array as follows:</p>
<pre><code>files = [f for f in listdir(mypath) if isfile(join(mypath, f))]
img_train = np.empty([len(files), height, width, channel])
for n in range(0, len(files)):
    img_train[n] = cv2.imread(join(mypath, files[n]))
    plt.imshow(img_train[n])
    plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/jYdRi.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/jYdRi.jpg" alt="enter image description here"></a></p>
<p>When I try to cross check and plot the image obtained thus, I get a very different output. Why so? How do I rectify this so that it looks more like my input, like in the first case? Am I reading the arrays correctly in the second case, or is it flawed? </p>
<p>Otherwise, is it something that stems from Matplotlib's plotting function? I do not know how to cross check for this case, though.</p>
<p>Any advice appreciated. </p>
 | 0 | 2016-08-02T21:07:24Z | 38,730,638 | <p>Extremely trivial solution:
<code>np.empty</code> creates an array of dtype float64 by default, and matplotlib renders float image data on a [0, 1] scale, which is why the plot looked wrong.
Creating the array with dtype uint8 (the dtype <code>cv2.imread</code> returns, as in the first case) fixed it. </p>
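<p>The dtype difference is easy to demonstrate with NumPy alone; <code>cv2</code> is not needed for this sketch:</p>

```python
# np.empty defaults to float64; cv2.imread returns uint8 arrays, and matplotlib
# interprets float image data on a [0, 1] scale, which garbles the plot.
import numpy as np

a = np.empty([2, 2])
print(a.dtype)  # float64

b = np.empty([2, 2], dtype=np.uint8)
print(b.dtype)  # uint8
```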
| 0 | 2016-08-02T21:19:33Z | [
"python",
"opencv",
"matplotlib"
] |
Change pytest capturing for one test | 38,730,502 | <p>Is it possible to change the capturing behavior in pytest for just one test - i.e., within the test script?</p>
<p>I have a bunch of tests that I use with <a href="http://doc.pytest.org/en/latest/" rel="nofollow"><code>pytest</code></a>. There are several useful quantities that I like to print during some tests, so I use the <code>-s</code> flag to show them in the <code>pytest</code> output. But I also test for warnings, which also get printed, and look ugly and distracting. I've tried using <a href="https://docs.python.org/3.5/library/warnings.html" rel="nofollow"><code>warnings.simplefilter</code></a> as usual to just not show the warnings, but that doesn't seem to do anything. (Maybe <code>pytest</code> hacks it???) Anyway, I'd like some way to quiet the warnings but still check that they are raised, while also being able to see the captured output from my other print statements. Is there any way to do this - e.g., by changing the capture for just one test function?</p>
| 1 | 2016-08-02T21:09:44Z | 38,733,664 | <p>I've done it by manually redirecting <code>stderr</code>:</p>
<pre><code>import os
import sys
import warnings
import pytest
def test():
    stderr = sys.stderr
    sys.stderr = open(os.devnull, 'w')
    with pytest.warns(UserWarning):
        warnings.warn("Warning!", UserWarning)
    sys.stderr = stderr
</code></pre>
<p>For good measure, I could similarly redirect <code>stdout</code> to devnull, if other print statements are not wanted.</p>
| 1 | 2016-08-03T02:58:07Z | [
"python",
"py.test"
] |
Change pytest capturing for one test | 38,730,502 | <p>Is it possible to change the capturing behavior in pytest for just one test - i.e., within the test script?</p>
<p>I have a bunch of tests that I use with <a href="http://doc.pytest.org/en/latest/" rel="nofollow"><code>pytest</code></a>. There are several useful quantities that I like to print during some tests, so I use the <code>-s</code> flag to show them in the <code>pytest</code> output. But I also test for warnings, which also get printed, and look ugly and distracting. I've tried using <a href="https://docs.python.org/3.5/library/warnings.html" rel="nofollow"><code>warnings.simplefilter</code></a> as usual to just not show the warnings, but that doesn't seem to do anything. (Maybe <code>pytest</code> hacks it???) Anyway, I'd like some way to quiet the warnings but still check that they are raised, while also being able to see the captured output from my other print statements. Is there any way to do this - e.g., by changing the capture for just one test function?</p>
 | 1 | 2016-08-02T21:09:44Z | 38,739,444 | <p>With pytest 3.x there is an <a href="http://doc.pytest.org/en/features/capture.html#accessing-captured-output-from-a-test-function" rel="nofollow">easy way</a> to temporarily disable capturing (see the section about <code>capsys.disabled()</code>).</p>
<p>There's also the <a href="https://github.com/fschulze/pytest-warnings" rel="nofollow">pytest-warnings</a> plugin which shows the warning in a dedicated report section.</p>
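<p>A sketch of the <code>capsys.disabled()</code> pattern from the pytest docs (the <code>capsys</code> fixture is injected by pytest, so this function is meant to be collected and run by pytest, not called directly):</p>

```python
# Output inside the with-block bypasses pytest's capturing and goes straight to
# the terminal; the first print is captured as usual.
def test_with_live_output(capsys):
    print('captured by pytest')
    with capsys.disabled():
        print('printed straight to the terminal')
```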
| 2 | 2016-08-03T09:19:21Z | [
"python",
"py.test"
] |
Google Drive API List Folders GET | 38,730,532 | <p>I am trying to query Google Drive for a certain file to find its URL. Here is my code; the only thing I changed is that my API key is substituted with AAAAAAAAAA.</p>
<pre><code> queryString = urllib.quote_plus("title = \'" + filename + "\'")
parameters = {'key':'{AAAAAAAAAA}', 'q': queryString}
inserted_file = requests.get('https://www.googleapis.com/drive/v2/files', params = parameters)
print (inserted_file.url)
print inserted_file
</code></pre>
<p>When I print <code>inserted_file</code>, it returns error 400. What am I doing wrong? </p>
 | 0 | 2016-08-02T21:11:34Z | 38,749,836 | <p>A <a href="https://developers.google.com/drive/v3/web/handle-errors#400_bad_request" rel="nofollow">400 error</a> response means the <code>value</code> supplied is invalid or the combination of provided fields is invalid. It's a user error: this can mean that a required field or parameter has not been provided.</p>
<p>It might give you an <code>API key not valid</code> or <code>Bad Request</code> message:</p>
<pre><code>"message": "API key not valid. Please pass a valid API key."
"message": "Bad Request"
</code></pre>
<p>It is recommended to generate/use valid credentials. There is a documentation how to get credentials/ steps to run sample project here: <a href="https://developers.google.com/drive/v3/web/quickstart/python#top_of_page" rel="nofollow">https://developers.google.com/drive/v3/web/quickstart/python#top_of_page</a></p>
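<p>One likely source of the 400 in the question's code is double encoding: the query is passed through <code>urllib.quote_plus</code> and then percent-encoded a second time when the parameters are serialized. A sketch of encoding it exactly once (Python 3 <code>urllib.parse</code> shown; the key and filename are placeholder values):</p>

```python
# Pass the raw query string and let urlencode (or requests' params=) do the
# percent-encoding once; do not pre-quote it yourself.
from urllib.parse import urlencode

params = {'key': 'AAAAAAAAAA', 'q': "title = 'file.txt'"}  # placeholder values
query = urlencode(params)
print(query)  # key=AAAAAAAAAA&q=title+%3D+%27file.txt%27
```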
| 0 | 2016-08-03T17:11:19Z | [
"python",
"get",
"google-drive-sdk",
"python-requests",
"urllib"
] |
Is there a specific range of unicode code points which can be checked for emojis? | 38,730,560 | <p>Do emojis occupy a well-defined unicode range?</p>
<p>And, is there a definitive way to check whether a code point is an emoji in python 2.7?</p>
<p>I cannot seem to find any information on this. A couple of sources have pointed to the range:</p>
<pre><code>\U0001f600-\U0001f650
</code></pre>
<p>But for example, 🤘 has the code point</p>
<pre><code>\U0001f918
</code></pre>
<p>which lies outside this range.</p>
<p>Thanks.</p>
| 9 | 2016-08-02T21:13:36Z | 38,730,797 | <p><a href="https://pypi.python.org/pypi/regex" rel="nofollow">regex</a> supports matching by Unicode property, but unfortunately it does not (yet?) support the <a href="http://unicode.org/reports/tr51/#Data_Files" rel="nofollow">emoji-specific properties</a>. When it does, finding them will be as simple as:</p>
<pre><code>>>> regex.match(ur'\p{Emoji=yes}', u'🤘') # NOTE: Doesn't (yet) work
</code></pre>
<p>In the meantime, <a href="http://unicode.org/Public/emoji/3.0/emoji-data.txt" rel="nofollow">here's the emoji table from unicode.org</a>.</p>
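<p>Until then, a pragmatic (and deliberately non-exhaustive) range check against a few well-known emoji blocks is possible; treat the ranges below as illustrative, with the emoji-data.txt table as the authority:</p>

```python
# A few well-known emoji blocks; NOT a complete list.
EMOJI_RANGES = [
    (0x1F300, 0x1F5FF),  # Miscellaneous Symbols and Pictographs
    (0x1F600, 0x1F64F),  # Emoticons
    (0x1F680, 0x1F6FF),  # Transport and Map Symbols
    (0x1F900, 0x1F9FF),  # Supplemental Symbols and Pictographs
    (0x1F1E6, 0x1F1FF),  # Regional Indicator Symbols (flags)
]

def is_probably_emoji(codepoint):
    return any(lo <= codepoint <= hi for lo, hi in EMOJI_RANGES)

print(is_probably_emoji(0x1F918))   # True  (outside U+1F600..U+1F650)
print(is_probably_emoji(ord('a')))  # False
```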
| 4 | 2016-08-02T21:30:45Z | [
"python",
"python-2.7",
"unicode"
] |
Reading live output from telnet connection in python | 38,730,611 | <p>I have an Arduino set up as a server. In Terminal (I use a Mac), one can connect to it, see the output, and close the connection as follows: </p>
<pre><code>> telnet HOST
Trying 192.168.0.101...
Connected to HOST.
Escape character is '^]'.
0 , 25486 , 0.00 :
1 , 25754 , 0.00 :
2 , 26054 , 0.00 :
3 , 26320 , 0.00 :
4 , 26642 , 0.00 :
5 , 26912 , 0.00 :
6 , 27187 , 0.00 :
7 , 27452 , 0.00 :
8 , 27774 , 0.00 :
0 , 28068 , 2.72 :
1 , 28389 , 2.72 :
2 , 28695 , 2.72 :
3 , 29002 , 2.72 :
4 , 29272 , 2.72 :
5 , 29537 , 2.72 :
6 , 29806 , 2.72 :
7 , 30112 , 2.72 :
8 , 30389 , 2.72 :
^]
telnet> quit
Connection closed.
</code></pre>
<p>The data currently streams at around 5 lines per second, without delay. I then tried to recreate this connection in a Python script using <code>telnetlib</code>.</p>
<pre><code>import telnetlib
import time
tn = telnetlib.Telnet(HOST)
tn.set_debuglevel(1)
while True:
    tn_read = tn.read_very_eager()
    time.sleep(1)
    print repr(tn_read)
</code></pre>
<p>This script only returns empty strings. I read about there being a timing issue, so I included a manual delay. I have also tried <code>tn.read_until(':')</code> to no avail.</p>
<p>My resulting questions:</p>
<ol>
<li>Is there any way to pull one line at a time, assuming the incoming stream is continuous and effectively never-ending?</li>
<li>How is this implemented in Python?</li>
</ol>
<p>Thank you.</p>
<p>EDIT:
I've included the void loop for the Arduino code. </p>
<pre><code>void loop(void){
  // Handle any multicast DNS requests
  mdns.update();
  // Handle a connected client.
  Adafruit_CC3000_ClientRef client = senseServer.available();
  if (client) {
    Serial.println("Connected");
    for(int i = 0; i < 9; i ++){ //sets number of channels
      client.print(i);
      client.print(" , ");
      stamp = millis();
      client.print(stamp);
      client.print(" , ");
      client.print(R2);
      client.println(" :");
      delay(10);
    }
    e = e + 1;
    R2 = pow(2.718,e);
  }
}
</code></pre>
| 1 | 2016-08-02T21:17:32Z | 38,731,450 | <p>Can you work at a lower level, using the <a href="https://docs.python.org/3.5/library/socket.html" rel="nofollow"><code>socket</code> module</a>?</p>
<pre><code>import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.0.101", 23))
while True:
    data = str(s.recv(1024), encoding='utf-8')
    print(data)
</code></pre>
<p>Why this should work: It seems that your server is not a full telnet server (requiring a login etc) but a socket that waits for a connection, and then returns data. </p>
<p>Since the server is just a socket, you can connect to it with a simple socket, which is what the above does. I tested this in two ways. First with the Star wars telnet server at <code>towel.blinkenlights.nl</code>, and secondly with a simple python server that waits for a connection and then returns a line of text every second (to simulate your server). </p>
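<p>To get one line at a time (question 1) rather than whatever bytes happen to be buffered, <code>telnetlib</code>'s blocking <code>read_until</code> can also be used instead of <code>read_very_eager</code>. A sketch, with a small parser for the Arduino's <code>channel , millis , value :</code> line format (the import is kept inside the function so the parser can run without a connection):</p>

```python
def stream_lines(host):
    import telnetlib  # note: removed from the stdlib in Python 3.13
    tn = telnetlib.Telnet(host)
    while True:
        raw = tn.read_until(b'\n')  # blocks until one complete line arrives
        yield parse_line(raw.decode('ascii'))

def parse_line(line):
    """'0 , 25486 , 0.00 :' -> (0, 25486, 0.0)"""
    channel, stamp, value = line.strip().rstrip(':').split(',')
    return int(channel), int(stamp), float(value)

print(parse_line('0 , 25486 , 0.00 :'))  # (0, 25486, 0.0)
```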
| 0 | 2016-08-02T22:26:06Z | [
"python",
"c++",
"arduino",
"telnet",
"telnetlib"
] |
How do I write the index for a list of lists | 38,730,635 | <p>I have a list of 3 lists each containing a random integer between 1 and 9:</p>
<pre><code>lists= [[1,3,5],[2,4,6],[7,8,9]]
</code></pre>
<p>I ask the user to select any single digit number. I am making a program that finds the number one less than the user inputs and then decides if the next number in the list (assuming it is not the end of the list) is bigger or smaller.</p>
<pre><code>for x in lists:
    for i in x:
        if i == user_choice-1:
</code></pre>
<p>Here I am stuck.</p>
<p>Let's say the user_choice is 3. I want the program to find the number 3-1=2 in the nested lists and then compare the number following 2 (in this case 4) to the user_choice. </p>
 | 0 | 2016-08-02T21:19:04Z | 38,730,704 | <p>If your list is </p>
<pre><code>lists= [[1,3,5],[2,4,6],[7,8,9]]
</code></pre>
<p>to access the "1" you would type: <code>lists[0][0]</code>
to access the "8" you would type: <code>lists[2][1]</code></p>
<p>*remember lists start their index at 0!!! :)</p>
| 2 | 2016-08-02T21:23:53Z | [
"python",
"list",
"syntax"
] |
How do I write the index for a list of lists | 38,730,635 | <p>I have a list of 3 lists each containing a random integer between 1 and 9:</p>
<pre><code>lists= [[1,3,5],[2,4,6],[7,8,9]]
</code></pre>
<p>I ask the user to select any single digit number. I am making a program that finds the number one less than the user inputs and then decides if the next number in the list (assuming it is not the end of the list) is bigger or smaller.</p>
<pre><code>for x in lists:
    for i in x:
        if i == user_choice-1:
</code></pre>
<p>Here I am stuck.</p>
<p>Let's say the user_choice is 3. I want the program to find the number 3-1=2 in the nested lists and then compare the number following 2 (in this case 4) to the user_choice. </p>
 | 0 | 2016-08-02T21:19:04Z | 38,730,722 | <pre><code>lists= [[1,3,5],[2,4,6],[7,8,9]]
for x in lists:
    index = 0
    for i in x:
        index += 1
        if i == user_choice - 1:
            print(x[index])
        else:
            (...)
</code></pre>
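<p>A complete sketch of the comparison logic the question describes (using <code>enumerate</code> to avoid the manual index bookkeeping, and guarding the end-of-row case; the function name is just illustrative):</p>

```python
def value_after_predecessor(lists, user_choice):
    """Return the element that follows user_choice - 1, or None if
    user_choice - 1 is absent or is the last element of its row."""
    for row in lists:
        for idx, value in enumerate(row):
            if value == user_choice - 1 and idx + 1 < len(row):
                return row[idx + 1]
    return None

lists = [[1, 3, 5], [2, 4, 6], [7, 8, 9]]
user_choice = 3
nxt = value_after_predecessor(lists, user_choice)
print(nxt)                # 4
print(nxt > user_choice)  # True
```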
| 0 | 2016-08-02T21:25:26Z | [
"python",
"list",
"syntax"
] |