| title (string) | question_id (int64) | question_body (string) | question_score (int64) | question_date (string) | answer_id (int64) | answer_body (string) | answer_score (int64) | answer_date (string) | tags (list) |
|---|---|---|---|---|---|---|---|---|---|
Reading in file block by block using specified delimiter in python | 38,655,176 | <p>I have an input_file.fa file like this (<a href="https://en.wikipedia.org/wiki/FASTA_format" rel="nofollow">FASTA</a> format):</p>
<pre><code>> header1 description
data data
data
>header2 description
more data
data
data
</code></pre>
<p>I want to read in the file one chunk at a time, so that each chunk contains one header and the corresponding data, e.g. block 1:</p>
<pre><code>> header1 description
data data
data
</code></pre>
<p>Of course I could just read in the file like this and split:</p>
<pre><code>with open("1.fa") as f:
    for block in f.read().split(">"):
        pass
</code></pre>
<p>But <em>I want to avoid reading the whole file into memory</em>, because the files are often large.</p>
<p>I can read in the file line by line of course:</p>
<pre><code>with open("input_file.fa") as f:
    for line in f:
        pass
</code></pre>
<p>But ideally what I want is something like this:</p>
<pre><code>with open("input_file.fa", newline=">") as f:
    for block in f:
        pass
</code></pre>
<p>But I get an error:</p>
<blockquote>
<p>ValueError: illegal newline value: ></p>
</blockquote>
<p>I've also tried using the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">csv module</a>, but with no success.</p>
<p>I did find <a href="http://stackoverflow.com/questions/16260061/reading-a-file-with-a-specified-delimiter-for-newline">this post</a> from 3 years ago, which provides a generator based solution to this issue, but it doesn't seem that compact, is this really the only/best solution? It would be neat if it is possible to create the generator with a single line rather than a separate function, something like this pseudocode:</p>
<pre><code>with open("input_file.fa") as f:
    blocks = magic_generator_split_by_>
    for block in blocks:
        pass
</code></pre>
<p>If this is impossible, then I guess you could consider my question a duplicate of the other post, but if that is so, I hope people can explain to me why the other solution is the only one. Many thanks.</p>
| 1 | 2016-07-29T09:25:00Z | 38,656,892 | <p>A general solution here is to write a generator function that yields one group at a time. This way you will be storing only one group at a time in memory.</p>
<pre><code>def get_groups(seq, group_by):
    data = []
    for line in seq:
        # Here the `startswith()` logic can be replaced with other
        # condition(s) depending on the requirement.
        if line.startswith(group_by):
            if data:
                yield data
                data = []
        data.append(line)
    if data:
        yield data


with open('input.txt') as f:
    for i, group in enumerate(get_groups(f, ">"), start=1):
        print("Group #{}".format(i))
        print("".join(group))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Group #1
> header1 description
data data
data
Group #2
>header2 description
more data
data
data
</code></pre>
<hr>
<p>For FASTA formats in general I would recommend using <a href="http://biopython.org/wiki/Biopython" rel="nofollow">Biopython</a> package.</p>
| 1 | 2016-07-29T10:46:50Z | [
"python",
"file",
"python-3.x"
] |
Reading in file block by block using specified delimiter in python | 38,655,176 | <p>I have an input_file.fa file like this (<a href="https://en.wikipedia.org/wiki/FASTA_format" rel="nofollow">FASTA</a> format):</p>
<pre><code>> header1 description
data data
data
>header2 description
more data
data
data
</code></pre>
<p>I want to read in the file one chunk at a time, so that each chunk contains one header and the corresponding data, e.g. block 1:</p>
<pre><code>> header1 description
data data
data
</code></pre>
<p>Of course I could just read in the file like this and split:</p>
<pre><code>with open("1.fa") as f:
    for block in f.read().split(">"):
        pass
</code></pre>
<p>But <em>I want to avoid reading the whole file into memory</em>, because the files are often large.</p>
<p>I can read in the file line by line of course:</p>
<pre><code>with open("input_file.fa") as f:
    for line in f:
        pass
</code></pre>
<p>But ideally what I want is something like this:</p>
<pre><code>with open("input_file.fa", newline=">") as f:
    for block in f:
        pass
</code></pre>
<p>But I get an error:</p>
<blockquote>
<p>ValueError: illegal newline value: ></p>
</blockquote>
<p>I've also tried using the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">csv module</a>, but with no success.</p>
<p>I did find <a href="http://stackoverflow.com/questions/16260061/reading-a-file-with-a-specified-delimiter-for-newline">this post</a> from 3 years ago, which provides a generator based solution to this issue, but it doesn't seem that compact, is this really the only/best solution? It would be neat if it is possible to create the generator with a single line rather than a separate function, something like this pseudocode:</p>
<pre><code>with open("input_file.fa") as f:
    blocks = magic_generator_split_by_>
    for block in blocks:
        pass
</code></pre>
<p>If this is impossible, then I guess you could consider my question a duplicate of the other post, but if that is so, I hope people can explain to me why the other solution is the only one. Many thanks.</p>
| 1 | 2016-07-29T09:25:00Z | 38,657,002 | <pre><code>def read_blocks(file):
    block = ''
    for line in file:
        if line.startswith('>') and len(block) > 0:
            yield block
            block = ''
        block += line
    yield block


with open('input_file.fa') as f:
    for block in read_blocks(f):
        print(block)
</code></pre>
<p>This will read the file line by line, and the <code>yield</code> statement hands you back one block at a time. It is lazy, so you don't have to worry about a large memory footprint.</p>
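A quick way to exercise a generator like the one above without touching the filesystem is to feed it an in-memory file via `io.StringIO` (the function is repeated here so the snippet is self-contained):

```python
import io

def read_blocks(file):
    # Accumulate lines until the next '>' header, then yield the block.
    block = ''
    for line in file:
        if line.startswith('>') and len(block) > 0:
            yield block
            block = ''
        block += line
    yield block

# An in-memory stand-in for input_file.fa
fake = io.StringIO(">h1 desc\ndata data\n>h2 desc\nmore data\n")
blocks = list(read_blocks(fake))
# blocks[0] == ">h1 desc\ndata data\n"
# blocks[1] == ">h2 desc\nmore data\n"
```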
| 0 | 2016-07-29T10:52:46Z | [
"python",
"file",
"python-3.x"
] |
Reading in file block by block using specified delimiter in python | 38,655,176 | <p>I have an input_file.fa file like this (<a href="https://en.wikipedia.org/wiki/FASTA_format" rel="nofollow">FASTA</a> format):</p>
<pre><code>> header1 description
data data
data
>header2 description
more data
data
data
</code></pre>
<p>I want to read in the file one chunk at a time, so that each chunk contains one header and the corresponding data, e.g. block 1:</p>
<pre><code>> header1 description
data data
data
</code></pre>
<p>Of course I could just read in the file like this and split:</p>
<pre><code>with open("1.fa") as f:
    for block in f.read().split(">"):
        pass
</code></pre>
<p>But <em>I want to avoid reading the whole file into memory</em>, because the files are often large.</p>
<p>I can read in the file line by line of course:</p>
<pre><code>with open("input_file.fa") as f:
    for line in f:
        pass
</code></pre>
<p>But ideally what I want is something like this:</p>
<pre><code>with open("input_file.fa", newline=">") as f:
    for block in f:
        pass
</code></pre>
<p>But I get an error:</p>
<blockquote>
<p>ValueError: illegal newline value: ></p>
</blockquote>
<p>I've also tried using the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">csv module</a>, but with no success.</p>
<p>I did find <a href="http://stackoverflow.com/questions/16260061/reading-a-file-with-a-specified-delimiter-for-newline">this post</a> from 3 years ago, which provides a generator based solution to this issue, but it doesn't seem that compact, is this really the only/best solution? It would be neat if it is possible to create the generator with a single line rather than a separate function, something like this pseudocode:</p>
<pre><code>with open("input_file.fa") as f:
    blocks = magic_generator_split_by_>
    for block in blocks:
        pass
</code></pre>
<p>If this is impossible, then I guess you could consider my question a duplicate of the other post, but if that is so, I hope people can explain to me why the other solution is the only one. Many thanks.</p>
| 1 | 2016-07-29T09:25:00Z | 38,663,342 | <p>One approach that I like is to use <a href="https://docs.python.org/3.5/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a> together with a simple <code>key</code> function:</p>
<pre><code>from itertools import groupby


def make_grouper():
    counter = 0
    def key(line):
        nonlocal counter
        if line.startswith('>'):
            counter += 1
        return counter
    return key
</code></pre>
<p>Use it as:</p>
<pre><code>with open('filename') as f:
    for k, group in groupby(f, key=make_grouper()):
        fasta_section = ''.join(group)  # or list(group)
</code></pre>
<p>You need the <code>join</code> only if you have to handle the contents of a whole section as a single string. If you are only interested in reading the lines one by one you can simply do:</p>
<pre><code>with open('filename') as f:
    for k, group in groupby(f, key=make_grouper()):
        # parse ">header description"
        header, description = next(group)[1:].split(maxsplit=1)
        for line in group:
            ...  # handle the contents of the section line by line
</code></pre>
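To see the grouping in action without a real file, you can drive `make_grouper` with an in-memory stream (definitions repeated so the sketch is self-contained):

```python
import io
from itertools import groupby

def make_grouper():
    counter = 0
    def key(line):
        nonlocal counter
        if line.startswith('>'):
            counter += 1  # each '>' header starts a new group
        return counter
    return key

fake = io.StringIO(">h1\naaa\n>h2\nbbb\n")
sections = [''.join(group) for _, group in groupby(fake, key=make_grouper())]
# sections == ['>h1\naaa\n', '>h2\nbbb\n']
```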
| 1 | 2016-07-29T16:18:02Z | [
"python",
"file",
"python-3.x"
] |
Adding widgets to form produced by model | 38,655,241 | <p>Hey, I am working on calendarium and it is laid out differently to what I normally work with, so I'm having a bit of trouble adding widgets to the form.</p>
<p>There is nothing in the EventCreateView, so I don't understand why there is a full form on the page; I'm guessing it's the args that are passed, which is the one below:</p>
<pre><code>class EventMixin(object):
    """Mixin to handle event-related functions."""
    model = Event
    fields = '__all__'
</code></pre>
<p>I tried to add this:</p>
<pre><code>class EventMixin(object):
    """Mixin to handle event-related functions."""
    model = Event
    fields = '__all__'
    widgets = {
        'start': model.DateTimeField(widget=AdminDateWidget)}

    @method_decorator(permission_required('calendarium.add_event'))
    def dispatch(self, request, *args, **kwargs):
        return super(EventMixin, self).dispatch(request, *args, **kwargs)


class EventUpdateView(EventMixin, UpdateView):
    """View to update information of an event."""
    pass


class EventCreateView(EventMixin, CreateView):
    """View to create an event."""
    pass
</code></pre>
<p>I messed about with it and it didn't seem to work, but I don't know if I'm doing completely the wrong thing.</p>
<p>Would love some help, thanks J</p>
| 0 | 2016-07-29T09:28:07Z | 38,658,547 | <p>There are at least two issues here.</p>
<p>Firstly, <code>widgets</code> is an attribute on a form, not on a view. Like all editing views, UpdateView automatically creates a ModelForm for you based on the model if you don't specify one; but since you want to customise it, you need to define your ModelForm explicitly and use the <code>widgets</code> attribute in its Meta class.</p>
<p>Secondly, as the name implies, <code>widgets</code> sets the form's widgets, not its fields. In your case, you just need <code>widgets = {'start': AdminDateWidget}</code>.</p>
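Put together, the fix would look roughly like this. This is a sketch, untested against calendarium itself; the module paths and form name are assumptions:

```python
# forms.py -- an explicit ModelForm; `widgets` lives in its Meta class.
from django import forms
from django.contrib.admin.widgets import AdminDateWidget
from calendarium.models import Event  # assumed location of the Event model

class EventForm(forms.ModelForm):
    class Meta:
        model = Event
        fields = '__all__'
        widgets = {'start': AdminDateWidget}

# views.py -- point the views at the custom form instead of setting
# model/fields on the mixin; CreateView/UpdateView pick up form_class.
class EventMixin(object):
    form_class = EventForm
```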
| 0 | 2016-07-29T12:12:31Z | [
"python",
"django"
] |
AttributeError: 'list' object has no attribute 'items' in a scrapy | 38,655,329 | <p>I was running a Scrapy crawl with Python 3.5 when this happened:</p>
<pre><code>Traceback (most recent call last):
  File "F:/PyCharm/xiaozhou/main.py", line 6, in <module>
    cmdline.execute("scrapy crawl nvospider".split())
  File "F:\Python3.5\lib\site-packages\scrapy\cmdline.py", line 108, in execute
    settings = get_project_settings()
  File "F:\Python3.5\lib\site-packages\scrapy\utils\project.py", line 60, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "F:\Python3.5\lib\site-packages\scrapy\settings\__init__.py", line 285, in setmodule
    self.set(key, getattr(module, key), priority)
  File "F:\Python3.5\lib\site-packages\scrapy\settings\__init__.py", line 260, in set
    self.attributes[name].set(value, priority)
  File "F:\Python3.5\lib\site-packages\scrapy\settings\__init__.py", line 55, in set
    value = BaseSettings(value, priority=priority)
  File "F:\Python3.5\lib\site-packages\scrapy\settings\__init__.py", line 91, in __init__
    self.update(values, priority)
  File "F:\Python3.5\lib\site-packages\scrapy\settings\__init__.py", line 317, in update
    for name, value in six.iteritems(values):
  File "F:\Python3.5\lib\site-packages\six.py", line 581, in iteritems
    return iter(d.items(**kw))
AttributeError: 'list' object has no attribute 'items'
</code></pre>
<p>Following is my code. This is the spider:</p>
<pre><code>from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector
from xiaozhou.items import NovelspiderItem


class novSpider(CrawlSpider):
    name = "nvospider"
    redis_key = 'nvospider:start_urls'
    start_urls = ['http://www.daomubiji.com/']

    def parse(self, response):
        selector = Selector(response)
        table = selector.xpath('//table')
        for each in table:
            bookname = each.xpath('tr/td[@colspam="3"]/center/h2/text()').extract()[0]
            content = each.xpath('tr/td/a/text()').extract()
            url = each.xpath('tr/td/a/@herf').extract()
            for i in range(len(url)):
                item = NovelspiderItem()
                item['bookname'] = bookname
                item['chapterURL'] = url[i]
                try:
                    item['bookTitle'] = content[i].split(' ')[0]
                    item['chapterNum'] = content[i].split(' ')[1]
                except Exception.e:
                    continue
                try:
                    item['chapterName'] = content[i].split(' ')[2]
                except Exception.e:
                    item['chapterName'] = content[i].split(' ')[1][-3:]
                yield item
</code></pre>
<p>Pipelines:</p>
<pre><code>class XiaozhouPipeline(object):
    def __init__(self):
        connection = pymongo.MongoClient(
            settings['MONGODB_HOST'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGO_DBNAME']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        self.collection.insert(dict(item))
        return item
</code></pre>
<p>items:</p>
<pre><code>from scrapy import Field, Item


class NovelspiderItem(Item):
    bookName = Field()
    bookTitle = Field()
    chapterNum = Field()
    chapterName = Field()
    chapterURL = Field()
</code></pre>
<p>settings:</p>
<pre><code># -*- coding: utf-8 -*-
BOT_NAME = 'xiaozhou'
SPIDER_MODULES = ['xiaozhou.spiders']
NEWSPIDER_MODULE = 'xiaozhou.spiders'
ITEM_PIPELINES = ['xiaozhou.pipelines.XiaozhouPipeline']
USER_AGENT = ''
COOKIES_ENABLED = True
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "dbxiaozhou"
MONGODB_COLLECTION = "xiaozhou"
</code></pre>
| 1 | 2016-07-29T09:32:17Z | 38,656,785 | <p>According to the docs, the <code>ITEM_PIPELINES</code> setting <a href="http://doc.scrapy.org/en/latest/topics/settings.html#item-pipelines" rel="nofollow">should be a dict</a>, but you have a list instead: <code>ITEM_PIPELINES = ['xiaozhou.pipelines.XiaozhouPipeline']</code></p>
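Concretely, the setting should map the pipeline path to an integer order value (the number, 300 here, only controls relative ordering among pipelines):

```python
# settings.py -- ITEM_PIPELINES must be a dict, not a list;
# the value sets the pipeline's run order (0-1000 by convention).
ITEM_PIPELINES = {
    'xiaozhou.pipelines.XiaozhouPipeline': 300,
}
```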
| 2 | 2016-07-29T10:40:54Z | [
"python",
"mongodb",
"scrapy"
] |
Convert regex string from Java to Python | 38,655,341 | <p>I have the following function in Java to replace occurrences of a regex with a blank space:</p>
<pre><code>string.replaceAll("\r?\n[\\s&&[^\r\n]]*", " ")
</code></pre>
<p>In Python, the equivalent would be:</p>
<pre><code>re.sub("\r?\n[\\s&&[^\r\n]]*", " ", string)
</code></pre>
<p>But I just realised that Python doesn't parse regex strings the same way as Java. My question is what is the Python equivalent regex string of <code>\r?\n[\\s&&[^\r\n]]*</code> ?</p>
| 3 | 2016-07-29T09:33:14Z | 38,655,421 | <p>In python, it would be</p>
<pre><code>re.sub(r'\r?\n(?:(?![\r\n])\s)*', " ", stri)
</code></pre>
<p>You may use the same regex in java also.</p>
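As a quick sanity check of the translated pattern, which joins a wrapped line and swallows its leading indentation:

```python
import re

# A newline plus any following non-newline whitespace collapses to one space.
pattern = r'\r?\n(?:(?![\r\n])\s)*'
result = re.sub(pattern, ' ', 'first line\r\n    second line')
# result == 'first line second line'
```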
| 1 | 2016-07-29T09:36:39Z | [
"java",
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Routing from Django to Angular route | 38,655,403 | <p>I have a Django backend and an Angular frontend.
I am using a package called django-invitations. When a user receives an invitation in an email they click on it and are taken to the app. django-invitations requires this line in settings.py</p>
<pre><code>INVITATIONS_SIGNUP_REDIRECT = 'register'
</code></pre>
<p>This is the name of a route and reverse match is used to determine where to go.
The problem is I want the user to be taken to my sign-up page which is </p>
<p><a href="http://example.com/#/registration" rel="nofollow">http://example.com/#/registration</a></p>
<p>This is an Angular route.
My urls.py contains this line</p>
<pre><code>url(r'^register', TemplateView.as_view(template_name='index.html'), name='register'),
</code></pre>
<p>This however takes the user to my index page and the url becomes</p>
<p><a href="http://example.com/registration#/" rel="nofollow">http://example.com/registration#/</a></p>
<p>How do I route a request from my Django backend to an Angular route with hash notation?</p>
| 0 | 2016-07-29T09:35:45Z | 38,656,293 | <p>Got it!</p>
<p>Changed the line in urls.py to</p>
<pre><code>url(r'^register/', views.redirect_to_register, name='register'),
</code></pre>
<p>Added a views.py </p>
<pre><code>from django.shortcuts import redirect


def redirect_to_register(request):
    return redirect('/#/register')
</code></pre>
<p>And it works.</p>
| 0 | 2016-07-29T10:16:06Z | [
"python",
"angularjs",
"django",
"routes",
"angular-ui-router"
] |
Sum alternate columns of a dataframe | 38,655,464 | <p>I have a dataframe which looks like</p>
<pre><code>A    00:00  00:30  01:00  01:30  02:00  .....  22:30  23:00  23:30
1        2      3      3      4      3             1      6      4
2        5      6      2      6      5             2      1      2
</code></pre>
<p>I am trying to get, </p>
<pre><code>A    00:00  01:00  02:00  .....  23:00
1        6      6      7             7
2        7      8     11             3
</code></pre>
<p>The column <code>23:30</code> gets added to <code>00:00</code>.</p>
<p>I have tried using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html" rel="nofollow"><code>numpy.sum</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>pandas.DataFrame.sum</code></a> which sums all the columns. How can I tell it to sum every alternate column?</p>
| 2 | 2016-07-29T09:38:47Z | 38,655,559 | <p>I think this should work:</p>
<pre><code>In [261]:
df = pd.DataFrame(np.random.randn(5,6), columns=['00:00','00:30','01:00','01:30','02:00','02:30'])
df
Out[261]:
00:00 00:30 01:00 01:30 02:00 02:30
0 0.176952 1.161850 0.894800 -0.246474 1.252235 -0.816835
1 0.817057 -1.338584 -0.983922 -0.073771 -2.188114 1.819888
2 -0.637196 -0.429361 1.267454 0.040461 1.256472 -0.242053
3 0.270544 0.403675 0.890263 1.767279 1.380494 -1.349156
4 -0.752082 0.380903 -0.795439 1.176303 0.176784 0.693317
In [262]:
rhs = df.ix[:,1::2]
df.ix[:,::2] + pd.concat([rhs.ix[:,-1:],rhs.ix[:,:-1]],axis=1).values
Out[262]:
00:00 01:00 02:00
0 -0.639884 2.056650 1.005761
1 2.636945 -2.322505 -2.261885
2 -0.879249 0.838093 1.296933
3 -1.078612 1.293938 3.147772
4 -0.058764 -0.414535 1.353087
In [263]:
rhs
Out[263]:
00:30 01:30 02:30
0 1.161850 -0.246474 -0.816835
1 -1.338584 -0.073771 1.819888
2 -0.429361 0.040461 -0.242053
3 0.403675 1.767279 -1.349156
4 0.380903 1.176303 0.693317
</code></pre>
<p>So in your case, since you have 30-minute intervals in your column names, the resulting df will use the hourly column names on the lhs and add in the values from the 30-minute columns.</p>
<p>Here we use a slice with a step, <code>.ix[:,::2]</code>, to return all rows and every other column. When adding, we return a numpy array using <code>.values</code>, because otherwise pandas attempts to align on the column names, finds no matches, and you get all <code>NaN</code> values.</p>
<p>As you want to add <code>00:00</code> with <code>23:30</code>, we <code>concat</code> the last column with the remainder of the columns so the columns line up when adding.</p>
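The wrap-around pairing itself (the last half-hour folding into <code>00:00</code>) can be sanity-checked in plain Python with a toy row, independently of pandas. The numbers below are made up for illustration:

```python
# One row of half-hourly values; the last entry plays the role of 23:30.
values = [2, 3, 3, 4, 3, 1]          # 00:00 00:30 01:00 01:30 02:00 02:30

on_hour = values[0::2]               # 00:00, 01:00, 02:00
half_hour = values[1::2]             # 00:30, 01:30, 02:30
# Rotate right by one so the final half-hour wraps around to 00:00.
rotated = half_hour[-1:] + half_hour[:-1]
summed = [a + b for a, b in zip(on_hour, rotated)]
# summed == [3, 6, 7]
```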
| 2 | 2016-07-29T09:43:41Z | [
"python",
"numpy",
"pandas"
] |
Add a dynamically generated matplotlib picture in Django home | 38,655,635 | <p>I would like to add a dynamically generated picture in my home. I do not want to save the figure, but just show it directly in my home page, after some text.
My <code>views.py</code> looks like:</p>
<pre><code>from django.shortcuts import render


def index(request):
    import random
    import datetime
    import django
    import pylab
    import PIL, PIL.Image
    import io
    from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
    from matplotlib.figure import Figure
    from matplotlib.dates import DateFormatter

    fig = Figure()
    ax = fig.add_subplot(111)
    x = []
    y = []
    now = datetime.datetime.now()
    delta = datetime.timedelta(days=1)
    for i in range(10):
        x.append(now)
        now += delta
        y.append(random.randint(0, 1000))
    ax.plot_date(x, y, '-')
    ax.xaxis.set_major_formatter(DateFormatter('%Y-%m-%d'))
    fig.autofmt_xdate()
    canvas = FigureCanvas(fig)
    graphic1 = django.http.HttpResponse(content_type='image/png')
    canvas.print_png(graphic1)
    return render(request, 'personal/home.html', {'graphic': graphic1})
</code></pre>
<p>Function <code>index</code> is already included in the <code>urls.py</code>. No problem there. My <code>home.html</code> looks like</p>
<pre><code>{% extends "personal/header.html" %}
{% block content %}
<p> Welcome to my website!</p>
{% include "personal/includes/htmlpic.html" %}
{% endblock %}
</code></pre>
<p>My <code>htmlpic.html</code> is:</p>
<pre><code>{% block graphic %}
<div id="content">
<img src= "data:image/png;base64,{{graphic|safe}}" >
</div>
{% endblock %}
</code></pre>
<p>The error: the figure does not show up. It is a broken link like this:</p>
<pre><code><img src="data:image/png;base64,&lt;HttpResponse status_code=200, " image png">
</code></pre>
<p>It clearly copies the response status instead of the binary image (and adds an extra quote). Can you tell me what I am doing wrong here? Or suggest a similar SO Q&A?</p>
<p>PS. I'm a newbie in Django, please be generous.</p>
| 0 | 2016-07-29T09:47:13Z | 38,657,308 | <p>In my experience, <a href="http://frontend.co.il/articles/avoid-data-uris-english" rel="nofollow">using data URIs is usually a bad idea</a>.</p>
<p>In this case, there is no real need for them anyway. Just create a separate view that returns the image. I am using <code>gnuplot</code> here for simplicity. The command just prints a png image of a sine function to the stdout. (<a href="http://stackoverflow.com/questions/12144877/get-binary-image-data-from-a-matplotlib-canvas">Here</a> are some instructions on getting the image data from a matplotlib canvas.) In <code>views.py</code>:</p>
<pre><code>def my_plot(request):
    import subprocess
    plot = subprocess.check_output(['gnuplot', '-e', 'set terminal pngcairo; plot sin(x)'])
    response = HttpResponse(plot, content_type="image/png")
    return response
</code></pre>
<p>Say you map this view to url <code>plot</code>, therefore writing the <code>urls.py</code> as in here:</p>
<pre><code>urlpatterns = patterns(
    '',
    url(r'^$', views.index, name='home'),
    url(r'^plot$', views.my_plot, name='plot')
)
</code></pre>
<p>Then you can simply use the view that generates the image as the source of your image. In <code>htmlpic.html</code>: </p>
<pre><code><img src="plot">
</code></pre>
<p>To me this is a much clearer separation of concerns. One view renders your template, the other renders the image. If you wish to embed this image somewhere else in your program, this way allows you to do so without repeating yourself.</p>
| 1 | 2016-07-29T11:08:07Z | [
"python",
"django",
"matplotlib"
] |
how to add columns label on a Pandas DataFrame | 38,655,701 | <p>I can't understand how to add column names to a pandas DataFrame; an easy example will clarify my issue:</p>
<pre><code>dic = {'a': [4, 1, 3, 1], 'b': [4, 2, 1, 4], 'c': [5, 7, 9, 1]}
df = pd.DataFrame(dic)
</code></pre>
<p>now if I type df than I get</p>
<pre><code> a b c
0 4 4 5
1 1 2 7
2 3 1 9
3 1 4 1
</code></pre>
<p>say now that I generate another dataframe just by summing up the columns on the previous one</p>
<pre><code>a = df.sum()
</code></pre>
<p>if I type 'a' than I get</p>
<pre><code>a 9
b 11
c 22
</code></pre>
<p>That looks like a dataframe with an index but no name on its only column. So I wrote</p>
<pre><code>a.columns = ['column']
</code></pre>
<p>or </p>
<pre><code>a.columns = ['index', 'column']
</code></pre>
<p>and in both cases Python was happy because it didn't give me any error messages. But still, if I type 'a' I can't see the column name anywhere. What's wrong here?</p>
| 0 | 2016-07-29T09:49:44Z | 38,655,847 | <p>The method <code>DataFrame.sum()</code> does an aggregation and therefore returns a <code>Series</code>, not a <code>DataFrame</code>. And a Series has no columns, only an index. If you want to create a DataFrame out of your sum you can change <code>a = df.sum()</code> by:</p>
<pre><code>a = pandas.DataFrame(df.sum(), columns = ['whatever_name_you_want'])
</code></pre>
| 1 | 2016-07-29T09:56:09Z | [
"python",
"pandas"
] |
Optimal method to find the max of sublist items within list | 38,655,727 | <p>I have a multidimensional list in the format:</p>
<pre><code>list = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
</code></pre>
<p>How do I obtain the maximum value of the third value of all the sublists. In pseudo code:</p>
<pre><code>max(list[0][2], list[1][2], list[2][2])
</code></pre>
<p>I know this can be done via iterating over the list and extracting the third value into a new list, then simply performing <code>max(list)</code>, but I'm wondering if this can be done using a lambda or list comprehension?</p>
| 3 | 2016-07-29T09:51:08Z | 38,655,758 | <p>Use <code>zip</code> function to get the list of columns then use a simple indexing in order to get the expected column:</p>
<pre><code>>>> lst = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
>>>
>>> max(zip(*lst)[-1]) # in python 3.x max(list(zip(*lst))[-1])
3
</code></pre>
<p>One another alternative and more pythonic approach is passing a <code>key</code> function to <code>max</code> to get the max item based on the key function. In this case you can use <code>itemgetter(-1)</code> in order to get the max item based on intended index then since the <code>max()</code> function returns the whole item from your list (sub-list) you can get the expected item by indexing:</p>
<pre><code>>>> from operator import itemgetter
>>> max(lst, key=itemgetter(-1))[-1]
3
</code></pre>
<p>Or more functional:</p>
<pre><code>>>> key_func = itemgetter(-1)
>>> key_func(max(lst, key=key_func))
3
</code></pre>
| 3 | 2016-07-29T09:52:44Z | [
"python",
"arrays",
"list",
"list-comprehension"
] |
Optimal method to find the max of sublist items within list | 38,655,727 | <p>I have a multidimensional list in the format:</p>
<pre><code>list = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
</code></pre>
<p>How do I obtain the maximum value of the third value of all the sublists. In pseudo code:</p>
<pre><code>max(list[0][2], list[1][2], list[2][2])
</code></pre>
<p>I know this can be done via iterating over the list and extracting the third value into a new list, then simply performing <code>max(list)</code>, but I'm wondering if this can be done using a lambda or list comprehension?</p>
| 3 | 2016-07-29T09:51:08Z | 38,655,781 | <p>Just use <code>max</code> with a generator expression:</p>
<pre><code>>>> lst = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
>>> max(l[2] for l in lst)
3
</code></pre>
<p>Also, don't name your variables <code>list</code>, you are shadowing the type.</p>
| 3 | 2016-07-29T09:53:37Z | [
"python",
"arrays",
"list",
"list-comprehension"
] |
Optimal method to find the max of sublist items within list | 38,655,727 | <p>I have a multidimensional list in the format:</p>
<pre><code>list = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
</code></pre>
<p>How do I obtain the maximum value of the third value of all the sublists. In pseudo code:</p>
<pre><code>max(list[0][2], list[1][2], list[2][2])
</code></pre>
<p>I know this can be done via iterating over the list and extracting the third value into a new list, then simply performing <code>max(list)</code>, but I'm wondering if this can be done using a lambda or list comprehension?</p>
| 3 | 2016-07-29T09:51:08Z | 38,655,794 | <p>applying <code>max</code> on the list will return the maximum list, which isn't what you want. You could indeed use a list comprehension to extract the third element, and then apply <code>max</code> on that comprehension:</p>
<pre><code>>>> lst = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
>>> max([x[2] for x in lst])
3
</code></pre>
| 1 | 2016-07-29T09:54:10Z | [
"python",
"arrays",
"list",
"list-comprehension"
] |
Optimal method to find the max of sublist items within list | 38,655,727 | <p>I have a multidimensional list in the format:</p>
<pre><code>list = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
</code></pre>
<p>How do I obtain the maximum value of the third value of all the sublists. In pseudo code:</p>
<pre><code>max(list[0][2], list[1][2], list[2][2])
</code></pre>
<p>I know this can be done via iterating over the list and extracting the third value into a new list, then simply performing <code>max(list)</code>, but I'm wondering if this can be done using a lambda or list comprehension?</p>
| 3 | 2016-07-29T09:51:08Z | 38,655,959 | <p>This is exactly why <a href="https://docs.python.org/3/library/functions.html#max" rel="nofollow"><code>max</code></a> has a <code>key</code> argument, it allows to use a custom function to select the "maximum" element:</p>
<pre><code>>>> L = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
>>> L2 = max(L, key=lambda l: l[2])
>>> L2
[1, 2, 3]
>>> L2[2]
3
</code></pre>
<p>Or with <a href="https://docs.python.org/3/library/operator.html#operator.itemgetter" rel="nofollow">operator.itemgetter</a>:</p>
<pre><code>>>> import operator
>>> L = [[1, 2, 3], [2, 4, 2], [0, 1, 1]]
>>> L2 = max(L, key=operator.itemgetter(2))
>>> L2
[1, 2, 3]
>>> L2[2]
3
</code></pre>
<p>You can also do this on one line and to get the element only instead of the whole list:</p>
<pre><code>>>> element = max(L, key=operator.itemgetter(2))[2]
>>> print(element)
3
</code></pre>
| 3 | 2016-07-29T10:01:31Z | [
"python",
"arrays",
"list",
"list-comprehension"
] |
How to access class attributes of a derived class in the base class in Python3? | 38,655,738 | <p>I want to do something in a base class (<code>FooBase</code>) with the <strong>class attribues</strong> of the derived classes (<code>Foo</code>). I want to do this with Python<strong>3</strong>.</p>
<pre><code>class BaseFoo:
#felder = [] doesn't work
def test():
print(__class__.felder)
class Foo(BaseFoo):
felder = ['eins', 'zwei', 'yep']
if __name__ == '__main__':
Foo.test()
</code></pre>
<p>Maybe there is a different approach to this?</p>
| 0 | 2016-07-29T09:51:35Z | 38,656,262 | <p>You need to make <code>test</code> a <a href="https://docs.python.org/3/library/functions.html#classmethod" rel="nofollow">class method</a>, and give it an argument that it can use to access the class; conventionally this arg is named <code>cls</code>.</p>
<pre><code>class BaseFoo:
@classmethod
def test(cls):
print(cls.felder)
class Foo(BaseFoo):
felder = ['eins', 'zwei', 'yep']
if __name__ == '__main__':
Foo.test()
</code></pre>
<p><strong>output</strong></p>
<pre><code>['eins', 'zwei', 'yep']
</code></pre>
| 1 | 2016-07-29T10:15:01Z | [
"python",
"python-3.x",
"class-attributes"
] |
Python: add to a dictionary in a list | 38,655,798 | <p>I have the following dictionary (It is for creating json), </p>
<pre><code>temp = {'logs':[]}
</code></pre>
<p>I want to append dictionaries, but i only got 1 key:val at a time.</p>
<p>what I tried:</p>
<pre><code>temp['logs'].append({key:val})
</code></pre>
<p>This does as expected and appends the dict to the array.
But now I want to add a key/val pair to this dictionary, how can I do this?
I've tried using append/extend but that just adds a new dictionary to the list.</p>
| 1 | 2016-07-29T09:54:19Z | 38,655,866 | <blockquote>
<p>But now I want to add a key/val pair to this dictionary</p>
</blockquote>
<p>You can <em>index</em> the list and <em>update</em> that dictionary:</p>
<pre><code>temp['logs'][0].update({'new_key': 'new_value'})
</code></pre>
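End to end, that looks like this (a minimal runnable sketch):

```python
temp = {'logs': []}

# Append a new dict to the list, then grow that same dict in place.
temp['logs'].append({'key': 'val'})
temp['logs'][0].update({'new_key': 'new_value'})
temp['logs'][0]['another'] = 'entry'   # plain assignment works too

# temp == {'logs': [{'key': 'val', 'new_key': 'new_value', 'another': 'entry'}]}
```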
| 1 | 2016-07-29T09:56:45Z | [
"python",
"json",
"list",
"dictionary"
] |
Python: add to a dictionary in a list | 38,655,798 | <p>I have the following dictionary (It is for creating json), </p>
<pre><code>temp = {'logs':[]}
</code></pre>
<p>I want to append dictionaries, but I only get one key:val pair at a time.</p>
<p>What I tried:</p>
<pre><code>temp['logs'].append({key:val})
</code></pre>
<p>This works as expected and appends the dict to the list.
But now I want to add a key/val pair to that existing dictionary; how can I do this?
I've tried using append/extend, but that just adds a new dictionary to the list.</p>
| 1 | 2016-07-29T09:54:19Z | 38,655,975 | <p>You can use this command to replace the dict at a given index:</p>
<pre><code>>>> temp['logs'][0]={'no':'val'}
>>> temp
{'logs': [{'no': 'val'}]}
</code></pre>
<p>And this one to add values:</p>
<pre><code>>>> temp['logs'][0].update({'yes':'val'})
>>> temp
{'logs': [{'no': 'val', 'yes': 'val'}]}
</code></pre>
| 0 | 2016-07-29T10:02:06Z | [
"python",
"json",
"list",
"dictionary"
] |
Python: add to a dictionary in a list | 38,655,798 | <p>I have the following dictionary (It is for creating json), </p>
<pre><code>temp = {'logs':[]}
</code></pre>
<p>I want to append dictionaries, but I only get one key:val pair at a time.</p>
<p>What I tried:</p>
<pre><code>temp['logs'].append({key:val})
</code></pre>
<p>This works as expected and appends the dict to the list.
But now I want to add a key/val pair to that existing dictionary; how can I do this?
I've tried using append/extend, but that just adds a new dictionary to the list.</p>
| 1 | 2016-07-29T09:54:19Z | 38,656,040 | <p>There must be a unique "key" every time you append (if the result is meant to be JSON).</p>
<p>Also, assigning with "=" will overwrite your old dictionary.</p>
<p>What I did when I was stuck on this once:</p>
<pre><code>user = {}
name, password, id1 = [], [], []
user1 = session.query(User).all()
for i in user1:
    name = i.name
    password = i.password
    id1 = i.id
    user.update({id1: {
        "name": name,
        "password": password,
    }})
</code></pre>
<p>Check this link; it might be helpful to you:</p>
<p><a href="http://stackoverflow.com/questions/38345004/how-to-convert-list-of-json-frames-to-json-frame">How to convert List of JSON frames to JSON frame</a></p>
| 0 | 2016-07-29T10:05:12Z | [
"python",
"json",
"list",
"dictionary"
] |
Python: add to a dictionary in a list | 38,655,798 | <p>I have the following dictionary (It is for creating json), </p>
<pre><code>temp = {'logs':[]}
</code></pre>
<p>I want to append dictionaries, but i only got 1 key:val at a time.</p>
<p>what I tried:</p>
<pre><code>temp['logs'].append({key:val})
</code></pre>
<p>This does as expected and appends the dict to the array.
But now I want to add a key/val pair to this dictionary, how can I do this?
I've tried using append/extend but that just adds a new dictionary to the list.</p>
| 1 | 2016-07-29T09:54:19Z | 38,656,062 | <p>Note that adding a dictionary (or any object) to a list only stores a reference, not a copy.</p>
<p>You can therefore do this:</p>
<pre><code>>>> temp = {'logs': []}
>>> log_entry = {'key1': 'val1'}
>>> temp['logs'].append(log_entry)
>>> temp
{'logs': [{'key1': 'val1'}]}
>>> log_entry['key2'] = 'val2'
>>> temp
{'logs': [{'key2': 'val2', 'key1': 'val1'}]}
</code></pre>
<p>However, you might be able to circumvent the whole issue by using a dict comprehension (Python >= 2.7 only):</p>
<pre><code>>>> temp = {'logs': [{key: value for key, value in my_generator}]}
</code></pre>
| 0 | 2016-07-29T10:06:16Z | [
"python",
"json",
"list",
"dictionary"
] |
How to find a node from a file-like xml object in Python3? | 38,655,843 | <p>This is the content of a file-like object <code>toc</code>:</p>
<pre><code><?xml version='1.0' encoding='utf-8'?>
<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1" xml:lang="eng">
  <head>
    ...
  </head>
  <docTitle>
    <text>THE_TEXT_I_WANT</text>
  </docTitle>
  ...
</ncx>
</code></pre>
<p>My Python3 codes now:</p>
<pre><code>import xml.etree.ElementTree as ET

# I get toc using the open method of the zipfile module
# toc : <zipfile.ZipExtFile name='toc.ncx' mode='r' compress_type=deflate>
toc_tree = ET.parse(toc)
for node in toc_tree.iter():
    print(node)
print(toc_tree.find('docTitle'))
</code></pre>
<p>The for loop prints out all nodes, but the <code>find</code> method returns <code>None</code>; the <code>findall</code> method returns nothing either. Can anybody tell me why? Is there a better solution?</p>
| 0 | 2016-07-29T09:55:59Z | 38,656,086 | <p>Because there is a (default) namespace in your XML, searching for elements called <code>docTitle</code> will find nothing, as it is searching for un-namespaced elements called <code>docTitle</code>. Instead, you need to use Clark notation with the full namespace URI:</p>
<pre><code>toc_tree.find('{http://www.daisy.org/z3986/2005/ncx/}docTitle')
</code></pre>
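<p>Alternatively, a sketch (using an in-memory stand-in for the zipfile stream from the question): map a prefix to the namespace URI and pass it through the <code>namespaces</code> argument of <code>find</code>:</p>

```python
import io
import xml.etree.ElementTree as ET

# stand-in for the zipfile stream from the question
toc = io.BytesIO(b"""<?xml version='1.0' encoding='utf-8'?>
<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1">
  <docTitle><text>THE_TEXT_I_WANT</text></docTitle>
</ncx>""")

toc_tree = ET.parse(toc)
ns = {'ncx': 'http://www.daisy.org/z3986/2005/ncx/'}
print(toc_tree.find('ncx:docTitle/ncx:text', namespaces=ns).text)
```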
| 0 | 2016-07-29T10:07:12Z | [
"python",
"xml",
"elementtree"
] |
How is __mro__ different from other double underscore names? | 38,655,863 | <p>I stumbled upon this behavior of a double underscore name that I don't understand:</p>
<pre><code>class A:
    pass

class B:
    pass

class C(A, B):
    __id__ = 'c'

c = C()

print(C.__mro__)   # print the method resolution order of class C
#print(c.__mro__)  # AttributeError: 'C' object has no attribute '__mro__'
print(C.__id__)    # print 'c'
print(c.__id__)    # print 'c'
</code></pre>
<p>I know about the name mangling for <code>__name</code>, which doesn't apply to <code>__name__</code> (those are more for overloading operator methods). <code>__id__</code> behaves just like a regular class variable, which can be accessed via the class name as well as an instance.</p>
<p>However, <code>__mro__</code> can only be accessed via the class name, and in fact I can even explicitly introduce <code>__mro__</code> in C:</p>
<pre><code>class C(A, B):
    __mro__ = 'bla'

print(C.__mro__)  # print the method resolution order of class C
print(c.__mro__)  # print 'bla'
</code></pre>
<p>I'd like to understand if this behavior is some python internal magic or can it be achieved in regular python code.</p>
<p>[<strong>python version 3.4.3</strong>]</p>
| 4 | 2016-07-29T09:56:36Z | 38,656,156 |
<p>This has to do with the lookup order.</p>
<p>Leaving <a href="https://docs.python.org/3/reference/datamodel.html#implementing-descriptors" rel="nofollow">descriptors</a> aside, python first checks the object's <code>__dict__</code> to find an attribute. If it cannot find it, it will look at the class of the object and the bases of the class to find the attribute. If it cannot be found there either, AttributeError is raised.</p>
<p>This is abstract, so let us show it with a short example:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/python3

class Foo(type):
    X = 10

class Bar(metaclass=Foo):
    Y = 20

baz = Bar()

print("X on Foo", hasattr(Foo, "X"))
print("X on Bar", hasattr(Bar, "X"))
print("X on baz", hasattr(baz, "X"))
print("Y on Foo", hasattr(Foo, "Y"))
print("Y on Bar", hasattr(Bar, "Y"))
print("Y on baz", hasattr(baz, "Y"))
</code></pre>
<p>The output is:</p>
<pre class="lang-python prettyprint-override"><code>X on Foo True
X on Bar True
X on baz False
Y on Foo False
Y on Bar True
Y on baz True
</code></pre>
<p>As you can see, <code>X</code> has been declared on the <em>metaclass</em> <code>Foo</code>. It is accessible through the instance of the <em>metaclass</em>, the class <code>Bar</code>, but not on the instance <code>baz</code> of <code>Bar</code>, because it is only in the <code>__dict__</code> in <code>Foo</code>, not in the <code>__dict__</code> of <code>Bar</code> or <code>baz</code>. Python only checks <em>one</em> step up in the "meta" hierarchy.</p>
<p>For more on metaclass magic, see the excellent answers on the question <a href="https://stackoverflow.com/q/100003/1248008">What is a metaclass in python?</a>.</p>
<p>This, however, is not sufficient to describe the behaviour, because <code>__mro__</code> is different for each <em>instance</em> of <code>Foo</code> (that is, for each class).</p>
<p>This can be achieved using descriptors. <em>Before</em> the attribute name is looked up at the objects <code>__dict__</code>, python checks the <code>__dict__</code> of the class and its bases to see if there is a descriptor object assigned to the name. A descriptor is any object which has a <a href="https://docs.python.org/3/reference/datamodel.html#object.__get__" rel="nofollow"><code>__get__</code> method</a>. If that is the case, the descriptor objects <code>__get__</code> method is called and the result is returned from the attribute lookup. With a descriptor assigned to an attribute of the metaclass, the behaviour seen can be achieved: The descriptor can return a different value based on the <em>instance</em> argument, but nevertheless the attribute can only be accessed through the class and the metaclass, not instances of the class.</p>
<p>A prime example of descriptors is <a href="https://docs.python.org/3/library/functions.html#property" rel="nofollow"><code>property</code></a>. Here is a simple example with a descriptor which has the same behaviour as <code>__mro__</code>:</p>
<pre class="lang-python prettyprint-override"><code>class Descriptor:
    def __get__(self, instance, owner):
        return "some value based on {}".format(instance)

class OtherFoo(type):
    Z = Descriptor()

class OtherBar(metaclass=OtherFoo):
    pass

other_baz = OtherBar()

print("Z on OtherFoo", hasattr(OtherFoo, "Z"))
print("Z on OtherBar", hasattr(OtherBar, "Z"))
print("Z on other_baz", hasattr(other_baz, "Z"))
print("value of Z on OtherFoo", OtherFoo.Z)
print("value of Z on OtherBar", OtherBar.Z)
</code></pre>
<p>The output is:</p>
<pre class="lang-python prettyprint-override"><code>Z on OtherFoo True
Z on OtherBar True
Z on other_baz False
value of Z on OtherFoo some value based on None
value of Z on OtherBar some value based on <class '__main__.OtherBar'>
</code></pre>
<p>As you can see, <code>OtherBar</code> and <code>OtherFoo</code> both have the <code>Z</code> attribute accessible, but <code>other_baz</code> does not. Still, <code>Z</code> can have a different value for each <code>OtherFoo</code> instance, that is, each class using the <code>OtherFoo</code> metaclass.</p>
<p>Metaclasses are confusing at first, and even more so when descriptors are in play. I suggest reading up on metaclasses the <a href="https://stackoverflow.com/q/100003/1248008">linked question</a>, as well as descriptors in python in general.</p>
| 4 | 2016-07-29T10:10:42Z | [
"python",
"python-3.4",
"double-underscore"
] |
Search part of string matching in python | 38,655,972 | <p>I have two files: "error_dict" and "jbosslogfile". The error_dict file contains one known error message per line, like this:</p>
<p>error_dict</p>
<pre><code>"0500 ERROR [org.picketlink.identity.federation]"
"-0500 ERROR [com.gravitant.cloud.appsportfolio.jsf.architecture.beans.StorageAccountBean]"
"Invalid Context Code - APP"
</code></pre>
<hr>
<p>JbossLogFile </p>
<pre><code>2016-06-03 00:19:52,612 -0500 ERROR [org.jboss.as.ejb3.invocation] (EJB default - 3) JBAS014134: EJB Invocation failed on component GraBmsDataLoaderEjbIfc for method public abstract void com.gravitant.bms.common.dataload.GraBmsDataLoaderEjbIfc.loadVMTemplates(java.lang.String,java.lang.String,java.lang.String): javax.ejb.EJBException: com.gravitant.bts.bms.exception.GraException: Display Message:Transaction commit failed -
2016-06-03 00:20:10,809 -0500 ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (EJB default - 4) Duplicate entry '9db562c525e942698c605df2f0c9b26a__FreeBSD10_3-2016-06-02::unreco' for key 'UQ_graresourcetemplate$tmpId_prv_usrGrpId'
2016-06-03 00:20:10,824 -0500 ERROR [com.gravitant.bts.bms.transaction.BTSTransaction] (EJB default - 4) BmsBaseTransaction:Commit()- Transaction commit failed
2016-06-03 00:20:11,001 -0500 ERROR [org.jboss.as.ejb3.invocation] (EJB default - 4) JBAS014134: EJB Invocation failed on component GraBmsDataLoaderEjbIfc for method public abstract void com.gravitant.bms.common.dataload.GraBmsDataLoaderEjbIfc.loadVMTemplates(java.lang.String,java.lang.String,java.lang.String): javax.ejb.EJBException: com.gravitant.bts.bms.exception.GraException: Display Message:Transaction commit failed -
2016-06-03 00:31:56,730 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-10) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
2016-06-03 00:52:01,379 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-1) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
2016-06-03 01:11:49,938 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-10) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
2016-06-03 01:41:59,942 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-1) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
2016-06-03 02:02:04,783 -0500 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/].[Faces Servlet]] (http-/10.200.212.143:8081-1) JBWEB000236: Servlet.service() for servlet Faces Servlet threw exception: javax.faces.application.ViewExpiredException: viewId:/appsportfolio-main.jsf - View /appsportfolio-main.jsf could not be restored.
2016-06-03 02:11:57,211 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-1) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
2016-06-03 02:22:02,739 -0500 ERROR [org.picketlink.identity.federation] (http-/10.200.212.143:8081-10) PLFED000263: Service Provider could not handle the request.: java.lang.IllegalArgumentException: PLFED000132: No assertions in reply from IDP
201
</code></pre>
<p>What I am trying to do is very simple: I take each known error from the error_dict file and match it against my JBoss log file; if part of a log line matches, I print that line on screen and then yank (delete) that line from the JBoss log file.</p>
<p>I can achieve something similar in Notepad++: mark all matching lines, then "delete bookmarked lines" performs the yank operation. I am trying to implement the same in Python, as follows:</p>
<pre><code>def MatchCountYankerrors():
    ferrorrepo = errorrepopath
    conclufile = os.path.join(Decompressfilepath, (appservername + 'conclusion'))
    ferrorfile = open(ferrorrepo)
    confile = open(conclufile)
    output = []
    for errlines in ferrorfile:  # pick each line from error_dict
        c = 0
        for eachconline in confile:  # pick each line from the JBoss log
            #if re.search(errlines, eachconline, re.M|re.I):
            newerrliens = errlines.strip().split()        # strip and split each error_dict line
            neweachconline = eachconline.strip().split()  # strip and split each JBoss log line
            if newerrliens in neweachconline:
                print neweachconline
</code></pre>
<p>Objective:
how to perform the following operation using a Python program:
<a href="http://i.stack.imgur.com/AQ0qb.png" rel="nofollow"><img src="http://i.stack.imgur.com/AQ0qb.png" alt="enter image description here"></a></p>
| 0 | 2016-07-29T10:02:02Z | 38,727,204 | <p>Here is code which loop through my error repo files and in final error log file if there is match with line it will yank(delete) that line from log file and before performing this operation it will create backup file.</p>
<pre><code># This function matches error lines from the error repo file and,
# if matched, yanks those lines from the log file.
def MatchandYankerrors():
    ferrorrepo = errorrepopath
    conclufile = os.path.join(Decompressfilepath, (appservername + 'conclusion'))
    ferrorfile = open(ferrorrepo)
    output = []
    for errlines in ferrorfile:  # pick each line from error_dict
        c = 0
        newerrliens = errlines.strip()  # strip each error_dict line
        #confile = open(conclufile, "r+")    # would keep reopening the file for every new error
        #confilelines = confile.readlines()  # would read all lines from the file
        #confile.seek(0)
        i = 0
        for line in fileinput.input(conclufile, inplace=1, backup='.orig'):
            line = line.strip()
            if newerrliens in line:
                pass
            else:
                print line
        fileinput.close()
</code></pre>
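<p>For reference, the same yank operation can be sketched more compactly in plain Python (hypothetical file paths; the log is filtered in memory after taking a backup copy):</p>

```python
import shutil

def yank_known_errors(error_dict_path, log_path):
    # back up the log before modifying it
    shutil.copy(log_path, log_path + '.orig')

    # one known error message per line, optionally wrapped in quotes
    with open(error_dict_path) as f:
        known = [line.strip().strip('"') for line in f if line.strip()]

    with open(log_path) as f:
        lines = f.readlines()

    # keep only the lines that match no known error
    kept = [l for l in lines if not any(k in l for k in known)]

    with open(log_path, 'w') as f:
        f.writelines(kept)
```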
| 0 | 2016-08-02T17:45:49Z | [
"python",
"regex",
"python-2.7"
] |
Reading text file with numpy.loadtxt | 38,655,983 | <p>I am getting an error when trying to read a text file. </p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.loadtxt(fnam, dtype=None, delimiter=',')
test_fnames
</code></pre>
<p>I now get this error:</p>
<pre><code>ValueError: could not convert string to float:
</code></pre>
<p>The file content is just a comma separated list of numbers. Perhaps there is a space at the end of the file that is causing an error?</p>
<pre><code>1,2,3,4,5,6,7,7,8,9122,3,3,45,5,6
</code></pre>
<p>Thanks. The problem was the way I wrote the text file in Torch7. </p>
| 0 | 2016-07-29T10:02:45Z | 38,658,893 | <p>You could use <code>np.genfromtxt()</code> instead of <code>np.loadtxt</code>,
because the former handles missing values:</p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.genfromtxt(fnam, dtype=None, delimiter=',')
</code></pre>
<p>You could also try:</p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.genfromtxt(fnam, dtype=None, delimiter=',')[:,:-1]
</code></pre>
<p>It's just an idea ^^ But if you want, upload your data file somewhere, give me the link and I will see ;)</p>
| 0 | 2016-07-29T12:30:34Z | [
"python",
"numpy",
"file-io"
] |
lxml eTree iterparse depth | 38,656,008 | <p>I am trying to parse some xml which is in the following format:</p>
<pre><code><label>
  <name></name>
  <sometag></sometag>
  <sublabels>
    <label></label>
    <label></label>
  </sublabels>
</label>
</code></pre>
<p>Parsing it with this</p>
<pre><code>for event, element in etree.iterparse(gzip.GzipFile(f), events=('end', ), tag='label'):
    if event == 'end':
        name = element.xpath('name/text()')
</code></pre>
<p>produces an empty <strong>name</strong> variable because of the</p>
<pre><code><sublabels>
  <label></label>
  <label></label>
</sublabels>
</code></pre>
<p><strong>The question:</strong></p>
<p>Is there any way to set the depth of the iterparse, or to ignore the nested <code>label</code> elements, other than checking whether the result is empty?</p>
| 4 | 2016-07-29T10:03:48Z | 38,717,524 | <p>The first thing that came to mind</p>
<pre><code>path = []
for event, element in etree.iterparse(gzip.GzipFile(f), events=('start', 'end')):
    if event == 'start':
        path.append(element.tag)
    elif event == 'end':
        if element.tag == 'label':
            if 'sublabels' not in path:
                name = element.xpath('name/text()')
        path.pop()
</code></pre>
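<p>A self-contained sketch of this path-tracking idea, using the standard library's <code>iterparse</code> on an in-memory document (no gzip or lxml needed for the demonstration):</p>

```python
import io
import xml.etree.ElementTree as ET

doc = b"""<label>
  <name>top</name>
  <sublabels>
    <label><name>nested</name></label>
  </sublabels>
</label>"""

path, names = [], []
for event, element in ET.iterparse(io.BytesIO(doc), events=('start', 'end')):
    if event == 'start':
        path.append(element.tag)
    else:
        path.pop()
        # only keep labels that are not inside a <sublabels> block
        if element.tag == 'label' and 'sublabels' not in path:
            names.append(element.findtext('name'))

print(names)  # ['top']
```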
| 0 | 2016-08-02T10:08:27Z | [
"python",
"lxml"
] |
lxml eTree iterparse depth | 38,656,008 | <p>I am trying to parse some xml which is in the following format:</p>
<pre><code><label>
  <name></name>
  <sometag></sometag>
  <sublabels>
    <label></label>
    <label></label>
  </sublabels>
</label>
</code></pre>
<p>Parsing it with this</p>
<pre><code>for event, element in etree.iterparse(gzip.GzipFile(f), events=('end', ), tag='label'):
    if event == 'end':
        name = element.xpath('name/text()')
</code></pre>
<p>produces an empty <strong>name</strong> variable because of the</p>
<pre><code><sublabels>
  <label></label>
  <label></label>
</sublabels>
</code></pre>
<p><strong>The question:</strong></p>
<p>Is there any way to set the depth of the iterparse, or to ignore the nested <code>label</code> elements, other than checking whether the result is empty?</p>
| 4 | 2016-07-29T10:03:48Z | 38,740,507 | <p>This works for me and is inspired by the previous answer:</p>
<pre><code>name = None
level = 0
for event, element in etree.iterparse(gzip.GzipFile(f), events=('end', 'start'), tag='label'):
    # Update current level
    if event == 'start':
        level += 1
    elif event == 'end':
        level -= 1
    # Get name for top level label
    if level == 0:
        name = element.xpath('name/text()')
</code></pre>
<p>As an alternate solution, parse the whole file and use xpath to get the top label name:</p>
<pre><code>import gzip
from lxml import html

with gzip.open(f, 'rb') as f:
    file_content = f.read()
tree = html.fromstring(file_content)
name = tree.xpath('//label/name/text()')
</code></pre>
| 2 | 2016-08-03T10:06:13Z | [
"python",
"lxml"
] |
Access JSON data in Python | 38,656,081 | <pre><code>header = {'Content-type': 'application/json','Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }
url = 'https://sandbox-authservice.priaid.ch/login'
response = requests.post(url, headers = header, verify=False).json()
token = json.dumps(response)
print token['ValidThrough']
</code></pre>
<p>I want to print the ValidThrough attribute in my webhook, which is received as JSON data via a POST call. I know this has been asked a number of times here, but <code>print token['ValidThrough']</code> isn't working for me. I receive the error "TypeError: string indices must be integers, not str".</p>
| 0 | 2016-07-29T10:07:02Z | 38,656,191 | <p>Since the response already seems to be in json, there is no need to use <code>json.dumps</code>.</p>
<p><code>json.dumps</code> on a dictionary returns a string, which obviously cannot be indexed with a string key; hence that error.</p>
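<p>A short sketch (with a stand-in dict in place of the live API response) showing both behaviours:</p>

```python
import json

# stand-in for requests.post(...).json(), which already returns a dict
response = {'Token': 'abc123', 'ValidThrough': '2016-08-01'}

print(response['ValidThrough'])  # works: a dict is indexed by key

token = json.dumps(response)     # now a plain str again
try:
    token['ValidThrough']
except TypeError as exc:
    print(exc)                   # string indices must be integers...
```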
| 2 | 2016-07-29T10:11:57Z | [
"python",
"json",
"hook.io"
] |
Access JSON data in Python | 38,656,081 | <pre><code>header = {'Content-type': 'application/json','Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }
url = 'https://sandbox-authservice.priaid.ch/login'
response = requests.post(url, headers = header, verify=False).json()
token = json.dumps(response)
print token['ValidThrough']
</code></pre>
<p>I want to print the ValidThrough attribute in my webhook, which is received as JSON data via a POST call. I know this has been asked a number of times here, but <code>print token['ValidThrough']</code> isn't working for me. I receive the error "TypeError: string indices must be integers, not str".</p>
| 0 | 2016-07-29T10:07:02Z | 38,656,606 | <p>a requests response <code>.json()</code> method already loads the content of the string to json.</p>
<p>You should use that, but your code later serializes it back to a string, and hence the error (<code>token</code> is a string representation of the dict you are expecting, not the dict). You should just omit the <code>json.dumps(response)</code> line and use <code>response['ValidThrough']</code>.</p>
<p>There's another error here: even if you assume that <code>.json()</code> returns a string that should be deserialized again, you should have used <code>json.loads(response)</code> to load it into a dict (not <code>dumps</code>, which serializes it again).</p>
| 0 | 2016-07-29T10:32:01Z | [
"python",
"json",
"hook.io"
] |
Tweepy script affecting MongoDB service shutting down | 38,656,126 | <p>I have the following Python script that uses <a href="http://docs.tweepy.org" rel="nofollow">Tweepy</a> to get tweets and store them into MongoDB using <a href="https://api.mongodb.com/python/current/" rel="nofollow">PyMongo</a>. What's happening is whenever I run this script, my MongoDB instance gets shut down, and the script stops with errors. I have no idea why this would be happening, because even if the script is terminating or Tweepy is encountering errors, it shouldn't be affecting MongoDB's running state. This line is identified as the first trigger point: <code>stream.filter(locations=[-119.970703, 48.994636, -109.951172, 59.955010])</code></p>
<p><strong>Script</strong></p>
<pre><code>import tweepy, sys, json, traceback
from bson.json_util import loads as json_to_bson
from hashlib import sha1
from datetime import datetime
from pymongo import MongoClient
from time import sleep, strptime, mktime

client = MongoClient()
mode = None

class Stream(tweepy.StreamListener):
    def on_status(self, data):
        save(data)

    def on_error(self, code):
        pause()

def now():
    return str(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))

def pause():
    sys.stdout.flush()
    sleep((60*15)+5)

def save(data):
    bson = json_to_bson(json.dumps(data._json))
    tweet_date = strptime(bson['created_at'], "%a %b %d %H:%M:%S +0000 %Y")
    tweet_date = str(datetime.fromtimestamp(mktime(tweet_date)))
    bson['created_at'] = tweet_date
    bson['text_hash'] = sha1(bson['text'].encode('punycode')).hexdigest()
    bson['collected_at'] = now()
    bson['collection_type'] = mode
    if client.grebe.tweets.find_one({'text_hash': bson['text_hash']}) is None:
        client.grebe.tweets.insert_one(bson)

def api():
    CONSUMER_KEY = 'key'
    CONSUMER_SECRET = 'secret'
    ACCESS_TOKEN_KEY = 'tokenkey'
    ACCESS_TOKEN_SECRET = 'tokensecret'
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN_KEY, ACCESS_TOKEN_SECRET)
    return tweepy.API(auth)

def main():
    mystream = Stream()
    stream = tweepy.Stream(api().auth, mystream)
    stream.filter(locations=[-119.970703, 48.994636, -109.951172, 59.955010])

main()
</code></pre>
<p><strong>Errors</strong></p>
<pre><code>File "/usr/local/lib/python2.7/dist-packages/tweepy/streaming.py", line 445, in filter
self._start(async)
File "/usr/local/lib/python2.7/dist-packages/tweepy/streaming.py", line 361, in _start
self._run()
File "/usr/local/lib/python2.7/dist-packages/tweepy/streaming.py", line 294, in _run
raise exception
AutoReconnect: connection closed
</code></pre>
| 0 | 2016-07-29T10:09:21Z | 39,656,533 | <p>Not exactly sure yet why, but the following answer to a question about permission problems on the MongoDB lock helped solve this problem: <a href="http://stackoverflow.com/a/15982017/863923">http://stackoverflow.com/a/15982017/863923</a></p>
| 0 | 2016-09-23T08:48:29Z | [
"python",
"mongodb",
"pymongo",
"tweepy"
] |
Using a dynamic array python multi-threads | 38,656,201 | <p>In Python, I have a thread that adds elements to a list, and another thread that uses the first element of that list and then deletes it.
The problem is that the second thread is faster than the first one, so it needs to wait for the first one to add more elements before processing, instead of failing with an index-out-of-range error.</p>
<p>What is the fastest way to do this?</p>
| 0 | 2016-07-29T10:12:28Z | 38,656,365 | <p>You should be using the <a href="https://docs.python.org/2/library/queue.html" rel="nofollow">synchronised Queue</a> class or similar.</p>
<p>The Queue class handles the full and empty cases with blocking and optional timeouts. It's also thread-safe.</p>
<pre><code>import threading
import Queue
import time
import logging

logging.basicConfig(level=logging.DEBUG, format='%(threadName)s: %(message)s')

q = Queue.Queue(10)

class Producer(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=None, kwargs=None):
        if args is None:
            args = ()
        if kwargs is None:
            kwargs = {}
        super(Producer, self).__init__(group=group, target=target, name=name, args=args, kwargs=kwargs)
        self.max_count = 10
        self.delay = 3

    def run(self):
        count = 0
        logging.debug('Starting run')
        while count <= self.max_count:
            q.put(count)
            logging.debug('Putting idx {0} in queue, queue length = {1}'.format(count, q.qsize()))
            count += 1
            time.sleep(self.delay)
        logging.debug('Finished run')

class Consumer(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=None, kwargs=None):
        if args is None:
            args = ()
        if kwargs is None:
            kwargs = {}
        super(Consumer, self).__init__(group=group, target=target, name=name, args=args, kwargs=kwargs)
        self.timeout = 10
        self.delay = 1

    def run(self):
        logging.debug('Starting run')
        while True:
            try:
                work = q.get(True, self.timeout)
            except Queue.Empty:
                logging.debug('Queue still empty after {0} giving up'.format(self.timeout))
                break
            logging.debug('Received idx {0} from queue, queue length = {1}'.format(work, q.qsize()))
            time.sleep(self.delay)
        logging.debug('Finished run')

def main():
    p = Producer(name='producer')
    c = Consumer(name='consumer')
    p.daemon = True
    c.daemon = True
    p.start()
    time.sleep(8)
    c.start()
</code></pre>
<p>When run:</p>
<pre><code>>>> main()
producer: Starting run
producer: Putting idx 0 in queue, queue length = 1
producer: Putting idx 1 in queue, queue length = 2
producer: Putting idx 2 in queue, queue length = 3
consumer: Starting run
consumer: Received idx 0 from queue, queue length = 2
producer: Putting idx 3 in queue, queue length = 3
consumer: Received idx 1 from queue, queue length = 2
consumer: Received idx 2 from queue, queue length = 1
consumer: Received idx 3 from queue, queue length = 0
producer: Putting idx 4 in queue, queue length = 1
consumer: Received idx 4 from queue, queue length = 0
producer: Putting idx 5 in queue, queue length = 1
consumer: Received idx 5 from queue, queue length = 0
producer: Putting idx 6 in queue, queue length = 1
consumer: Received idx 6 from queue, queue length = 0
producer: Putting idx 7 in queue, queue length = 1
consumer: Received idx 7 from queue, queue length = 0
producer: Putting idx 8 in queue, queue length = 1
consumer: Received idx 8 from queue, queue length = 0
producer: Putting idx 9 in queue, queue length = 1
consumer: Received idx 9 from queue, queue length = 0
producer: Putting idx 10 in queue, queue length = 1
consumer: Received idx 10 from queue, queue length = 0
producer: Finished run
consumer: Queue still empty after 10 giving up
consumer: Finished run
</code></pre>
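<p>A stripped-down sketch of the same idea, showing just the blocking <code>get()</code> that fixes the original race (written to run on both Python 2 and 3):</p>

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

q = queue.Queue()
results = []

def consumer():
    # get() blocks until the producer has put an item, so the
    # consumer can never read from an empty "array"
    for _ in range(3):
        results.append(q.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    q.put(i)
t.join()
print(results)  # [0, 1, 2]
```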
| 3 | 2016-07-29T10:19:24Z | [
"python"
] |
Visible deprecation warning using boolean operation on numpy array | 38,656,284 | <p>I'm having an issue where I keep receiving a warning stating:</p>
<pre><code>VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0;
dimension is 744 but corresponding boolean dimension is 1
</code></pre>
<p>When I try to use this:</p>
<pre><code>x_low = xcontacts[(xcontacts[5:6] <= 2000).any(1), :]
x_med = xcontacts[(xcontacts[5:6] <= 4000).any(1), :]
x_med = xcontacts[(xcontacts[5:6] > 2000).any(1), :]
x_hi = xcontacts[(xcontacts[5:6] > 4000).any(1), :]
</code></pre>
<p>On an array of shape:</p>
<pre><code>xcontacts.shape
Out[46]: (744L, 6L)
</code></pre>
<p>Here's a sample of the array:</p>
<pre><code>[[ 1. 0. 0. 4. 0. 228.681 ]
[ 2. 4. 0. 8. 0. 219.145 ]
[ 3. 8. 0. 12. 0. 450.269 ]
...,
[ 60. 236. 96. 240. 96. 933.4565]
[ 61. 240. 96. 244. 96. 646.449 ]
[ 62. 244. 96. 248. 96. 533.657 ]]
</code></pre>
<p>I'm trying to create three new arrays which are copies of the first but after a boolean operation has been performed on the final column, removing rows that do not agree with the operator:</p>
<pre><code>x_low where col5 <= 2000
x_med where 2000 < col5 <= 4000
x_hi where 4000 < col5
</code></pre>
<p>Does anyone know what I'm doing wrong?</p>
| 2 | 2016-07-29T10:15:44Z | 38,656,991 | <p>Thanks to @Syrtis Major for this:</p>
<pre><code>x_low = xcontacts[xcontacts[:, 5] <= 2000]
x_med = xcontacts[(xcontacts[:, 5] > 2000) & (xcontacts[:, 5] <= 4000)]
x_hi  = xcontacts[xcontacts[:, 5] > 4000]
</code></pre>
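<p>A runnable sketch of the fixed masks on a toy array (here the value column is index 1 rather than 5; the thresholds are those from the question):</p>

```python
import numpy as np

xcontacts = np.array([[1.0,  100.0],
                      [2.0, 2500.0],
                      [3.0, 5000.0]])
col = xcontacts[:, 1]  # 1-D view of the value column

x_low = xcontacts[col <= 2000]
x_med = xcontacts[(col > 2000) & (col <= 4000)]
x_hi  = xcontacts[col > 4000]

print(len(x_low), len(x_med), len(x_hi))  # 1 1 1
```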
| 0 | 2016-07-29T10:51:57Z | [
"python",
"arrays",
"numpy",
"boolean-operations"
] |
get_pdf api.v8 Odoo. What parameter should I send as "records" | 38,656,347 | <p>In the source code of record.py I found</p>
<pre><code>@api.v8
def get_pdf(self, records, report_name, html=None, data=None):
    return Report.get_pdf(self._model, self._cr, self._uid, records.ids,
                          report_name, html=html, data=data, context=self._context)
</code></pre>
<p>I inherited "record" in my custom module. And I defined a button like this:</p>
<pre><code><record id="report_maker_form" model="ir.ui.view">
    <field name="name">Impression</field>
    <field name="model">cust_report</field>
    <field eval="1" name="priority"/>
    <field name="arch" type="xml">
        <form>
            <header>
                <button string="Envoyer le rapport" type="object" name="send_report_cust"/>
            </header>
            <sheet>
                <group>
                    <field name='date'/>
                </group>
            </sheet>
        </form>
    </field>
</record>
</code></pre>
<p>The function <code>send_report_cust</code> is defined like this in the inherited report.py:</p>
<pre><code>@api.one
def send_report_cust(self):
    #self.pool.get('report').get_pdf(self, None, "report_vote_document", None, None)
    self.get_pdf(None, "report_vote_document", None, None)
</code></pre>
<p>So "report_vote_document" is my report_name. I'm just testing to create a report with minimal template. report_vote_document doesn't require any specific records yet, it's just a testing text in template format. So I send as "records" : "None" in parameters for get_pdf.
I get this error:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'ids'
</code></pre>
<p>Which is kind of an obvious error since "records" is needed in the get_pdf's body, but I don't know what thoses records are supposed to be. Can anyone tell me what's supposed to be in this "records". What should I send?</p>
<p>EDIT: I'm trying to call get_pdf but something is wrong with the arguments I give to it.</p>
<p>Here is what I did:</p>
<pre><code>@api.one
def send_report_cust(self):
    self.get_pdf(self, "my_report_name", "my_report_template", None)
</code></pre>
<p>I also tried this for the last line.</p>
<p><code>self.get_pdf(**my_model_name** , "my_report_name", "my_report_template", None)</code></p>
<p>The error I get is : </p>
<pre><code>File "/usr/lib/python2.7/dist-packages/openerp/addons/report/models/report.py", line 508, in _get_report_from_name
    idreport = report_obj.search(cr, uid, conditions)[0]
IndexError: list index out of range
</code></pre>
<p>I tried to trace the error with some pdb.set_trace calls in the source code of the module "report", in "report/models/report.py". I tested this with my button using "send_report_cust" (let's call it case A) and with the basic automatic use of report (case B), which works but won't let me have my own button or make changes before and after the PDF creation in the same function.</p>
<p>First in the <code>@api.v8</code> version of <code>get_pdf</code>, which showed that everything went right there. But this get_pdf calls the <code>@api.v7</code> version of <code>get_pdf</code>. In that one, the error occurs on this line:</p>
<pre><code>report = self._get_report_from_name(cr, uid, report_name)
</code></pre>
<p>So here again, I went in _get_report_from_name and used pdb.set_trace().</p>
<p>The whole function runs fine and every single variable has exactly the same value in case A as in case B, but when _get_report_from_name reaches the line</p>
<pre><code>idreport = report_obj.search(cr, uid, conditions)[0]
</code></pre>
<p>The error occurs on case A but not on case B.</p>
<p>So I did "print report_obj.search(cr, uid, conditions)", which is an empty list for case A (which is what the error describes, but I don't understand why) and a list with one int for case B. I checked every single variable among the 3 functions I tested with pdb.set_trace() and everything is identical.</p>
| 0 | 2016-07-29T10:18:28Z | 38,660,614 | <p>Records are database entries in object form. For example <code>account.invoice</code>: when you press the print button on an invoice, it will be the record for the report. In your example, <code>self</code> will be the record of the model <code>cust_report</code> model you pressed the button on.</p>
<p>Every Odoo report is defined for a model. It will need at least one record of the model while printing.</p>
| 1 | 2016-07-29T13:51:53Z | [
"python",
"openerp"
] |
Python looping through multiple lists | 38,656,370 | <p>I've this code:</p>
<pre><code>for i in range(0, len(codiceCassExcel)):
    count1step += 1
    for j in range(0, len(vwLinesToList)):
        if data_reg[i] == vwLinesToList[j][1]:
            if codiceCassExcel[i] == vwLinesToList[j][0]:
                #Gestione movimento diverso da 601 e non bolle nostre
                if tipo_mov[i] != 601 and len(vwLinesToList[j][7]) != 8:
                    count2step += 1
                    if ((int(qta_movimentata[i]) + int(vwLinesToList[j][4])) != 0) or ((int(-qta_movimentata[i]) + int(vwLinesToList[j][3])) != 0):
                        imballoColumnIn.append(vwLinesToList[j][0])
                        dateColumnIn.append(vwLinesToList[j][1])
                        absColumnIn.append(vwLinesToList[j][2])
                        inColumnIn.append(vwLinesToList[j][3])
                        outColumnIn.append(vwLinesToList[j][4])
                        ddtColumnIn.append(vwLinesToList[j][7])
                        wkColumnIn.append(vwLinesToList[j][8])
                elif vwLinesToList[j][7] == bolla_excel[i]:
                    if ((int(qta_movimentata[i]) + int(vwLinesToList[j][4])) != 0) or ((int(-qta_movimentata[i]) + int(vwLinesToList[j][3])) != 0):
                        imballoColumn.append(vwLinesToList[j][0])
                        dateColumn.append(vwLinesToList[j][1])
                        absColumn.append(vwLinesToList[j][2])
                        inColumn.append(vwLinesToList[j][3])
                        outColumn.append(vwLinesToList[j][4])
                        ddtColumn.append(vwLinesToList[j][7])
                        wkColumn.append(vwLinesToList[j][8])
</code></pre>
<p>I've 5 lists with hundred of items and a lists with similar items (vwLinesToLists). I want to check if:</p>
<pre><code>firstListItem[i] and secondListItem[i](and so on...) is equal to
vwLinesToList[j][1], vwLinesToList[j][2], vwLinesToList[j][3]
If it's true, check if nListItem - vwLinesToList[j][6] != 0:
append each vwLinesToList[item] to separate list
</code></pre>
<p>I need a hint about how to write my code without all this nested stuff. Thank you in advance.</p>
| 0 | 2016-07-29T10:19:35Z | 38,656,461 | <p>Use the <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow"><strong>zip</strong></a> function to iterate over your lists. See <a href="http://stackoverflow.com/questions/13704860/zip-lists-in-python">zip lists in python</a> for code samples.</p>
<p>Also consider using the <a href="https://docs.python.org/2/library/itertools.html#itertools.izip_longest" rel="nofollow">izip_longest</a> function, which may be useful when your lists differ in length.</p>
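<p>A minimal sketch of that idea, with hypothetical stand-in data for the question's lists (<code>zip</code> replaces the <code>range(len(...))</code> indexing over the parallel lists, while the inner loop over <code>vwLinesToList</code> stays):</p>

```python
# Hypothetical parallel lists standing in for data_reg, codiceCassExcel and qta_movimentata.
data_reg = ['2016-01-01', '2016-01-02']
codice = ['A', 'B']
qta = [5, -3]

# Hypothetical stand-in for vwLinesToList rows: [code, date, value].
vw_lines = [['A', '2016-01-01', 10], ['B', '2016-01-02', 20]]

matched = []
# zip walks the parallel lists in lockstep, removing the need for index i.
for d, c, q in zip(data_reg, codice, qta):
    for row in vw_lines:
        if row[1] == d and row[0] == c:
            matched.append(row + [q])

print(matched)  # [['A', '2016-01-01', 10, 5], ['B', '2016-01-02', 20, -3]]
```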
| 1 | 2016-07-29T10:24:20Z | [
"python",
"list",
"for-loop",
"nested-lists"
] |
Python:Selecting a range of lines from one file and copying it to another file | 38,656,396 | <p>I have a file and i want to select certain lines in the file and copy it to another file.</p>
<p>In the first file, the line where I want to start copying has the word "XYZ", and from there I want to select the next 200 lines (including the match line) and copy them to another file.</p>
<p>The below is my code</p>
<pre><code>match_1 = 'This is match word'
with open('myfile.txt') as f1:
    for num, line in enumerate(f1, 1):
        if match_1 in line:
            print line
</code></pre>
<p>The above code leads me to the start of the matched line, and I need to select 200 lines from the matched line onward and then copy/move them to another text file.</p>
<p>I tried a couple of options like while statements, but I am not able to build the logic fully. Please help.</p>
| 1 | 2016-07-29T10:21:07Z | 38,656,610 | <p>You can use <code>itertools.islice</code> on the file object once the match is found:</p>
<pre><code>from itertools import islice

# some code before matching line is found
chunk = line + ''.join(islice(f1, 199))  # matching line + next 199 = 200 lines
</code></pre>
<p>This will consume the iterator <code>f1</code> for the next 199 lines (200 including the matching line), and so the <code>num</code> count in your loop may not be coherent if this is placed in your loop.</p>
<p>If you do not need the other lines in the file after finding the match, you can use:</p>
<pre><code>from itertools import islice

with open('myfile.txt') as f1, open('myotherfile.txt', 'w') as f_out:
    for num, line in enumerate(f1, 1):
        if match_1 in line:
            break
    chunk = line + ''.join(islice(f1, 199))
    f_out.write(chunk)
</code></pre>
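<p>As a quick sanity check, the same pattern can be exercised on an in-memory file (3 extra lines here standing in for the ~200 in the real case):</p>

```python
from itertools import islice
import io

# Hypothetical in-memory "file" standing in for myfile.txt.
f1 = io.StringIO("a\nb\nMATCH\nc\nd\ne\nf\n")

for line in f1:
    if "MATCH" in line:
        break

# Grab the matching line plus the next 3 lines (199 in the real case).
chunk = line + ''.join(islice(f1, 3))
print(chunk.splitlines())  # ['MATCH', 'c', 'd', 'e']
```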
| 1 | 2016-07-29T10:32:09Z | [
"python",
"python-2.7",
"file"
] |
Python:Selecting a range of lines from one file and copying it to another file | 38,656,396 | <p>I have a file and i want to select certain lines in the file and copy it to another file.</p>
<p>In the first file, the line where I want to start copying has the word "XYZ", and from there I want to select the next 200 lines (including the match line) and copy them to another file.</p>
<p>The below is my code</p>
<pre><code>match_1 = 'This is match word'
with open('myfile.txt') as f1:
    for num, line in enumerate(f1, 1):
        if match_1 in line:
            print line
</code></pre>
<p>The above code leads me to the start of the matched line, and I need to select 200 lines from the matched line onward and then copy/move them to another text file.</p>
<p>I tried a couple of options like while statements, but I am not able to build the logic fully. Please help.</p>
| 1 | 2016-07-29T10:21:07Z | 38,657,797 | <p>How about something like</p>
<pre><code>>>> with open("pokemonhunt", "r") as f:
...     Lines = []
...     numlines = 5
...     count = -1
...     for line in f:
...         if "pokemon" in line:  # gotta catch 'em all!!!
...             count = 0
...         if -1 < count < numlines:
...             Lines.append(line)
...             count += 1
...         if count == numlines:
...             with open("pokebox", "w") as tmp: tmp.write("".join(Lines))
</code></pre>
<p>Or have I misunderstood your question?</p>
| 0 | 2016-07-29T11:33:46Z | [
"python",
"python-2.7",
"file"
] |
Conditionally override python module on import: Design issues | 38,656,563 | <p>In my setup/design, requesting a module of a subpackage automatically loads all other modules from that package. I seek a way to circumvent this.</p>
<h2>Disclaimer</h2>
<p><em>A few months ago, I asked a question which is related. However, since this issue is new and different, I created this as a new question.</em></p>
<p><em>For those who want to read on the previous problem and attempt, see <a href="http://stackoverflow.com/questions/34266878/mask-a-python-submodule-from-its-packages-init-py">Mask a python submodule from its package's <code>__init__.py</code></a></em></p>
<h2>The Setup</h2>
<p>In one of my packages, I have a subpackage called <code>config</code> holding a bunch of config files for other submodules and subpackages:</p>
<pre class="lang-none prettyprint-override"><code>mypackage
|
+-- subpkg_a
| |
| +-- __init__.py
| +-- <some modules here>.py
|
+-- config
| |
. +-- __init__.py
. +-- subpkg_a_sample.py
+-- .gitignore (ignores everything except __init__ or *_sample.py)
</code></pre>
<h2>The Requirement</h2>
<p>Since the above setup resides in a repository, colleagues can (and shall) clone it in order to use <code>mypackage</code> on their local systems. However, their configuration may be different, which is why I want to provide the ability to override the sample config given in <code>*_sample.py</code> and have them provide their own configuration file <em>only locally</em>.</p>
<p>The rationale is that I want people to contribute code where they can use config settings directly like this (in order to keep the code as general as possible):</p>
<pre class="lang-py prettyprint-override"><code>from config import subpkg_a as conf_a
print(conf_a.MY_CONFIG_SETTING)
</code></pre>
<p>However, if they have to make local adaptations to the configuration files to fit certain settings to their local setup, they <strong>should not have to change <code>*_sample.py</code></strong>, as it is part of the repository and contains general-purpose settings and examples. And, if they were to change the <code>*_sample.py</code> config file, there are always users who accidentally check in their local changes (especially deletions) and thus need to be protected from themselves...</p>
<p><strong>Thus, I need the possibility to override the <code>*_sample.py</code> with a local copy, if a local copy is present, and otherwise load <code>*_sample.py</code> when the certain config is imported.</strong></p>
<h2>The Solution I Came Up With</h2>
<p>Currently, I am using the following code inside <code>config/__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import imp
import re

# Extend the __all__ list for all sub-packages that provide <pkg>_sample.py config files
__all__ = []
_cfgbase = os.path.dirname(os.path.realpath(__file__))
_r = re.compile('^(?P<key>.+?)_sample\.py$')
for f in os.listdir(_cfgbase):
    m = _r.match(f)
    if m: __all__.append(m.group('key'))

# Load local override file, if any. Otherwise load respective *_sample.py
for cfgmodule in __all__:
    if os.path.isfile(os.path.join(_cfgbase, cfgmodule + '.py')):
        locals()[cfgmodule] = imp.load_source('mypackage.config.' + cfgmodule, os.path.join(_cfgbase, cfgmodule + '.py'))
    else:
        locals()[cfgmodule] = imp.load_source('mypackage.config.' + cfgmodule, os.path.join(_cfgbase, cfgmodule + '_sample.py'))
</code></pre>
<p>What this code does, is:</p>
<ol>
<li><p>Scan the directory of <em>this</em> <code>__init__.py</code> for files that end in <code>_sample.py</code> and store their name (i.e. everything before <code>_sample.py</code>) in the package's <code>__all__</code> list. These are the packages for which there are config files available that can be loaded.</p></li>
<li><p>Iterate over the <code>__all__</code> list and freshly import the local override, if there is any, or import the respective <code>*_sample.py</code> otherwise. No matter which file was imported, it is made accessible under the name <em>without the <code>_sample.py</code> extension</em> via the config module. I have done this to keep the code that relies on the config module as clean and generic as possible.</p></li>
</ol>
<h2>Finally, the Problem</h2>
<p>The issue I now am facing is, that even when a user only imports the configuration for a single subpackage, i.e. <code>from config import subpkg_a as conf_a</code>, all other configurations that are available in that directory are immediately loaded as well.</p>
<p>This behavior is obvious by the conditional import in my <code>__init__.py</code> (see above). But since there are configuration files for subpackages, that rely on other imports (e.g. <code>celery</code> or <code>mpl_toolkits.basemap</code>) and that may require a significant effort to be installed under certain environments, any user that just needs a fraction of the configs and subpackages is required to install <em>all</em> packages and modules that are imported in a configuration file. Even if the user and his/her respective code does not require the respective config.</p>
<p>I feel like having established a poor design, but I do not know how to do better. So I ask you:</p>
<p><strong>Do you see any possibility to change the config's <code>__init__.py</code> (or even the entire setup) such that it does not load all config files when a user requests a single one?</strong></p>
<p>I am grateful for hints and answers of any kind. Cheers!</p>
| 0 | 2016-07-29T10:30:15Z | 38,656,772 | <p>Just make local configuration <em>explicit</em>.</p>
<p>Have people that want to override configuration create a <code>local_config.py</code> file. You can add that to <code>.gitignore</code>. Have users import <strong>that</strong> module in their own code.</p>
<p>At the top of the <code>local_config.py</code> module, teach your users to import the desired sample config:</p>
<pre><code>from config.subpkg_a_sample import *
</code></pre>
<p>Note the <code>import *</code> here. Now <em>all names from <code>subpkg_a_sample</code></em> are imported into <code>local_config</code>. The user can easily override anything they need to. Then in the rest of the software, use</p>
<pre><code>try:
    import local_config as config
except ImportError:
    warn('No local configuration available')
    import config.default_config as config
</code></pre>
<p>or similar approaches to get a default configuration in place.</p>
<p>Another approach is to add:</p>
<pre><code>try:
    from local_config import *
except ImportError:
    pass
</code></pre>
<p>to all your <code>*_sample.py</code> modules, making those modules responsible for applying local configuration. That way names are also overridden by local configuration.</p>
<p>Requiring users to create a <code>local_config.py</code> is no more strenuous than what you already have, where you ask users to pick a sample config and provide overrides that you magically interpolate.</p>
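<p>The fallback-import pattern itself can be sketched in a self-contained way (here <code>SimpleNamespace</code> is a hypothetical stand-in for the default config module, and no <code>local_config</code> module exists so the <code>except</code> branch runs):</p>

```python
import types

# Prefer a local override module if present; otherwise fall back to defaults.
try:
    import local_config as config
except ImportError:
    # Hypothetical default settings standing in for config.default_config.
    config = types.SimpleNamespace(MY_CONFIG_SETTING='default value')

print(config.MY_CONFIG_SETTING)
```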
| 1 | 2016-07-29T10:40:34Z | [
"python",
"python-2.7",
"import",
"module"
] |
input dimensions to a one dimensional convolutional network in keras | 38,656,566 | <p>Really finding it hard to understand the input dimensions to the convolutional 1d <a href="http://keras.io/layers/convolutional/#convolution1d" rel="nofollow">layer</a> in keras:</p>
<p>Input shape</p>
<p>3D tensor with shape: (samples, steps, input_dim).</p>
<p>Output shape</p>
<p>3D tensor with shape: (samples, new_steps, nb_filter). steps value might have changed due to padding.</p>
<p>I want my network to take in a time series of prices (101, in order) and output 4 probabilities. My current non-convolutional network which does this fairly well (with a training set of 28000) looks like this:</p>
<pre><code>standardModel = Sequential()
standardModel.add(Dense(input_dim=101, output_dim=100, W_regularizer=l2(0.5), activation='sigmoid'))
standardModel.add(Dense(4, W_regularizer=l2(0.7), activation='softmax'))
</code></pre>
<p>To improve this, I want to make a feature map from the input layer which has a local receptive field of length 10. (and therefore has 10 shared weights and 1 shared bias). I then want to use max pooling and feed this in to a hidden layer of 40 or so neurons and then output this with 4 neurons with softmax in the outer layer. </p>
<p><a href="http://i.stack.imgur.com/Kx8yT.png" rel="nofollow">picture (it's quite awful sorry!)</a></p>
<p>So ideally, the convolutional layer would take a 2d tensor of dimensions:</p>
<p>(minibatch_size, 101)</p>
<p>and output a 3d tensor of dimensions</p>
<p>(minibatch_size, 91, no_of_featuremaps)</p>
<p>However, the keras layer seems to require a dimension in the input called step. I've tried understanding this and still don't quite get it. In my case, should step be 1 as each step in the vector is an increase in the time by 1? Also, what is new_step? </p>
<p>In addition, how do you turn the output of the pooling layers (a 3d tensor) into input suitable for the standard hidden layer (i.e a Dense keras layer) in the form of a 2d tensor?</p>
<p>Update: After the very helpful suggestions given, I tried making a convolutional network like so:</p>
<pre><code>conv = Sequential()
conv.add(Convolution1D(64, 10, input_shape=(1,101)))
conv.add(Activation('relu'))
conv.add(MaxPooling1D(2))
conv.add(Flatten())
conv.add(Dense(10))
conv.add(Activation('tanh'))
conv.add(Dense(4))
conv.add(Activation('softmax'))
</code></pre>
<p>The line conv.add(Flatten()) throws a "range exceeds valid bounds" error. Interestingly, this error is <strong>not</strong> thrown for just this code:</p>
<pre><code>conv = Sequential()
conv.add(Convolution1D(64, 10, input_shape=(1,101)))
conv.add(Activation('relu'))
conv.add(MaxPooling1D(2))
conv.add(Flatten())
</code></pre>
<p>doing </p>
<pre><code>print conv.input_shape
print conv.output_shape
</code></pre>
<p>results in </p>
<pre><code>(None, 1, 101
(None, -256)
</code></pre>
<p>being returned</p>
<p>Update 2:</p>
<p>Changed </p>
<pre><code>conv.add(Convolution1D(64, 10, input_shape=(1,101)))
</code></pre>
<p>to</p>
<pre><code>conv.add(Convolution1D(10, 10, input_shape=(101,1)))
</code></pre>
<p>and it started working. However, is there any important difference between inputting (None, 101, 1) to a 1d conv layer or (None, 1, 101) that I should be aware of? Why does (None, 1, 101) not work?</p>
| 1 | 2016-07-29T10:30:23Z | 38,673,493 | <p>The reason it looks like this is that the Keras designers intended the 1-dimensional convolutional framework to be interpreted as a framework for dealing with sequences. To fully understand the difference - try to imagine that you have a sequence of multiple feature vectors. Then your output will be at least two dimensional - where the first dimension is connected with time and the other dimensions are connected with features. The 1-dimensional convolutional framework was designed to, in a sense, emphasize this time dimension and try to find recurring patterns in the data - rather than performing a classical multidimensional convolutional transformation.</p>
<p>In your case you must simply reshape your data to have shape (dataset_size, 101, 1) - because you have only one feature. This can easily be done using the <code>numpy.reshape</code> function. To understand what the new steps mean - you must understand that you are doing the convolution over time - so you change the temporal structure of your data - which leads to a new time-connected structure. In order to get your data into a format suitable for dense / static layers, use the <code>keras.layers.Flatten</code> layer - the same as in the classic convolutional case.</p>
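<p>For example, a minimal sketch of the reshape step (the sample count here is a hypothetical stand-in for the real dataset size):</p>

```python
import numpy as np

# Hypothetical dataset: 5 samples of 101 time steps each, one feature per step.
X = np.arange(5 * 101, dtype=float).reshape(5, 101)

# Add the trailing feature axis Keras expects: (samples, steps, input_dim).
X_conv = X.reshape(X.shape[0], X.shape[1], 1)
print(X_conv.shape)  # (5, 101, 1)
```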
<p><strong>UPDATE:</strong> As I mentioned before - the first dimension of the input is connected with time. So the difference between <code>(1, 101)</code> and <code>(101, 1)</code> is that in the first case you have one time step with 101 features and in the second - 101 time steps with 1 feature. The problem you mentioned after your first change has its origin in doing pooling with size 2 on such an input. Having only one time step - you cannot pool any value over a time window of size 2 - simply because there are not enough time steps to do that.</p>
| 2 | 2016-07-30T11:34:47Z | [
"python",
"neural-network",
"theano",
"conv-neural-network",
"keras"
] |
'Gui' object has no attribute 'after' | 38,656,585 | <p>I took my working tkinter code (which only drew window/buttons and so on) and tried to add some code from the approved answer here: <a href="http://stackoverflow.com/questions/16938647/python-code-for-serial-data-to-print-on-window">python code for serial data to print on window.</a></p>
<p>The approved answer works by itself with very small modifications, but added to my code I get the error "'Gui' object has no attribute 'after'"</p>
<p>What I don't understand is why the attribute "after" is looked for in class Gui instead of in method process_serial.
</p>
<pre><code>from tkinter import *
from tkinter import ttk
import serial
import threading
import queue

class SerialThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        s = serial.Serial('COM11', 115200)
        while True:
            if s.inWaiting():
                text = s.readline(s.inWaiting())
                self.queue.put(text)

class Gui():
    def __init__(self, master):
        ###MAIN FRAME###
        mainFrame = Frame(master, width=50000, height=40000)
        mainFrame.pack(fill = BOTH, expand = 1)

        ###LIST FRAME###
        listFrame = Frame(mainFrame)
        listFrame.pack(side = TOP, fill = BOTH, expand = 1)
        self.sensorList = ttk.Treeview(listFrame)
        self.sensorList["columns"]=("MAC","Type","Value","Voltage","Firmware","Rate","RSSI")
        self.sensorList.column("MAC", width=200, minwidth=200)
        self.sensorList.column("Type", width=100, minwidth=100)
        self.sensorList.column("Value", width=100, minwidth=100)
        self.sensorList.column("Voltage", width=100, minwidth=100)
        self.sensorList.column("Firmware", width=100, minwidth=100)
        self.sensorList.column("Rate", width=100, minwidth=100)
        self.sensorList.column("RSSI", width=100, minwidth=100)
        self.sensorList.heading("MAC", text="MAC")
        self.sensorList.heading("Type", text="Type")
        self.sensorList.heading("Value", text="Value")
        self.sensorList.heading("Voltage", text="Voltage")
        self.sensorList.heading("Firmware", text="Firmware")
        self.sensorList.heading("Rate", text="Rate")
        self.sensorList.heading("RSSI", text="RSSI")
        self.sensorList.pack(fill = BOTH, expand = 1, pady=5, padx=5)

        ###TEXT AREA FRAME###
        textAreaFrame = Frame(mainFrame)
        textAreaFrame.pack(side = TOP, fill = BOTH, expand = 1)
        self.textArea = Text(textAreaFrame)
        self.textArea.pack(fill = BOTH, expand = 1, pady=5, padx=5)

        ###INPUT FRAME###
        inputFrame = Frame(mainFrame)
        inputFrame.pack(side = BOTTOM, fill = X, expand = 0)
        self.input = Entry(inputFrame)
        self.input.pack(side=LEFT, fill = X, expand = 1, pady=5, padx=5)
        self.comboAction = ttk.Combobox(inputFrame)
        self.comboAction.pack(side = LEFT, pady=5, padx=5)
        self.comboDevice = ttk.Combobox(inputFrame)
        self.comboDevice.pack(side = LEFT, pady=5, padx=5)
        self.sendButton = Button(
            inputFrame, text="SEND", command=mainFrame.quit
        )
        self.sendButton.pack(side=LEFT, pady=5, padx=5)
        #self.button = Button(
        #    mainFrame, text="QUIT", fg="red", command=mainFrame.quit
        #)
        #self.button.pack(side=LEFT)
        #self.hi_there = Button(mainFrame, text="Hello", command=self.say_hi)
        #self.hi_there.pack(side=LEFT)

        ###AFFIX MINIMUM SIZE OF MAIN WINDOW TO PREVENT POOR SIZING###
        master.update()
        master.minsize(root.winfo_width(), root.winfo_height())
        master.minsize(master.winfo_width(), master.winfo_height())

        ###SERIAL PORT###
        self.queue = queue.Queue()
        thread = SerialThread(self.queue)
        thread.start()
        self.process_serial()

    def process_serial(self):
        while self.queue.qsize():
            try:
                self.textArea.delete(1.0, 'end')
                self.textArea.insert('end', self.queue.get())
            except queue.Empty:  # note: queue.Empty in Python 3, not Queue.Empty
                pass
        self.after(100, self.process_serial)

    def say_hi(self):
        s = self.input.get()
        print ("hi there, everyone!" + s)

root = Tk()
gui = Gui(root)
root.mainloop()
root.destroy() # optional; see description below
</code></pre>
| 0 | 2016-07-29T10:31:17Z | 38,656,720 | <p>The method <code>after</code> is inherited from <code>Tkinter.Tk</code>. <a href="http://stackoverflow.com/questions/16938647/python-code-for-serial-data-to-print-on-window">Check the linked question.</a></p>
<p>You probably should subclass Tkinter.Tk:</p>
<pre><code>...
import Tkinter
class Gui(Tkinter.Tk):
    ...
</code></pre>
| 1 | 2016-07-29T10:37:48Z | [
"python",
"multithreading",
"tkinter",
"pyserial"
] |
'Gui' object has no attribute 'after' | 38,656,585 | <p>I took my working tkinter code (which only drew window/buttons and so on) and tried to add some code from the approved answer here: <a href="http://stackoverflow.com/questions/16938647/python-code-for-serial-data-to-print-on-window">python code for serial data to print on window.</a></p>
<p>The approved answer works by itself with very small modifications, but added to my code I get the error "'Gui' object has no attribute 'after'"</p>
<p>What I don't understand is why the attribute "after" is looked for in class Gui instead of in method process_serial.
</p>
<pre><code>from tkinter import *
from tkinter import ttk
import serial
import threading
import queue

class SerialThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        s = serial.Serial('COM11', 115200)
        while True:
            if s.inWaiting():
                text = s.readline(s.inWaiting())
                self.queue.put(text)

class Gui():
    def __init__(self, master):
        ###MAIN FRAME###
        mainFrame = Frame(master, width=50000, height=40000)
        mainFrame.pack(fill = BOTH, expand = 1)

        ###LIST FRAME###
        listFrame = Frame(mainFrame)
        listFrame.pack(side = TOP, fill = BOTH, expand = 1)
        self.sensorList = ttk.Treeview(listFrame)
        self.sensorList["columns"]=("MAC","Type","Value","Voltage","Firmware","Rate","RSSI")
        self.sensorList.column("MAC", width=200, minwidth=200)
        self.sensorList.column("Type", width=100, minwidth=100)
        self.sensorList.column("Value", width=100, minwidth=100)
        self.sensorList.column("Voltage", width=100, minwidth=100)
        self.sensorList.column("Firmware", width=100, minwidth=100)
        self.sensorList.column("Rate", width=100, minwidth=100)
        self.sensorList.column("RSSI", width=100, minwidth=100)
        self.sensorList.heading("MAC", text="MAC")
        self.sensorList.heading("Type", text="Type")
        self.sensorList.heading("Value", text="Value")
        self.sensorList.heading("Voltage", text="Voltage")
        self.sensorList.heading("Firmware", text="Firmware")
        self.sensorList.heading("Rate", text="Rate")
        self.sensorList.heading("RSSI", text="RSSI")
        self.sensorList.pack(fill = BOTH, expand = 1, pady=5, padx=5)

        ###TEXT AREA FRAME###
        textAreaFrame = Frame(mainFrame)
        textAreaFrame.pack(side = TOP, fill = BOTH, expand = 1)
        self.textArea = Text(textAreaFrame)
        self.textArea.pack(fill = BOTH, expand = 1, pady=5, padx=5)

        ###INPUT FRAME###
        inputFrame = Frame(mainFrame)
        inputFrame.pack(side = BOTTOM, fill = X, expand = 0)
        self.input = Entry(inputFrame)
        self.input.pack(side=LEFT, fill = X, expand = 1, pady=5, padx=5)
        self.comboAction = ttk.Combobox(inputFrame)
        self.comboAction.pack(side = LEFT, pady=5, padx=5)
        self.comboDevice = ttk.Combobox(inputFrame)
        self.comboDevice.pack(side = LEFT, pady=5, padx=5)
        self.sendButton = Button(
            inputFrame, text="SEND", command=mainFrame.quit
        )
        self.sendButton.pack(side=LEFT, pady=5, padx=5)
        #self.button = Button(
        #    mainFrame, text="QUIT", fg="red", command=mainFrame.quit
        #)
        #self.button.pack(side=LEFT)
        #self.hi_there = Button(mainFrame, text="Hello", command=self.say_hi)
        #self.hi_there.pack(side=LEFT)

        ###AFFIX MINIMUM SIZE OF MAIN WINDOW TO PREVENT POOR SIZING###
        master.update()
        master.minsize(root.winfo_width(), root.winfo_height())
        master.minsize(master.winfo_width(), master.winfo_height())

        ###SERIAL PORT###
        self.queue = queue.Queue()
        thread = SerialThread(self.queue)
        thread.start()
        self.process_serial()

    def process_serial(self):
        while self.queue.qsize():
            try:
                self.textArea.delete(1.0, 'end')
                self.textArea.insert('end', self.queue.get())
            except queue.Empty:  # note: queue.Empty in Python 3, not Queue.Empty
                pass
        self.after(100, self.process_serial)

    def say_hi(self):
        s = self.input.get()
        print ("hi there, everyone!" + s)

root = Tk()
gui = Gui(root)
root.mainloop()
root.destroy() # optional; see description below
</code></pre>
| 0 | 2016-07-29T10:31:17Z | 38,656,773 | <p>The culprit is in this line in the process_serial function:</p>
<pre><code>self.after(100, self.process_serial)
</code></pre>
<p>The <code>self</code> variable here refers to the Gui object, not to a tkinter object that has the 'after' method.</p>
<p>There is a mismatch between your code and the code from the linked question. Your class does not extend a tkinter object. The class in the answer extended the tkinter Tk object like so:</p>
<pre><code>class App(tk.Tk):
</code></pre>
<p>Thereby inheriting functions from the Tk class.</p>
<p>To solve this for your code, replace self in the process_serial function with a tkinter object, like self.textArea.</p>
<pre><code>self.textArea.after(100, self.process_serial)
</code></pre>
<p>Alternatively, you could subclass tk.Tk just like in the linked answer. But I do not see the added benefit here.</p>
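<p>As a side note, the queue-draining part of <code>process_serial</code> can be exercised without any GUI at all; a minimal sketch (note that in Python 3 the empty-queue exception is <code>queue.Empty</code>, with a lowercase module name):</p>

```python
import queue

q = queue.Queue()
for item in (b'one', b'two'):
    q.put(item)

# Drain everything currently queued, mirroring the process_serial loop,
# without blocking when the queue runs empty.
received = []
while True:
    try:
        received.append(q.get_nowait())
    except queue.Empty:
        break

print(received)  # [b'one', b'two']
```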
| 4 | 2016-07-29T10:40:35Z | [
"python",
"multithreading",
"tkinter",
"pyserial"
] |
How to compute percentiles from frequency table? | 38,656,609 | <p>I have CSV file:</p>
<pre><code>fr id
1 10000152
1 10000212
1 10000847
1 10001018
2 10001052
2 10001246
14 10001908
...........
</code></pre>
<p>This is a frequency table, where <code>id</code> is an integer variable and <code>fr</code> is the number of occurrences of the given value. The file is sorted ascending by value.
I would like to compute percentiles (i.e. 90%, 80%, 70% ... 10%) of the variable.</p>
<p>I have done this in pure Python, similar to this pseudocode:</p>
<pre><code>bucket=sum(fr)/10.0
percentile=1
total=0
for (current_fr, current_id) in zip(fr,id):
    total=total+current_fr
    if (total > percentile*bucket):
        print "%i percentile: %i" % (percentile*10,current_id)
        percentile=percentile+1
</code></pre>
<p>But this code is very raw: it doesn't take into account that percentile should be between values from the set, it can't step back etc.</p>
<p>Is there any more elegant, universal, ready-made solution?</p>
| 0 | 2016-07-29T10:32:09Z | 38,657,471 | <p>Seems like you want cumulative sum of <code>fr</code>. You can do</p>
<pre><code>cumfr = [sum(fr[:i+1]) for i in range(len(fr))]
</code></pre>
<p>Then the percentiles are</p>
<pre><code>percentile = [100.0*c/cumfr[-1] for c in cumfr]
</code></pre>
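<p>A self-contained sketch of the cumulative-sum approach, using the frequencies from the question (a running total avoids the repeated <code>sum(fr[:i+1])</code> slicing, which is quadratic):</p>

```python
fr = [1, 1, 1, 1, 2, 2, 14]
ids = [10000152, 10000212, 10000847, 10001018, 10001052, 10001246, 10001908]

# Running total of the frequencies, then each id's cumulative share of the total.
cumfr = []
total = 0
for f in fr:
    total += f
    cumfr.append(total)

percentiles = [100.0 * c / cumfr[-1] for c in cumfr]
print(list(zip(ids, percentiles))[-1])  # (10001908, 100.0)
```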
| 0 | 2016-07-29T11:16:51Z | [
"python",
"numpy",
"pandas",
"statistics"
] |
Building MultiIndex in Pandas DataFrame | 38,656,625 | <p>I am reading in two files into Python, both with the form:</p>
<pre><code> 0.00902317 0.0270695 0.0451159 0.0631622 \
0000010 6.962980e-05 7.063750e-05 7.165970e-05 7.269680e-05
1000010 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
2000010 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
</code></pre>
<p>The first column is an ID number, and the remaining columns are different ages. The two files contain different ages, and share only a few common ID#s.</p>
<p>Ultimately I am combining the two dataframes to find the common ID#s. But I want the resulting dataframe to look like this:</p>
<pre><code> File 1 File 2
0.00902317 0.0270695 0.0675493 0.1091622 \
0000010 6.962980e-05 7.063750e-05 0.000000e+00 0.000000e+00
1000010 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
2000010 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
</code></pre>
<p>Is there a way to make a dataframe that looks like this, multiindexing columns?</p>
<p>Apologies if this is a simple question, I am new to working with dataframes.</p>
| 2 | 2016-07-29T10:33:02Z | 38,657,256 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p>
<pre><code>print (pd.concat([df1, df2], axis=1, keys=['File 1','File 2']))
File 1 File 2
0.00902317 0.0270695 0.0451159 0.0631622 0.0675493 0.1091622
0000010 0.00007 0.000071 0.000072 0.000073 0.0 0.0
1000010 0.00000 0.000000 0.000000 0.000000 0.0 0.0
2000010 0.00000 0.000000 0.000000 0.000000 0.0 0.0
</code></pre>
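<p>A small self-contained sketch of the same call (the IDs and age columns below are placeholders, not the question's real data):</p>

```python
import pandas as pd

# Two hypothetical "files" indexed by ID, each with its own age columns.
df1 = pd.DataFrame({0.009: [1.0, 0.0], 0.027: [2.0, 0.0]}, index=[10, 20])
df2 = pd.DataFrame({0.068: [0.0, 3.0], 0.109: [0.0, 4.0]}, index=[10, 20])

combined = pd.concat([df1, df2], axis=1, keys=['File 1', 'File 2'])
print(combined)

# The columns now form a two-level MultiIndex: (file label, age).
print(combined[('File 2', 0.109)])
```

<p>Selecting <code>combined['File 1']</code> then returns just that file's columns.</p>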
| 2 | 2016-07-29T11:05:07Z | [
"python",
"pandas",
"indexing",
"dataframe",
"multi-index"
] |
Alias in list comprehension and iterators | 38,656,729 | <p>Suppose I have something like:</p>
<pre><code>for e in some_list:
    fct(e.my_first_property, e.my_second_property)
fct2(e.my_first_property)
</code></pre>
<p>That's a bit repetitive to write, so I could use</p>
<pre><code>for e in some_list:
p1 = e.my_first_property
p2 = e.my_second_property
fct(p1, p2)
fct2(p1)
</code></pre>
<p>Still a bit lengthy. So I wonder whether there is some syntax that I could define p1 and p2 right in the <code>for ... in ...</code> statement, maybe like:</p>
<pre><code>for e in some_list with p1 as e.my_first_property, p2 as e.my_second_property:
fct(p1, p2)
fct2(p1)
</code></pre>
<p>That would be particularly helpful inside list comprehensions, where I can not use the intermediate variables.</p>
<p>I never saw such a syntax, so I guess it does not exist. But you never know...</p>
| 0 | 2016-07-29T10:38:05Z | 38,656,774 | <p>You can use <a href="https://docs.python.org/2/library/operator.html#operator.attrgetter" rel="nofollow"><code>operator.attrgetter</code></a> with <code>map</code>:</p>
<pre><code>from operator import attrgetter
for p1, p2 in map(attrgetter('my_first_property', 'my_second_property'), some_list):
fct(p1, p2)
fct2(p1)
</code></pre>
<p>In Python 2.x, <code>map</code> returns a list (as a opposed to an iterator in Python 3.x) which may not be needed if the size of your original list is substantial. Therefore, it might be more preferable to do this with a <em>generator expression</em> in place of <code>map</code>:</p>
<pre><code>for p1, p2 in ((e.my_first_property, e.my_second_property) for e in some_list):
fct(p1, p2)
fct2(p1)
</code></pre>
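<p>A runnable sketch with a throwaway class standing in for the real elements (the class and values are placeholders):</p>

```python
from operator import attrgetter

class Item:
    """Hypothetical stand-in for the objects in some_list."""
    def __init__(self, a, b):
        self.my_first_property = a
        self.my_second_property = b

some_list = [Item(1, 'x'), Item(2, 'y')]

# attrgetter takes the attribute *names* as strings; with two names it
# returns a (value1, value2) tuple for each object.
pairs = list(map(attrgetter('my_first_property', 'my_second_property'), some_list))
for p1, p2 in pairs:
    print(p1, p2)
```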
| 2 | 2016-07-29T10:40:38Z | [
"python"
] |
Formatting calculated fields in Django to have spaces as decimal thousand separator | 38,656,910 | <p>I've come across a strange behaviour of Django formatting. Best described by an example:</p>
<p>Model:</p>
<pre><code>class MyModel(models.Model):
value= models.DecimalField(max_digits=8, decimal_places=2)
</code></pre>
<p>my_template.html:</p>
<pre><code>{% load humanize %}
{{calculated_sum|floatformat|intcomma}} # improperly formatted (commas instead of spaces)
{{some_qs.0.value|floatformat|intcomma}} #properly formatted (spaces instead of
#commas as thousand separator
</code></pre>
<p>View that uses the template to generate an HTML content for an email:</p>
<pre><code>some_qs = MyModel.objects.all()
calculated_sum= legal_entity_own_instance.get_latest_orders_sum()
context = Context({'calculated_sum':calculated_sum, 'some_qs':some_qs})
html_content = render_to_string('my_template.html', context)
</code></pre>
<p>For some reason, <code>some_qs.0.value</code> is formatted as expected (with spaces instead of commas as a thousands separator). That is, 12345 renders as 12 345. But the strange think is that <code>calculated_sum</code> is formatted with commas (12345 is formatted as 12,345).</p>
<p>Definition of get_latest_orders_sum:</p>
<pre><code>def get_latest_orders_sum(self):
qs = MyModel.objects.filter(...).aggregate(Sum('value'))
order_sum = qs['value__sum']
return order_sum
</code></pre>
<p>Does anyone know what could be the reason of wrong formatting ?</p>
| 1 | 2016-07-29T10:47:45Z | 38,661,017 | <p>did you try changing <code>return order_sum</code> to <code>return int(order_sum)</code> or doing the logic to add a space in your get_latest_orders_sum func. ie </p>
<pre><code>return str(order_sum).replace(",", " ")
</code></pre>
| 0 | 2016-07-29T14:12:57Z | [
"python",
"django"
] |
In Pandas, when selecting with "where", delete columns where condition is not met rather than filling with NaN | 38,656,930 | <p>In Numpy, using <code>where</code> will give you a subset of the original array. For example,</p>
<pre><code>import numpy as np
np.where(np.arange(5)>2)[0]
</code></pre>
<p>will return <code>array([3, 4])</code>. I'd like to do something similar with Pandas. However, if I define a similar DataFrame like so:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=range(5)).T
</code></pre>
<p>resulting in a <code>df</code> which looks like</p>
<pre><code> 0 1 2 3 4
0 0 1 2 3 4
</code></pre>
<p>and apply</p>
<pre><code>df.where(lambda x: x>2)
</code></pre>
<p>I get</p>
<pre><code> 0 1 2 3 4
0 NaN NaN NaN 3 4
</code></pre>
<p>However, I'd like to get this:</p>
<pre><code> 3 4
0 3 4
</code></pre>
<p>with the columns where the condition is not met omitted. How can this be done?</p>
| 2 | 2016-07-29T10:48:25Z | 38,657,145 | <p>IIUC you can call <code>dropna</code> passing <code>axis=1</code> to drop columns containing any <code>NaN</code> values:</p>
<pre><code>In [272]:
df[df > 2].dropna(axis=1)
Out[272]:
3 4
0 3 4
</code></pre>
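<p>As a complete script, using the one-row frame from the question:</p>

```python
import pandas as pd

df = pd.DataFrame(data=range(5)).T  # one row: 0 1 2 3 4

# Mask values <= 2 to NaN, then drop any column containing a NaN.
filtered = df[df > 2].dropna(axis=1)
print(filtered)
```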
| 3 | 2016-07-29T10:59:26Z | [
"python",
"pandas"
] |
In Pandas, when selecting with "where", delete columns where condition is not met rather than filling with NaN | 38,656,930 | <p>In Numpy, using <code>where</code> will give you a subset of the original array. For example,</p>
<pre><code>import numpy as np
np.where(np.arange(5)>2)[0]
</code></pre>
<p>will return <code>array([3, 4])</code>. I'd like to do something similar with Pandas. However, if I define a similar DataFrame like so:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=range(5)).T
</code></pre>
<p>resulting in a <code>df</code> which looks like</p>
<pre><code> 0 1 2 3 4
0 0 1 2 3 4
</code></pre>
<p>and apply</p>
<pre><code>df.where(lambda x: x>2)
</code></pre>
<p>I get</p>
<pre><code> 0 1 2 3 4
0 NaN NaN NaN 3 4
</code></pre>
<p>However, I'd like to get this:</p>
<pre><code> 3 4
0 3 4
</code></pre>
<p>with the columns where the condition is not met omitted. How can this be done?</p>
| 2 | 2016-07-29T10:48:25Z | 38,657,605 | <p>You can drop the columns whose names are smaller than 3 (<em>provided the column names are integers</em>), which in this frame are exactly the columns whose values are not higher than 2.</p>
<pre><code>print(df.drop(df.columns[df.columns < 3], axis=1))
</code></pre>
<p>output:</p>
<pre><code> 3 4
0 3 4
</code></pre>
| 0 | 2016-07-29T11:23:43Z | [
"python",
"pandas"
] |
Write non-Unicode using csv module | 38,656,958 | <p>While migrating to Python 3, I noticed some files we generate using the built-in <code>csv</code> now have <code>b'</code> prefix around each strings...</p>
<p>Here's the code, that should generate a .csv for a list of <code>dogs</code>, according to some parameters defined by <code>export_fields</code> (thus always returns unicode data):</p>
<pre><code>file_content = StringIO()
csv_writer = csv.writer(
file_content, delimiter='\t', quotechar='"', quoting=csv.QUOTE_MINIMAL
)
csv_writer.writerow([
header_name.encode('cp1252') for _v, header_name in export_fields
])
# Write content
for dog in dogs:
csv_writer.writerow([
get_value(dog).encode('cp1252') for get_value, _header in export_fields
])
</code></pre>
<p>The problem is that once I return <code>file_content.getvalue()</code>, I get:</p>
<pre><code>b'Does he bark?' b'Full Name' b'Gender'
b'Sometimes, yes' b'Woofy the dog' b'Male'
</code></pre>
<p>Instead of <sup><sub>(indentation has been modified to be readable on SO)</sub></sup>:</p>
<pre><code>'Does he bark?' 'Full Name' 'Gender'
'Sometimes, yes' 'Woofy the dog' 'Male'
</code></pre>
<p>I did not find any <code>encoding</code> parameter in the <code>csv</code> module. I would like the whole file to be encoded in cp1252, so I don't really care whether the encoding is done through the iteration of the lines or on the constructed file itself. </p>
<p>So, does anyone know how to generate a proper string, containing only cp1252 encoded strings?</p>
| 0 | 2016-07-29T10:50:01Z | 38,657,058 | <p>The <code>csv</code> module deals with <em>text</em>, and converts anything that is not a string to a string using <code>str()</code>. </p>
<p>Don't pass in <code>bytes</code> objects. Pass in <code>str</code> objects or types that cleanly convert to strings with <code>str()</code>. That means you <em>should not encode strings</em>.</p>
<p>If you need <code>cp1252</code> output, encode the <code>StringIO</code> value:</p>
<pre><code>file_content.getvalue().encode('cp1252')
</code></pre>
<p>as <code>StringIO</code> objects also deal in text only.</p>
<p>Better yet, use a <a href="https://docs.python.org/3/library/io.html#io.BytesIO" rel="nofollow"><code>BytesIO</code> object</a> with a <a href="https://docs.python.org/3/library/io.html#io.TextIOWrapper" rel="nofollow"><code>TextIOWrapper()</code></a> to do the encoding for you as the <code>csv</code> module writes to the file object:</p>
<pre><code>from io import BytesIO, TextIOWrapper
file_content = BytesIO()
wrapper = TextIOWrapper(file_content, encoding='cp1252', line_buffering=True)
csv_writer = csv.writer(
wrapper, delimiter='\t', quotechar='"', quoting=csv.QUOTE_MINIMAL)
# write rows
result = file_content.getvalue()
</code></pre>
<p>I've enabled line-buffering on the wrapper so that it'll auto-flush to the <code>BytesIO</code> instance every time a row is written.</p>
<p>Now <code>file_content.getvalue()</code> produces a bytestring:</p>
<pre><code>>>> from io import BytesIO, TextIOWrapper
>>> import csv
>>> file_content = BytesIO()
>>> wrapper = TextIOWrapper(file_content, encoding='cp1252', line_buffering=True)
>>> csv_writer = csv.writer(wrapper, delimiter='\t', quotechar='"', quoting=csv.QUOTE_MINIMAL)
>>> csv_writer.writerow(['Does he bark?', 'Full Name', 'Gender'])
36
>>> csv_writer.writerow(['Sometimes, yes', 'Woofy the dog', 'Male'])
35
>>> file_content.getvalue()
b'Does he bark?\tFull Name\tGender\r\nSometimes, yes\tWoofy the dog\tMale\r\n'
</code></pre>
| 1 | 2016-07-29T10:55:29Z | [
"python",
"python-3.x",
"csv",
"encoding",
"stringio"
] |
Python: create a dictionary from data in excel | 38,656,975 | <p>I have data </p>
<pre><code> sign number result
Qjobstatus  1  Работаю полный рабочий день
Qjobstatus  2  Работаю неполный рабочий день
Qjobstatus  3  Не работаю
Qcountry    1  Россия
Qcountry    2  Украина
Qcountry    3  Беларусь
Qcountry    4  Азербайджан
Qcountry    5  Армения
Qcountry    6  Грузия
Qcountry    7  Казахстан
Qcountry    8  Кыргызстан
Qcountry    9  Молдова
</code></pre>
<p>I need to create a dictionary where <code>sign == Qcountry</code>.
I want to get</p>
<pre><code>dict = {1: "Россия",
        2: "Украина",
        3: "Беларусь", ...}
</code></pre>
<p>I tried </p>
<pre><code>if df.sign.contains('Qcountry'):
dict((ind, el) for (ind, el) in (df.number, df.result))
</code></pre>
<p>but it doesn't work</p>
| 1 | 2016-07-29T10:50:49Z | 38,657,231 | <p>IIUC then you can just call <code>dict</code> on the np array:</p>
<pre><code>In [284]:
dict(df.loc[df['sign']=='Qcountry','number':].values)
Out[284]:
{1: 'Россия',
 2: 'Украина',
 3: 'Беларусь',
 4: 'Азербайджан',
 5: 'Армения',
 6: 'Грузия',
 7: 'Казахстан',
 8: 'Кыргызстан',
 9: 'Молдова'}
</code></pre>
| 2 | 2016-07-29T11:03:49Z | [
"python",
"pandas",
"dictionary"
] |
Python: create a dictionary from data in excel | 38,656,975 | <p>I have data </p>
<pre><code> sign number result
Qjobstatus  1  Работаю полный рабочий день
Qjobstatus  2  Работаю неполный рабочий день
Qjobstatus  3  Не работаю
Qcountry    1  Россия
Qcountry    2  Украина
Qcountry    3  Беларусь
Qcountry    4  Азербайджан
Qcountry    5  Армения
Qcountry    6  Грузия
Qcountry    7  Казахстан
Qcountry    8  Кыргызстан
Qcountry    9  Молдова
</code></pre>
<p>I need to create a dictionary where <code>sign == Qcountry</code>.
I want to get</p>
<pre><code>dict = {1: "Россия",
        2: "Украина",
        3: "Беларусь", ...}
</code></pre>
<p>I tried </p>
<pre><code>if df.sign.contains('Qcountry'):
dict((ind, el) for (ind, el) in (df.number, df.result))
</code></pre>
<p>but it doesn't work</p>
| 1 | 2016-07-29T10:50:49Z | 38,657,281 | <p>A bit of a roundabout method but try:</p>
<pre><code>df = df[df['sign'] == 'Qcountry']
transposed = df.T.to_dict()
result = {transposed[item]['number']: transposed[item]['result']
for item in transposed}
</code></pre>
| 1 | 2016-07-29T11:06:26Z | [
"python",
"pandas",
"dictionary"
] |
Python: create a dictionary from data in excel | 38,656,975 | <p>I have data </p>
<pre><code> sign number result
Qjobstatus  1  Работаю полный рабочий день
Qjobstatus  2  Работаю неполный рабочий день
Qjobstatus  3  Не работаю
Qcountry    1  Россия
Qcountry    2  Украина
Qcountry    3  Беларусь
Qcountry    4  Азербайджан
Qcountry    5  Армения
Qcountry    6  Грузия
Qcountry    7  Казахстан
Qcountry    8  Кыргызстан
Qcountry    9  Молдова
</code></pre>
<p>I need to create a dictionary where <code>sign == Qcountry</code>.
I want to get</p>
<pre><code>dict = {1: "Россия",
        2: "Украина",
        3: "Беларусь", ...}
</code></pre>
<p>I tried </p>
<pre><code>if df.sign.contains('Qcountry'):
dict((ind, el) for (ind, el) in (df.number, df.result))
</code></pre>
<p>but it doesn't work</p>
| 1 | 2016-07-29T10:50:49Z | 38,657,494 | <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html" rel="nofollow"><code>to_dict</code></a>:</p>
<pre><code>print (df[df['sign']=='Qcountry'].set_index('number')['result'].to_dict())
{1: 'Россия',
 2: 'Украина',
 3: 'Беларусь',
 4: 'Азербайджан',
 5: 'Армения',
 6: 'Грузия',
 7: 'Казахстан',
 8: 'Кыргызстан',
 9: 'Молдова'}
</code></pre>
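<p>The same pattern on a small stand-in frame (plain ASCII placeholder values in place of the real ones):</p>

```python
import pandas as pd

# Hypothetical stand-in for the Excel data.
df = pd.DataFrame({
    'sign':   ['Qjobstatus', 'Qcountry', 'Qcountry', 'Qcountry'],
    'number': [1, 1, 2, 3],
    'result': ['full time', 'Russia', 'Ukraine', 'Belarus'],
})

# Filter rows, make 'number' the index, take 'result', convert to a dict.
mapping = df[df['sign'] == 'Qcountry'].set_index('number')['result'].to_dict()
print(mapping)  # {1: 'Russia', 2: 'Ukraine', 3: 'Belarus'}
```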
| 1 | 2016-07-29T11:17:59Z | [
"python",
"pandas",
"dictionary"
] |
How to call a Python script with arguments from Java class | 38,657,109 | <p>I am using <em>Python 3.4</em>.</p>
<p>I have a Python script <code>myscript.py</code> :</p>
<pre><code>import sys
def returnvalue(str) :
if str == "hi" :
return "yes"
else :
return "no"
print("calling python function with parameters:")
print(sys.argv[1])
str = sys.argv[1]
res = returnvalue(str)
target = open("file.txt", 'w')
target.write(res)
target.close()
</code></pre>
<p>I need to call this python script from the java class <code>PythonJava.java</code></p>
<pre><code>public class PythonJava
{
String arg1;
public void setArg1(String arg1) {
this.arg1 = arg1;
}
public void runPython()
{ //need to call myscript.py and also pass arg1 as its arguments.
//and also myscript.py path is in C:\Demo\myscript.py
}
</code></pre>
<p>and I am calling <code>runPython()</code> from another Java class by creating an object of <code>PythonJava</code></p>
<pre><code>obj.setArg1("hi");
...
obj.runPython();
</code></pre>
<p>I have tried many ways but none of them are properly working. I used Jython and also ProcessBuilder but the script was not write into file.txt. Can you suggest a way to properly implement this?</p>
| 1 | 2016-07-29T10:57:57Z | 38,657,448 | <p>Have you looked at these? They suggest different ways of doing this:</p>
<p><a href="http://stackoverflow.com/questions/27235286/call-python-code-from-java-by-passing-parameters-and-results">Call Python code from Java by passing parameters and results</a></p>
<p><a href="http://stackoverflow.com/questions/9381906/how-to-call-a-python-method-from-a-java-class">How to call a python method from a java class?</a></p>
<p>In short one solution could be:</p>
<pre><code>public void runPython() throws java.io.IOException
{ //need to call myscript.py and also pass arg1 as its arguments.
//and also myscript.py path is in C:\Demo\myscript.py
String[] cmd = {
"python",
"C:/Demo/myscript.py",
this.arg1,
};
Runtime.getRuntime().exec(cmd);
}
</code></pre>
<p>edit: just make sure you change the variable name from str to something else, as noted by cdarke </p>
<p>Your python code (change str to something else, e.g. arg and specify a path for file):</p>
<pre><code>import sys

def returnvalue(arg):
if arg == "hi" :
return "yes"
return "no"
print("calling python function with parameters:")
print(sys.argv[1])
arg = sys.argv[1]
res = returnvalue(arg)
print(res)
with open("C:/path/to/where/you/want/file.txt", 'w') as target: # specify path or else it will be created where you run your java code
target.write(res)
</code></pre>
| 0 | 2016-07-29T11:15:46Z | [
"java",
"python",
"python-3.4"
] |
scikits learn SVM - 1-dimensional Separating Hyperplane | 38,657,138 | <p>How to plot the separating "hyperplane" for 1-dimensional data using scikit svm ?</p>
<p>I follow this guide for 2-dimensional data : <a href="http://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html</a>, but don't know how to make it works for 1-dimensional data</p>
<pre><code>pos = np.random.randn(20, 1) + 1
neg = np.random.randn(20, 1) - 1
X = np.r_[pos, neg]
Y = [0] * 20 + [1] * 20
clf = svm.SVC(kernel='linear', C=0.05)
clf.fit(X, Y)
# how to get "hyperplane" and margins values ??
</code></pre>
<p>thanks</p>
| 0 | 2016-07-29T10:59:07Z | 38,661,632 | <p>the <code>.coef_</code> member of <code>clf</code> will return the "hyperplane," which, in one dimension, is just a point. Check out <a href="http://stackoverflow.com/questions/23186804/graph-point-on-straight-line-number-line-in-python">this post</a> for info on how to plot points on a numberline.</p>
| 0 | 2016-07-29T14:44:28Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"svm"
] |
scikits learn SVM - 1-dimensional Separating Hyperplane | 38,657,138 | <p>How to plot the separating "hyperplane" for 1-dimensional data using scikit svm ?</p>
<p>I follow this guide for 2-dimensional data : <a href="http://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html</a>, but don't know how to make it works for 1-dimensional data</p>
<pre><code>pos = np.random.randn(20, 1) + 1
neg = np.random.randn(20, 1) - 1
X = np.r_[pos, neg]
Y = [0] * 20 + [1] * 20
clf = svm.SVC(kernel='linear', C=0.05)
clf.fit(X, Y)
# how to get "hyperplane" and margins values ??
</code></pre>
<p>thanks</p>
| 0 | 2016-07-29T10:59:07Z | 38,666,962 | <p>The separating hyperplane for two-dimensional data is a line, whereas for one-dimensional data the hyperplane boils down to a point. The easiest way to plot the separating hyperplane for one-dimensional data is a bit of a hack: the <strong>data are made two-dimensional</strong> by adding a second feature which takes the value 0 for all the samples. By doing so, the second component of the weight vector is zero, i.e. <strong>w</strong> = [<em>w<sub>0</sub></em>, 0] (see the appendix at the end of this post). As <em>w<sub>1</sub></em> = 0 and <em>w<sub>1</sub></em> is in the denominator of the expression that defines the slope and the y-intercept term of the separating line (see appendix), both coefficients are ∞. In this case it is convenient to solve the equation of the separating hyperplane for <em>x</em>, which results in <em>x</em> = <em>x<sub>0</sub></em> = -b/<em>w<sub>0</sub></em>. The margin turns out to be ±<em>w<sub>0</sub></em> (see appendix for details).</p>
<p>The following script implements this approach:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
np.random.seed(0)
pos = np.hstack((np.random.randn(20, 1) + 1, np.zeros((20, 1))))
neg = np.hstack((np.random.randn(20, 1) - 1, np.zeros((20, 1))))
X = np.r_[pos, neg]
Y = [0] * 20 + [1] * 20
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)
w = clf.coef_[0]
x_0 = -clf.intercept_[0]/w[0]
margin = w[0]
plt.figure()
x_min, x_max = np.floor(X.min()), np.ceil(X.max())
y_min, y_max = -3, 3
yy = np.linspace(y_min, y_max)
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.predict(np.c_[XX.ravel(), np.zeros(XX.size)]).reshape(XX.shape)
plt.pcolormesh(XX, YY, Z, cmap=plt.cm.Paired)
plt.plot(x_0*np.ones(shape=yy.shape), yy, 'k-')
plt.plot(x_0*np.ones(shape=yy.shape) - margin, yy, 'k--')
plt.plot(x_0*np.ones(shape=yy.shape) + margin, yy, 'k--')
plt.scatter(pos, np.zeros(shape=pos.shape), s=80, marker='o', facecolors='none')
plt.scatter(neg, np.zeros(shape=neg.shape), s=80, marker='^', facecolors='none')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.show()
</code></pre>
<p>Although the code above is self-explanatory, here are some tips. <code>X</code> dimensions are 40 rows by 2 columns: the values in the first column are random numbers while all the elements of the second column are zeros. In the code, the weight vector <strong>w</strong> = [<em>w<sub>0</sub></em>, 0] and the intercept <em>b</em> are <code>clf.coef_[0]</code> and <code>clf.intercept_[0]</code>, respectively, where <code>clf</code> is the object returned by <code>sklearn.svm.SVC</code>.</p>
<p>And this is the plot you get when the script is run:</p>
<p><a href="http://i.stack.imgur.com/ZIKBE.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZIKBE.png" alt="Plot of separating hyperplane for one-dimensional data"></a></p>
<p>For the sake of clarity I'd suggest to tweak the code above by adding/subtracting a small constant to the second feature, for example:</p>
<pre><code>plt.scatter(pos, .3 + np.zeros(shape=pos.shape), ...)
plt.scatter(neg, -.3 + np.zeros(shape=neg.shape), ...)
</code></pre>
<p>By doing so the visualization is significantly improved since the different classes are shown without overlap.</p>
<hr>
<h3>Appendix</h3>
<p>The separating hyperplane is usually expressed as <strong>w</strong><sup>t</sup><strong>x</strong> + <em>b</em> = 0, where <strong>x</strong> is an <em>n</em>-dimensional vector, <strong>w</strong> is the weight vector and <em>b</em> is the bias or intercept. For <em>n</em> = 2 we have <em>w<sub>0</sub>.x</em> + <em>w<sub>1</sub>.y</em> + <em>b</em> = 0. After some algebra we obtain <em>y</em> = -(<em>w<sub>0</sub></em>/<em>w<sub>1</sub></em>).<em>x</em> + (-<em>b</em>/<em>w<sub>1</sub></em>). It clearly emerges from this expression that the discriminant hyperplane in a 2D feature space is a line of equation <em>y</em> = <em>a.x</em> + <em>y<sub>0</sub></em>, where the slope is given by <em>a</em> = -<em>w<sub>0</sub></em>/<em>w<sub>1</sub></em> and the y-intercept term is <em>y<sub>0</sub></em> = -<em>b</em>/<em>w<sub>1</sub></em>. In SVM, the margin of a separating hyperplane is ±‖<strong>w</strong>‖, which for 2D reduces to sqrt(<em>w<sub>0</sub><sup>2</sup></em> + <em>w<sub>1</sub><sup>2</sup></em>).</p>
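<p>A quick numeric check of these formulas (the weight vector and intercept below are arbitrary placeholders, not values from a fitted model):</p>

```python
import math

w0, w1, b = 2.0, 4.0, -1.0  # hypothetical w and b, with w1 != 0

slope = -w0 / w1        # a  = -w0/w1
y0 = -b / w1            # y0 = -b/w1
norm_w = math.sqrt(w0 ** 2 + w1 ** 2)

# Any point on the line y = a*x + y0 satisfies w0*x + w1*y + b = 0.
x = 1.0
y = slope * x + y0
print(slope, y0, norm_w)    # -0.5 0.25 4.47...
print(w0 * x + w1 * y + b)  # 0.0
```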
| 0 | 2016-07-29T20:26:09Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"svm"
] |
Openstack IP address Filter | 38,657,243 | <p>The command : <code>neutron.list_ports()["ports"]</code> (Python) gives me all the IP addresses that I have on my machine.</p>
<p>For example:</p>
<pre><code>[{u'status': u'ACTIVE',
u'name': u'',
u'allowed_address_pairs': [],
u'admin_state_up': True,
u'network_id': u'7da####81c2##79e2',
u'dns_name': u'',
u'extra_dhcp_opts': [],
u'dns_assignment': [{u'hostname': u'host-193-164-#5-##',
u'ip_address': u'193.164.#5.##',
u'fqdn': u'host-193-164-#5-##.openstacklocal.'}],
u'binding:vnic_type': u'normal',
u'device_owner': u'compute:None',
u'tenant_id': u'155##748a###3895###8b890',
u'mac_address': u'fa:##:3e:##:##:cr',
u'port_security_enabled': True,
u'fixed_ips': [{u'subnet_id': u'66####e6-###-####-a7#f-4017###6d762',
u'ip_address': u'193.164.#5.##'}],
u'id': u'170##4c7d-571f-###-a089-5c4###97d29',
u'security_groups': [u'ba6d##2-bd#58-40#c2-a5c#2-9###92a4##e'],
u'device_id': u'da##5d-###-4d6f-b##b-c3###8435'},
{u'status': u'DOWN',
u'name': u'',
u'allowed_address_pairs': [],
u'admin_state_up': True,
u'network_id': u'##',
u'dns_name': u'',
u'extra_dhcp_opts': [],
u'dns_assignment': [{u'hostname': u'host-##',
u'ip_address': u'##',
u'fqdn': u'host-##.openstacklocal.'}],
u'binding:vnic_type': u'normal',
u'device_owner': u'',
u'tenant_id': u'##',
u'mac_address': u'f##9:f7',
u'port_security_enabled': True,
u'fixed_ips': [{u'subnet_id': u'##62',
u'ip_address': u'####'}],
u'id': u'34f##b7c-######9138-##39##30e9',
u'security_groups': [u'ba##-bd58-40##5c2-9##92a4##'],
u'device_id': u''}]
</code></pre>
<p>I put the "#" to hide the IPs...</p>
<p>I want to be able to distinguish between three types of IPs:</p>
<ol>
<li>A resevered and allocated IP</li>
<li>A reservered and NonAllocated IP</li>
<li>A NonReserved but Allocated IP</li>
</ol>
| 0 | 2016-07-29T11:04:27Z | 38,657,877 | <p>The result is a list of dicts (list -> index -> dict key), so you index into the list first and then look up keys in the dict:</p>
<pre><code>a = [{'name': 'one'}, {'name': 'two'}]
a[0]['name']
</code></pre>
<p>To access the elements of each port:</p>
<pre><code>network = neutron.list_ports()["ports"]
for elem in network:
    print elem['status']
    print elem['network_id']
    print elem['fixed_ips'][0]['ip_address']
    print elem['dns_assignment'][0]['ip_address']
</code></pre>
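<p>One way to start separating the ports is to key off <code>status</code> and the nested <code>fixed_ips</code> list. A sketch only, with made-up entries shaped like the question's <code>neutron.list_ports()["ports"]</code> output:</p>

```python
ports = [  # hypothetical entries mimicking neutron's structure
    {'status': 'ACTIVE', 'fixed_ips': [{'ip_address': '193.164.0.1'}]},
    {'status': 'DOWN',   'fixed_ips': [{'ip_address': '10.0.0.2'}]},
]

# IPs allocated to an active device vs. reserved but not in use.
active_ips = [p['fixed_ips'][0]['ip_address']
              for p in ports if p['status'] == 'ACTIVE']
inactive_ips = [p['fixed_ips'][0]['ip_address']
                for p in ports if p['status'] != 'ACTIVE']

print(active_ips)    # ['193.164.0.1']
print(inactive_ips)  # ['10.0.0.2']
```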
| 0 | 2016-07-29T11:38:02Z | [
"python",
"django",
"ip-address",
"openstack",
"keystone"
] |
Skimage: how to combine RGB channels? | 38,657,282 | <p>I have three grey channels of one image; each channel is a 2-dim array with values from 0 to 255. I need to combine these three channels into one RGB image and get something like this: [[234, 45, 67], ...], ... [[34, 7, 162], ...]! I use skimage and numpy to solve this problem, but I can't find an appropriate function in the documentation.</p>
| 0 | 2016-07-29T11:06:28Z | 38,657,655 | <p>Use numpy's dstack (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html</a>)</p>
<pre><code>import numpy as np
r = np.random.rand(10)
g = np.random.rand(10)
b = np.random.rand(10)
zipped = np.dstack((r, g, b))
</code></pre>
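<p>For the question's case (three 2-D grey channels), the same call stacks them along a third axis and produces an H x W x 3 image. A sketch with dummy channels (sizes and values are placeholders):</p>

```python
import numpy as np

h, w = 4, 5  # hypothetical image size
r = np.zeros((h, w), dtype=np.uint8)
g = np.full((h, w), 128, dtype=np.uint8)
b = np.full((h, w), 255, dtype=np.uint8)

rgb = np.dstack((r, g, b))  # stack depth-wise: shape (h, w, 3)
print(rgb.shape)  # (4, 5, 3)
print(rgb[0, 0])  # [  0 128 255]
```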
| 1 | 2016-07-29T11:26:10Z | [
"python",
"skimage"
] |
Return value from python script via CGI | 38,657,311 | <p>I work on a raspberry Pi. I did a small html page with 2 buttons calling executables via CGI and doing things. I would like to add a temperature display on the same page.</p>
<p>I have a small python script that reads the temperature of a sensor and I can successfully display it on console or via direct CGI (typing the url, the only thing displayed is the temperature). I would like to see the temperature directly on my html page, and be able to refresh the value by clicking a button.</p>
<p>I can't find any answer that might help me return a value from the python script and display it in a textbox or something. Do you have any leads for me to follow?</p>
| 0 | 2016-07-29T11:08:26Z | 38,657,536 | <p>You can write out a complete html page with CGI</p>
<p>A simple CGI script would be</p>
<pre><code>temp=100
print("Content-Type: text/html") # HTML is following
print() # blank line, end of headers
print("<!doctype html>")
print("<html><head><title>Example</title></head><body>")
print("<p>Temperature is:",temp)
print("</p>")
print('<a href="#">Reload</a>')
print("</body></html>")
</code></pre>
<p>you can also use javascript in a page to (re)load the value with <a href="https://en.wikipedia.org/wiki/Ajax_(programming)" rel="nofollow">Ajax</a> - there are examples on the page (only replace the php-script with your python-cgi)</p>
| 0 | 2016-07-29T11:19:56Z | [
"python",
"cgi",
"raspberry-pi2"
] |
How can I get the type of a class without instantiating it in Python? | 38,657,337 | <p>Given an object it is easy to get the object type in python:</p>
<pre><code>obj = SomeClass()
t = type(obj)
</code></pre>
<p>How can I do this without first creating an instance of the class?</p>
<p>Background: Once you have the type you easily can create instances of that type. Using this mechanism I want to pass a type to an object which will later create instances of that type. It would be very reasonable not to have to create an object first which will immediately be thrown away again.</p>
| 0 | 2016-07-29T11:09:16Z | 38,657,719 | <p>See this example code:</p>
<pre><code>class MyClass(object):
pass
if __name__ == '__main__':
o1 = MyClass()
c = MyClass
o2 = c()
print(type(o1), type(o2), MyClass)
</code></pre>
<p>Defining a class binds it to its name (here: <code>MyClass</code>), which is nothing else but a reference to that definition. In this example, issuing <code>c = MyClass</code> just mirrors the class reference to another variable, the contents of the variables <code>c</code> and <code>MyClass</code> now are exactly the same. Thus, you can instantiate objects of that class by calling either of them (i.e. <code>MyClass()</code> or <code>c()</code>), resulting in the same effect.</p>
<p>Furthermore, testing for the type of an instantiated object results in the exact same class reference. You can even go one step further and do:</p>
<pre><code>o3 = type(o1)()
</code></pre>
<p>This creates a new instance of the class that <code>o1</code> is an instance of.</p>
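<p>Putting the pieces together in one runnable snippet:</p>

```python
class MyClass(object):
    pass

c = MyClass          # a reference to the class itself; nothing instantiated yet
o1 = c()             # instantiate through the stored reference
o2 = type(o1)()      # instantiate from the type of an existing object

print(type(o1) is MyClass)  # True
print(type(o2) is MyClass)  # True
```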
| 1 | 2016-07-29T11:28:58Z | [
"python",
"python-3.x",
"oop"
] |
Pandas: groupby by date and transform nunique returning too many entries | 38,657,341 | <p>I am trying to do a simple group-by in Pandas and it is not working as it should:</p>
<pre><code>url='https://raw.githubusercontent.com/108michael/ms_thesis/master/raw_bills'
bills=pd.read_csv(url)
bills.date.nunique()
11
bills.dtypes
date float64
bills object
id.thomas int64
dtype: object
bills[['date', 'bills']].groupby(['date']).bills.transform('nunique')
0 3627
1 7454
2 7454
3 7454
4 3627
5 7454
6 7454
7 3627
8 7454
9 7454
10 3627
11 7454
12 7454
13 7454
14 7454
15 7454
16 3627
17 3627
18 7454
</code></pre>
<p>I've done this sort of group-by before, and it usually works fine.</p>
<p>Any suggestions on this?</p>
| 0 | 2016-07-29T11:09:29Z | 38,657,514 | <p>I'm not sure what you're asking, but don't you want to use:</p>
<pre><code>bills[['date', 'bills']].groupby('date').bills.nunique()
date
2005.0 6820
2006.0 3738
2007.0 7454
2008.0 3627
2009.0 7324
2010.0 3297
2011.0 5787
2012.0 4647
2013.0 5694
2014.0 3211
2015.0 5
Name: bills, dtype: int64
</code></pre>
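<p>The difference the question ran into: <code>transform('nunique')</code> broadcasts the per-group result back to every row (hence 19 entries), while plain <code>.nunique()</code> returns one value per group. A small sketch with made-up data:</p>

```python
import pandas as pd

bills = pd.DataFrame({
    'date':  [2005, 2005, 2005, 2006, 2006],
    'bills': ['a', 'a', 'b', 'c', 'c'],
})

per_group = bills.groupby('date').bills.nunique()           # one value per date
per_row = bills.groupby('date').bills.transform('nunique')  # one value per row

print(per_group.to_dict())  # {2005: 2, 2006: 1}
print(per_row.tolist())     # [2, 2, 2, 1, 1]
```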
| 1 | 2016-07-29T11:18:45Z | [
"python",
"pandas",
"group-by"
] |
C# returning tuple - like Python | 38,657,476 | <p>I'm new to OOP and C#. I have a strong Python background and I was wondering is there an equivalent in C# for this</p>
<pre><code>#Python
def outputSmth():
num1 = 3
num2 = 3
str1 = "Hi"
return (num1, num2, str1) #Returning a tuple that can later be accessed
# by index
</code></pre>
<p>If there is no direct equivalent, which I doubt there is, what is the most proper way to do it?</p>
<p>Here's my function :</p>
<pre><code>//C#
static tuple PrintUserCreationMnu()
{
Console.Clear();
Console.WriteLine("---- Create the user ----\n");
Console.Write("User ID : "); string usrID = Console.ReadLine();
Console.Write("First Name : "); string f_Name = Console.ReadLine();
Console.Write("Last Name : "); string l_Name = Console.ReadLine();
Console.Write("Expected leaving date : "); string l_Date = Console.ReadLine();
return ; //Here I'd like to return all my inputted values
}
</code></pre>
<p>Thank you!</p>
| 0 | 2016-07-29T11:17:15Z | 38,657,527 | <p>You could simply return a <code>string[]</code> or <code>List<string></code>:</p>
<pre><code>static string[] PrintUserCreationMnu()
{
// ...
return new[]{ usrID, f_Name, l_Name, l_Date};
}
</code></pre>
<p>But in general it would be better to create a custom type <code>User</code> with meaningful properties and return that. Then you can access them via property instead of by index:</p>
<pre><code>static User PrintUserCreationMnu()
{
// ...
return new User{ UserId = usrID, FirstName = f_Name, LastName = l_Name, Date = l_Date};
}
</code></pre>
<p>Even better would be to use the correct types, <code>int</code> for an <code>Id</code> and <code>DateTime</code> for the date. You can use the <code>...TryParse</code> methods(f.e. <code>int.TryParse</code>) to ensure that the format is valid.</p>
<hr>
<p>For the sake of completeness, yes, .NET has also <a href="https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx" rel="nofollow">tuples</a>. But i wouldn't use them often because it's not clear what <code>Item4</code> is. No one knows that before he looks at the source code. So it's not a good type to return from a method(even less if it's <code>public</code>). However, here it is:</p>
<pre><code>static Tuple<string, string, string, string> PrintUserCreationMnu()
{
// ...
return Tuple.Create(usrID, f_Name, l_Name, l_Date);
}
</code></pre>
| 1 | 2016-07-29T11:19:15Z | [
"c#",
"python",
"return",
"tuples"
] |
C# returning tuple - like Python | 38,657,476 | <p>I'm new to OOP and C#. I have a strong Python background, and I was wondering whether there is an equivalent in C# for this:</p>
<pre><code>#Python
def outputSmth():
    num1 = 3
    num2 = 3
    str1 = "Hi"
    return (num1, num2, str1)  # Returning a tuple that can later be accessed
                               # by index
</code></pre>
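For reference, the two ways of consuming such a Python tuple (a quick editorial sketch, not part of the original post; the snake_case name is mine):

```python
def output_smth():
    num1 = 3
    num2 = 3
    str1 = "Hi"
    return (num1, num2, str1)  # a tuple

t = output_smth()
assert t[2] == "Hi"            # access by index

n1, n2, s1 = output_smth()     # or unpack into separate names
assert (n1, n2, s1) == (3, 3, "Hi")
```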
<p>If there is no direct equivalent, which I doubt there is, what is the proper way to do it?</p>
<p>Here's my function:</p>
<pre><code>//C#
static tuple PrintUserCreationMnu()
{
    Console.Clear();
    Console.WriteLine("---- Create the user ----\n");
    Console.Write("User ID : "); string usrID = Console.ReadLine();
    Console.Write("First Name : "); string f_Name = Console.ReadLine();
    Console.Write("Last Name : "); string l_Name = Console.ReadLine();
    Console.Write("Expected leaving date : "); string l_Date = Console.ReadLine();
    return ; //Here I'd like to return all my inputted values
}
</code></pre>
<p>Thank you!</p>
| 0 | 2016-07-29T11:17:15Z | 38,657,543 | <p>Tuples are coming in C# 7 (current is 6.0). So, unfortunately not yet. </p>
<p>I can't wait for them, especially when dealing with methods that return more than one value (without having to set up structures to deal with them). </p>
<p>For now you need to return something like a <code>List</code>, <code>Dictionary</code> or a <code>Tuple</code> type, but you have to new them up and assemble them in the method, then break them apart on receiving them from the call. Further, you are limited to a single type in most of these constructs, so you would have to box in and out of something like <code>object</code>. A bit of a pain! </p>
<p>You can find more on C# 7.0 features <a href="https://www.infoq.com/news/2015/04/CSharp-7-Tuples" rel="nofollow">here</a>. </p>
<p>I think your best option is going to be a dynamic expando object. I am afraid it's the best I can do!</p>
<p><strong>Method:</strong> </p>
<pre><code>public object GetPerson()
{
    // Do some stuff
    dynamic person = new ExpandoObject();
    person.Name = "Bob";
    person.Surname = "Smith";
    person.DoB = new DateTime(1980, 10, 25);
    return person;
}
</code></pre>
<p><strong>Call:</strong></p>
<pre><code>dynamic personResult = GetPerson();
string fullname = personResult.Name + " " + personResult.Surname;
DateTime dob = personResult.DoB;
</code></pre>
| 0 | 2016-07-29T11:20:21Z | [
"c#",
"python",
"return",
"tuples"
] |
python ravel vs. transpose when used in reshape | 38,657,496 | <p>I have a 2D array <code>v</code>, <code>v.shape=(M_1,M_2)</code>, which I want to reshape into a 3D array with <code>v.shape=(M_2,N_1,N_2)</code>, and <code>M_1=N_1*N_2</code>. </p>
<p>I came up with the following lines which produce the same result:</p>
<pre><code>np.reshape(v.T, reshape_tuple)
</code></pre>
<p>and</p>
<pre><code>np.reshape(v.ravel(order='F'), reshape_tuple)
</code></pre>
<p>for <code>reshape_tuple=(M_2,N_1,N_2)</code>.</p>
<p>Which one is computationally better and in what sense (comp time, memory, etc.) if the original <code>v</code> is a huge (possibly complex-valued) matrix?</p>
<p>My guess would be that using the transpose is better, but if <code>reshape</code> does an automatic <code>ravel</code> then maybe the ravel-option is faster (though <code>reshape</code> might be doing the <code>ravel</code> in C or Fortran and then it's not clear)? </p>
| 3 | 2016-07-29T11:18:04Z | 38,663,695 | <p>The order in which they do things - reshape, change strides, and make a copy - differs, but they end up doing the same thing.</p>
<p>I like to use <code>__array_interface__</code> to see where the data buffer is located, along with other attributes. I suppose I should add the <code>flags</code> to see the <code>order</code>. But we know that <code>transpose</code> changes the order to <code>F</code> already, right?</p>
<pre><code>In [549]: x=np.arange(6).reshape(2,3)
In [550]: x.__array_interface__
Out[550]:
{'data': (187732024, False),
'descr': [('', '<i4')],
'shape': (2, 3),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>transpose is a view, with different shape, strides and order:</p>
<pre><code>In [551]: x.T.__array_interface__
Out[551]:
{'data': (187732024, False),
'descr': [('', '<i4')],
'shape': (3, 2),
'strides': (4, 12),
'typestr': '<i4',
'version': 3}
</code></pre>
<p>ravel with different order is a copy (different data buffer pointer)</p>
<pre><code>In [552]: x.ravel(order='F').__array_interface__
Out[552]:
{'data': (182286992, False),
'descr': [('', '<i4')],
'shape': (6,),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>transpose ravel is also a copy. I think the same data pointer is just a case of memory reuse (since I'm not assigning to a variable) - but that can be checked.</p>
<pre><code>In [553]: x.T.ravel().__array_interface__
Out[553]:
{'data': (182286992, False),
'descr': [('', '<i4')],
'shape': (6,),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>add the reshape:</p>
<pre><code>In [554]: x.T.ravel().reshape(2,3).__array_interface__
Out[554]:
{'data': (182286992, False),
'descr': [('', '<i4')],
'shape': (2, 3),
'strides': None,
'typestr': '<i4',
'version': 3}
In [555]: x.ravel(order='F').reshape(2,3).__array_interface__
Out[555]:
{'data': (182286992, False),
'descr': [('', '<i4')],
'shape': (2, 3),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>I think there's an implicit 'ravel' in reshape:</p>
<pre><code>In [558]: x.T.reshape(2,3).__array_interface__
Out[558]:
{'data': (182286992, False),
'descr': [('', '<i4')],
'shape': (2, 3),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>(I should rework these examples to get rid of that memory reuse ambiguity.) In any case, reshape after transpose requires the same memory copy that a ravel with order change does. And as far as I can tell only one copy is required for either case. The other operations just involve changes to attributes like shape.</p>
<p>Maybe it's clearer if we just look at the arrays:</p>
<pre><code>In [565]: x.T
Out[565]:
array([[0, 3],
[1, 4],
[2, 5]])
</code></pre>
<p>In the <code>T</code> we can still step through the array in numeric order. But after reshape, the <code>1</code> isn't anywhere close to the <code>0</code>. Clearly there's been a copy.</p>
<pre><code>In [566]: x.T.reshape(2,3)
Out[566]:
array([[0, 3, 1],
[4, 2, 5]])
</code></pre>
<p>The order of values after the ravel looks similar, and even more obviously so after the reshape.</p>
<pre><code>In [567]: x.ravel(order='F')
Out[567]: array([0, 3, 1, 4, 2, 5])
In [568]: x.ravel(order='F').reshape(2,3)
Out[568]:
array([[0, 3, 1],
[4, 2, 5]])
</code></pre>
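To double-check the claim that the two spellings end up with the same result, here is a small self-contained sketch (the sizes are made up for illustration):

```python
import numpy as np

M2, N1, N2 = 4, 2, 3
M1 = N1 * N2
v = np.arange(M1 * M2).reshape(M1, M2)   # shape (M_1, M_2)

a = np.reshape(v.T, (M2, N1, N2))                  # transpose route
b = np.reshape(v.ravel(order='F'), (M2, N1, N2))   # Fortran-ravel route

# Both routes require the same memory copy and produce identical arrays.
assert np.array_equal(a, b)
assert a.shape == (M2, N1, N2)
```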
| 1 | 2016-07-29T16:40:57Z | [
"python",
"numpy",
"reshape"
] |
Detect whether to fetch from psycopg2 cursor or not? | 38,657,566 | <p>Let's say if I execute the following command.</p>
<pre><code>insert into hello (username) values ('me')
</code></pre>
<p>and I ran like</p>
<pre><code>cursor.fetchall()
</code></pre>
<p>I get the following error</p>
<pre><code>psycopg2.ProgrammingError: no results to fetch
</code></pre>
<p>How can I detect whether to call fetchall() or not, without checking whether the query is an "insert" or a "select"?</p>
<p>Thanks.</p>
| 0 | 2016-07-29T11:21:59Z | 38,657,798 | <p>So what you <em>appear</em> to be asking is:</p>
<p>Is there any way, given a PostgreSQL cursor, to fetch the data from that cursor if there is any?</p>
<p>My own personal preference would simply be to attempt to run the <code>fetchall()</code> and then catch the inevitable exception.</p>
<pre><code>try:
    data = cursor.fetchall()
    do_something_with(data)
except psycopg2.ProgrammingError:
    # take some other action
    pass
</code></pre>
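A related option (my note, not from the answer above): DB-API cursors expose a <code>description</code> attribute, which is <code>None</code> when the last statement produced no result set, so you can test it before fetching. The same convention is followed by psycopg2; illustrated here with the stdlib <code>sqlite3</code> module so the sketch runs without a database server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hello (username TEXT)")
cur.execute("INSERT INTO hello (username) VALUES ('me')")
assert cur.description is None        # nothing to fetch after an INSERT

cur.execute("SELECT username FROM hello")
assert cur.description is not None    # a result set is available
rows = cur.fetchall()
assert rows == [('me',)]
```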
| 1 | 2016-07-29T11:33:51Z | [
"python",
"psycopg2"
] |
Add headers from csv to dictionary keys and the columns below them as a list of values | 38,657,578 | <p>I have the following csv file:</p>
<pre><code>h1 h2 h3 h4
10 11 12 13
14 15 16 17
18 19 20 21
</code></pre>
<p>And the output that I'd like to obtain is a dictionary:</p>
<pre><code>dict = {'h1': ['10','14','18'], 'h2': ['11','15','19'],
'h3': ['12','16','20'], 'h4': ['13','17','21']}
</code></pre>
<p>I have tried the following but I didn't get exactly what I need:</p>
<pre><code>import csv
from collections import defaultdict

def get_columns_from_source_file():
    source_file_reader = csv.DictReader(open('custom_delimited_file'))
    columns_storage = defaultdict(list)
    for source_file_row in source_file_reader:
        for source_file_column, source_file_value in source_file_row.items():
            columns_storage.setdefault(source_file_column, []).append(source_file_value)
    return columns_storage

print(get_columns_from_source_file())
</code></pre>
<p>What I get is:</p>
<pre><code>defaultdict(<class 'list'>, {'h1\th2\th3\th4': ['10\t11\t12\t13', '14\t15\t16\t17', '18\t19\t20\t21']})
</code></pre>
| 0 | 2016-07-29T11:22:28Z | 38,657,775 | <p>You just have to add the <code>delimiter='\t'</code> argument and you'll get what you want:</p>
<pre><code>import csv
from collections import defaultdict

def get_columns_from_source_file():
    source_file_reader = csv.DictReader(open('test.csv'), delimiter='\t')
    columns_storage = defaultdict(list)
    for source_file_row in source_file_reader:
        for source_file_column, source_file_value in source_file_row.items():
            columns_storage.setdefault(source_file_column, []).append(source_file_value)
    return columns_storage

print(get_columns_from_source_file())
</code></pre>
<h3>Result:</h3>
<pre><code>defaultdict(<class 'list'>, {'h1': ['10', '14', '18'], 'h2': ['11', '15', '19'], 'h3': ['12', '16', '20'], 'h4': ['13', '17', '21']})
</code></pre>
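The same <code>delimiter='\t'</code> fix, runnable without a file on disk via <code>io.StringIO</code> (a quick sketch of my own, not from the answer):

```python
import csv
import io
from collections import defaultdict

data = "h1\th2\th3\th4\n10\t11\t12\t13\n14\t15\t16\t17\n18\t19\t20\t21\n"
reader = csv.DictReader(io.StringIO(data), delimiter='\t')

columns_storage = defaultdict(list)
for row in reader:
    for column, value in row.items():
        columns_storage[column].append(value)  # defaultdict makes setdefault unnecessary

assert columns_storage['h1'] == ['10', '14', '18']
assert columns_storage['h4'] == ['13', '17', '21']
```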
| 3 | 2016-07-29T11:32:26Z | [
"python",
"python-3.x",
"csv"
] |
Generating models for Flask-AppBuilder using flask-sqlacodegen | 38,657,634 | <p>I'm a beginner in the Python and Flask ecosystems, trying to create a small proof-of-concept Web application for a research project. I'm using Debian Linux 7.9, PostgreSQL 9.5, SQLAlchemy (latest) and Flask-AppBuilder (latest). Since creating models manually is tedious and error-prone, I searched the mighty Internet and discovered the <code>flask-sqlacodegen</code> project (note that this is a fork of <code>sqlacodegen</code> with improved features for Flask users). I installed <code>flask-sqlacodegen</code> from GitHub (cloned the repo and then ran <code>python setup.py install</code>). However, when trying to use it to generate models, it produces an error, as follows:</p>
<pre><code>> sqlacodegen postgresql+psycopg2://USER:PASS@HOST/DBNAME --flask
Traceback (most recent call last):
File "/usr/local/bin/sqlacodegen", line 9, in <module>
load_entry_point('sqlacodegen==1.1.5.pre2', 'console_scripts', 'sqlacodegen')()
File "/usr/local/lib/python2.7/dist-packages/sqlacodegen-1.1.5.pre2-py2.7.egg/sqlacodegen/main.py", line 57, in main
args.flask, fkcols)
File "/usr/local/lib/python2.7/dist-packages/sqlacodegen-1.1.5.pre2-py2.7.egg/sqlacodegen/codegen.py", line 597, in __init__
model = ModelClass(table, links[table.name], inflect_engine, not nojoined)
File "/usr/local/lib/python2.7/dist-packages/sqlacodegen-1.1.5.pre2-py2.7.egg/sqlacodegen/codegen.py", line 319, in __init__
relationship_ = ManyToOneRelationship(self.name, target_cls, constraint, inflect_engine)
File "/usr/local/lib/python2.7/dist-packages/sqlacodegen-1.1.5.pre2-py2.7.egg/sqlacodegen/codegen.py", line 455, in __init__
colname = constraint.columns[0]
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/_collections.py", line 194, in __getitem__
return self._data[key]
KeyError: 0
</code></pre>
<p>What is going on? Any help will be much appreciated.</p>
| 1 | 2016-07-29T11:25:16Z | 38,680,067 | <p>Upon some Internet searching, I ran across an issue on GitHub which described exactly the same problem. However, the most recent recommendation at the time produced another error instead of the original one. In the <a href="https://github.com/ksindi/flask-sqlacodegen/issues/8" rel="nofollow">discussion</a> with the author of <code>flask-sqlacodegen</code>, it emerged that there exists a pull request (PR), kindly provided by a project contributor, that should fix the problem. After updating my local repository, followed by rebuilding and reinstalling the software, I was able to successfully generate models for my database. The whole process consists of the following steps.</p>
<ol>
<li>Change to directory with a local repo of <code>flask-sqlcodegen</code>.</li>
<li>If you made any changes, like I did, stash them: <code>git stash</code>.</li>
<li>Update repo: <code>git pull origin master</code> (now includes that PR).</li>
<li>Rebuild/install software: <code>python setup.py install</code>.</li>
<li>If you need your prior changes, restore them: <code>git stash pop</code>. Otherwise, discard them: <code>git reset --hard</code>.</li>
<li><p>Change to your Flask application directory and auto-generate the models, as follows.</p>
<p><code>sqlacodegen --flask --outfile models.py postgresql+psycopg2://USER:PASS@HOST/DBNAME</code></p></li>
</ol>
<p><strong>Acknowledgements:</strong> A big thank you to Kamil Sindi (the author of <code>flask-sqlacodegen</code>) for the nice software and rapid & helpful feedback, as well as to Alisdair Venn for that valuable pull request.</p>
| 1 | 2016-07-31T01:53:47Z | [
"python",
"postgresql",
"flask",
"flask-sqlalchemy"
] |
Adding pandas dataframe iteratively in a memory efficient way | 38,657,712 | <p>I am trying to iteratively add pandas dataframes that I read from a set of CSV files, and after the 16th file or so I get a memory error. Each new file is a dataframe of around 300k rows.</p>
<p>Is there a way to do this on the hard drive (for example with HDF5) or in a more memory-efficient way?</p>
<p>See the code below. Note that sum_of_all_files starts as an empty dataframe.</p>
<pre><code>sum_of_all_files = pd.DataFrame()
for file_name in list_of_files:
    file_df = pd.read_csv(file_name, index_col=0, header=None).dropna()
    sum_of_all_files = sum_of_all_files.add(file_df, fill_value=0, axis='index')
</code></pre>
<p>Thanks!</p>
<p><strong>EDIT</strong>: I want to add by index, i.e. if two rows have the same index, add them. I have corrected the code above by adding " axis='index' " in the last line.</p>
| 2 | 2016-07-29T11:28:46Z | 38,657,807 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a>:</p>
<pre><code>files = glob.glob('files/*.csv')
dfs = [pd.read_csv(file_name,index_col=0,header=None).dropna() for file_name in files]
df = pd.concat(dfs).sum()
print (df)
</code></pre>
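One caveat (my note, not the answerer's): <code>pd.concat(dfs).sum()</code> collapses everything into per-column totals. To add rows that share an index label, as the question's edit asks for, group by the index level first — a small sketch with made-up data:

```python
import pandas as pd

df1 = pd.DataFrame({'v': [1, 2]}, index=['a', 'b'])
df2 = pd.DataFrame({'v': [10, 30]}, index=['a', 'c'])

# Rows with the same index label are summed; labels present in only
# one frame pass through unchanged.
total = pd.concat([df1, df2]).groupby(level=0).sum()
assert total.loc['a', 'v'] == 11
assert total.loc['b', 'v'] == 2
assert total.loc['c', 'v'] == 30
```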
| 1 | 2016-07-29T11:34:18Z | [
"python",
"pandas",
"memory-management",
"dataframe"
] |
Adding pandas dataframe iteratively in a memory efficient way | 38,657,712 | <p>I am trying to iteratively add pandas dataframes that I read from a set of CSV files, and after the 16th file or so I get a memory error. Each new file is a dataframe of around 300k rows.</p>
<p>Is there a way to do this on the hard drive (for example with HDF5) or in a more memory-efficient way?</p>
<p>See the code below. Note that sum_of_all_files starts as an empty dataframe.</p>
<pre><code>sum_of_all_files = pd.DataFrame()
for file_name in list_of_files:
    file_df = pd.read_csv(file_name, index_col=0, header=None).dropna()
    sum_of_all_files = sum_of_all_files.add(file_df, fill_value=0, axis='index')
</code></pre>
<p>Thanks!</p>
<p><strong>EDIT</strong>: I want to add by index, i.e. if two rows have the same index, add them. I have corrected the code above by adding " axis='index' " in the last line.</p>
| 2 | 2016-07-29T11:28:46Z | 38,657,939 | <p><strong>UPDATE:</strong> I would simply add reading all CSVs in chunks to your solution. I think you are already doing it very well in terms of memory saving...</p>
<pre><code>sum_of_all_files = pd.DataFrame()
for file_name in list_of_files:
    for file_df in pd.read_csv(file_name, index_col=0, header=None, chunksize=10**5):
        sum_of_all_files = sum_of_all_files.add(file_df.dropna(), fill_value=0, axis='index')
</code></pre>
<p><strong>OLD answer:</strong></p>
<p><strong>Idea:</strong> we will read the first file into a <code>total</code> DF, and then read one file in each iteration step, starting from the second file in your <code>list_of_files</code>, adding it on the fly to the <code>total</code> DF.</p>
<p>PS: you can go further and read each CSV file in chunks if there are huge files that don't fit into memory:</p>
<pre><code>total = pd.read_csv(list_of_files[0], index_col=0, header=None).dropna()
for f in list_of_files[1:]:
    for chunk in pd.read_csv(f, index_col=0, header=None, chunksize=10**5):
        total = total.add(chunk.dropna(), fill_value=0, axis='index')
</code></pre>
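The chunked pattern above, exercised end-to-end with in-memory CSV data (the strings below stand in for the real files on disk; sizes are made up for illustration):

```python
import io
import pandas as pd

csv_texts = ["a,1\nb,2\nc,3\n", "a,10\nc,30\n"]   # stand-ins for CSV files

total = None
for text in csv_texts:
    for chunk in pd.read_csv(io.StringIO(text), index_col=0, header=None, chunksize=2):
        chunk = chunk.dropna()
        total = chunk if total is None else total.add(chunk, fill_value=0, axis='index')

assert total.loc['a', 1] == 11   # 1 + 10
assert total.loc['c', 1] == 33   # 3 + 30
```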
| 1 | 2016-07-29T11:41:52Z | [
"python",
"pandas",
"memory-management",
"dataframe"
] |
Docker Apache Proxy point to running Python Script | 38,657,735 | <p>Hoping that someone can help me understand what I have done wrong in my setup. I have a Ubuntu Server set up with Docker. I have an Apache container(running on port 80) set up to run as a Proxy and use Virtual Hosts to point to a port dependent on domain name.</p>
<pre><code><VirtualHost *:80>
ServerName myDomain.com
ServerAlias www.myDomain.com
<Proxy *>
Allow from localhost
</Proxy>
ProxyPass / http://myDomain:8080/
</VirtualHost>
</code></pre>
<p>For a specific Docker container, I have a Python script running on port 80 (confirmed by going to <em>SERVER_IP:PORT</em>); however, when I go to the domain, it only shows the default Apache page (on the Apache proxy container).</p>
<p>I have also got other containers running LAMP stacks (with a volume mapped to a folder on the Apache proxy container; for example, <code>/var/www/html</code> is mapped to <code>/var/www/html/website.com</code>), and they work correctly.</p>
<p>Does anyone have any ideas as to why I can't see the output of the python script at that domain but am able to when navigating to <em>IP_ADDR:PORT</em>? <strong>All help and better ideas of a setup is appreciated!! THANK YOU!!!</strong></p>
<p>EDIT: The Python script is running under <code>/root/pythonscript/</code>; could this be the cause?</p>
| 1 | 2016-07-29T11:30:13Z | 38,657,805 | <p>I assume this is running on localhost? So you want to connect on the domain but route to localhost?</p>
<p>I also assume that the python script is running on port 8080? And that apache is running on port 80?</p>
<p>Try this </p>
<pre><code><VirtualHost *:80>
ServerName myDomain.com
ServerAlias www.myDomain.com
<Proxy *>
Allow from localhost
</Proxy>
ProxyPass / http://127.0.0.1:8080/
</VirtualHost>
</code></pre>
| 0 | 2016-07-29T11:34:16Z | [
"python",
"apache",
"docker",
"proxy",
"virtualhost"
] |
Docker Apache Proxy point to running Python Script | 38,657,735 | <p>Hoping that someone can help me understand what I have done wrong in my setup. I have a Ubuntu Server set up with Docker. I have an Apache container(running on port 80) set up to run as a Proxy and use Virtual Hosts to point to a port dependent on domain name.</p>
<pre><code><VirtualHost *:80>
ServerName myDomain.com
ServerAlias www.myDomain.com
<Proxy *>
Allow from localhost
</Proxy>
ProxyPass / http://myDomain:8080/
</VirtualHost>
</code></pre>
<p>For a specific Docker container, I have a Python script running on port 80 (confirmed by going to <em>SERVER_IP:PORT</em>); however, when I go to the domain, it only shows the default Apache page (on the Apache proxy container).</p>
<p>I have also got other containers running LAMP stacks (with a volume mapped to a folder on the Apache proxy container; for example, <code>/var/www/html</code> is mapped to <code>/var/www/html/website.com</code>), and they work correctly.</p>
<p>Does anyone have any ideas as to why I can't see the output of the python script at that domain but am able to when navigating to <em>IP_ADDR:PORT</em>? <strong>All help and better ideas of a setup is appreciated!! THANK YOU!!!</strong></p>
<p>EDIT: The Python script is running under <code>/root/pythonscript/</code>; could this be the cause?</p>
| 1 | 2016-07-29T11:30:13Z | 38,658,319 | <p>OMG! I am sorry to waste your time, stupid me didn't run <code>a2ensite domain.com.conf</code> before I reloaded apache2 service... I apologise... That has fixed the issue!</p>
| 0 | 2016-07-29T12:00:51Z | [
"python",
"apache",
"docker",
"proxy",
"virtualhost"
] |
how to remove duplicate values in a dataset : python | 38,657,741 | <p>I want to remove duplicate items in a dataset by keeping the ones with the highest value. Currently I am using pandas:</p>
<pre><code>c_maxes = hospProfiling.groupby(['Hospital_ID', 'District_ID'], group_keys=False)\
.apply(lambda x: x.ix[x['Hospital_employees'].idxmax()])
print c_maxes
c_maxes.to_csv('data/external/HospitalProfilingMaxes.csv')
</code></pre>
<p>Doing this leads the initial dataset columns <code>Hospital_ID,District_ID,Hospital_employees</code> to become <code>Hospital_ID,District_ID,Hospital_ID,District_ID,Hospital_employees</code>.</p>
<p>The columns used to group are being duplicated. What's the error here?</p>
<p>Edit: </p>
<p>On using the groupby() function, an extra column is added at the beginning of the data. The column doesn't have a name; it's just a sequence number for all rows. This is shown in the output of the second answer to the question here. I want to remove this extra column, as I don't need it. I tried this:</p>
<p><code>hospProfiling.drop(hospProfiling.columns[0], axis=1)</code></p>
<p>This code doesn't remove the column. How can it be removed?</p>
| 1 | 2016-07-29T11:30:22Z | 38,658,251 | <p>I think you need:</p>
<pre><code>hospProfiling.loc[hospProfiling.groupby(['Hospital_ID', 'District_ID'])['Hospital_employees']
.idxmax()]
</code></pre>
<p>I was very surprised by the other answer, so I did some research into whether the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow"><code>idxmax</code></a> is useless or not:</p>
<p>Sample:</p>
<pre><code>hospProfiling = pd.DataFrame({'Hospital_ID': {0: 'A', 1: 'A', 2: 'B', 3: 'A', 4: 'A', 5: 'B', 6: 'A', 7: 'A', 8: 'B', 9: 'B', 10: 'A', 11: 'B', 12: 'A'}, 'Name': {0: 'Sam', 1: 'Annie', 2: 'Fred', 3: 'Sam', 4: 'Annie', 5: 'Fred', 6: 'Sam', 7: 'Annie', 8: 'Fred', 9: 'James', 10: 'Alan', 11: 'Julie', 12: 'Greg'}, 'District_ID': {0: 'M', 1: 'F', 2: 'M', 3: 'M', 4: 'F', 5: 'M', 6: 'M', 7: 'F', 8: 'M', 9: 'M', 10: 'M', 11: 'F', 12: 'M'}, 'Hospital_employees': {0: 25, 1: 41, 2: 70, 3: 44, 4: 12, 5: 14, 6: 20, 7: 10, 8: 30, 9: 18, 10: 56, 11: 28, 12: 33}, 'Val': {0: 100, 1: 7, 2: 14, 3: 200, 4: 5, 5: 20, 6: 1, 7: 0, 8: 7, 9: 9, 10: 6, 11: 9, 12: 47}})
hospProfiling = hospProfiling[['Hospital_ID','District_ID','Hospital_employees','Val','Name']]
hospProfiling.sort_values(by=['Hospital_ID','District_ID'], inplace=True)
print (hospProfiling)
Hospital_ID District_ID Hospital_employees Val Name
1 A F 41 7 Annie
4 A F 12 5 Annie
7 A F 10 0 Annie
0 A M 25 100 Sam
3 A M 44 200 Sam
6 A M 20 1 Sam
10 A M 56 6 Alan
12 A M 33 47 Greg
11 B F 28 9 Julie
2 B M 70 14 Fred
5 B M 14 20 Fred
8 B M 30 7 Fred
9 B M 18 9 James
</code></pre>
<p>The main difference is how the other columns are handled: if you use <code>max</code>, it returns the max value of each column independently - here <code>Hospital_employees</code> and <code>Val</code>:</p>
<pre><code>c_maxes = hospProfiling.groupby(['Hospital_ID','District_ID'],as_index = False).max()
print (c_maxes)
Hospital_ID District_ID Hospital_employees Name Val
0 A F 41 Annie 7
1 A M 56 Sam 200
2 B F 28 Julie 9
3 B M 70 James 20
c_maxes = hospProfiling.groupby(['Hospital_ID', 'District_ID'], as_index=False).agg({'Hospital_employees': max})
print (c_maxes)
Hospital_ID District_ID Hospital_employees
0 A F 41
1 A M 56
2 B F 28
3 B M 70
</code></pre>
<p>The function <code>idxmax</code> returns the indexes of the maximal values within each group:</p>
<pre><code>print (hospProfiling.groupby(['Hospital_ID', 'District_ID'])['Hospital_employees'].idxmax())
A F 1
M 10
B F 11
M 2
Name: Hospital_employees, dtype: int64
</code></pre>
<p>And then you simply select rows from the <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>:</p>
<pre><code>c_maxes = hospProfiling.loc[hospProfiling.groupby(['Hospital_ID', 'District_ID'])['Hospital_employees']
.idxmax()]
print (c_maxes)
District_ID Hospital_ID Hospital_employees Name Val
1 F A 41 Annie 7
10 M A 56 Alan 6
11 F B 28 Julie 9
2 M B 70 Fred 14
</code></pre>
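A compact, runnable distillation of the <code>idxmax</code> + <code>loc</code> pattern (tiny made-up data): because whole rows are selected, non-grouped columns such as <code>Name</code> stay aligned with the maximal value.

```python
import pandas as pd

df = pd.DataFrame({'Hospital_ID': ['A', 'A', 'B'],
                   'District_ID': ['M', 'M', 'F'],
                   'Hospital_employees': [25, 44, 28],
                   'Name': ['Sam', 'Greg', 'Julie']})

idx = df.groupby(['Hospital_ID', 'District_ID'])['Hospital_employees'].idxmax()
best = df.loc[idx]   # whole rows, so Name stays aligned with the max value

assert list(best['Hospital_employees']) == [44, 28]
assert list(best['Name']) == ['Greg', 'Julie']
```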
| 1 | 2016-07-29T11:57:13Z | [
"python",
"pandas"
] |
how to remove duplicate values in a dataset : python | 38,657,741 | <p>I want to remove duplicate items in a dataset by keeping the ones with the highest value. Currently I am using pandas:</p>
<pre><code>c_maxes = hospProfiling.groupby(['Hospital_ID', 'District_ID'], group_keys=False)\
.apply(lambda x: x.ix[x['Hospital_employees'].idxmax()])
print c_maxes
c_maxes.to_csv('data/external/HospitalProfilingMaxes.csv')
</code></pre>
<p>Doing this leads the initial dataset columns <code>Hospital_ID,District_ID,Hospital_employees</code> to become <code>Hospital_ID,District_ID,Hospital_ID,District_ID,Hospital_employees</code>.</p>
<p>The columns used to group are being duplicated. What's the error here?</p>
<p>Edit: </p>
<p>On using the groupby() function, an extra column is added at the beginning of the data. The column doesn't have a name; it's just a sequence number for all rows. This is shown in the output of the second answer to the question here. I want to remove this extra column, as I don't need it. I tried this:</p>
<p><code>hospProfiling.drop(hospProfiling.columns[0], axis=1)</code></p>
<p>This code doesn't remove the column. How can it be removed?</p>
| 1 | 2016-07-29T11:30:22Z | 38,658,258 | <p>Why not use the groupby <code>max</code> method? </p>
<pre><code>hospProfiling.groupby(['Hospital_ID', 'District_ID'], as_index=False).max()
</code></pre>
<p>And if you happen to have more than three columns, replace <code>max</code> with <code>agg</code>:</p>
<pre><code>hospProfiling.groupby(['Hospital_ID', 'District_ID'], as_index=False).agg({'Hospital_employees': max})
</code></pre>
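A minimal runnable version of the same call (made-up data; note that with extra non-grouped columns, <code>max</code> is taken per column independently, as the other answer demonstrates):

```python
import pandas as pd

df = pd.DataFrame({'Hospital_ID': ['A', 'A', 'B'],
                   'District_ID': ['M', 'M', 'F'],
                   'Hospital_employees': [25, 44, 28]})

out = df.groupby(['Hospital_ID', 'District_ID'], as_index=False).max()
assert list(out['Hospital_employees']) == [44, 28]
```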
| 3 | 2016-07-29T11:57:25Z | [
"python",
"pandas"
] |
Download docxtpl generated file with cherrypy | 38,657,787 | <p>I am using docxtpl to generate a Word document, and I am wondering how a user can download this file, once generated, using cherrypy. Please see my code below.</p>
<p>The only solution I could come up with is to save it to the www folder and create a link to that location, but I am sure this can be simplified.</p>
<p>Code:</p>
<pre><code>import os, os.path
import random
import string
import cherrypy
from docxtpl import DocxTemplate
import sys
from auth import AuthController, require, member_of, name_is
import socket

reload(sys)
sys.setdefaultencoding('utf8')

cherrypy.config.update({'server.socket_port': 8000})
cherrypy.server.socket_host = '0.0.0.0'
cherrypy.engine.restart()

class Root(object):
    _cp_config = {
        'tools.sessions.on': True,
        'tools.auth.on': True }

    @cherrypy.expose()
    def default(self, **kwargs):
        print kwargs
        if kwargs.get('csa_no'):
            # print kwargs.get('csa_no')
            tpl = DocxTemplate('csa_tpl.docx')
            sd = tpl.new_subdoc()
            p = sd.add_paragraph('This 1st insert')
            sd2 = tpl.new_subdoc()
            p = sd2.add_paragraph('This 2nd insert')
            context1 = {
                'date': 'jkh',
                'company_name' : 'Test Footer',
                'cost' : '10,000',
                'project_description': kwargs['project_description'],
                'site': kwargs['site'],
                'sp': kwargs['sp'],
                'wo': kwargs['WO'],
                'contract_manager': kwargs['contract_manager'],
                'csa_no': kwargs['csa_no'],
                'variation_reason': kwargs['variation_reason'],
                'variation_scope_of_works': kwargs['variation_scope_of_works'],
                'Initial_Contract_Value': '$300,000',
                'variation_total': '$20,000',
                'Ammended_Contract_Value': '$320,000',
                'project_manager': kwargs['project_manager'],
                'construction_manager': 'Dane Wilson',
                'date': '30/04/2016',
            }
            tpl.render(context1)
            file_name_with_path = '/var/www/html/csa_documents/' + kwargs['sp'] + '-' + kwargs['WO'] + '_' + kwargs['site'] + '_-_' + 'CSA' + kwargs['csa_no'] + '.docx'
            file_name = kwargs['sp'] + '-' + kwargs['WO'] + '_' + kwargs['site'] + '_-_' + 'CSA' + kwargs['csa_no'] + '.docx'
            print file_name
            print file_name_with_path
            tpl.save(file_name_with_path)
            return '''
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="utf-8" http-equiv="X-UA-Compatible" content="IE=9" />
<link href="//ajax.googleapis.com/ajax/libs/jquerymobile/1.4.2/jquery.mobile.min.css" rel="stylesheet">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/jquerymobile/1.4.2/jquery.mobile.min.js"></script>
<title>Broadspectrum</title>
</head>
<body>
<div data-role="header" data-theme="b">
<h1>TCSS Gateway</h1>
</div>
<h2>Success</h2>
<a href="http://192.168.1.7">another submission</a>
<a href="http://192.168.1.7/csa_documents/%s">Download & Review CSA Document</a>
</body>
''' % file_name
</code></pre>
| 0 | 2016-07-29T11:33:14Z | 38,665,815 | <p>The short answer is that, basically, you need to write to an in-memory stream (e.g. <code>BytesIO</code>), pre-set some HTTP headers, and return a <a href="http://docs.cherrypy.org/en/latest/pkg/cherrypy.lib.html?highlight=file_generator#cherrypy.lib.file_generator" rel="nofollow"><code>file_generator</code></a>.
Your question is almost the same as one asked a month ago and <a href="http://stackoverflow.com/a/38131584/595220">here</a> is my answer to it.</p>
<p>I've drafted a little snippet for you in python3:</p>
<pre><code>from io import BytesIO

import cherrypy
from cherrypy.lib import file_generator
from docxtpl import DocxTemplate


class GenerateDocx:

    @cherrypy.expose
    def build_docx(self, *args, **kwargs):
        iostream = BytesIO()
        tpl = DocxTemplate('csa_tpl.docx')
        ...
        # build your tpl here
        ...
        tpl.get_docx().save(iostream)

        cherrypy.response.headers['Content-Type'] = (
            # set the correct MIME type for docx
            'application/vnd.openxmlformats-officedocument'
            '.wordprocessingml.document'
        )
        cherrypy.response.headers['Content-Disposition'] = (
            'attachment; filename={fname}.docx'.format(
                fname='put your file name here'
            )
        )

        iostream.seek(0)
        return file_generator(iostream)
</code></pre>
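The <code>iostream.seek(0)</code> line is easy to overlook: after writing, the stream position sits at the end, so without rewinding the generator would stream nothing. A stdlib-only illustration of that behaviour:

```python
import io

buf = io.BytesIO()
buf.write(b"generated document bytes")

assert buf.read() == b""                          # position is at the end after writing
buf.seek(0)                                       # rewind before handing the stream off
assert buf.read() == b"generated document bytes"  # now the full payload is readable
```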
<p>UPD: I've just checked that <a href="https://github.com/cherrypy/cherrypy/blob/master/cherrypy/lib/encoding.py#L232" rel="nofollow">the response body gets automatically wrapped with <code>file_generator</code> if the return value of a handler has <code>read()</code> method support</a></p>
| 1 | 2016-07-29T18:59:07Z | [
"python",
"cherrypy"
] |
Python Tkinter label refresh | 38,657,868 | <p>I'm trying to build a GUI that creates a password, and I've got as far as generating the password and making it appear in a label. However, when the button is clicked multiple times, the old password doesn't disappear; the new one just overlays on top. I'm also getting an error that I can't seem to rectify, although it doesn't seem to affect the GUI.</p>
<p>The code so far is:</p>
<pre><code>from tkinter import *
import random

myGui = Tk()
myGui.geometry('300x200+700+250')
myGui.title('Password Generator')

def passwordgen():
    password = ''
    for i in range(8):
        ##----runs the for loop 8 times
        if (i == 0) or (i == 4):
            password = password + chr(random.randint(97, 122))
        if (i == 1) or (i == 5):
            password = password + chr(random.randint(65, 90))
        if (i == 2) or (i == 6):
            password = password + chr(random.randint(48, 57))
        if (i == 3) or (i == 7):
            password = password + chr(random.randint(33, 47))
    passLabel = Label(myGui, text=password)
    passLabel.grid(row=0, column=1, sticky=E)

genPassBtn = Button(myGui, text="Generate Password", command=passwordgen)
genPassBtn.bind("<Button-1>", passwordgen)
genPassBtn.grid(row=0, column=0, sticky=W)
myGui.mainloop()
</code></pre>
<p>The error I receive is:</p>
<pre><code>return self.func(*args)
TypeError: passwordgen() takes 0 positional arguments but 1 was given
</code></pre>
<p>The outcome I am hoping to achieve is a GUI that generates a password, generates a hash value for the generated password, checks the password strength, saves the generated hash to a text file, and can then verify a password against the stored hashes.</p>
<p>Further on now, following the advice received, I have amended the code and added an extra check for password strength. The code now looks like this:</p>
<pre><code>from tkinter import *
import random
import re  # needed for the re.search calls in checkPassword

myGui = Tk()
myGui.geometry('300x200+700+250')
myGui.title('Password Generator')

def passwordgen():
    password = ''
    for i in range(8):
        ##----runs the for loop 8 times
        if (i == 0) or (i == 4):
            password = password + chr(random.randint(97, 122))
        if (i == 1) or (i == 5):
            password = password + chr(random.randint(65, 90))
        if (i == 2) or (i == 6):
            password = password + chr(random.randint(48, 57))
        if (i == 3) or (i == 7):
            password = password + chr(random.randint(33, 47))
    strPassword.set(password)

def checkPassword():
    strength = ['Blank', 'Very Weak', 'Weak', 'Medium', 'Strong', 'Very Strong']
    score = 1
    password = strPassword.get()
    if len(password) < 1:
        return strength[0]
    if len(password) < 4:
        return strength[1]
    if len(password) >= 8:
        score += 1
    if re.search('[0-9]', password):
        score += 1
    if re.search('[a-z]', password) and re.search('[A-Z]', password):
        score += 1
    if re.search('.', password):
        score += 1
    passwordStrength.set(strength[score])

genPassBtn = Button(myGui, text="Generate Password", command=passwordgen)
strPassword = StringVar()
lblPassword = Label(myGui, textvariable=strPassword)
lblPassword.grid(row=0, column=1, sticky=W)
genPassBtn.grid(row=0, column=0, sticky=W)

passwordStrength = StringVar()
checkStrBtn = Button(myGui, text="Check Strength", command=checkPassword)
checkStrBtn.grid(row=1, column=0)
checkStrLab = Label(myGui, textvariable=passwordStrength)
checkStrLab.grid(row=1, column=1)

myGui.mainloop()
</code></pre>
| 0 | 2016-07-29T11:37:19Z | 38,658,132 | <p>Try this example.</p>
<pre><code>from tkinter import *
import random

myGui = Tk()
myGui.geometry('300x200+700+250')
myGui.title('Password Generator')

def passwordgen():
    password = ''
    for i in range(8):
        ##----runs the for loop 8 times
        if (i == 0) or (i == 4):
            password = password + chr(random.randint(97, 122))
        if (i == 1) or (i == 5):
            password = password + chr(random.randint(65, 90))
        if (i == 2) or (i == 6):
            password = password + chr(random.randint(48, 57))
        if (i == 3) or (i == 7):
            password = password + chr(random.randint(33, 47))
    strPassword.set(password)

genPassBtn = Button(myGui, text="Generate Password", command=passwordgen)
strPassword = StringVar()
lblPassword = Label(myGui, textvariable=strPassword)
lblPassword.grid(row=0, column=1, sticky=W)
genPassBtn.grid(row=0, column=0, sticky=W)

myGui.mainloop()
</code></pre>
<p>Here's what I've done</p>
<ol>
<li>Rather than creating a new label each time, I change the text of a label using the StringVar called strPassword.</li>
<li>You don't need to bind a button to a click to call a function; using Button(..., command=myFunction) does this already.</li>
</ol>
| 2 | 2016-07-29T11:51:26Z | [
"python",
"user-interface",
"tkinter"
] |
Does enumerate over slice perform sublist materialization? | 38,657,870 | <p>Does this:</p>
<pre><code>for i,v in enumerate(lst[from:to]):
</code></pre>
<p>or this:</p>
<pre><code>for i,v in enumerate(itertools.islice(lst,from,to)):
</code></pre>
<p>...make a copy of iterated sublist?</p>
| 2 | 2016-07-29T11:37:23Z | 38,658,522 | <p>Assuming that <code>lst</code> is a regular Python list, and not a Numpy array, Pandas dataframe, or some custom class supporting slice indexing, then the slice <code>[...:...]</code> will create a new list, whereas <a href="https://docs.python.org/3/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice</code></a> does not.</p>
<p>As suggested in comments, you can see this for yourself by creating both <code>enumerate</code> objects and modifying the original list before consuming them:</p>
<pre><code>>>> lst = [1, 2, 3, 4, 5]
>>> e1 = enumerate(lst[1:4])
>>> e2 = enumerate(itertools.islice(lst, 1, 4))
>>> del lst[2] # remove the element at index 2 (the 3)
>>> list(e1)
[(0, 2), (1, 3), (2, 4)] # shows content of original list
>>> list(e2)
[(0, 2), (1, 4), (2, 5)] # second element skipped
</code></pre>
<p>Also note that this does in fact have nothing to do with <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a>, which will create a generator in both cases (on top of whatever iterable was created before by the slice).</p>
<p>You could also just create the two variants of slices and check their types:</p>
<pre><code>>>> type(lst[1:4])
list # a new list
>>> type(itertools.islice(lst, 1, 4))
itertools.islice # some sort of generator
</code></pre>
| 2 | 2016-07-29T12:11:13Z | [
"python",
"python-3.x"
] |
Grep and replace fuzzy pattern of string | 38,658,041 | <p>I have a few files with python code and decorators like this:</p>
<pre><code>@trace('api.module.function_name', info=None, custom_args=False)
</code></pre>
<p>The only difference between these decorators is the string 'api.module.function_name' - the function name and module differ. And depending on this parameter, the decorator is sometimes one line long, sometimes two or three lines.</p>
<p>I want to replace these decorators with the other one - more simple, like "@my_new_decorator".</p>
<p>I thought about some regex but I have no idea if it's possible for such "fuzzy" search. I tried <code>^@trace([A-Za-z0-9]\, custom_args=False)$</code>
But it doesn't work.</p>
<p>Is there a way to do it?</p>
| 0 | 2016-07-29T11:47:09Z | 38,658,501 | <p>Something like this should work:</p>
<pre class="lang-none prettyprint-override"><code>(\n|^)\s*@trace\(\s*'[^']*',\s*info=None,\s*custom_args=False\s*\)\s*(\r|\n|$)
</code></pre>
<p>See the <a href="https://regex101.com/r/wU7zK5/2" rel="nofollow">demo</a></p>
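<p>As a sketch of applying this with Python's <code>re</code> module (the simplified pattern below keeps the argument layout from the question; the <code>\s*</code> around the arguments also lets it span the two- and three-line variants):</p>

```python
import re

# Simplified variant of the pattern above; assumes the decorator always
# ends with info=None, custom_args=False as in the question.
pattern = re.compile(
    r"^\s*@trace\(\s*'[^']*',\s*info=None,\s*custom_args=False\s*\)\s*$",
    re.MULTILINE,
)

source = """\
@trace('api.module.function_name', info=None, custom_args=False)
def handler():
    pass
"""

print(pattern.sub('@my_new_decorator', source))
```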
| 1 | 2016-07-29T12:10:10Z | [
"python",
"regex"
] |
Grep and replace fuzzy pattern of string | 38,658,041 | <p>I have a few files with python code and decorators like this:</p>
<pre><code>@trace('api.module.function_name', info=None, custom_args=False)
</code></pre>
<p>The only difference between these decorators is the string 'api.module.function_name' - the function name and module differ. And depending on this parameter, the decorator is sometimes one line long, sometimes two or three lines.</p>
<p>I want to replace these decorators with the other one - more simple, like "@my_new_decorator".</p>
<p>I thought about some regex but I have no idea if it's possible for such "fuzzy" search. I tried <code>^@trace([A-Za-z0-9]\, custom_args=False)$</code>
But it doesn't work.</p>
<p>Is there a way to do it?</p>
| 0 | 2016-07-29T11:47:09Z | 38,658,571 | <p>Use <code>^@trace\('api\.(.+)\.(.+)', info=None, custom_args=False\)$</code> with a multiline flag.</p>
<p>You may want to use <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow">re.sub</a> :</p>
<pre><code>>>> import re
>>> pattern = re.compile(r"^@trace\('api\.(.+)\.(.+)', info=None, custom_args=False\)$", re.M)
>>> re.sub(pattern, r"@my_new_decorator('\1', '\2')", "@trace('api.module.function_name', info=None, custom_args=False)")
"@my_new_decorator('module', 'function_name')"
</code></pre>
<p>See <a href="https://www.debuggex.com/r/NzIGSGobeLVuPdBj" rel="nofollow">this</a> for the demo of the regex</p>
<p>As you can see <code>\1</code> expand to the first group in the regex <code>(.+)</code></p>
| 1 | 2016-07-29T12:13:56Z | [
"python",
"regex"
] |
How can i find all ydl_opts | 38,658,046 | <pre><code>ydl_opts = {
'verbose': True, #like this
'format': '{}'.format(int(comboget)), #format,vebrose,ottmpl
'outtmpl': '%(title)s-%(id)s.%(ext)s', #how can i find
'noplaylist': mt, #all dictionary
'logger': MyLogger(), #options
'progress_hooks': [durum], #how can i find
}
ydl = youtube_dl.YoutubeDL(ydl_opts)
ydl.download([url])
</code></pre>
<p>how can i find all ydl_opts in here <a href="https://github.com/rg3/youtube-dl" rel="nofollow">https://github.com/rg3/youtube-dl</a></p>
| 0 | 2016-07-29T11:47:22Z | 38,659,184 | <p>All options for the Python Module are listed in <a href="https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L128-L278" rel="nofollow">YoutubeDL.py</a></p>
<p>Here is a small excerpt</p>
<blockquote>
<pre><code>username: Username for authentication purposes.
password: Password for authentication purposes.
videopassword: Password for accessing a video.
usenetrc: Use netrc for authentication instead.
verbose: Print additional info to stdout.
quiet: Do not print messages to stdout.
no_warnings: Do not print out anything for warnings.
forceurl: Force printing final URL.
forcetitle: Force printing title.
forceid: Force printing ID.
forcethumbnail: Force printing thumbnail URL.
forcedescription: Force printing description.
forcefilename: Force printing final filename.
forceduration: Force printing duration.
forcejson: Force printing info_dict as JSON.
dump_single_json: Force printing the info_dict of the whole playlist
(or video) as a single JSON line.
...
</code></pre>
</blockquote>
| 1 | 2016-07-29T12:42:39Z | [
"python",
"python-3.x",
"youtube-dl"
] |
Python Selenium Webdriver - Grab div after specified one | 38,658,052 | <p>I am trying to use Python Selenium Firefox Webdriver to grab the h2 content 'My Data Title' from this HTML</p>
<pre><code><div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Section Details
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        My Data Title
    </h2>
</div>
<div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Another Section
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        Another Title
    </h2>
</div>
</code></pre>
| 2 | 2016-07-29T11:47:39Z | 38,658,352 | <p>yeah, you need to do some complicated xpath searching:</p>
<pre><code>referenceElementList = driver.find_elements_by_xpath("//span")
for eachElement in referenceElementList:
    if eachElement.get_attribute("innerHTML") == 'Section Details':
        # span -> li -> ul -> div.box, then the h2 in the next sibling div
        elementYouWant = eachElement.find_element_by_xpath("../../../following-sibling::div/h2")
        break

elementYouWant.get_attribute("innerHTML")  # should give you "My Data Title"
</code></pre>
<p>My code reads: </p>
<ol>
<li>find all span elements regardless of where they are in HTML and store them in a list called <code>referenceElementList</code>;</li>
<li>iterate all <code>span</code> elements in <code>referenceElementList</code> one by one, looking for a span whose <code>innerHTML</code> attribute is '<code>Section Details</code>'.</li>
<li>if there is a match, we have found the span, and we navigate backwards three levels to locate the enclosing <code>div[@class='box']</code>, and find this <code>div</code> element next sibling, which is the second <code>div</code> element, </li>
<li>Lastly, we locate the <code>h2</code> element from its parent.</li>
</ol>
<p>Can you please tell me if my code works? I might have gone wrong somewhere navigating backwards.</p>
<p><strong>One potential difficulty you may encounter: the innerHTML attribute may contain tab, newline and space characters. In that case, you need a regex to do some filtering first.</strong></p>
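<p>For that filtering, a small helper (a plain Python sketch, independent of Selenium) that collapses whitespace before comparing:</p>

```python
import re

def clean_text(raw):
    # Collapse tabs, newlines and runs of spaces into single spaces, then trim.
    return re.sub(r'\s+', ' ', raw).strip()

print(clean_text('\n\t  Section   Details \n'))  # Section Details
```

<p>You could then compare with <code>clean_text(eachElement.get_attribute("innerHTML")) == 'Section Details'</code>.</p>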
| 1 | 2016-07-29T12:02:19Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Python Selenium Webdriver - Grab div after specified one | 38,658,052 | <p>I am trying to use Python Selenium Firefox Webdriver to grab the h2 content 'My Data Title' from this HTML</p>
<pre><code><div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Section Details
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        My Data Title
    </h2>
</div>
<div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Another Section
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        Another Title
    </h2>
</div>
</code></pre>
| 2 | 2016-07-29T11:47:39Z | 38,658,446 | <p>Here is an XPath to select the title following the text "Section Details":</p>
<pre><code>//div[@class='box'][normalize-space(.)='Section Details']/following::h2
</code></pre>
| 2 | 2016-07-29T12:06:53Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Python Selenium Webdriver - Grab div after specified one | 38,658,052 | <p>I am trying to use Python Selenium Firefox Webdriver to grab the h2 content 'My Data Title' from this HTML</p>
<pre><code><div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Section Details
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        My Data Title
    </h2>
</div>
<div class="box">
    <ul class="navigation">
        <li class="live">
            <span>
                Another Section
            </span>
        </li>
    </ul>
</div>
<div class="box">
    <h2>
        Another Title
    </h2>
</div>
</code></pre>
| 2 | 2016-07-29T11:47:39Z | 38,658,548 | <p>If you want grab the <code>h2</code> in the box class that comes after the one that has the span with text <code>Section Details</code> try below <code>xpath</code> using <code>preceding</code> :-</p>
<pre><code>(//h2[preceding::span[normalize-space(text()) = 'Section Details']])[1]
</code></pre>
<p>or using <code>following</code> :</p>
<pre><code>(//span[normalize-space(text()) = 'Section Details']/following::h2)[1]
</code></pre>
<p>and for <code>Another Section</code> just change the span text in <code>xpath</code> as:-</p>
<pre><code>(//h2[preceding::span[normalize-space(text()) = 'Another Section']])[1]
</code></pre>
<p>or</p>
<pre><code>(//span[normalize-space(text()) = 'Another Section']/following::h2)[1]
</code></pre>
| 2 | 2016-07-29T12:12:36Z | [
"python",
"selenium",
"selenium-webdriver"
] |
How to remove .com from an url in python? | 38,658,085 | <p>I want to remove the top-level domain from a URL.
For example, the user entered www.google.com
but I only need www.google</p>
<p>How can I do this in Python?
Thanks </p>
| 0 | 2016-07-29T11:48:56Z | 38,658,108 | <p>If you want to remove 4 characters at the end, <a class='doc-link' href="http://stackoverflow.com/documentation/python/289/indexing-and-slicing#t=201607291151112404835">slice it</a></p>
<pre><code>url = 'www.google.com'
cut_url = url[:-4]
# output : 'www.google'
</code></pre>
<hr>
<p><em>More advanced answer</em></p>
<p>If you have a list of all the possible domains <code>domains</code>:</p>
<pre><code>domains = ['com', 'uk', 'fr', 'net', 'co', 'nz'] # and so on...

while True:
    domain = url.split('.')[-1]
    if domain in domains:
        url = '.'.join(url.split('.')[:-1])
    else:
        break
</code></pre>
<p>Or if, for example, you have a domains list where <code>.co</code> and <code>.uk</code> are not separated:</p>
<pre><code>domains = ['.com', '.co.uk', '.fr', '.net', '.co.nz'] # and so on...

for domain in domains:
    if url.endswith(domain):
        cut_url = url[:-len(domain)]
        break
else:  # there is no indentation mistake here.
    # else after for will be executed if for did not break
    print('no known domain found')
</code></pre>
| 0 | 2016-07-29T11:50:10Z | [
"python",
"urlparse"
] |
How to remove .com from an url in python? | 38,658,085 | <p>I want to remove the top-level domain from a URL.
For example, the user entered www.google.com
but I only need www.google</p>
<p>How can I do this in Python?
Thanks </p>
| 0 | 2016-07-29T11:48:56Z | 38,658,143 | <p>This is a very general question. But the narrowest answer would be as follows (assuming <code>url</code> holds the URL in question):</p>
<pre><code>if url.endswith(".com"):
    url = url[:-4]
</code></pre>
<p>If you want to remove the last period and everything to the right of it the code would be a little more complicated:</p>
<pre><code>pos = url.rfind('.')  # find rightmost dot
if pos >= 0:          # found one
    url = url[:pos]
</code></pre>
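<p>If the input may also carry a scheme or path (the question is tagged urlparse), the hostname can be isolated first - a sketch, assuming Python 3's <code>urllib.parse</code>:</p>

```python
from urllib.parse import urlsplit

def strip_last_label(url):
    # urlsplit only finds a hostname when a scheme like http:// is present,
    # so fall back to the raw string otherwise.
    host = urlsplit(url).hostname or url
    pos = host.rfind('.')
    return host[:pos] if pos >= 0 else host

print(strip_last_label('http://www.google.com/search'))  # www.google
print(strip_last_label('www.google.com'))                # www.google
```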
| 3 | 2016-07-29T11:52:01Z | [
"python",
"urlparse"
] |
How to remove .com from an url in python? | 38,658,085 | <p>I want to remove the top-level domain from a URL.
For example, the user entered www.google.com
but I only need www.google</p>
<p>How can I do this in Python?
Thanks </p>
| 0 | 2016-07-29T11:48:56Z | 38,658,372 | <p>To solve this without having the problem of dealing with domain name, you can look for the dots from left hand side and stop at the second dot.</p>
<pre><code>t = 'www.google.com'
a = t.split('.')[1]
pos = t.find(a)
t = t[:pos+len(a)]
# t is now 'www.google'
</code></pre>
| 0 | 2016-07-29T12:03:11Z | [
"python",
"urlparse"
] |
How to remove .com from an url in python? | 38,658,085 | <p>I want to remove the top-level domain from a URL.
For example, the user entered www.google.com
but I only need www.google</p>
<p>How can I do this in Python?
Thanks </p>
| 0 | 2016-07-29T11:48:56Z | 38,658,634 | <p>What you need here is <code>rstrip</code> function.</p>
<p>Try this code:</p>
<pre><code>url = 'www.google.com'
url2 = 'www.google'
new_url = url.rstrip('.com')
print (new_url)
new_url2 = url2.rstrip('.com')
print (new_url2)
</code></pre>
<p>Note that <code>rstrip</code> does not remove a literal suffix: it strips any run of trailing characters drawn from the given set (here <code>.</code>, <code>c</code>, <code>o</code> and <code>m</code>), so it only happens to work on these examples. Check these <a href="http://python-reference.readthedocs.io/en/latest/docs/str/rstrip.html" rel="nofollow">docs</a>.
Also check <a href="http://python-reference.readthedocs.io/en/latest/docs/str/strip.html" rel="nofollow">strip</a> and <a href="http://python-reference.readthedocs.io/en/latest/docs/str/lstrip.html" rel="nofollow">lstrip</a> functions.</p>
<h1>UPDATE</h1>
<p>As @SteveJessop pointed out, the above example <strong>is NOT the right solution</strong>, so I'm submitting another one. Though it's related to another answer here, it first checks whether the string ends with '.com'.</p>
<pre><code>url = 'www.foo.com'
if url.endswith('.com'):
    url = url[:-4]
print (url)
</code></pre>
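<p>On Python 3.9+ there is also <code>str.removesuffix</code>, which removes the literal suffix (or leaves the string untouched) without any slicing arithmetic:</p>

```python
url = 'www.foo.com'
print(url.removesuffix('.com'))           # www.foo
print('www.google'.removesuffix('.com'))  # www.google (unchanged)
```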
| -1 | 2016-07-29T12:16:26Z | [
"python",
"urlparse"
] |
how to fix "list indices must be integers, not list" | 38,658,135 | <p>Hi, I would like to know what is wrong with this code</p>
<pre><code>f = open('test.txt', 'a+')
yourResult = [line.split(',') for line in f.readlines()]
for answer in yourResult:
    print (yourResult[answer])
    a = raw_input('What Was That')
    Format = (answer + ' : ' + a + ', ')
    f.write(Format)
    print (Format)
File = open('test.txt', 'r')
</code></pre>
| 0 | 2016-07-29T11:51:32Z | 38,658,219 | <p>When you use a for loop this way, <code>answer</code> is one of the elements of yourResult, not the index of such an element.<br>
To do what you want with indexes, do :</p>
<pre><code>for answer in range(len(yourResult)):
    print(yourResult[answer])
</code></pre>
<p>In this case, you'll look inside yourResult by using indexes.</p>
<p>If not, you can do :</p>
<pre><code>for answer in yourResult:
    print(answer)
</code></pre>
<p>In this case, you loop directly at the elements inside the array, thus not needing to access them through indexes.</p>
| 0 | 2016-07-29T11:55:47Z | [
"python",
"typeerror"
] |
how to fix "list indices must be integers, not list" | 38,658,135 | <p>Hi, I would like to know what is wrong with this code</p>
<pre><code>f = open('test.txt', 'a+')
yourResult = [line.split(',') for line in f.readlines()]
for answer in yourResult:
    print (yourResult[answer])
    a = raw_input('What Was That')
    Format = (answer + ' : ' + a + ', ')
    f.write(Format)
    print (Format)
File = open('test.txt', 'r')
</code></pre>
| 0 | 2016-07-29T11:51:32Z | 38,658,234 | <p><code>yourResult[answer]</code> can't work, the <code>[]</code> are expecting integer. when you do:</p>
<p><code>for answer in yourResult:</code>, <code>answer</code> is a <strong>list</strong>(thanks bruno desthuilliers).</p>
<p>You should do: </p>
<pre><code>for answer in yourResult:
    print (answer)
</code></pre>
<p>Here, answer will be <code>yourResult[0]</code>, then <code>yourResult[1]</code>, etc. </p>
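<p>If you do need a counter alongside each element, <code>enumerate</code> gives you both at once - a small sketch:</p>

```python
yourResult = [['a', 'b'], ['c', 'd']]
for index, answer in enumerate(yourResult):
    print(index, answer)
# 0 ['a', 'b']
# 1 ['c', 'd']
```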
| 1 | 2016-07-29T11:56:42Z | [
"python",
"typeerror"
] |
how to fix "list indices must be integers, not list" | 38,658,135 | <p>Hi, I would like to know what is wrong with this code</p>
<pre><code>f = open('test.txt', 'a+')
yourResult = [line.split(',') for line in f.readlines()]
for answer in yourResult:
    print (yourResult[answer])
    a = raw_input('What Was That')
    Format = (answer + ' : ' + a + ', ')
    f.write(Format)
    print (Format)
File = open('test.txt', 'r')
</code></pre>
| 0 | 2016-07-29T11:51:32Z | 38,658,301 | <p>Python's <code>for</code> loop doesn't yield indices but the item in the sequence itself, so here, inside the loop, <code>answer</code> is already an element of <code>yourResult</code>. IOW, you want:</p>
<pre><code>for answer in yourResult:
    print (answer)
</code></pre>
<p>As a side note:</p>
<p>1/ a <code>file</code> object is an iterable, so you don't need to use <code>readlines()</code>, you can (and should) directly iterate over the file (it will avoid loading the whole content in memory):</p>
<p>2/ <code>open()</code> is context manager which takes care of properly closing the file.</p>
<p>The clean version of your code would then be:</p>
<pre><code>with open('test.txt', 'r') as f:
    yourResult = [line.split(',') for line in f]

for answer in yourResult:
    print(answer)
</code></pre>
| 1 | 2016-07-29T11:59:39Z | [
"python",
"typeerror"
] |
how to fix "list indices must be integers, not list" | 38,658,135 | <p>Hi, I would like to know what is wrong with this code</p>
<pre><code>f = open('test.txt', 'a+')
yourResult = [line.split(',') for line in f.readlines()]
for answer in yourResult:
    print (yourResult[answer])
    a = raw_input('What Was That')
    Format = (answer + ' : ' + a + ', ')
    f.write(Format)
    print (Format)
File = open('test.txt', 'r')
</code></pre>
| 0 | 2016-07-29T11:51:32Z | 38,658,374 | <p>You don't need to use </p>
<pre><code>print(yourResult[answer])
</code></pre>
<p>using</p>
<pre><code>print(answer)
</code></pre>
<p>will do what you want.
Since yourResult is a list, yourResult[x] expects x to be an integer from 0 to len(yourResult)-1.
By doing</p>
<pre><code>for answer in yourResult
</code></pre>
<p>You are iteratively setting answer to each item in yourResult.</p>
| 0 | 2016-07-29T12:03:20Z | [
"python",
"typeerror"
] |
Python Scrapy 301 redirects | 38,658,247 | <p>I have a little problem printing the redirected URLs (the new URLs after a 301 redirection) when scraping a given website. My idea is to only print them, not scrape them. My current piece of code is:</p>
<pre><code>import scrapy
import os
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'rust'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        #if response.status == 301:
        print response.url
</code></pre>
<p>However this does not print the redirected urls. Any help will be appreciated.
Thank you.</p>
| 0 | 2016-07-29T11:57:07Z | 38,659,886 | <p>To parse any responses that are not 200 you need to add <code>handle_httpstatus_list</code> parameter to your spider. In your case something like: </p>
<pre><code>class MySpider(scrapy.Spider):
    handle_httpstatus_list = [301]
    ... # rest of the spider
</code></pre>
| 3 | 2016-07-29T13:15:26Z | [
"python",
"scrapy"
] |
Django CMS - Show different content for users and guests in same template | 38,658,253 | <p>I would like to have different content for users and guests in my home page's template using <strong>Django 1.9</strong> and <strong>Django CMS 3.3.1</strong>.</p>
<p>It could be accomplished by making subpages and showing the corresponding content in the ancestor based on an authentication conditional, but that makes the page structure overly complicated.</p>
<p>Is there an easy way of adding these <strong>placeholders</strong> straight to the <strong>template</strong>?</p>
<p><strong>I have tried this:</strong></p>
<pre><code>{% extends "base.html" %}
{% load cms_tags %}

{% block title %}{% page_attribute "page_title" %}{% endblock title %}

{% block content %}
    {% if not user.is_authenticated %}
        {% placeholder "guests" %}
    {% endif %}
    {% if user.is_authenticated %}
        {% placeholder "authenticated" %}
    {% endif %}
    {% placeholder "content" %}
{% endblock content %}
</code></pre>
<p>But as I am authenticated when I'm editing the content, I cannot access the <code>guests</code> placeholder.</p>
| 0 | 2016-07-29T11:57:16Z | 38,659,327 | <p><strong>Try this:</strong></p>
<pre><code>{% block content %}
    {% if request.toolbar.build_mode or request.toolbar.edit_mode %}
        {% placeholder "guests" %}
        {% placeholder "authenticated" %}
    {% else %}
        {% if not user.is_authenticated %}
            {% placeholder "guests" %}
        {% endif %}
        {% if user.is_authenticated %}
            {% placeholder "authenticated" %}
        {% endif %}
    {% endif %}
    {% placeholder "content" %}
{% endblock content %}
</code></pre>
<p>I have some experience with Django CMS but don't know if this will work. The idea is to check if we're in edit mode by inspecting corresponding request variables. See <a href="http://stackoverflow.com/questions/28191037/django-cms-3-detect-if-i-am-facing-structure-or-content">this answer</a>.</p>
| 3 | 2016-07-29T12:48:46Z | [
"python",
"django",
"django-templates",
"django-cms"
] |
Write to file from different functions(python) | 38,658,376 | <p>is it possible to write to a single file from different function python.</p>
<pre><code>from __future__ import print_function
f = open("txt.txt", "wb")
def f1():
...write to txt.txt
def f2():
...write to txt.txt
</code></pre>
<p>is it possible?</p>
| 0 | 2016-07-29T12:03:24Z | 38,658,680 | <p>Just taking the previous suggestions and putting it into code. Thanks all.</p>
<p><strong>functions.py:</strong></p>
<pre><code>def f1(file):
file.write("Function one.")
def f2(file):
file.write("Function two.")
</code></pre>
<p><strong>main.py:</strong></p>
<pre><code>from functions import f1, f2
with open('text.txt','w') as f:
f1(f)
f2(f)
</code></pre>
| 1 | 2016-07-29T12:19:09Z | [
"python"
] |
Create blob from URL in gae using python | 38,658,417 | <p>I'm trying to create a blob from an image URL but I can't figure out how to do that. </p>
<p>I've already read the documentation from Google about creating blobs, but it only talks about creating a blob from a form using <code>blobstore.create_upload_url('/upload_photo')</code>.
I read some questions on here but didn't find anything that could help me. </p>
<p>My app has a list of image URLs and I want to save these images into the blobstore so I will be able to serve them afterwards. I think the blobstore is the solution, but if there is a better way to do this, please tell me!</p>
<p><strong>EDIT</strong>
I'm trying to use google cloud storage:</p>
<pre><code># imports assumed (not shown in the original snippet)
import mimetypes
import webapp2
import cloudstorage
from google.appengine.api import urlfetch, images
from google.appengine.ext import blobstore

class Upload(webapp2.RequestHandler):
    def get(self):
        self.response.write('Upload blob from url')
        url = 'https://500px.com/photo/163151085/evening-light-by-jack-resci'
        url_preview = 'https://drscdn.500px.org/photo/163151085/q%3D50_w%3D140_h%3D140/d3b8d92296f9381a91f6d41b1c607c92?v=3'
        result = urlfetch.fetch(url_preview)
        if result.status_code == 200:
            doSomethingWithResult(result.content)
        self.response.write(url_preview)

def doSomethingWithResult(content):
    gcs_file_name = '/%s/%s' % ('provajack-1993', 'prova.jpg')
    content_type = mimetypes.guess_type('prova.jpg')[0]
    with cloudstorage.open(gcs_file_name, 'w', content_type=content_type, options={b'x-goog-acl': b'public-read'}) as f:
        f.write(content)
    return images.get_serving_url(blobstore.create_gs_key('/gs' + gcs_file_name))
</code></pre>
<p>(found on Stack Overflow) but this code gives me an error: </p>
<pre><code>*File "/base/data/home/apps/e~places-1993/1.394547465865256081/main.py", line 54, in doSomethingWithResult
with cloudstorage.open(gcs_file_name, 'w', content_type=content_type, options={b'x-goog-acl': b'public-read'}) as f:
AttributeError: 'module' object has no attribute 'open'*
</code></pre>
<p>I can't understand why. Do I have to set something in Cloud Storage?</p>
| 0 | 2016-07-29T12:05:28Z | 38,658,773 | <p>The <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/functions#open" rel="nofollow"><code>cloudstorage.open()</code></a> clearly exists, so the error likely indicates some library/naming conflict in your environment.</p>
<p>A method for debugging such conflict is described in this post: <a href="http://stackoverflow.com/questions/21167528/google-cloud-sdk-fatal-errors-on-both-update-and-attempted-reinstall-mac-osx-1">google-cloud-sdk - Fatal errors on both update and attempted reinstall Mac OSX 10.7.5</a></p>
| 0 | 2016-07-29T12:23:35Z | [
"python",
"python-2.7",
"google-app-engine",
"blobstore",
"google-cloud-datastore"
] |
recursively collect string blocks in python | 38,658,470 | <p>I have a custom data file formatted like this:</p>
<pre><code>{
    data = {
        friends = {
            max = 0 0,
            min = 0 0,
        },
        family = {
            cars = {
                van = "honda",
                car = "ford",
                bike = "trek",
            },
            presets = {
                location = "italy",
                size = 10,
                travelers = False,
            },
            version = 1,
        },
    },
}
</code></pre>
<p>I want to collect the blocks of data, meaning the strings between each set of {}, while maintaining a hierarchy. This data is not a typical JSON format, so that is not a possible solution. </p>
<p>My idea was to create a class object like so</p>
<pre><code>class Block:
    def __init__(self, header, children):
        self.header = header
        self.children = children
</code></pre>
<p>Where I would then loop through the data line by line, 'somehow' collecting the necessary data, so my resulting output would look something like this...</p>
<pre><code>Block("data = {}", [
    Block("friends = {max = 0 0,\n min = 0 0,}", []),
    Block("family = {version = 1}", [...])
])
</code></pre>
<p>In short I'm looking for help on ways I can serialize this into useful data I can then easily manipulate. So my approach is to break into objects by using the {} as dividers.
If anyone has suggestions on ways to better approach this I'm all up for ideas. Thank you again.</p>
<p>So far I've just implemented the basic snippets of code</p>
<pre><code>class Block:
    def __init__(self, content, children):
        self.content = content
        self.children = children

def GetBlock(strArr=[]):
    print len(strArr)
    # blocks = []
    blockStart = "{"
    blockEnd = "}"

with open(filepath, 'r') as file:
    data = file.readlines()
    blocks = GetBlock(strArr=data)
</code></pre>
| 1 | 2016-07-29T12:08:29Z | 38,659,444 | <p>You can create a <code>to_block</code> function that takes the lines from your file as an iterator and recursively creates a nested dictionary from those. (Of course you could also use a custom <code>Block</code> class, but I don't really see the benefit in doing so.)</p>
<pre><code>def to_block(lines):
    block = {}
    for line in lines:
        if line.strip().endswith(("}", "},")):
            break
        key, value = map(str.strip, line.split(" = "))
        if value.endswith("{"):
            value = to_block(lines)
        block[key] = value
    return block
</code></pre>
<p>When calling it, you have to skip the first line, though. Also, evaluating the "leaves" to e.g. numbers or strings is left as an exercise to the reader. </p>
<pre><code>>>> to_block(iter(data.splitlines()[1:]))
{'data': {'family': {'version': '1,',
'cars': {'bike': '"trek",', 'car': '"ford",', 'van': '"honda",'},
'presets': {'travelers': 'False,', 'size': '10,', 'location': '"italy",'}},
'friends': {'max': '0 0,', 'min': '0 0,'}}}
</code></pre>
<p>Or when reading from a file:</p>
<pre><code>with open("data.txt") as f:
    next(f)  # skip first line
    res = to_block(f)
</code></pre>
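<p>For the leaf evaluation left as an exercise, one possible sketch uses <code>ast.literal_eval</code> with a plain-string fallback (the bare <code>0 0</code> pairs are not valid Python literals, so they stay strings):</p>

```python
import ast

def parse_leaf(raw):
    # The block format leaves a trailing comma on every value.
    raw = raw.rstrip(',').strip()
    try:
        # Handles numbers, booleans and quoted strings.
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw  # e.g. the '0 0' pairs

print(parse_leaf('10,'))        # 10
print(parse_leaf('"italy",'))   # italy
print(parse_leaf('0 0,'))       # 0 0
```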
<hr>
<p>Alternatively, you can do some preprocessing to transform that string into a JSON(-ish) string and then use <code>json.loads</code>. However, I would not go all the way here but instead just wrap the values into <code>""</code> (and replace the original <code>"</code> with <code>'</code> before that), otherwise there is too much risk of accidentally turning a string with spaces into a list or similar. You can sort those out once you've created the JSON data.</p>
<pre><code>>>> data = data.replace('"', "'")
>>> data = re.sub(r'= (.+),$', r'= "\1",', data, flags=re.M)
>>> data = re.sub(r'^\s*(\w+) = ', r'"\1": ', data, flags=re.M)
>>> data = re.sub(r',$\s*}', r'}', data, flags=re.M)
>>> json.loads(data)
{'data': {'family': {'version': '1',
'presets': {'size': '10', 'travelers': 'False', 'location': "'italy'"},
'cars': {'bike': "'trek'", 'van': "'honda'", 'car': "'ford'"}},
'friends': {'max': '0 0', 'min': '0 0'}}}
</code></pre>
| 1 | 2016-07-29T12:54:37Z | [
"python"
] |
recursively collect string blocks in python | 38,658,470 | <p>I have a custom data file formatted like this:</p>
<pre><code>{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}
</code></pre>
<p>I want to collect the blocks of data, meaning the string between each set of {}, while maintaining a hierarchy. This data is not a typical JSON format, so that is not a possible solution. </p>
<p>My idea was to create a class object like so</p>
<pre><code>class Block:
def __init__(self, header, children):
self.header = header
self.children = children
</code></pre>
<p>Where i would then loop through the data line by line 'somehow' collecting the necessary data so my resulting output would like something like this...</p>
<pre><code>Block("data = {}", [
Block("friends = {max = 0 0,\n min = 0 0,}", []),
Block("family = {version = 1}", [...])
])
</code></pre>
<p>In short, I'm looking for help on ways I can serialize this into useful data that I can then easily manipulate. My approach is to break it into objects using the {} as dividers.
If anyone has suggestions on ways to better approach this, I'm all up for ideas. Thank you again.</p>
<p>So far I've just implemented the basic snippets of code</p>
<pre><code>class Block:
def __init__(self, content, children):
self.content = content
self.children = children
def GetBlock(strArr=[]):
print len(strArr)
# blocks = []
blockStart = "{"
blockEnd = "}"
with open(filepath, 'r') as file:
data = file.readlines()
blocks = GetBlock(strArr=data)
</code></pre>
| 1 | 2016-07-29T12:08:29Z | 38,659,653 | <p>You can also do this with <code>ast</code> or <code>json</code> with the help of regex substitutions.</p>
<pre><code>import re
a = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}"""
#with ast
a = re.sub("(\w+)\s*=\s*", '"\\1":', a)
a = re.sub(":\s*((?:\d+)(?: \d+)+)", lambda x:':[' + x.group(1).replace(" ", ",") + "]", a)
import ast
print ast.literal_eval(a)
#{'data': {'friends': {'max': [0, 0], 'min': [0, 0]}, 'family': {'cars': {'car': 'ford', 'bike': 'trek', 'van': 'honda'}, 'presets': {'travelers': False, 'location': 'italy', 'size': 10}, 'version': 1}}}
#with json
import json
a = re.sub(",(\s*\})", "\\1", a)
a = a.replace(":True", ":true").replace(":False", ":false").replace(":None", ":null")
print json.loads(a)
#{u'data': {u'friends': {u'max': [0, 0], u'min': [0, 0]}, u'family': {u'cars': {u'car': u'ford', u'bike': u'trek', u'van': u'honda'}, u'presets': {u'travelers': False, u'location': u'italy', u'size': 10}, u'version': 1}}}
</code></pre>
| 1 | 2016-07-29T13:03:36Z | [
"python"
] |
Indexing many JSON objects into Elasticsearch - the canonical way | 38,658,471 | <p>Here's a scenario I keep facing, and I am in doubt whether the solution I take is the canonical/smart one. Assume you have a file where each line is a valid JSON object. Furthermore, each object contains the fields <code>type</code> and <code>id</code>, and the (<code>type</code>, <code>id</code>) pairs are unique. My goal is to index all the objects into an index on an ES cluster. So far I took two approaches:</p>
<p>Using the <code>bulk</code> API together with <code>jq</code>, with something like: </p>
<pre><code>$ cat foo.json | jq -c '. | {"index": {"_index": "your_test_index", "_type": "doc_type"}}, .' | curl -XPOST localhost:9200/_bulk --data-binary @-
</code></pre>
<p>This works very nicely, but it is super slow.</p>
<p>I also tried to use the Python client, but I still have to read the file line by line and index the documents one by one. </p>
<p>Is there some way to "push" the complete file and direct ES to process all lines the same way? Or in other words, what is the efficient way to index a LARGE amount of JSON objects in a batch-processing fashion?</p>
| 2 | 2016-07-29T12:08:30Z | 38,658,606 | <p>Definitely the <code>bulk</code> approach. But you need to put a bit more work into this, as it's not as easy as creating one file, sending it to ES and expecting it to deal with it.</p>
<p>If the file is too big, of course it will struggle.
Do please read this section of the documentation, especially the last part where it describes how you'd need to decide how large a bulk batch needs to be: <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/bulk.html" rel="nofollow">https://www.elastic.co/guide/en/elasticsearch/guide/current/bulk.html</a></p>
<p>Each cluster has its own characteristics: each can deal with a certain batch size, and even a certain number of concurrent batches. It depends on your specifics, so do please test this and determine the best numbers for your particular use case.</p>
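<p>As a hedged sketch (not part of the original answer): the official Python client ships a streaming bulk helper that consumes a lazy generator of actions, so the whole file is never held in memory. The index/type names below are taken from the question; the <code>chunk_size</code> of 500 is just the helper's default and is exactly the knob the linked documentation tells you to tune. Only the pure action generator is exercised here; the commented-out part needs a live cluster:</p>

```python
import json

def generate_actions(lines, index="your_test_index", doc_type="doc_type"):
    """Lazily yield one bulk action per JSON line, without reading the whole file."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        yield {"_index": index, "_type": doc_type, "_source": json.loads(line)}

# With a live cluster you would stream the actions through the helper,
# tuning chunk_size as described in the documentation linked above:
#
#   from elasticsearch import Elasticsearch
#   from elasticsearch.helpers import streaming_bulk
#
#   es = Elasticsearch()
#   with open("foo.json") as f:
#       for ok, result in streaming_bulk(es, generate_actions(f), chunk_size=500):
#           if not ok:
#               print(result)

sample = '{"type": "a", "id": 1}\n{"type": "b", "id": 2}\n'
actions = list(generate_actions(sample.splitlines()))
print(len(actions))  # 2
```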
| 0 | 2016-07-29T12:15:25Z | [
"python",
"json",
"elasticsearch",
"jq"
] |
Calculating circular motion in a tuple array | 38,658,472 | <p>I got a project to deliver for school.
The project is about systems based on random movement; my mate and I chose to research a double pendulum setup (the movement is not random but chaotic).
Our research question investigates whether the system can reach a state of periodic motion at any point, and how long that takes.
My question is how we can detect circular motion in an array filled with pairs of two values from the model.
The array will be filled like this: (theta1, theta2), the 2 angles of the 2 pendulum rods.
The rest of the variables, such as the lengths of the pendulum rods or the 2 masses, are known, so from those angles we can calculate the state of the rest of the system.
Each array element is added based on the time of the movement; the time difference between consecutive array elements is 0.05 seconds, so the time of each array element can be calculated as n*0.05.</p>
<p><a href="http://i.stack.imgur.com/ZeHbF.gif" rel="nofollow"><img src="http://i.stack.imgur.com/ZeHbF.gif" alt="Double Pendulum"></a></p>
<p>We can easily export the data-pairs array from the double pendulum model we developed using EJS and analyze it with a script written in Python or something else; we just don't know the best way to approach this.
I hope the explanation was clear. Thanks for helping! :) </p>
| 0 | 2016-07-29T12:08:33Z | 38,660,237 | <p><strong>EDIT:</strong> OP clarified the question and stated that he is searching for periodic movement, not <code>m1</code> or <code>m2</code> <em>flipping over</em>.</p>
<p>The approach below refers to <em>flip detection</em> for one of the two masses at the double pendulum.</p>
<h2>Flip detection</h2>
<p>Detecting the point in time (or the simulation time slice) where the pendulum has <em>flipped over</em> depends on how you define an actual <em>flip</em>. Let the pendulum's center be positiond at <code>(0, 0)</code>.</p>
<p><em>Please note that this is only a rough thought that can serve as a starting point for more exact calculations for flip detection.</em></p>
<h3>Single pendulum case</h3>
<p>A single pendulum can be considered to have flipped over, if the <code>x</code> coordinate changes its sign while its <code>y</code> coordinate is positive. This is, because we know, that it will continue its movement into the same direction once it crossed the zenith.</p>
<h3>Double pendulum case</h3>
<p>The definition of a <em>flip</em> is not as simple for a double pendulum case.</p>
<p>As OP probably knows, a double pendulum has <em>chaotic motion</em> and can only be solved numerically. I.e., its trajectory cannot be predicted.</p>
<p><a href="http://i.stack.imgur.com/R2VhX.gif" rel="nofollow"><img src="http://i.stack.imgur.com/R2VhX.gif" alt="Idealized double pendulum"></a></p>
<p><em>(Image by 100Miezekatzen taken from Wikipedia, licensed under Creative Commons)</em></p>
<p>For the case of <code>m1</code> flipping over, roughly the same conditions hold as for the single pendulum case. However, after crossing the zenith, <code>m1</code> can still be forced to invert its (angular) movement direction by the forces of <code>m2</code> under certain conditions. Thus, it is insufficent to only inspect whether the <code>x</code> coordinate changes its sign. Instead, we have to inspect a longer sequence of observations (from the simulation period) and make sure that the direction (i.e. <code>theta1</code> constantly decreases or increases) remains the same over that period. Depending on the masses and lengths in the setup, we can surely calculate the actual interval for <code>theta1</code> where it can still be forced to re-cross the zenith at positive <code>y</code>, but it is probably sufficient to search for a sequence in which all of these hold:</p>
<ul>
<li><code>m1</code> has covered an angular distance of roughly <code>pi/2</code> (a quarter circle) into the same direction</li>
<li>the <code>y</code> coordinate of <code>m1</code> is positive over the entire sequence</li>
<li>the sign of the <code>x</code> coordinate of <code>m1</code> changes exactly once at the center of the sequence</li>
</ul>
<p>For the case of <code>m2</code> flipping over, we would have to define similar conditions that have to hold during the sequence to be found, but always in relation to <code>m1</code>. Let me just make a quick guess:</p>
<ul>
<li><code>m2.y - m1.y</code> is positive</li>
<li><code>theta2 - theta1</code> has only increased or decreased for the entire sequence (movement direction is constant relative to <code>m1</code>)</li>
<li><code>m2.x - m1.x</code> changes sign exactly once in this sequence (<code>m2</code> crosses zenith over <code>m1</code>)</li>
<li>... a few more?</li>
</ul>
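<p>A rough sketch of how the single-pendulum flip condition above could be checked on the exported angle series (an assumption-laden illustration, not part of the original answer): <code>theta</code> is assumed to be measured from the downward vertical, so the bob sits at <code>(sin theta, -cos theta)</code> relative to the pivot, and the zenith is crossed where <code>x</code> changes sign while <code>y</code> is positive. The longer direction-consistency checks discussed above are omitted for brevity:</p>

```python
import math

def count_zenith_crossings(theta_series):
    """Count sign changes of x = sin(theta) while y = -cos(theta) > 0,
    i.e. while the rod points upward (theta measured from the downward vertical)."""
    crossings = 0
    prev_x = math.sin(theta_series[0])
    for theta in theta_series[1:]:
        x, y = math.sin(theta), -math.cos(theta)
        if y > 0 and prev_x * x < 0:  # crossed the vertical near the zenith
            crossings += 1
        prev_x = x
    return crossings

# theta sweeping monotonically past pi crosses the zenith exactly once:
print(count_zenith_crossings([2.0, 2.5, 3.0, 3.3, 3.6]))  # 1
```

<p>For <code>m2</code>, the same check could be applied to the relative coordinates <code>m2 - m1</code> and the relative angle <code>theta2 - theta1</code>, as sketched in the bullet points above.</p>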
| 2 | 2016-07-29T13:33:04Z | [
"python",
"arrays",
"physics"
] |
Changing iterable variable during loop | 38,658,490 | <p>Let <code>it</code> be an iterable element in python.
In what cases is a change of <code>it</code> inside a loop over <code>it</code> reflected? Or more straightforward: When does something like this work?</p>
<pre><code>it = range(6)
for i in it:
it.remove(i+1)
print i
</code></pre>
<p>Leads to 0,2,4 being printed (showing the loop runs 3 times).</p>
<p>On the other hand does </p>
<pre><code>it = range(6)
for i in it:
it = it[:-2]
print it
</code></pre>
<p>lead to the output:</p>
<pre><code>[0,1,2,3]
[0,1]
[]
[]
[]
[],
</code></pre>
<p>showing the loop runs 6 times. I guess it has something to do with in-place operations or variable scope, but I cannot 100% wrap my head around it.</p>
<p><strong>Clarification:</strong> </p>
<p>One example, that doesn't work:</p>
<pre><code>it = range(6)
for i in it:
it = it.remove(i+1)
print it
</code></pre>
<p>leads to <code>None</code> being printed and an <code>AttributeError</code> ('NoneType' object has no attribute 'remove') being thrown, since <code>list.remove</code> returns <code>None</code>.</p>
| 6 | 2016-07-29T12:09:32Z | 38,658,791 | <p>When you iterate over a <code>list</code> you actually call <code>list.__iter__()</code>, which returns a <code>listiterator</code> object bound to the <code>list</code>, and then actually iterate over this <code>listiterator</code>. Technically, this:</p>
<pre><code>itt = [1, 2, 3]
for i in itt:
print i
</code></pre>
<p>is actually kind of syntactic sugar for:</p>
<pre><code>itt = [1, 2, 3]
iterator = iter(itt)
while True:
try:
        i = iterator.next()
except StopIteration:
break
print i
</code></pre>
<p>So at this point (within the loop), rebinding <code>itt</code> doesn't impact the <code>listiterator</code> (which keeps its own reference to the list), but <em>mutating</em> <code>itt</code> will obviously impact it (since both references point to the same list).</p>
<p>IOW it's the same old difference between rebinding and mutating... You'd get the same behaviour without the <code>for</code> loop:</p>
<pre><code># creates a `list` and binds it to name "a"
a = [1, 2, 3]
# get the object bound to name "a" and binds it to name "b" too.
# at this point "a" and "b" both refer to the same `list` instance
b = a
print id(a), id(b)
print a is b
# so if we mutate "a" - actually "mutate the object bound to name 'a'" -
# we can see the effect using any name refering to this object:
a.append(42)
print b
# now we rebind "a" - make it refer to another object
a = ["a", "b", "c"]
# at this point, "b" still refers to the first list, and
# "a" refers to the new ["a", "b", "c"] list
print id(a), id(b)
print a is b
# and of course if we now mutate "a", it won't reflect on "b"
a.pop()
print a
print b
</code></pre>
| 6 | 2016-07-29T12:24:36Z | [
"python",
"for-loop"
] |
Changing iterable variable during loop | 38,658,490 | <p>Let <code>it</code> be an iterable element in python.
In what cases is a change of <code>it</code> inside a loop over <code>it</code> reflected? Or more straightforward: When does something like this work?</p>
<pre><code>it = range(6)
for i in it:
it.remove(i+1)
print i
</code></pre>
<p>Leads to 0,2,4 being printed (showing the loop runs 3 times).</p>
<p>On the other hand does </p>
<pre><code>it = range(6)
for i in it:
it = it[:-2]
print it
</code></pre>
<p>lead to the output:</p>
<pre><code>[0,1,2,3]
[0,1]
[]
[]
[]
[],
</code></pre>
<p>showing the loop runs 6 times. I guess it has something to do with in-place operations or variable scope, but I cannot 100% wrap my head around it.</p>
<p><strong>Clarification:</strong> </p>
<p>One example, that doesn't work:</p>
<pre><code>it = range(6)
for i in it:
it = it.remove(i+1)
print it
</code></pre>
<p>leads to <code>None</code> being printed and an <code>AttributeError</code> ('NoneType' object has no attribute 'remove') being thrown, since <code>list.remove</code> returns <code>None</code>.</p>
| 6 | 2016-07-29T12:09:32Z | 38,658,815 | <p>In the first loop you are changing the <code>it</code> object (the internal state of the object); however, in the second loop you are reassigning <code>it</code> to another object, leaving the initial object unchanged.</p>
<p>Let's take a look at the generated bytecode:</p>
<pre><code>In [2]: def f1():
...: it = range(6)
...: for i in it:
...: it.remove(i + 1)
...: print i
...:
In [3]: def f2():
...: it = range(6)
...: for i in it:
...: it = it[:-2]
...: print it
...:
In [4]: import dis
In [5]: dis.dis(f1)
2 0 LOAD_GLOBAL 0 (range)
3 LOAD_CONST 1 (6)
6 CALL_FUNCTION 1
9 STORE_FAST 0 (it)
3 12 SETUP_LOOP 36 (to 51)
15 LOAD_FAST 0 (it)
18 GET_ITER
>> 19 FOR_ITER 28 (to 50)
22 STORE_FAST 1 (i)
4 25 LOAD_FAST 0 (it)
28 LOAD_ATTR 1 (remove)
31 LOAD_FAST 1 (i)
34 LOAD_CONST 2 (1)
37 BINARY_ADD
38 CALL_FUNCTION 1
41 POP_TOP
5 42 LOAD_FAST 1 (i)
45 PRINT_ITEM
46 PRINT_NEWLINE
47 JUMP_ABSOLUTE 19
>> 50 POP_BLOCK
>> 51 LOAD_CONST 0 (None)
54 RETURN_VALUE
In [6]: dis.dis(f2)
2 0 LOAD_GLOBAL 0 (range)
3 LOAD_CONST 1 (6)
6 CALL_FUNCTION 1
9 STORE_FAST 0 (it)
3 12 SETUP_LOOP 29 (to 44)
15 LOAD_FAST 0 (it)
18 GET_ITER
>> 19 FOR_ITER 21 (to 43)
22 STORE_FAST 1 (i)
4 25 LOAD_FAST 0 (it)
28 LOAD_CONST 2 (-2)
31 SLICE+2
32 STORE_FAST 0 (it)
5 35 LOAD_FAST 0 (it)
38 PRINT_ITEM
39 PRINT_NEWLINE
40 JUMP_ABSOLUTE 19
>> 43 POP_BLOCK
>> 44 LOAD_CONST 0 (None)
</code></pre>
<p>As you can see, <code>for</code> statement works with an iterable of <code>it</code> (<code>GET_ITER</code> instruction, <code>iter(it)</code>). Therefore, reassigning the <code>it</code> variable will not affect the loop iteration.</p>
| 4 | 2016-07-29T12:25:58Z | [
"python",
"for-loop"
] |
Changing iterable variable during loop | 38,658,490 | <p>Let <code>it</code> be an iterable element in python.
In what cases is a change of <code>it</code> inside a loop over <code>it</code> reflected? Or more straightforward: When does something like this work?</p>
<pre><code>it = range(6)
for i in it:
it.remove(i+1)
print i
</code></pre>
<p>Leads to 0,2,4 being printed (showing the loop runs 3 times).</p>
<p>On the other hand does </p>
<pre><code>it = range(6)
for i in it:
it = it[:-2]
print it
</code></pre>
<p>lead to the output:</p>
<pre><code>[0,1,2,3]
[0,1]
[]
[]
[]
[],
</code></pre>
<p>showing the loop runs 6 times. I guess it has something to do with in-place operations or variable scope, but I cannot 100% wrap my head around it.</p>
<p><strong>Clarification:</strong> </p>
<p>One example, that doesn't work:</p>
<pre><code>it = range(6)
for i in it:
it = it.remove(i+1)
print it
</code></pre>
<p>leads to <code>None</code> being printed and an <code>AttributeError</code> ('NoneType' object has no attribute 'remove') being thrown, since <code>list.remove</code> returns <code>None</code>.</p>
| 6 | 2016-07-29T12:09:32Z | 38,658,829 | <p>First, it is essential to understand what happens under the hood when you run a simple for-loop, like:</p>
<pre><code>for i in it: pass
</code></pre>
<p>At the beginning of the loop, an <em>iterator</em> is created. That iterator is the result of an implicit call to <code>iter(it)</code>. This is the <em>only</em> time the variable named <code>it</code> is referenced in the above loop. The rest of the references happen when <code>next</code> is called on that iterator, but it uses the object the iterator keeps a reference to, not the object the name <code>it</code> is bound to.</p>
<p>What does this mean for your second example?</p>
<p>Note that in your second example, you do not change the list inplace, but create a new list and bind the variable <code>it</code> to it.</p>
<p>It means the iterator keeps referencing the original list, which is unchanged.</p>
<p>In your first example, you change the original list in place, therefore calls to <code>next(iterator)</code> reflect those changes.</p>
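<p>A minimal demonstration of that last point (the variable names here are only illustrative):</p>

```python
it = [0, 1, 2, 3]
iterator = iter(it)    # the iterator keeps its own reference to this list

it = it[:-2]           # rebinds the *name* only; the iterator is unaffected
print(next(iterator))  # 0
print(next(iterator))  # 1
print(next(iterator))  # 2 -- still walking the original four-element list
print(next(iterator))  # 3

it.append(99)          # mutating the *object* now bound to "it"
print(it)              # [0, 1, 99]
```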
| 4 | 2016-07-29T12:26:33Z | [
"python",
"for-loop"
] |
Calculate with date, without import dateutil | 38,658,521 | <pre><code>from datetime import datetime
from dateutil import relativedelta
date1 = datetime.strptime(str('2011-08-15 12:00:00'), '%Y-%m-%d %H:%M:%S')
date2 = datetime.strptime(str('2012-02-15'), '%Y-%m-%d')
r = relativedelta.relativedelta(date2, date1)
r.months
</code></pre>
<p>The code above does the trick for me, but I don't want to import dateutil. Does anyone have an example for me without looping? </p>
<p>I want to subtract two dates from each other, and I want to know the difference between the two dates in whole months. </p>
| 1 | 2016-07-29T12:11:04Z | 38,658,756 | <p>Seems this post helps: <a href="http://stackoverflow.com/questions/1345827/how-do-i-find-the-time-difference-between-two-datetime-objects-in-python">How do I find the time difference between two datetime objects in python?</a></p>
<p>Simply do a subtraction of the two datetime objects; then you can get the detail you want from the diff.</p>
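<p>To be fair, a bare <code>timedelta</code> only exposes days and seconds, so for <em>whole months</em> you can instead compare the year/month/day fields directly. A minimal sketch using only the standard library (the end-of-month edge cases that <code>relativedelta</code> handles, e.g. Jan 31 to Feb 28, are ignored here):</p>

```python
from datetime import datetime

def whole_months_between(d1, d2):
    """Number of complete months from d1 to d2 (assumes d1 <= d2), stdlib only."""
    months = (d2.year - d1.year) * 12 + (d2.month - d1.month)
    # Subtract one month if the last partial month has not completed yet.
    if (d2.day, d2.time()) < (d1.day, d1.time()):
        months -= 1
    return months

date1 = datetime.strptime('2011-08-15 12:00:00', '%Y-%m-%d %H:%M:%S')
date2 = datetime.strptime('2012-02-15', '%Y-%m-%d')
print(whole_months_between(date1, date2))  # 5, matching relativedelta
```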
| 1 | 2016-07-29T12:23:08Z | [
"python",
"python-2.7"
] |
Calculate with date, without import dateutil | 38,658,521 | <pre><code>from datetime import datetime
from dateutil import relativedelta
date1 = datetime.strptime(str('2011-08-15 12:00:00'), '%Y-%m-%d %H:%M:%S')
date2 = datetime.strptime(str('2012-02-15'), '%Y-%m-%d')
r = relativedelta.relativedelta(date2, date1)
r.months
</code></pre>
<p>The code above does the trick for me, but I don't want to import dateutil. Does anyone have an example for me without looping? </p>
<p>I want to subtract two dates from each other, and I want to know the difference between the two dates in whole months. </p>
| 1 | 2016-07-29T12:11:04Z | 38,659,029 | <p>Interestingly enough, your code outputs <code>5</code>, which means, as suggested in the comments, that you are probably interested in the duration of each month and do not want to round your results. Unfortunately, the <code>timedelta</code> object will not work for you in this case because, by definition, a time difference does not hold the information you need to obtain the duration of the months of interest to you.</p>
<p>You should probably take a look here:
<a href="http://stackoverflow.com/questions/7015587/python-difference-of-2-datetimes-in-months">Python: Difference of 2 datetimes in months</a>
where they discuss a solution using <code>calendar</code> instead of <code>dateutil</code>.</p>
<p>Otherwise, if you are happy with an approximated (and rounded) estimate, you could go close enough by doing:</p>
<pre><code>DAYS_PER_MONTH = 30 # or 365.0 / 12.0 for more precision
datetime_diff = date2 - date1
print(datetime_diff.days / DAYS_PER_MONTH) # '//' for floored result
</code></pre>
<p>If you want to get back to some code that works with your data (but not with all data, because of e.g. leap years, leap seconds, etc.) have a look here:</p>
<pre><code>MONTH_NUM_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
YEAR_LENGTH = 365.25 # average year duration, including leap years
def num_days_in_months(
begin_month, end_month, month_duration=MONTH_NUM_DAYS):
begin_month, end_month = sorted((begin_month, end_month))
return sum(month_duration[begin_month:end_month])
def get_num_months(begin_date, end_date, num_days_per_year=YEAR_LENGTH):
begin_month = begin_date.month
end_month = end_date.month
month_diff = abs(end_month - begin_month)
num_days = (end_date - begin_date).days
num_days_within_year = num_days % num_days_per_year
num_months = num_days // num_days_per_year
num_days_max = num_days_in_months(begin_month, end_month)
print(num_months, month_diff)
if num_days_within_year < num_days_max:
num_months += month_diff - 1
else:
num_months += month_diff
return num_months
</code></pre>
| 0 | 2016-07-29T12:35:46Z | [
"python",
"python-2.7"
] |
Scrapy iterating over selector yields n duplicated items for number of selectors found on page | 38,658,552 | <p>I have a working scraper that I have built to collect information from a review site. The problem I'm having is that when I crawl a business page with several reviews and try to yield the items, I only get the first item n times (where n is the number of reviews the selector found).</p>
<p>I've read up a lot on generators, and I'm sure it is because I'm not thinking things through correctly.
This is a simplified snippet. Understand that I have a more complex crawler using callbacks etc, but this code generates the behavior I'm talking about.</p>
<pre><code>from scrapy import Spider
from scrapy.selector import Selector
from yelp.items import ReviewItem
class CategorySpider(Spider):
name = "yelp_search_"
allowed_domains = ["yelp.com"]
start_urls = ["http://www.yelp.com/biz/j-crew-arden"]
def parse(self, response):
sel = Selector(response)
# There are 9 particular reviews on this page
reviews_info = sel.xpath('//div[contains(@class, "review review--with-sidebar") and @itemprop="review"]')
for reviewSelector in reviews_info:
#If I print the extracted review selector here, I can confirm that only the first review selector is being used
#In other words, I expect extract first will extract the one and only result within the revewSelector
#Note: if I just do extract(), the item property is populated with a list of all 9 reviewSelectors
#i.e. a list of 9 usernames given to me 9 times
reviewitem = ReviewItem()
reviewitem["username"] = reviewSelector.xpath('//*[@itemprop="author"]/@content').extract_first()
reviewitem["userprofileurl"] = reviewSelector.xpath('//*[@class="user-display-name"]/@href').extract_first()
reviewitem["userlocation"] = reviewSelector.xpath('//*[contains(@class, "user-location responsive-hidden-small")]/text()').extract_first().strip()
reviewitem["reviewtext"] = reviewSelector.xpath('//*[@itemprop="description"]/@content').extract_first()
reviewitem["reviewrating"] = reviewSelector.xpath('//*[@itemprop="ratingValue"]/@content').extract_first()
reviewitem["reviewdate"] = reviewSelector.xpath('//*[@itemprop="datePublished"]/@content').extract_first()
reviewitem["reviewvotesuseful"] = reviewSelector.xpath('//a[@rel="useful"]/span[@class="count"]/text()').extract_first()
yield reviewitem
</code></pre>
<p>This particular code would give me 9 scraped results, but all of them are the first reviewSelector.</p>
<p>What am I doing wrong here?</p>
| 0 | 2016-07-29T12:12:45Z | 38,659,276 | <p>Once you have your "sub-selector" <code>reviewSelector</code>, you need to put <code>.</code> before your XPath to make it relative to that selector rather than to the whole document.</p>
<p>i.e. this:</p>
<pre><code>reviewSelector.xpath('//*[@itemprop="author"]/@content').extract_first()
</code></pre>
<p>should be:</p>
<pre><code>reviewSelector.xpath('.//*[@itemprop="author"]/@content').extract_first()
</code></pre>
| 0 | 2016-07-29T12:46:38Z | [
"python",
"scrapy",
"generator"
] |
add() argument after * must be a sequence, not Settings | 38,658,684 | <p>I'm trying to build a game that moves a ship left and right with the arrow keys and fires bullets when the spacebar is pressed. When I press the spacebar my game crashes and this error is shown:
</p>
<pre><code>Traceback (most recent call last):
TypeError: add() argument after * must be a sequence, not Settings
</code></pre>
<p>Here's my code:</p>
<pre><code>class Settings():
"""A class to store all settings for Alien Invasion."""
def __init__(self):
"""Initialize the game's settings."""
# Screen settings
self.screen_width = 800
self.screen_height = 480
self.bg_color = (230, 230, 230)
# Ship settings
self.ship_speed_factor = 1.5
# Bullet settings
self.bullet_speed_factor = 1
self.bullet_width = 3
self.bullet_height = 15
self.bullet_color = 60, 60, 60
import pygame
from pygame.sprite import Sprite
class Bullet(Sprite):
"""A class to manage bullets fired from the ship"""
def _init__(self, ai_settings, screen, ship):
"""Create a bullet object at the ship's current position."""
super(Bullet, self).__init__()
self.screen = screen
# Create a bullet rect at (0, 0) and then set correct position.
self.rect = pygame.Rect(0, 0, ai_settings.bullet_width, ai_settings.bullet_height)
self.rect.centerx = ship.rect.centerx
self.rect.top = ship.rect.top
# Store the bullet's position as a decimal value.
self.y = float(self.rect.y)
self.color = ai_settings.bullet_color
self.speed_factor = ai_settings.bullet_speed_factor
def update(self):
"""Move the bullet up the screen"""
# Update the decimal position of the bullet.
self.y -= self.speed_factor
# Update the rect position.
self.rect.y = self.y
def draw_bullet(self):
"""Draw the bullet to the screen."""
pygame.draw.rect(self.screen, self.color, self.rect)
import sys
import pygame
from bullet import Bullet
def check_keydown_events(event, ai_settings, screen, ship, bullets):
"""Respond to keypresses."""
if event.key == pygame.K_RIGHT:
ship.moving_right = True
elif event.key == pygame.K_LEFT:
ship.moving_left = True
elif event.key == pygame.K_SPACE:
# Create a new bullet and add it to the bullets group.
new_bullet = Bullet(ai_settings, screen, ship)
bullets.add(new_bullet)
def check_keyup_events(event, ship):
    """Respond to key releases."""
if event.key == pygame.K_RIGHT:
ship.moving_right = False
elif event.key == pygame.K_LEFT:
ship.moving_left = False
def check_events(ai_settings, screen, ship, bullets):
"""Respond to keypresses and mouse events."""
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
elif event.type == pygame.KEYDOWN:
check_keydown_events(event, ai_settings, screen, ship, bullets)
elif event.type == pygame.KEYUP:
check_keyup_events(event, ship)
</code></pre>
<p>And finally the main file:</p>
<pre><code>import pygame
from pygame.sprite import Group
from settings import Settings
from ship import Ship
import game_functions as gf
def run_game():
# Initialize pygame, settings, and screen object.
pygame.init()
ai_settings = Settings()
screen = pygame.display.set_mode(
(ai_settings.screen_width, ai_settings.screen_height))
pygame.display.set_caption("Alien Invasion")
# Make a ship.
ship = Ship(ai_settings, screen)
# Make a group to store bullets in.
bullets = Group()
# Start the main loop for the game.
while True:
# Watch the keyboard and mouse events.
gf.check_events(ai_settings, screen, ship, bullets)
ship.update()
bullets.update()
gf.update_screen(ai_settings, screen, ship, bullets)
run_game()
</code></pre>
<p>The trace: </p>
<pre><code>Traceback (most recent call last):
File "C:\Users\martin\Desktop\python_work\alien_invasion\alien_invasion.py", line 30, in <module>
run_game()
File "C:\Users\martin\Desktop\python_work\alien_invasion\alien_invasion.py", line 25, in run_game
gf.check_events(ai_settings, screen, ship, bullets)
File "C:\Users\martin\Desktop\python_work\alien_invasion\game_functions.py", line 33, in check_events
check_keydown_events(event, ai_settings, screen, ship, bullets)
File "C:\Users\martin\Desktop\python_work\alien_invasion\game_functions.py", line 15, in check_keydown_events
new_bullet = Bullet(ai_settings, screen, ship)
File "C:\Users\martin\Anaconda3\lib\site-packages\pygame\sprite.py", line 124, in __init__
self.add(*groups)
File "C:\Users\martin\Anaconda3\lib\site-packages\pygame\sprite.py", line 142, in add
self.add(*group)
TypeError: add() argument after * must be a sequence, not Settings
</code></pre>
| 2 | 2016-07-29T12:19:23Z | 38,660,821 | <p>You are missing an underscore <code>_</code> in your <code>Bullet.__init__</code> method. You currently have <code>_init__</code> when it should be <code>__init__</code>.</p>
<p>This results in Python calling the <code>Sprite.__init__</code> method with <code>ai_settings</code> as the first argument, since it cannot find any overridden <code>__init__</code> for <code>Bullet</code>. That leads to problems.</p>
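<p>A minimal stand-in that shows the effect without pygame (<code>Base</code> plays the role of <code>Sprite</code>, whose initializer accepts <code>*groups</code>; all names here are illustrative):</p>

```python
class Base(object):
    def __init__(self, *groups):
        # Stand-in for Sprite.__init__: every positional argument is a "group".
        self.groups = groups

class Broken(Base):
    def _init__(self, settings):       # single leading underscore: NOT __init__
        self.settings = settings

class Fixed(Base):
    def __init__(self, settings):      # double underscores: overrides Base.__init__
        super(Fixed, self).__init__()  # pass no groups, like Sprite subclasses do
        self.settings = settings

b = Broken("ai_settings")  # silently falls through to Base.__init__("ai_settings")
print(b.groups)            # ('ai_settings',) -- the settings object became a "group"!
f = Fixed("ai_settings")
print(f.groups)            # ()
```

<p>With the real classes, pygame then tries to treat the <code>Settings</code> instance as a sprite group, which is what produces the error in the traceback.</p>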
| 2 | 2016-07-29T14:02:52Z | [
"python",
"pygame",
"sprite",
"add"
] |