title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Python2.6 xmpp Jabber Error | 1,301,303 | <p>I am using xmpp with Python and I want to create a simple client to communicate with a Gmail ID.</p>
<pre><code>#!/usr/bin/python
import xmpp
login = 'Your.Login' # @gmail.com
pwd = 'YourPassword'
cnx = xmpp.Client('gmail.com')
cnx.connect( server=('talk.google.com',5223) )
cnx.auth(login,pwd, 'botty')
cnx.send( xmpp.Message( "YourFriend@gmail.com" ,"Hello World from Python" ) )
</code></pre>
<p>When I run the last line I get an exception</p>
<blockquote>
<p>IOError: Disconnected from server.</p>
</blockquote>
<p>Also when I run the other statements I get debug messages in the console.</p>
<p>What could be the issue and how can I resolve it ?</p>
| 4 | 2009-08-19T17:06:20Z | 4,012,698 | <p>I think you need to write it like this. I tested it with Python 2.7 and xmpppy 0.5.0rc1 and it works nicely:</p>
<pre><code>import xmpp

login = 'your.mail@gmail.com'  # @gmail.com
pwd = 'your pass'
text = 'Hello world!'
tojid = 'your.friend@gmail.com'

jid = xmpp.protocol.JID(login)
cl = xmpp.Client(jid.getDomain(), debug=[])

if cl.connect(('talk.google.com', 5223)):
    print "Connected"
else:
    print "Connection failed"

if cl.auth(jid.getNode(), pwd):
    cl.sendInitPresence()
    cl.send(xmpp.protocol.Message(tojid, text))
else:
    print "Authentication failed"
</code></pre>
| 1 | 2010-10-25T07:49:01Z | [
"python",
"xmpp"
] |
C# equivalent of rotating a list using python slice operation | 1,301,316 | <p>In python, I can take a list my_list and rotate the contents:</p>
<pre><code>>>> my_list = list(range(10))
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> new_list = my_list[1:] + my_list[:1]
>>> new_list
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>What's the equivalent way in C# to create a new list that is made up of two slices of an existing C# list? I know I can generate it by brute force if necessary.</p>
| 33 | 2009-08-19T17:08:16Z | 1,301,347 | <p>You can easily use LINQ to do this:</p>
<pre><code>// Create the list
int[] my_list = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
IEnumerable<int> new_list =
my_list.Skip(1).Concat(my_list.Take(1));
</code></pre>
<p>You could even add this as an extension method like so:</p>
<pre><code>public static IEnumerable<T> Slice<T>(this IEnumerable<T> e, int count)
{
    // Skip the first count elements, then append those same count
    // elements taken from the beginning.
    return e.Skip(count).Concat(e.Take(count));
}
</code></pre>
<p>Of course there needs to be some error checking in the above, but that's the general premise.</p>
<hr>
<p>Thinking about this more, there are definite improvements that can be made to this algorithm which would improve performance.</p>
<p>You can definitely take a shortcut if the <code>IEnumerable<T></code> instance implements <code>IList<T></code> or is an array, exploiting the fact that it is indexed.</p>
<p>Also, you can cut down on the number of iterations that the skip and take operations would require within the body of the method.</p>
<p>For example, if you have 200 items and you want to slice with a value of 199, then it requires 199 (for the initial skip) + 1 (for the remaining item) + 199 (for the take) iterations in the body of the Slice method. This can be cut down by iterating through the list once, storing the items in a list which is then concatenated to itself (requiring no iteration).</p>
<p>In this case, the trade off here is memory.</p>
<p>To that end, I propose the following for the extension method:</p>
<pre><code>public static IEnumerable<T> Slice<T>(this IEnumerable<T> source, int count)
{
    // If the enumeration is null, throw an exception.
    if (source == null) throw new ArgumentNullException("source");

    // Validate count.
    if (count < 0) throw new ArgumentOutOfRangeException("count",
        "The count property must be a non-negative number.");

    // Short circuit: if the count is 0, just return the enumeration.
    if (count == 0) return source;

    // Is this an array? If so, then take advantage of the fact it
    // is index based.
    if (source.GetType().IsArray)
    {
        // Return the array slice.
        return SliceArray((T[]) source, count);
    }

    // Check to see if it is a list.
    if (source is IList<T>)
    {
        // Return the list slice.
        return SliceList((IList<T>) source, count);
    }

    // Slice everything else.
    return SliceEverything(source, count);
}
private static IEnumerable<T> SliceArray<T>(T[] arr, int count)
{
    // Error checking has been done, but use diagnostics or code
    // contract checking here.
    Debug.Assert(arr != null);
    Debug.Assert(count > 0);

    // Return from the count to the end of the array.
    for (int index = count; index < arr.Length; index++)
    {
        // Return the items at the end.
        yield return arr[index];
    }

    // Get the items at the beginning.
    for (int index = 0; index < count; index++)
    {
        // Return the items from the beginning.
        yield return arr[index];
    }
}
private static IEnumerable<T> SliceList<T>(IList<T> list, int count)
{
    // Error checking has been done, but use diagnostics or code
    // contract checking here.
    Debug.Assert(list != null);
    Debug.Assert(count > 0);

    // Return from the count to the end of the list.
    for (int index = count; index < list.Count; index++)
    {
        // Return the items at the end.
        yield return list[index];
    }

    // Get the items at the beginning.
    for (int index = 0; index < count; index++)
    {
        // Return the items from the beginning.
        yield return list[index];
    }
}
// Helps with storing the sliced items.
internal class SliceHelper<T> : IEnumerable<T>
{
    // Creates a helper for the source, skipping the first count items.
    internal SliceHelper(IEnumerable<T> source, int count)
    {
        // Test assertions.
        Debug.Assert(source != null);
        Debug.Assert(count > 0);

        // Set up the backing store for the list of items
        // that are skipped.
        skippedItems = new List<T>(count);

        // Set the count and the source.
        this.count = count;
        this.source = source;
    }

    // The source.
    private IEnumerable<T> source;

    // The count of items to slice.
    private int count;

    // The list of items that were skipped.
    private IList<T> skippedItems;

    // Expose the accessor for the skipped items.
    public IEnumerable<T> SkippedItems { get { return skippedItems; } }

    // Needed to implement IEnumerable<T>.
    // This is not supported.
    System.Collections.IEnumerator
        System.Collections.IEnumerable.GetEnumerator()
    {
        throw new InvalidOperationException(
            "This operation is not supported.");
    }

    // Skips the items, but stores what is skipped in a list
    // which has capacity already set.
    public IEnumerator<T> GetEnumerator()
    {
        // The number of skipped items. Set to the count.
        int skipped = count;

        // Cycle through the items.
        foreach (T item in source)
        {
            // If there are items left to skip, store the item.
            if (skipped > 0)
            {
                // Store the item.
                skippedItems.Add(item);

                // Subtract one.
                skipped--;
            }
            else
            {
                // Yield the item.
                yield return item;
            }
        }
    }
}
private static IEnumerable<T> SliceEverything<T>(
    this IEnumerable<T> source, int count)
{
    // Test assertions.
    Debug.Assert(source != null);
    Debug.Assert(count > 0);

    // Create the helper.
    SliceHelper<T> helper = new SliceHelper<T>(source, count);

    // Return the helper concatenated with the skipped
    // items.
    return helper.Concat(helper.SkippedItems);
}
</code></pre>
| 18 | 2009-08-19T17:11:57Z | [
"c#",
"python",
"algorithm"
] |
C# equivalent of rotating a list using python slice operation | 1,301,316 | <p>In python, I can take a list my_list and rotate the contents:</p>
<pre><code>>>> my_list = list(range(10))
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> new_list = my_list[1:] + my_list[:1]
>>> new_list
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>What's the equivalent way in C# to create a new list that is made up of two slices of an existing C# list? I know I can generate it by brute force if necessary.</p>
| 33 | 2009-08-19T17:08:16Z | 1,301,350 | <pre><code>List<int> list1;
List<int> list2 = new List<int>(list1);
</code></pre>
<p>or you can</p>
<pre><code>list2.AddRange(list1);
</code></pre>
<p>To get a distinct list using LINQ</p>
<pre><code>List<int> distinctList = list2.Distinct<int>().ToList<int>();
</code></pre>
| 1 | 2009-08-19T17:12:19Z | [
"c#",
"python",
"algorithm"
] |
C# equivalent of rotating a list using python slice operation | 1,301,316 | <p>In python, I can take a list my_list and rotate the contents:</p>
<pre><code>>>> my_list = list(range(10))
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> new_list = my_list[1:] + my_list[:1]
>>> new_list
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>What's the equivalent way in C# to create a new list that is made up of two slices of an existing C# list? I know I can generate it by brute force if necessary.</p>
| 33 | 2009-08-19T17:08:16Z | 1,301,351 | <p>The closest thing in C# would be to use the <a href="http://msdn.microsoft.com/en-us/library/bb358985.aspx">Enumerable.Skip</a> and <a href="http://msdn.microsoft.com/en-us/library/bb503062.aspx">Enumerable.Take</a> extension methods. You could use these to build your new list.</p>
| 9 | 2009-08-19T17:12:27Z | [
"c#",
"python",
"algorithm"
] |
C# equivalent of rotating a list using python slice operation | 1,301,316 | <p>In python, I can take a list my_list and rotate the contents:</p>
<pre><code>>>> my_list = list(range(10))
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> new_list = my_list[1:] + my_list[:1]
>>> new_list
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>What's the equivalent way in C# to create a new list that is made up of two slices of an existing C# list? I know I can generate it by brute force if necessary.</p>
| 33 | 2009-08-19T17:08:16Z | 1,301,362 | <pre><code>var newlist = oldlist.Skip(1).Concat(oldlist.Take(1));
</code></pre>
| 39 | 2009-08-19T17:14:13Z | [
"c#",
"python",
"algorithm"
] |
C# equivalent of rotating a list using python slice operation | 1,301,316 | <p>In python, I can take a list my_list and rotate the contents:</p>
<pre><code>>>> my_list = list(range(10))
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> new_list = my_list[1:] + my_list[:1]
>>> new_list
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>What's the equivalent way in C# to create a new list that is made up of two slices of an existing C# list? I know I can generate it by brute force if necessary.</p>
| 33 | 2009-08-19T17:08:16Z | 18,866,531 | <p>To rotate array, do <code>a.Slice(1, null).Concat(a.Slice(null, 1))</code>. </p>
<p>Here's my stab at it. <code>a.Slice(step: -1)</code> gives a reversed copy as <code>a[::-1]</code>.</p>
<pre><code>/// <summary>
/// Slice an array as Python.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="array"></param>
/// <param name="start">start index.</param>
/// <param name="end">end index.</param>
/// <param name="step">step</param>
/// <returns></returns>
/// <remarks>
/// http://docs.python.org/2/tutorial/introduction.html#strings
/// +---+---+---+---+---+
/// | H | e | l | p | A |
/// +---+---+---+---+---+
/// 0 1 2 3 4 5
/// -6 -5 -4 -3 -2 -1
/// </remarks>
public static IEnumerable<T> Slice<T>(this T[] array,
    int? start = null, int? end = null, int step = 1)
{
    array.NullArgumentCheck("array");

    // step
    if (step == 0)
    {
        // handle gracefully
        yield break;
    }

    // step > 0
    int _start = 0;
    int _end = array.Length;

    // step < 0
    if (step < 0)
    {
        _start = -1;
        _end = -array.Length - 1;
    }

    // inputs
    _start = start ?? _start;
    _end = end ?? _end;

    // get positive index for given index
    Func<int, int, int> toPositiveIndex = (int index, int length) =>
    {
        return index >= 0 ? index : index + length;
    };

    // start
    if (_start < -array.Length || _start >= array.Length)
    {
        yield break;
    }
    _start = toPositiveIndex(_start, array.Length);

    // end
    if (_end < -array.Length - 1)
    {
        yield break;
    }
    if (_end > array.Length)
    {
        _end = array.Length;
    }
    _end = toPositiveIndex(_end, array.Length);

    // slice
    if (step > 0)
    {
        // start, end
        if (_start > _end)
        {
            yield break;
        }
        for (int i = _start; i < _end; i += step)
        {
            yield return array[i];
        }
    }
    else
    {
        // start, end
        if (_end > _start)
        {
            yield break;
        }
        for (int i = _start; i > _end; i += step)
        {
            yield return array[i];
        }
    }
}
</code></pre>
<p>nunit tests: </p>
<pre><code>[Test]
// normal cases
[TestCase(3, 5, 1, 3, 4)]
[TestCase(0, 5, 1, 0, 4)]
[TestCase(3, null, 1, 3, 9)]
[TestCase(0, null, 1, 0, 9)]
[TestCase(null, null, 1, 0, 9)]
[TestCase(0, 10, 1, 0, 9)]
[TestCase(0, int.MaxValue, 1, 0, 9)]
[TestCase(-1, null, 1, 9, 9)]
[TestCase(-2, null, 1, 8, 9)]
[TestCase(0, -2, 1, 0, 7)]
// corner cases
[TestCase(0, 0, 1, null, null)]
[TestCase(3, 5, 2, 3, 3)]
[TestCase(3, 6, 2, 3, 5)]
[TestCase(100, int.MaxValue, 1, null, null)]
[TestCase(int.MaxValue, 1, 1, null, null)]
[TestCase(-11, int.MaxValue, 1, null, null)]
[TestCase(-6, -5, 1, 4, 4)]
[TestCase(-5, -6, 1, null, null)]
[TestCase(-5, -5, 1, null, null)]
[TestCase(0, -10, 1, null, null)]
[TestCase(0, -11, 1, null, null)]
[TestCase(null, null, 100, 0, 0)]
// -ve step
[TestCase(null, null, -1, 9, 0)]
[TestCase(-7, -5, -1, null, null)]
[TestCase(-5, -7, -1, 5, 4)]
[TestCase(-5, -7, -2, 5, 5)]
[TestCase(-7, null, -1, 3, 0)]
public void Slice01(int? s, int? e, int i, int? first, int? last)
{
    var a = new[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    var slice = a.Slice(start: s, end: e, step: i).ToArray();
    Print(slice);
    if (first.HasValue)
    {
        Assert.AreEqual(first, slice.First());
    }
    if (last.HasValue)
    {
        Assert.AreEqual(last, slice.Last());
    }
}
</code></pre>
| 1 | 2013-09-18T07:45:41Z | [
"c#",
"python",
"algorithm"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 1,301,369 | <h2>Single Underscore</h2>
<p>Names, in a class, with a leading underscore are simply to indicate to other programmers that the attribute or method is intended to be private. However, nothing special is done with the name itself.</p>
<p>To quote <a href="http://www.python.org/dev/peps/pep-0008/">PEP-8</a>:</p>
<blockquote>
<p>_single_leading_underscore: weak "internal use" indicator. E.g. <code>from M import *</code> does not import objects whose name starts with an underscore.</p>
</blockquote>
<h2>Double Underscore (Name Mangling)</h2>
<p>From <a href="http://docs.python.org/tutorial/classes.html#private-variables-and-class-local-references">the Python docs</a>:</p>
<blockquote>
<p>Any identifier of the form <code>__spam</code> (at least two leading underscores, at most one trailing underscore) is textually replaced with <code>_classname__spam</code>, where <code>classname</code> is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, so it can be used to define class-private instance and class variables, methods, variables stored in globals, and even variables stored in instances private to this class on instances of other classes.</p>
</blockquote>
<p>And a warning from the same page:</p>
<blockquote>
<p>Name mangling is intended to give classes an easy way to define "private" instance variables and methods, without having to worry about instance variables defined by derived classes, or mucking with instance variables by code outside the class. Note that the mangling rules are designed mostly to avoid accidents; it still is possible for a determined soul to access or modify a variable that is considered private.</p>
</blockquote>
<h2>Example</h2>
<pre><code>>>> class MyClass():
...     def __init__(self):
...         self.__superprivate = "Hello"
...         self._semiprivate = ", world!"
...
>>> mc = MyClass()
>>> print mc.__superprivate
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: MyClass instance has no attribute '__superprivate'
>>> print mc._semiprivate
, world!
>>> print mc.__dict__
{'_MyClass__superprivate': 'Hello', '_semiprivate': ', world!'}
</code></pre>
| 635 | 2009-08-19T17:15:53Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 1,301,384 | <p>A single leading underscore is a convention: from the interpreter's point of view there is no difference whether or not a name starts with a single underscore.</p>
<p>Double leading and trailing underscores are used for special ("magic") methods, such as <code>__init__</code>, <code>__bool__</code>, etc.</p>
<p>Double leading underscores without trailing counterparts are a convention too; however, such names in a class body will be <a href="http://docs.python.org/tutorial/classes.html#private-variables" rel="nofollow">mangled</a> by the interpreter. For variables or basic function names, no difference exists.</p>
| 3 | 2009-08-19T17:17:11Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 1,301,409 | <p><code>__foo__</code>: this is just a convention, a way for the Python system to use names that won't conflict with user names.</p>
<p><code>_foo</code>: this is just a convention, a way for the programmer to indicate that the variable is private (whatever that means in Python).</p>
<p><code>__foo</code>: this has real meaning: the interpreter replaces this name with <code>_classname__foo</code> as a way to ensure that the name will not overlap with a similar name in another class.</p>
<p>No other form of underscores have meaning in the Python world.</p>
<p>There's no difference between class, variable, global, etc in these conventions.</p>
| 171 | 2009-08-19T17:21:29Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 1,301,456 | <p>Your question is good, it is not only about methods. Functions and objects in modules are commonly prefixed with one underscore as well, and can be prefixed by two.</p>
<p>But __double_underscore names are not name-mangled in modules, for example. What happens is that names beginning with one (or more) underscores are not imported if you import all from a module (from module import *), nor are the names shown in help(module).</p>
| 2 | 2009-08-19T17:31:04Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 1,301,557 | <p>Excellent answers so far but some tidbits are missing. A single leading underscore isn't exactly <em>just</em> a convention: if you use <code>from foobar import *</code>, and module <code>foobar</code> does not define an <code>__all__</code> list, the names imported from the module <strong>do not</strong> include those with a leading underscore. Let's say it's <em>mostly</em> a convention, since this case is a pretty obscure corner;-).</p>
<p>The leading-underscore convention is widely used not just for <em>private</em> names, but also for what C++ would call <em>protected</em> ones -- for example, names of methods that are fully intended to be overridden by subclasses (even ones that <strong>have</strong> to be overridden since in the base class they <code>raise NotImplementedError</code>!-) are often single-leading-underscore names to indicate to code <strong>using</strong> instances of that class (or subclasses) that said methods are not meant to be called directly.</p>
<p>For example, to make a thread-safe queue with a different queueing discipline than FIFO, one imports Queue, subclasses Queue.Queue, and overrides such methods as <code>_get</code> and <code>_put</code>; "client code" never calls those ("hook") methods, but rather the ("organizing") public methods such as <code>put</code> and <code>get</code> (this is known as the <a href="http://en.wikipedia.org/wiki/Template_method_pattern">Template Method</a> design pattern -- see e.g. <a href="http://www.catonmat.net/blog/learning-python-design-patterns-through-video-lectures/">here</a> for an interesting presentation based on a video of a talk of mine on the subject, with the addition of synopses of the transcript).</p>
| 225 | 2009-08-19T17:52:36Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 8,822,881 | <p>Sometimes you have what appears to be a tuple with a leading underscore as in </p>
<pre><code>def foo(bar):
return _('my_' + bar)
</code></pre>
<p>In this case, what's going on is that _() is an alias for a localization function that operates on text to put it into the proper language, etc. based on the locale. For example, Sphinx does this, and you'll find among the imports</p>
<pre><code>from sphinx.locale import l_, _
</code></pre>
<p>and in sphinx.locale, _() is assigned as an alias of some localization function.</p>
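<p>A minimal sketch of such an alias, using the standard library <code>gettext</code> module's no-op fallback (which returns messages untranslated):</p>

```python
import gettext

# NullTranslations is the no-op fallback: gettext() returns its input.
_ = gettext.NullTranslations().gettext

def foo(bar):
    return _("my_" + bar)

print(foo("bar"))  # my_bar
```

With a real translation catalog loaded instead of the fallback, <code>_()</code> would return the localized string.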
| 15 | 2012-01-11T16:28:22Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 12,629,901 | <p><code>._variable</code> is semiprivate and meant just for convention</p>
<p><code>.__variable</code> is often incorrectly considered superprivate, while its actual meaning is just to namemangle to <strong>prevent accidental access</strong><a href="https://www.youtube.com/watch?v=HTLu2DFOdTg&t=33m8s">[1]</a></p>
<p><code>.__variable__</code> is typically reserved for builtin methods or variables</p>
<p>You can still access <code>.__mangled</code> variables if you desperately want to. The double underscores just namemangles, or renames, the variable to something like <code>instance._className__mangled</code></p>
<p>Example:</p>
<pre><code>class Test(object):
    def __init__(self):
        self.__a = 'a'
        self._b = 'b'

>>> t = Test()
>>> t._b
'b'
</code></pre>
<p>t._b is accessible because it is only hidden by convention</p>
<pre><code>>>> t.__a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Test' object has no attribute '__a'
</code></pre>
<p>t.__a isn't found because it no longer exists due to namemangling</p>
<pre><code>>>> t._Test__a
'a'
</code></pre>
<p>By accessing <code>instance._className__variable</code> instead of just the double underscore name, you can access the hidden value</p>
| 82 | 2012-09-27T20:56:44Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 16,006,566 | <p>If one really wants to make a variable read-only, IMHO the best way would be to use property() with only getter passed to it. With property() we can have complete control over the data.</p>
<pre><code>class PrivateVarC(object):

    def get_x(self):
        return self._x

    def set_x(self, val):
        self._x = val

    rwvar = property(get_x, set_x)

    ronly = property(get_x)
<p>I understand that OP asked a little different question but since I found another question asking for 'how to set private variables' marked duplicate with this one, I thought of adding this additional info here.</p>
| 6 | 2013-04-15T01:58:14Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 25,454,077 | <p>Here is a simple illustrative example on how double underscore properties can affect an inherited class. So with the following setup:</p>
<pre><code>class parent(object):
    __default = "parent"

    def __init__(self, name=None):
        self.default = name or self.__default

    @property
    def default(self):
        return self.__default

    @default.setter
    def default(self, value):
        self.__default = value


class child(parent):
    __default = "child"
<p>if you then create a child instance in the python REPL, you will see the below</p>
<pre><code>child_a = child()
child_a.default            # 'parent'
child_a._child__default    # 'child'
child_a._parent__default   # 'parent'

child_b = child("orphan")
## this will show
child_b.default            # 'orphan'
child_b._child__default    # 'child'
child_b._parent__default   # 'orphan'
</code></pre>
<p>This may be obvious to some, but it caught me off guard in a much more complex environment.</p>
| 0 | 2014-08-22T19:15:48Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 27,481,645 | <p><strong>Single underscore at the beginning:</strong></p>
<p>Python doesn't have real private methods, so one underscore at the start of a method or attribute name means you shouldn't access this method, because it's not part of the API.</p>
<pre><code>class BaseForm(StrAndUnicode):
    def _get_errors(self):
        "Returns an ErrorDict for the data provided for the form"
        if self._errors is None:
            self.full_clean()
        return self._errors

    errors = property(_get_errors)
</code></pre>
<p>This code snippet was taken from the django source code (django/forms/forms.py). It means <code>errors</code> is a property, and it's part of the API, but the method this property calls, <code>_get_errors</code>, is "private", so you shouldn't access it.</p>
<p><strong>Two underscores at the beginning:</strong></p>
<p>This causes a lot of confusion. It should not be used to create a private method. It should be used to keep your method from being overridden by a subclass or accessed accidentally. Let's see an example:</p>
<pre><code>class A(object):
    def __test(self):
        print "I'm test method in class A"

    def test(self):
        self.__test()

a = A()
a.test()
</code></pre>
<p>Output: </p>
<pre><code>$ python test.py
I'm test method in class A
</code></pre>
<p>Now create a subclass B and do customization for __test method</p>
<pre><code>class B(A):
    def __test(self):
        print "I'm test method in class B"

b = B()
b.test()
</code></pre>
<p>Output will be....</p>
<pre><code>$ python test.py
I'm test method in class A
</code></pre>
<p>As we have seen, A.test() didn't call the B.__test() method, as we might expect. But in fact, this is the correct behavior for __. So when you create a method starting with __, it means that you don't want anyone to be able to override it; it will be accessible only from inside its own class.</p>
<p><strong>Two underscores at the beginning and at the end:</strong></p>
<p>When you see a method like <code>__this__</code>, don't call it directly: it is a method which Python calls for you, not one you call yourself. Let's take a look:</p>
<pre><code>>>> name = "test string"
>>> name.__len__()
11
>>> len(name)
11
>>> number = 10
>>> number.__add__(40)
50
>>> number + 50
60
</code></pre>
<p>There is always an operator or native function which calls these magic methods. Sometimes it's just a hook python calls in specific situations. For example <code>__init__()</code> is called when the object is created after <code>__new__()</code> is called to build the instance...</p>
<p>Let's take an example...</p>
<pre><code>class FalseCalculator(object):

    def __init__(self, number):
        self.number = number

    def __add__(self, number):
        return self.number - number

    def __sub__(self, number):
        return self.number + number

number = FalseCalculator(20)
print number + 10      # 10
print number - 20      # 40
</code></pre>
<p>For more details, the <a href="https://www.python.org/dev/peps/pep-0008/#method-names-and-instance-variables">PEP-8 guide</a> will help.</p>
<p>You can find more magic methods in Python here: <a href="http://www.rafekettler.com/magicmethods.pdf">http://www.rafekettler.com/magicmethods.pdf</a></p>
| 47 | 2014-12-15T10:10:20Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
The meaning of a single- and a double-underscore before an object name in Python | 1,301,346 | <p>I want to clear this up once and for all. Can someone please explain the exact meaning of having leading underscores before an object's name in Python? Also explain the difference between a single and a double leading underscore. Also, does that meaning stay the same whether the object in question is a variable, a function, a method, etcetera?</p>
| 668 | 2009-08-19T17:11:57Z | 28,385,489 | <p>"Private" instance variables that cannot be accessed except from inside an object don't exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.</p>
<p>Reference:
<a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references</a></p>
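<p>A short illustrative sketch (Python 3; the class and attribute names are made up) showing the single-underscore convention next to double-underscore name mangling:</p>

```python
class Spam:
    def __init__(self):
        self._internal = 1   # single underscore: "non-public" by convention only
        self.__hidden = 2    # double underscore: name-mangled to _Spam__hidden

s = Spam()
print(s._internal)       # 1 -- accessible, but treated as an implementation detail
print(s._Spam__hidden)   # 2 -- the mangled name still reaches the attribute
# s.__hidden would raise AttributeError outside the class body
```

Note that the mangling only renames the attribute; it does not make it inaccessible.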
| 1 | 2015-02-07T17:57:10Z | [
"python",
"naming-conventions",
"private",
"underscores",
"double-underscore"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,370 | <p>You should give the <a href="http://www.gnu.org/software/screen/screen.html">screen</a> utility a look. While it's not an IDE, it is a kind of window manager for the terminal -- i.e. you can have multiple windows and switch between them, which makes tasks like this much easier.</p>
| 9 | 2009-08-19T17:16:14Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,379 | <p>You can <a href="http://linux.com/archive/articles/57727" rel="nofollow">execute shell commands</a> <a href="http://www.vim.org/htmldoc/tips.html" rel="nofollow">from within <code>vim</code></a>.</p>
| 6 | 2009-08-19T17:17:02Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,385 | <p>You could run <a href="http://www.realvnc.com/products/free/4.1/man/Xvnc.html" rel="nofollow">XVNC</a> over ssh, which is actually passably responsive for doing this sort of thing and gets you a windowing GUI. I've done this quite effectively over really asthmatic Jetstart DSL services in New Zealand (128K up/ 128K down =8^P) and it's certainly responsive enough for gvim and xterm windows. Another option would be <a href="http://www.gnu.org/software/screen/" rel="nofollow">screen,</a> which lets you have multiple textual sessions open and switch between them.</p>
| 1 | 2009-08-19T17:17:28Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,418 | <p>Using emacs with python-mode you can execute the script with C-c C-c</p>
| 5 | 2009-08-19T17:23:27Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,423 | <p>Put this line in your .vimrc file:</p>
<pre><code>:map <F2> :w\|!python %<CR>
</code></pre>
<p>Now hitting <code><F2></code> will save and run your Python script.</p>
| 18 | 2009-08-19T17:24:01Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,501 | <p>When working with Vim on the console, I have found that using "tabs" in Vim, instead of having multiple Vim instances suspended in the background, makes handling multiple files in Vim more efficient. It takes a bit of getting used to, but it works really well.</p>
| 1 | 2009-08-19T17:41:08Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,550 | <p>Well, apart from using one of the more capable console editors (Emacs or vi would come to mind), why do you have to edit it on the web server itself? Just edit it remotely if constant FTP/WebDAV transfer seems too cumbersome.</p>
<p>Emacs has <a href="http://www.emacswiki.org/emacs/TrampMode" rel="nofollow">Tramp Mode</a>; gedit on Linux and BBEdit on the Mac support remote editing, too, as probably do quite a large number of other editors. In that case you would just edit on a more capable desktop and restart the script from a shell window.</p>
| 0 | 2009-08-19T17:50:44Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 1,301,621 | <p>You can try <a href="http://ipython.scipy.org/moin/" rel="nofollow">ipython</a>. Using its edit command, it will bring up your editor (nano/vim/etc); you write your script, and on exiting you're returned to the ipython prompt and the script is automatically executed.</p>
| 4 | 2009-08-19T18:03:18Z | [
"python",
"shell",
"vi"
] |
Python IDE on Linux Console | 1,301,352 | <p>This may sound strange, but I need a better way to build python scripts than opening a file with nano/vi, change something, quit the editor, and type in <code>python script.py</code>, over and over again.</p>
<p>I need to build the script on a webserver without any gui. Any ideas how can I improve my workflow?</p>
| 10 | 2009-08-19T17:12:36Z | 2,596,979 | <p>There are actually two questions here: the first asks for a console IDE for Python, and the second is about a better dev/test/deploy workflow.</p>
<p>While there are many ways you can write Python code in the console, I find a combination of screen, vim and python/ipython the best, as they are usually available on most servers. For long sessions, I find Emacs + python-mode typically involves less typing.</p>
<p>For a better workflow, I would suggest setting up a development environment. You can easily set up a Linux VM on your desktop/laptop these days - there isn't an excuse not to, even for hobby projects. That opens up a much larger selection of IDEs, such as:</p>
<ul>
<li>GUI versions of VI and friends</li>
<li>Remote file editing with <a href="http://www.gnu.org/software/tramp/" rel="nofollow">tramp</a> and testing locally with python-mode inside Emacs</li>
<li><a href="http://www.netbeans.org" rel="nofollow">http://www.netbeans.org</a></li>
<li>and of course <a href="http://eclipse.org" rel="nofollow">http://eclipse.org</a> with the <a href="http://pydev.sourceforge.net" rel="nofollow">PyDev plugin</a></li>
</ul>
<p>I would also set up an SCM to keep track of changes, so that you can do better QA and use it to deploy tested changes onto the server.</p>
<p>For example I use Mercurial for my pet projects and I simply tag my repo when it's ready and update the production server to the tag when I deploy. On the devbox, I do:</p>
<ul>
<li>(hack hack hack, test test test)</li>
<li>hg ci -m 'comment'</li>
<li>hg tag </li>
<li>hg push </li>
</ul>
<p>Then I jump onto the server and do the following when I deploy:</p>
<ul>
<li>hg update </li>
<li>restart service/webserver as needed</li>
</ul>
| 1 | 2010-04-08T01:23:21Z | [
"python",
"shell",
"vi"
] |
Python: Why are some of Queue.queue's method "unreliable"? | 1,301,416 | <p>In the <code>queue</code> class from the <code>Queue</code> module, there are a few methods, namely, <code>qsize</code>, <code>empty</code> and <code>full</code>, whose documentation claims they are "not reliable".</p>
<p>What exactly is not reliable about them?</p>
<p>I did notice that <a href="http://docs.python.org/library/queue.html">on the Python docs</a> site, the following is said about <code>qsize</code>:</p>
<blockquote>
<p>Note, qsize() > 0 doesn't guarantee
that a subsequent get() will not
block, nor will qsize() < maxsize
guarantee that put() will not block.</p>
</blockquote>
<p>I personally don't consider that behavior "unreliable". But is this what is meant by "unreliable," or is there some more sinister defect in these methods?</p>
| 6 | 2009-08-19T17:22:43Z | 1,301,445 | <p>I don't know which Queue module you're referring to; please can you provide a link?</p>
<p>One possible source of unreliability: Generally, a queue is read by one thread and written by another. If you are the only thread accessing a queue, then reliable implementations of qsize(), empty() and full() are possible. But once other threads get involved, the return value of these methods might be out-of-date by the time you test it.</p>
| 2 | 2009-08-19T17:28:50Z | [
"python",
"multithreading",
"queue"
] |
Python: Why are some of Queue.queue's method "unreliable"? | 1,301,416 | <p>In the <code>queue</code> class from the <code>Queue</code> module, there are a few methods, namely, <code>qsize</code>, <code>empty</code> and <code>full</code>, whose documentation claims they are "not reliable".</p>
<p>What exactly is not reliable about them?</p>
<p>I did notice that <a href="http://docs.python.org/library/queue.html">on the Python docs</a> site, the following is said about <code>qsize</code>:</p>
<blockquote>
<p>Note, qsize() > 0 doesn't guarantee
that a subsequent get() will not
block, nor will qsize() < maxsize
guarantee that put() will not block.</p>
</blockquote>
<p>I personally don't consider that behavior "unreliable". But is this what is meant by "unreliable," or is there some more sinister defect in these methods?</p>
| 6 | 2009-08-19T17:22:43Z | 1,301,463 | <p>Yes, the docs use "unreliable" here to convey exactly this meaning: for example, in a sense, <code>qsize</code> doesn't tell you how many entries there are "right now", a concept that is not necessarily very meaningful in a multithreaded world (except at specific points where synchronization precautions are being taken) -- it tells you how many entries it had "a while ago"... when you act upon that information, even in the very next opcode, the queue might have more entries, or fewer, or none at all maybe, depending on what other threads have been up to in the meantime (if anything;-).</p>
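<p>A minimal sketch of the safer pattern (using Python 3's <code>queue</code> module here; in Python 2 the module is spelled <code>Queue</code>): instead of checking <code>empty()</code>/<code>qsize()</code> and then acting -- which another thread can invalidate in between -- attempt the operation and handle failure:</p>

```python
import queue

q = queue.Queue()
q.put("job")

# Racy with other threads: if not q.empty(): item = q.get()
# Safer: the check and the get happen as one atomic operation.
try:
    item = q.get_nowait()
except queue.Empty:
    item = None

print(item)  # job
```

With a single thread this is equivalent to checking first, but with multiple consumers only the try/except version cannot be invalidated between the check and the get.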
| 9 | 2009-08-19T17:31:55Z | [
"python",
"multithreading",
"queue"
] |
Python: Why are some of Queue.queue's method "unreliable"? | 1,301,416 | <p>In the <code>queue</code> class from the <code>Queue</code> module, there are a few methods, namely, <code>qsize</code>, <code>empty</code> and <code>full</code>, whose documentation claims they are "not reliable".</p>
<p>What exactly is not reliable about them?</p>
<p>I did notice that <a href="http://docs.python.org/library/queue.html">on the Python docs</a> site, the following is said about <code>qsize</code>:</p>
<blockquote>
<p>Note, qsize() > 0 doesn't guarantee
that a subsequent get() will not
block, nor will qsize() < maxsize
guarantee that put() will not block.</p>
</blockquote>
<p>I personally don't consider that behavior "unreliable". But is this what is meant by "unreliable," or is there some more sinister defect in these methods?</p>
| 6 | 2009-08-19T17:22:43Z | 20,586,846 | <p>This is one case of unreliability in line with what Alex Martelli suggested:
<a href="http://stackoverflow.com/questions/20586673/joinablequeue-empty-unreliable-whats-the-alternative">JoinableQueue.empty() unreliable? What's the alternative?</a></p>
| 0 | 2013-12-14T18:41:14Z | [
"python",
"multithreading",
"queue"
] |
Setting timezone in Python | 1,301,493 | <p>Is it possible with Python to set the timezone just like this in PHP:</p>
<pre><code>date_default_timezone_set("Europe/London");
$Year = date('y');
$Month = date('m');
$Day = date('d');
$Hour = date('H');
$Minute = date('i');
</code></pre>
<p>I can't really install any other modules etc as I'm using shared web hosting.</p>
<p>Any ideas?</p>
| 21 | 2009-08-19T17:38:31Z | 1,301,528 | <pre><code>>>> import os, time
>>> time.strftime('%X %x %Z')
'12:45:20 08/19/09 CDT'
>>> os.environ['TZ'] = 'Europe/London'
>>> time.tzset()
>>> time.strftime('%X %x %Z')
'18:45:39 08/19/09 BST'
</code></pre>
<p>To get the specific values you've listed:</p>
<pre><code>>>> year = time.strftime('%Y')
>>> month = time.strftime('%m')
>>> day = time.strftime('%d')
>>> hour = time.strftime('%H')
>>> minute = time.strftime('%M')
</code></pre>
<p>See <a href="http://docs.python.org/library/time.html#time.strftime">here</a> for a complete list of directives. Keep in mind that the strftime() function will always return a string, not an integer or other type.</p>
| 38 | 2009-08-19T17:46:03Z | [
"python",
"timezone"
] |
Setting timezone in Python | 1,301,493 | <p>Is it possible with Python to set the timezone just like this in PHP:</p>
<pre><code>date_default_timezone_set("Europe/London");
$Year = date('y');
$Month = date('m');
$Day = date('d');
$Hour = date('H');
$Minute = date('i');
</code></pre>
<p>I can't really install any other modules etc as I'm using shared web hosting.</p>
<p>Any ideas?</p>
| 21 | 2009-08-19T17:38:31Z | 25,706,099 | <p>It's not an answer, but...</p>
<p>To get <code>datetime</code> components individually, it's better to use <a href="https://docs.python.org/2.7/library/datetime.html#datetime.datetime.timetuple" rel="nofollow">datetime.timetuple</a>:</p>
<pre><code>from datetime import datetime

time = datetime.now()
time.timetuple()
#-> time.struct_time(
# tm_year=2014, tm_mon=9, tm_mday=7,
# tm_hour=2, tm_min=38, tm_sec=5,
# tm_wday=6, tm_yday=250, tm_isdst=-1
#)
</code></pre>
<p>It's now easy to get the parts:</p>
<pre><code>ts = time.timetuple()
ts.tm_year
ts.tm_mon
ts.tm_mday
ts.tm_hour
ts.tm_min
ts.tm_sec
</code></pre>
| 0 | 2014-09-07T00:42:23Z | [
"python",
"timezone"
] |
I'd like some advice on packaging this as an egg and uploading it to pypi | 1,301,689 | <p>I wrote some code that I'd like to package as an egg. This is my directory structure:</p>
<pre>
src/
src/tests
src/tests/test.py # this has several tests for the movie name parser
src/torrent
src/torrent/__init__.py
src/torrent/movienameparser
src/torrent/movienameparser/__init__.py # this contains the code
</pre>
<p>I'd like to package this directory structure as an egg, and include the test file too. What should I include in the <code>setup.py</code> file so that I can have any number of namespaces, and any number of tests?</p>
<p>This is the first open source code I'd like to share. Even though, probably, I will be the only one who will find this module useful, I'd like to upload it on <code>pypi</code>. What license can I use that will allow users to do what they want with the code, with no limitations upon redistribution or modification?</p>
<p>Even though I plan on updating this egg, I'd like not to be responsible for anything (such as providing support to users). I know this may sound selfish, but this is my first open source code, so please bear with me. Will I need to provide a copy of the license? Where could I find a copy?</p>
<p>Thanks for reading all of this. </p>
| 2 | 2009-08-19T18:12:03Z | 1,302,680 | <p>It would be better to distribute it as a tarball (<code>.tar.gz</code>), not as an egg. Eggs are primarily for binary distribution, such as when using compiled C extensions. In source-only distributions, they are just unnecessary complexity.</p>
<p>If you just want to throw your code out into the world, the MIT or 3-clause BSD licenses are the most popular choice. Both include disclaimers of liability. All you have to do is include the main license in the tarball; typically as "License.txt", or similar. Optionally, you can add a small copyright notification to each source file; I encourage this, so the status of each file is obvious even without the entire archive, but some people think that's too verbose. It's a matter of personal preference.</p>
<p>The BSD license is available on <a href="http://en.wikipedia.org/wiki/BSD%5Flicenses" rel="nofollow">Wikipedia</a>, copied below:</p>
<pre><code>Copyright (c) <year>, <copyright holder>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the <organization> nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY <copyright holder> ''AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <copyright holder> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</code></pre>
| 3 | 2009-08-19T21:21:08Z | [
"python",
"setuptools",
"distutils",
"egg",
"pypi"
] |
I'd like some advice on packaging this as an egg and uploading it to pypi | 1,301,689 | <p>I wrote some code that I'd like to package as an egg. This is my directory structure:</p>
<pre>
src/
src/tests
src/tests/test.py # this has several tests for the movie name parser
src/torrent
src/torrent/__init__.py
src/torrent/movienameparser
src/torrent/movienameparser/__init__.py # this contains the code
</pre>
<p>I'd like to package this directory structure as an egg, and include the test file too. What should I include in the <code>setup.py</code> file so that I can have any number of namespaces, and any number of tests?</p>
<p>This is the first open source code I'd like to share. Even though, probably, I will be the only one who will find this module useful, I'd like to upload it on <code>pypi</code>. What license can I use that will allow users to do what they want with the code, with no limitations upon redistribution or modification?</p>
<p>Even though I plan on updating this egg, I'd like not to be responsible for anything (such as providing support to users). I know this may sound selfish, but this is my first open source code, so please bear with me. Will I need to provide a copy of the license? Where could I find a copy?</p>
<p>Thanks for reading all of this. </p>
| 2 | 2009-08-19T18:12:03Z | 1,302,722 | <p>I won't get into a licensing discussion here, but it's typical to include a LICENSE file at the root of your package source code, along with other customary things like a README, etc.</p>
<p>I usually organize packages the same way they will be installed on the target system. The standard package layout convention is explained <a href="http://docs.python.org/distutils/setupscript.html#listing-whole-packages" rel="nofollow">here.</a></p>
<p>For example, if my package is 'torrent' and it has a couple of sub-packages such as 'tests' and 'util', here's what the source tree would look like: </p>
<pre>
workspace/torrent/setup.py
workspace/torrent/torrent/__init__.py
workspace/torrent/torrent/foo.py
workspace/torrent/torrent/bar.py
workspace/torrent/torrent/...
workspace/torrent/torrent/tests/__init__.py
workspace/torrent/torrent/tests/test.py
workspace/torrent/torrent/tests/...
workspace/torrent/torrent/util/__init__.py
workspace/torrent/torrent/util/helper1.py
workspace/torrent/torrent/util/...
</pre>
<p>This 'torrent/torrent' bit seems redundant, but this is the side-effect of this standard convention and of how Python imports work.</p>
<p>Here's the very minimalist <code>setup.py</code> (more info on <a href="http://docs.python.org/distutils/setupscript.html#writing-the-setup-script" rel="nofollow">how to write the setup script</a>):</p>
<pre><code>#!/usr/bin/env python
from distutils.core import setup
setup(name='torrent',
version='0.1',
description='would be nice',
packages=['torrent', 'torrent.tests', 'torrent.util']
)
</code></pre>
To obtain a source distro, I'd then do:
<pre>$ cd workspace/torrent
$ ./setup.py sdist
</pre>
<p>This distro (<code>dist/torrent-0.1.tar.gz</code>) will be usable on its own, simply by unpacking it and running <code>setup.py install</code> or by using <code>easy_install</code> from <code>setuptools</code> toolkit. And you won't have to make several "eggs" for each supported version of Python.</p>
<p>If you really need an egg, you will need to add a dependency on <code>setuptools</code> to your <code>setup.py</code>, which will introduce an additional subcommand <code>bdist_egg</code> that generates eggs.</p>
<p>But there's another advantage of <code>setuptools</code> besides its egg-producing-qualities, it removes the need to enumerate packages in your <code>setup.py</code> with a nice helper function <code>find_packages</code>:</p>
<pre><code>#!/usr/bin/env python
from setuptools import setup, find_packages
setup(name='torrent',
version='0.1',
description='would be nice',
packages=find_packages()
)
</code></pre>
Then, to obtain an "egg", I will do:
<pre>$ cd workspace/torrent
$ ./setup.py bdist_egg
</pre>
<p>... and it will give me the egg file: <code>dist/torrent-0.1-py2.6.egg</code></p>
<p>Notice the <code>py2.6</code> suffix, this is because on my machine I have Python 2.6. If you want to please lots of people, you'd need to publish an egg for each major Python release. You don't want hordes of Python 2.5 folks with axes and spears at your doorstep, do you?</p>
<p>But you don't have to build an egg, you can still use <code>sdist</code> subcommand.</p>
<p><strong>Updated:</strong> here's <a href="http://docs.python.org/install/index.html" rel="nofollow">another useful page</a> in Python documentation that introduces <code>Distutils</code> from user's perspective.</p>
| 6 | 2009-08-19T21:29:42Z | [
"python",
"setuptools",
"distutils",
"egg",
"pypi"
] |
I'd like some advice on packaging this as an egg and uploading it to pypi | 1,301,689 | <p>I wrote some code that I'd like to package as an egg. This is my directory structure:</p>
<pre>
src/
src/tests
src/tests/test.py # this has several tests for the movie name parser
src/torrent
src/torrent/__init__.py
src/torrent/movienameparser
src/torrent/movienameparser/__init__.py # this contains the code
</pre>
<p>I'd like to package this directory structure as an egg, and include the test file too. What should I include in the <code>setup.py</code> file so that I can have any number of namespaces, and any number of tests?</p>
<p>This is the first open source code I'd like to share. Even though, probably, I will be the only one who will find this module useful, I'd like to upload it on <code>pypi</code>. What license can I use that will allow users to do what they want with the code, with no limitations upon redistribution or modification?</p>
<p>Even though I plan on updating this egg, I'd like not to be responsible for anything (such as providing support to users). I know this may sound selfish, but this is my first open source code, so please bear with me. Will I need to provide a copy of the license? Where could I find a copy?</p>
<p>Thanks for reading all of this. </p>
| 2 | 2009-08-19T18:12:03Z | 3,469,637 | <p>Include the ez_setup file from the <a href="http://peak.telecommunity.com/dist/ez_setup.py" rel="nofollow">setuptools website</a> and add, at the top of your setup.py:</p>
<pre><code>from ez_setup import use_setuptools
use_setuptools()
</code></pre>
<p>This script is a helper for people who don't have setuptools. It downloads and installs the latest version of setuptools on systems that do not have it installed.</p>
| 0 | 2010-08-12T16:04:05Z | [
"python",
"setuptools",
"distutils",
"egg",
"pypi"
] |
Counting python method calls within another method | 1,301,735 | <p>I'm actually trying to do this in Java, but I'm in the process of teaching myself Python, and it made me wonder if there was an easy/clever way to do this with wrappers or something.</p>
<p>I want to know how many times a specific method was called inside another method. For example:</p>
<pre><code>def foo(z):
#do something
return result
def bar(x,y):
#complicated algorithm/logic involving foo
return foobar
</code></pre>
<p>So for each call to bar with various parameters, I'd like to know how many times foo was called, perhaps with output like this:</p>
<pre><code>>>> print bar('xyz',3)
foo was called 15 times
[results here]
>>> print bar('stuv',6)
foo was called 23 times
[other results here]
</code></pre>
<p>edit: I realize I could just slap a counter inside bar and dump it when I return, but it would be cool if there was some magic you could do with wrappers to accomplish the same thing. It would also mean I could reuse the same wrappers somewhere else without having to modify any code inside the method.</p>
| 7 | 2009-08-19T18:19:20Z | 1,301,835 | <p>After your response - here's a way with a decorator factory...</p>
<pre><code>import inspect
def make_decorators():
# Mutable shared storage...
caller_L = []
callee_L = []
called_count = [0]
def caller_decorator(caller):
caller_L.append(caller)
def counting_caller(*args, **kwargs):
# Returning result here separate from the count report in case
# the result needs to be used...
result = caller(*args, **kwargs)
print callee_L[0].__name__, \
'was called', called_count[0], 'times'
called_count[0] = 0
return result
return counting_caller
def callee_decorator(callee):
callee_L.append(callee)
def counting_callee(*args, **kwargs):
# Next two lines are an alternative to
# sys._getframe(1).f_code.co_name mentioned by Ned...
current_frame = inspect.currentframe()
caller_name = inspect.getouterframes(current_frame)[1][3]
if caller_name == caller_L[0].__name__:
called_count[0] += 1
return callee(*args, **kwargs)
return counting_callee
return caller_decorator, callee_decorator
caller_decorator, callee_decorator = make_decorators()
@callee_decorator
def foo(z):
#do something
return ' foo result'
@caller_decorator
def bar(x,y):
# complicated algorithm/logic simulation...
for i in xrange(x+y):
foo(i)
foobar = 'some result other than the call count that you might use'
return foobar
bar(1,1)
bar(1,2)
bar(2,2)
</code></pre>
<p>And here's the output (tested with Python 2.5.2):</p>
<pre><code>foo was called 2 times
foo was called 3 times
foo was called 4 times
</code></pre>
| 1 | 2009-08-19T18:37:17Z | [
"python",
"profiling"
] |
Counting python method calls within another method | 1,301,735 | <p>I'm actually trying to do this in Java, but I'm in the process of teaching myself Python, and it made me wonder if there was an easy/clever way to do this with wrappers or something.</p>
<p>I want to know how many times a specific method was called inside another method. For example:</p>
<pre><code>def foo(z):
#do something
return result
def bar(x,y):
#complicated algorithm/logic involving foo
return foobar
</code></pre>
<p>So for each call to bar with various parameters, I'd like to know how many times foo was called, perhaps with output like this:</p>
<pre><code>>>> print bar('xyz',3)
foo was called 15 times
[results here]
>>> print bar('stuv',6)
foo was called 23 times
[other results here]
</code></pre>
<p>edit: I realize I could just slap a counter inside bar and dump it when I return, but it would be cool if there was some magic you could do with wrappers to accomplish the same thing. It would also mean I could reuse the same wrappers somewhere else without having to modify any code inside the method.</p>
| 7 | 2009-08-19T18:19:20Z | 1,301,845 | <p>This defines a decorator to do it:</p>
<pre><code>def count_calls(fn):
def _counting(*args, **kwargs):
_counting.calls += 1
return fn(*args, **kwargs)
_counting.calls = 0
return _counting
@count_calls
def foo(x):
return x
def bar(y):
foo(y)
foo(y)
bar(1)
print foo.calls
</code></pre>
| 6 | 2009-08-19T18:38:06Z | [
"python",
"profiling"
] |
Counting python method calls within another method | 1,301,735 | <p>I'm actually trying to do this in Java, but I'm in the process of teaching myself Python, and it made me wonder if there was an easy/clever way to do this with wrappers or something.</p>
<p>I want to know how many times a specific method was called inside another method. For example:</p>
<pre><code>def foo(z):
#do something
return result
def bar(x,y):
#complicated algorithm/logic involving foo
return foobar
</code></pre>
<p>So for each call to bar with various parameters, I'd like to know how many times foo was called, perhaps with output like this:</p>
<pre><code>>>> print bar('xyz',3)
foo was called 15 times
[results here]
>>> print bar('stuv',6)
foo was called 23 times
[other results here]
</code></pre>
<p>edit: I realize I could just slap a counter inside bar and dump it when I return, but it would be cool if there was some magic you could do with wrappers to accomplish the same thing. It would also mean I could reuse the same wrappers somewhere else without having to modify any code inside the method.</p>
| 7 | 2009-08-19T18:19:20Z | 1,301,926 | <p>Sounds like almost the textbook example for decorators!</p>
<pre><code>def counted(fn):
def wrapper(*args, **kwargs):
wrapper.called+= 1
return fn(*args, **kwargs)
wrapper.called= 0
wrapper.__name__= fn.__name__
return wrapper
@counted
def foo():
return
>>> foo()
>>> foo.called
1
</code></pre>
<p>You could even use another decorator to automate the recording of how many times a function is called inside another function:</p>
<pre><code>def counting(other):
def decorator(fn):
def wrapper(*args, **kwargs):
other.called= 0
try:
return fn(*args, **kwargs)
finally:
print '%s was called %i times' % (other.__name__, other.called)
wrapper.__name__= fn.__name__
return wrapper
return decorator
@counting(foo)
def bar():
foo()
foo()
>>> bar()
foo was called 2 times
</code></pre>
<p>If âfooâ or âbarâ can end up calling themselves, though, you'd need a more complicated solution involving stacks to cope with the recursion. Then you're heading towards a full-on profiler...</p>
<p>Possibly this wrapped decorator stuff, which tends to be used for magic, isn't the ideal place to be looking if you're still âteaching yourself Pythonâ!</p>
| 13 | 2009-08-19T18:52:56Z | [
"python",
"profiling"
] |
django admin: company branches must manage only their records across many models | 1,301,757 | <p>One company with many branches across the world using the same app. Each branch's supervisor, signing into the same /admin, should see and be able to manage only their records across many models (blog, galleries, subscribed users, clients list, etc.). </p>
<p>How to solve it best within django? I need a flexible and reliable solution, not hacks. Never came across this task, so really have no idea how to do it for the moment.</p>
<p>Tx</p>
 | 0 | 2009-08-19T18:24:10Z | 1,301,871 | <p>There is a nice tutorial <a href="http://www.ibm.com/developerworks/opensource/library/os-django-admin/" rel="nofollow">here</a> on Django Admin. It includes customizing the Admin to add row-level permissions (which, as I understand it, is what you want).</p>
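<p>The core of that row-level trick can be sketched without any framework code. In the admin itself the filter would live in a <code>ModelAdmin</code> queryset override (<code>get_queryset</code> on newer Django versions); the dict-based records and user below are hypothetical stand-ins:</p>

```python
# Framework-free sketch of row-level filtering: every listing is narrowed
# to the requesting supervisor's branch before it is shown.

def scope_to_branch(records, user):
    # Superusers (head office) see everything; branch supervisors only
    # see rows whose branch matches their own.
    if user.get('is_superuser'):
        return records
    return [r for r in records if r['branch'] == user['branch']]

records = [
    {'id': 1, 'branch': 'paris'},
    {'id': 2, 'branch': 'tokyo'},
]
supervisor = {'is_superuser': False, 'branch': 'tokyo'}
visible = scope_to_branch(records, supervisor)
```

<p>A superuser sees everything; everyone else only sees rows whose <code>branch</code> matches their own.</p>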
| 1 | 2009-08-19T18:42:39Z | [
"python",
"django",
"django-admin",
"personalization"
] |
Why is Standard Input not displayed as I type in Mac OS X Terminal application? | 1,301,887 | <p>I'm confused by some behavior of my Mac OS X Terminal and my Django <code>manage.py</code> shell and pdb.</p>
<p>When I start a new terminal, the Standard Input is displayed as I type. However, if there is an error, suddenly Standard Input does not appear on the screen. This error continues until I shut down that terminal window.</p>
<p>The Input is still being captured as I can see the Standard Output.</p>
<p>E.g. in <code>pdb.set_trace()</code> I can 'l' to display where I'm at in the code. However, the 'l' will not be displayed, just an empty prompt.</p>
<p>This makes it hard to debug because I can't determine what I'm typing in.</p>
<p>What could be going wrong and what can I do to fix it?</p>
| 10 | 2009-08-19T18:44:56Z | 1,307,696 | <p>Try installing readline on Mac OS X:</p>
<pre><code>$ sudo easy_install readline
</code></pre>
<p>This is a blind guess, but perhaps it solves your problem.</p>
| -1 | 2009-08-20T17:29:42Z | [
"python",
"django",
"osx",
"shell",
"terminal"
] |
Why is Standard Input not displayed as I type in Mac OS X Terminal application? | 1,301,887 | <p>I'm confused by some behavior of my Mac OS X Terminal and my Django <code>manage.py</code> shell and pdb.</p>
<p>When I start a new terminal, the Standard Input is displayed as I type. However, if there is an error, suddenly Standard Input does not appear on the screen. This error continues until I shut down that terminal window.</p>
<p>The Input is still being captured as I can see the Standard Output.</p>
<p>E.g. in <code>pdb.set_trace()</code> I can 'l' to display where I'm at in the code. However, the 'l' will not be displayed, just an empty prompt.</p>
<p>This makes it hard to debug because I can't determine what I'm typing in.</p>
<p>What could be going wrong and what can I do to fix it?</p>
| 10 | 2009-08-19T18:44:56Z | 2,018,573 | <p>If you exit pdb you can type reset and standard input echo will return. I'm not sure if you can execute something similar within pdb. It will erase what is currently displayed however.</p>
| 3 | 2010-01-07T06:16:29Z | [
"python",
"django",
"osx",
"shell",
"terminal"
] |
Why is Standard Input not displayed as I type in Mac OS X Terminal application? | 1,301,887 | <p>I'm confused by some behavior of my Mac OS X Terminal and my Django <code>manage.py</code> shell and pdb.</p>
<p>When I start a new terminal, the Standard Input is displayed as I type. However, if there is an error, suddenly Standard Input does not appear on the screen. This error continues until I shut down that terminal window.</p>
<p>The Input is still being captured as I can see the Standard Output.</p>
<p>E.g. in <code>pdb.set_trace()</code> I can 'l' to display where I'm at in the code. However, the 'l' will not be displayed, just an empty prompt.</p>
<p>This makes it hard to debug because I can't determine what I'm typing in.</p>
<p>What could be going wrong and what can I do to fix it?</p>
| 10 | 2009-08-19T18:44:56Z | 2,623,452 | <p>Maybe this is because there was an error while running Django. Sometimes it happens that the std input disappears because <code>stty</code> was used. You can manually hide your input by typing:</p>
<p><code>$ stty -echo</code></p>
<p>Now you won't see what you typed. To restore this and solve your problem just type</p>
<p><code>$ stty echo</code></p>
<p>This could help.</p>
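<p>If you would rather restore echo from inside Python (say, right after a pdb session), the <code>termios</code> module can do the same thing as <code>stty echo</code>. A sketch, only meaningful when stdin is a real terminal:</p>

```python
import sys
import termios

def restore_echo(fd=None):
    """Re-enable terminal echo: the programmatic twin of `stty echo`."""
    if fd is None:
        fd = sys.stdin.fileno()
    attrs = termios.tcgetattr(fd)
    attrs[3] |= termios.ECHO  # index 3 holds the local-mode flags (lflag)
    termios.tcsetattr(fd, termios.TCSADRAIN, attrs)
```

<p>Calling <code>restore_echo()</code> from the interpreter has the same effect as typing <code>reset</code> or <code>stty echo</code> in the shell, without clearing the screen.</p>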
| 19 | 2010-04-12T16:03:17Z | [
"python",
"django",
"osx",
"shell",
"terminal"
] |
Django popup box Error?? Running development web server | 1,302,040 | <p>I have my Django site up and running, and everything works fine EXCEPT:</p>
<p>When I first go to my site <a href="http://127.0.0.1:8000" rel="nofollow">http://127.0.0.1:8000</a></p>
<p>A popup box comes up and says</p>
<p>"The page at <a href="http://127.0.0.1:8000" rel="nofollow">http://127.0.0.1:8000</a> says"</p>
<p>And just sits there</p>
<p>You have to hit OK before anything is displayed.</p>
<p>What is going on here?</p>
| 0 | 2009-08-19T19:09:35Z | 1,302,799 | <p>You must have a Javascript alert box in your template somewhere.</p>
| 1 | 2009-08-19T21:46:32Z | [
"python",
"django",
"webserver"
] |
cutdown uuid further to make short string | 1,302,057 | <p>I need to generate unique record id for the given unique string.</p>
<p>I tried using uuid format which seems to be good. </p>
<p>But we feel that is lengthy.</p>
<p>so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings.</p>
<p>We need unique id specific to site/database (SQL Server/ADO.NET Data services).</p>
<p>Any idea or sample from any language is fine</p>
<p>Thanks in advance</p>
| 5 | 2009-08-19T19:13:04Z | 1,302,076 | <p>Why not instead just convert it to a base 64 string? You can cut it down to 22 characters that way.</p>
<p><a href="http://stackoverflow.com/questions/772802/storing-uuid-as-base64-string">http://stackoverflow.com/questions/772802/storing-uuid-as-base64-string</a></p>
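<p>A quick Python sketch of that idea (in C# the equivalent building blocks are <code>Guid.ToByteArray</code> and <code>Convert.ToBase64String</code>):</p>

```python
import base64
import uuid

def short_uuid():
    # 16 raw bytes -> 24 base64 chars; dropping the '==' padding leaves 22.
    u = uuid.uuid4()
    return base64.urlsafe_b64encode(u.bytes).rstrip(b'=').decode('ascii')

short_id = short_uuid()
```

<p>The urlsafe alphabet avoids '/' and '+', which is handy if the id ever ends up in a URL.</p>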
| 9 | 2009-08-19T19:17:50Z | [
"c#",
"python",
"string",
"uniqueidentifier",
"uuid"
] |
cutdown uuid further to make short string | 1,302,057 | <p>I need to generate unique record id for the given unique string.</p>
<p>I tried using uuid format which seems to be good. </p>
<p>But we feel that is lengthy.</p>
<p>so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings.</p>
<p>We need unique id specific to site/database (SQL Server/ADO.NET Data services).</p>
<p>Any idea or sample from any language is fine</p>
<p>Thanks in advance</p>
 | 5 | 2009-08-19T19:13:04Z | 1,302,155 | <p>If you are using MS-SQL you should probably just use the uniqueidentifier datatype; it is both compact (16 bytes) and, since the SQL engine knows about it, it can optimize indexes and queries using it.</p>
| 2 | 2009-08-19T19:29:14Z | [
"c#",
"python",
"string",
"uniqueidentifier",
"uuid"
] |
cutdown uuid further to make short string | 1,302,057 | <p>I need to generate unique record id for the given unique string.</p>
<p>I tried using uuid format which seems to be good. </p>
<p>But we feel that is lengthy.</p>
<p>so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings.</p>
<p>We need unique id specific to site/database (SQL Server/ADO.NET Data services).</p>
<p>Any idea or sample from any language is fine</p>
<p>Thanks in advance</p>
| 5 | 2009-08-19T19:13:04Z | 1,302,160 | <p>A UUID has 128 bits. Have you considered doing a CRC of it? That could get it down to 16 or 32 bits easily, and would use all the original information. If a CRC isn't good enough, you could always use the first few bytes of a proper hash (SHA256, for example).</p>
<p>If you really want to just cut down the UUID, the format of it is described in <a href="http://www.ietf.org/rfc/rfc4122.txt" rel="nofollow">RFC 4122</a>. You should be able to figure out what parts your implementation doesn't need from that.</p>
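<p>For example, with Python's <code>zlib</code> (only a sketch: a 32-bit checksum collides far more readily than a full UUID):</p>

```python
import uuid
import zlib

def crc_id(u):
    # Fold the 128-bit UUID down to an unsigned 32-bit checksum.
    return zlib.crc32(u.bytes) & 0xffffffff

ident = crc_id(uuid.uuid4())
```

<p>The masking with <code>0xffffffff</code> keeps the result unsigned regardless of Python version.</p>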
| 1 | 2009-08-19T19:30:13Z | [
"c#",
"python",
"string",
"uniqueidentifier",
"uuid"
] |
cutdown uuid further to make short string | 1,302,057 | <p>I need to generate unique record id for the given unique string.</p>
<p>I tried using uuid format which seems to be good. </p>
<p>But we feel that is lengthy.</p>
<p>so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings.</p>
<p>We need unique id specific to site/database (SQL Server/ADO.NET Data services).</p>
<p>Any idea or sample from any language is fine</p>
<p>Thanks in advance</p>
 | 5 | 2009-08-19T19:13:04Z | 1,302,532 | <p>A UUID provides (almost) 128 bits of uniqueness. You may shorten it to 16 binary bytes, or 22 base64-encoded characters. I wouldn't recommend removing any part of a UUID, otherwise it just loses its meaning. UUIDs were designed so that all 128 bits have meaning. If you want less than that, you should use some other scheme.</p>
<p>For example, if you could guarantee that only version 4 UUIDs are used, then you could take just the first 32 bits, or just the last 32 bits. You lose uniqueness, but you have pretty random numbers. Just avoid the bits that are fixed (version and variant).</p>
<p>But if you can't guarantee that, you will have real problems. For version 1 UUIDs, the first bits will not be unique for UUIDs generated in the same day, and the last bits will not be unique for UUIDs generated in the same system. Even if you CRC the UUID, it is not guaranteed that you will have 16 or 32 bits of uniqueness.</p>
<p>In this case, just use some other scheme. Generate a 32-bit random number using the system random number generator and use that as your unique ID. Don't rely on UUIDs if you intend on stripping its length.</p>
| 1 | 2009-08-19T20:44:20Z | [
"c#",
"python",
"string",
"uniqueidentifier",
"uuid"
] |
cutdown uuid further to make short string | 1,302,057 | <p>I need to generate unique record id for the given unique string.</p>
<p>I tried using uuid format which seems to be good. </p>
<p>But we feel that is lengthy.</p>
<p>so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings.</p>
<p>We need unique id specific to site/database (SQL Server/ADO.NET Data services).</p>
<p>Any idea or sample from any language is fine</p>
<p>Thanks in advance</p>
| 5 | 2009-08-19T19:13:04Z | 1,302,665 | <p>The UUID is 128 bits or 16 bytes. With no encoding, you could get it as low as 16 bytes. UUIDs are commonly written in hexadecimal, making them 32 byte readable strings. With other encodings, you get different results:</p>
<ol>
<li>base-64 turns 3 8-bit bytes into 4 6-bit characters, so 16 bytes of data becomes 22 characters long</li>
<li>base-85 turns 4 8-bit bytes into 5 6.4-bit characters, so 16 bytes of data becomes 20 characters long</li>
</ol>
<p>It all depends on if you want readable strings and how standard/common an encoding you want to use.</p>
| 2 | 2009-08-19T21:17:05Z | [
"c#",
"python",
"string",
"uniqueidentifier",
"uuid"
] |
How do I parse timezones with UTC offsets in Python? | 1,302,161 | <p>Let's say I have a timezone like "2009-08-18 13:52:54-04". I can parse most of it using a line like this:</p>
<pre><code>datetime.strptime(time_string, "%Y-%m-%d %H:%M:%S")
</code></pre>
<p>However, I can't get the timezone to work. There's a %Z that handles textual timezones ("EST", "UTC", etc) but I don't see anything that can parse "-04".</p>
| 6 | 2009-08-19T19:30:17Z | 1,302,176 | <p>use <a href="http://babel.edgewall.org/" rel="nofollow">Babel</a>, specifically <a href="http://babel.edgewall.org/wiki/ApiDocs/babel.dates#babel.dates:parse%5Fdatetime" rel="nofollow">parse_datetime</a>.</p>
| 2 | 2009-08-19T19:34:23Z | [
"python",
"datetime"
] |
How do I parse timezones with UTC offsets in Python? | 1,302,161 | <p>Let's say I have a timezone like "2009-08-18 13:52:54-04". I can parse most of it using a line like this:</p>
<pre><code>datetime.strptime(time_string, "%Y-%m-%d %H:%M:%S")
</code></pre>
<p>However, I can't get the timezone to work. There's a %Z that handles textual timezones ("EST", "UTC", etc) but I don't see anything that can parse "-04".</p>
 | 6 | 2009-08-19T19:30:17Z | 1,302,228 | <p>You can do that directly on the constructor: <code>class datetime.datetime(year, month, day[, hour[, minute[, second[, microsecond[, </code><strong>tzinfo</strong><code>]]]]])</code>, tzinfo being a <a href="http://docs.python.org/library/datetime.html#datetime.tzinfo" rel="nofollow"><code>datetime.tzinfo</code></a>-derived object.</p>
<blockquote>
<p><a href="http://python.org/doc/2.5.2/lib/datetime-tzinfo.html" rel="nofollow">tzinfo</a> is an abstract base class, meaning that this class should not be instantiated directly. You need to derive a concrete subclass, and (at least) supply implementations of the standard tzinfo methods needed by the datetime methods you use. The datetime module does not supply any concrete subclasses of tzinfo.</p>
</blockquote>
<p>What you need to override is the <code>utcoffset(self, dt)</code> method.</p>
<blockquote>
<p>Return offset of local time from UTC, in minutes east of UTC. If local time is west of UTC, this should be negative. Note that this is intended to be the total offset from UTC; for example, if a tzinfo object represents both time zone and DST adjustments, utcoffset() should return their sum. If the UTC offset isn't known, return None. Else the value returned must be a timedelta object specifying a whole number of minutes in the range -1439 to 1439 inclusive (1440 = 24*60; the magnitude of the offset must be less than one day). Most implementations of utcoffset() will probably look like one of these two:</p>
<p><code> return CONSTANT # fixed-offset class</code></p>
<p><code> return CONSTANT + self.dst(dt) # daylight-aware class</code></p>
<p>If utcoffset() does not return None, dst() should not return None either.</p>
</blockquote>
<p>The default implementation of utcoffset() raises NotImplementedError.</p>
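<p>A minimal concrete subclass for fixed numeric offsets like the "-04" in the question could look like this (just a sketch; from Python 3.2 on, <code>datetime.timezone</code> covers this case out of the box):</p>

```python
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """tzinfo with a constant offset, e.g. FixedOffset(-4) for '-04'."""
    def __init__(self, hours):
        self._offset = timedelta(hours=hours)
    def utcoffset(self, dt):
        return self._offset
    def dst(self, dt):
        return timedelta(0)
    def tzname(self, dt):
        return '%+03d' % int(self._offset.total_seconds() // 3600)

# Parse the naive part with strptime, then attach the offset by hand.
s = '2009-08-18 13:52:54-04'
naive = datetime.strptime(s[:-3], '%Y-%m-%d %H:%M:%S')
aware = naive.replace(tzinfo=FixedOffset(int(s[-3:])))
```

<p>The resulting <code>aware</code> datetime reports an offset of minus four hours and a tzname of '-04'.</p>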
| 0 | 2009-08-19T19:45:02Z | [
"python",
"datetime"
] |
How do I parse timezones with UTC offsets in Python? | 1,302,161 | <p>Let's say I have a timezone like "2009-08-18 13:52:54-04". I can parse most of it using a line like this:</p>
<pre><code>datetime.strptime(time_string, "%Y-%m-%d %H:%M:%S")
</code></pre>
<p>However, I can't get the timezone to work. There's a %Z that handles textual timezones ("EST", "UTC", etc) but I don't see anything that can parse "-04".</p>
| 6 | 2009-08-19T19:30:17Z | 1,302,248 | <p>I ran across the same issue recently and worked around it using this code:</p>
<pre><code>gmt_offset_str = time_string[-3:]
gmt_offset_seconds = int(gmt_offset_str)*60*60
timestamp = time.strptime(time_string[:-3], '%Y-%m-%d %H:%M:%S')
return time.localtime(time.mktime(timestamp)-gmt_offset_seconds)
</code></pre>
<p>I would also be interested in a more elegant solution.</p>
| 0 | 2009-08-19T19:49:52Z | [
"python",
"datetime"
] |
How do I parse timezones with UTC offsets in Python? | 1,302,161 | <p>Let's say I have a timezone like "2009-08-18 13:52:54-04". I can parse most of it using a line like this:</p>
<pre><code>datetime.strptime(time_string, "%Y-%m-%d %H:%M:%S")
</code></pre>
<p>However, I can't get the timezone to work. There's a %Z that handles textual timezones ("EST", "UTC", etc) but I don't see anything that can parse "-04".</p>
| 6 | 2009-08-19T19:30:17Z | 2,496,144 | <p>Maybe you could use <a href="http://labix.org/python-dateutil">dateutil.parser.parse</a>? That method is also mentioned on <a href="http://wiki.python.org/moin/WorkingWithTime#ParsingISO8601withpython-dateutil">wiki.python.org/WorkingWithTime</a>.</p>
<pre><code>>>> from dateutil.parser import parse
>>> parse("2009-08-18 13:52:54-04")
datetime.datetime(2009, 8, 18, 13, 52, 54, tzinfo=tzoffset(None, -14400))
</code></pre>
<hr>
<p><em>(is this question a duplicate?)</em></p>
| 21 | 2010-03-22T22:10:47Z | [
"python",
"datetime"
] |
Python PEP8 printing wrapped strings without indent | 1,302,364 | <p>There is probably an easy answer for this, just not sure how to tease it out of my searches.</p>
<p>I adhere to <a href="http://www.python.org/dev/peps/pep-0008/">PEP8</a> in my python code, and I'm currently using OptionParser for a script I'm writing. To prevent lines from going beyond a width of 80, I use the backslash where needed. </p>
<p>For example:</p>
<pre><code>if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>That indent after the backslash results in:</p>
<pre><code>~$ ./er_usersearch -h
Usage: er_usersearch [options]
Without any options, will display 10 random users of each type.
</code></pre>
<p>That gap after "random" bugs me. I could do:</p>
<pre><code> if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>But that bugs me just as much. This seems silly:</p>
<pre><code> if __name__=='__main__':
usage = ''.join(['%prog [options]\nWithout any options, will display',
' 10 random users of each type.'])
parser = OptionParser(usage)
</code></pre>
<p>There must be a better way?</p>
| 11 | 2009-08-19T20:10:40Z | 1,302,379 | <p>try this:</p>
<pre><code>if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random ' \
'users of each type.'
parser = OptionParser(usage)
</code></pre>
| 1 | 2009-08-19T20:13:04Z | [
"python",
"wrapping",
"pep8"
] |
Python PEP8 printing wrapped strings without indent | 1,302,364 | <p>There is probably an easy answer for this, just not sure how to tease it out of my searches.</p>
<p>I adhere to <a href="http://www.python.org/dev/peps/pep-0008/">PEP8</a> in my python code, and I'm currently using OptionParser for a script I'm writing. To prevent lines from going beyond a width of 80, I use the backslash where needed. </p>
<p>For example:</p>
<pre><code>if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>That indent after the backslash results in:</p>
<pre><code>~$ ./er_usersearch -h
Usage: er_usersearch [options]
Without any options, will display 10 random users of each type.
</code></pre>
<p>That gap after "random" bugs me. I could do:</p>
<pre><code> if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>But that bugs me just as much. This seems silly:</p>
<pre><code> if __name__=='__main__':
usage = ''.join(['%prog [options]\nWithout any options, will display',
' 10 random users of each type.'])
parser = OptionParser(usage)
</code></pre>
<p>There must be a better way?</p>
| 11 | 2009-08-19T20:10:40Z | 1,302,381 | <p>Use <a href="http://docs.python.org/tutorial/introduction.html#strings">automatic string concatenation</a> + <a href="http://docs.python.org/reference/lexical%5Fanalysis.html#implicit-line-joining">implicit line continuation</a>:</p>
<pre><code>long_string = ("Line 1 "
"Line 2 "
"Line 3 ")
>>> long_string
'Line 1 Line 2 Line 3 '
</code></pre>
| 28 | 2009-08-19T20:13:19Z | [
"python",
"wrapping",
"pep8"
] |
Python PEP8 printing wrapped strings without indent | 1,302,364 | <p>There is probably an easy answer for this, just not sure how to tease it out of my searches.</p>
<p>I adhere to <a href="http://www.python.org/dev/peps/pep-0008/">PEP8</a> in my python code, and I'm currently using OptionParser for a script I'm writing. To prevent lines from going beyond a width of 80, I use the backslash where needed. </p>
<p>For example:</p>
<pre><code>if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>That indent after the backslash results in:</p>
<pre><code>~$ ./er_usersearch -h
Usage: er_usersearch [options]
Without any options, will display 10 random users of each type.
</code></pre>
<p>That gap after "random" bugs me. I could do:</p>
<pre><code> if __name__=='__main__':
usage = '%prog [options]\nWithout any options, will display 10 random \
users of each type.'
parser = OptionParser(usage)
</code></pre>
<p>But that bugs me just as much. This seems silly:</p>
<pre><code> if __name__=='__main__':
usage = ''.join(['%prog [options]\nWithout any options, will display',
' 10 random users of each type.'])
parser = OptionParser(usage)
</code></pre>
<p>There must be a better way?</p>
| 11 | 2009-08-19T20:10:40Z | 1,302,382 | <p>This works:</p>
<pre><code>if __name__=='__main__':
usage = ('%prog [options]\nWithout any options, will display 10 random '
'users of each type.')
parser = OptionParser(usage)
</code></pre>
<p>Although I'd lay it out like this:</p>
<pre><code>if __name__=='__main__':
usage = ('%prog [options]\n'
'Without any options, will display 10 random users '
'of each type.')
parser = OptionParser(usage)
</code></pre>
<p>(So I start a new line when there's a <code>\n</code> in the string, as well as when I need to word wrap the source code.)</p>
| 3 | 2009-08-19T20:13:40Z | [
"python",
"wrapping",
"pep8"
] |
Django: do I need to restart Apache when deploying? | 1,302,411 | <p>I just noted an annoying factor: Django requires either a restart of the server or <a href="http://en.wikipedia.org/wiki/Common%5FGateway%5FInterface" rel="nofollow">CGI</a> access to work. The first option is not feasible if you don't have access to the Apache server process. The second, as far as I know, is detrimental to performance, and in general the idea of running a CGI makes me uncomfortable.</p>
<p>I also recently saw a presentation titled "why I hate Django". Although I did not really share most of the speaker's (a Flickr guy) points, this fact of re-starting the server sounded very annoying.</p>
<p>I would like to know your motivated experience in this regard. Should I continue working with Django and use it as a CGI, or favor another Python framework? Is the CGI option that bad, and should I be concerned about it, or is it a viable option (for performance and scalability)?</p>
| 1 | 2009-08-19T20:20:04Z | 1,302,419 | <p>Use the WSGI standard, through <a href="http://code.google.com/p/modwsgi/" rel="nofollow"><code>mod_wsgi</code></a>. You don't have to restart Apache, merely update the mtime on the .wsgi file. </p>
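<p>Updating the mtime is just a <code>touch</code> of the script; the same thing from Python looks like this (the deployed path is hypothetical, so this sketch uses a temp file):</p>

```python
import os
import tempfile

# On a real deployment this would be something like /srv/www/app/django.wsgi.
wsgi_script = os.path.join(tempfile.gettempdir(), 'example-django.wsgi')
open(wsgi_script, 'a').close()   # ensure the file exists for this sketch
os.utime(wsgi_script, None)      # bump mtime; mod_wsgi daemon mode then reloads
```

<p>Note this only triggers a reload when mod_wsgi runs the application in daemon mode.</p>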
| 6 | 2009-08-19T20:22:11Z | [
"python",
"django"
] |
Django: do I need to restart Apache when deploying? | 1,302,411 | <p>I just noted an annoying factor: Django requires either a restart of the server or <a href="http://en.wikipedia.org/wiki/Common%5FGateway%5FInterface" rel="nofollow">CGI</a> access to work. The first option is not feasible if you don't have access to the Apache server process. The second, as far as I know, is detrimental to performance, and in general the idea of running a CGI makes me uncomfortable.</p>
<p>I also recently saw a presentation titled "why I hate Django". Although I did not really share most of the speaker's (a Flickr guy) points, this fact of re-starting the server sounded very annoying.</p>
<p>I would like to know your motivated experience in this regard. Should I continue working with Django and use it as a CGI, or favor another Python framework? Is the CGI option that bad, and should I be concerned about it, or is it a viable option (for performance and scalability)?</p>
| 1 | 2009-08-19T20:20:04Z | 1,304,627 | <p>I usually don't restart the server, but force-reload the configuration. On an Ubuntu Hardy server, that is</p>
<pre><code>sudo /etc/init.d/apache2 force-reload
</code></pre>
<p>and it's done almost immediately.</p>
| 0 | 2009-08-20T07:56:19Z | [
"python",
"django"
] |
Django: do I need to restart Apache when deploying? | 1,302,411 | <p>I just noted an annoying factor: Django requires either a restart of the server or <a href="http://en.wikipedia.org/wiki/Common%5FGateway%5FInterface" rel="nofollow">CGI</a> access to work. The first option is not feasible if you don't have access to the Apache server process. The second, as far as I know, is detrimental to performance, and in general the idea of running a CGI makes me uncomfortable.</p>
<p>I also recently saw a presentation titled "why I hate Django". Although I did not really share most of the speaker's (a Flickr guy) points, this fact of re-starting the server sounded very annoying.</p>
<p>I would like to know your motivated experience in this regard. Should I continue working with Django and use it as a CGI, or favor another Python framework? Is the CGI option that bad, and should I be concerned about it, or is it a viable option (for performance and scalability)?</p>
| 1 | 2009-08-19T20:20:04Z | 1,305,415 | <p>For how to deal with source code reloading when using Apache/mod_wsgi, read:</p>
<p><a href="http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode" rel="nofollow">http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode</a></p>
<p><a href="http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html" rel="nofollow">http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html</a></p>
<p><a href="http://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html" rel="nofollow">http://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html</a></p>
<p>Documentation is more useful when it is read. ;-)</p>
| 0 | 2009-08-20T11:02:30Z | [
"python",
"django"
] |
Produce multiple files from a single file in python | 1,302,499 | <p>I have a file like below.</p>
<blockquote>
<p>Sequence A.1.1 Bacteria<br />
ATGCGCGATATAGGCCT<br />
ATTATGCGCGCGCGC </p>
<p>Sequence A.1.2 Virus<br />
ATATATGCGCCGCGCGTA<br />
ATATATATGCGCGCCGGC </p>
<p>Sequence B.1.21 Chimpanzee<br />
ATATAGCGCGCGCGCGAT<br />
ATATATATGCGCG </p>
<p>Sequence C.21.4 Human<br />
ATATATATGCCGCGCG<br />
ATATAATATC</p>
</blockquote>
<p>I want to make separate files for sequences of category A, B and C from one single file. Kindly suggest some reading material for breaking this code. Thanks. The output should be three files, one for 'A', second file for Sequences with 'B' and third file for sequences with 'C'.</p>
| 0 | 2009-08-19T20:36:42Z | 1,302,556 | <p>It's not 100% clear what you want to do, but something like:</p>
<pre><code>currout = None
seqname2file = dict()
for line in open('thefilewhosenameyoudonottellus.txt'):
if line.startswith('Sequence '):
seqname = line[9] # A or B or C
if seqname not in seqname2file:
filename = 'outputfileforsequence_%s.txt' % seqname
seqname2file[seqname] = open(filename, 'w')
currout = seqname2file[seqname]
currout.write(line)
for f in seqname2file.values():
f.close()
</code></pre>
<p>should get you pretty close -- if you want three separate files (one each for A, B and C) that among them contain all the lines from the input file, it's just about done except you'll probably need better filenames (but you don't let us in on the secret of what those might be;-), otherwise some tweaks should get it there.</p>
<p>BTW, it always helps immensely (to help you more effectively rather than stumbling in the dark and guessing) if you also give examples of what output results you want for the input data example you give!-)</p>
| 1 | 2009-08-19T20:48:25Z | [
"python",
"file"
] |
Produce multiple files from a single file in python | 1,302,499 | <p>I have a file like below.</p>
<blockquote>
<p>Sequence A.1.1 Bacteria<br />
ATGCGCGATATAGGCCT<br />
ATTATGCGCGCGCGC </p>
<p>Sequence A.1.2 Virus<br />
ATATATGCGCCGCGCGTA<br />
ATATATATGCGCGCCGGC </p>
<p>Sequence B.1.21 Chimpanzee<br />
ATATAGCGCGCGCGCGAT<br />
ATATATATGCGCG </p>
<p>Sequence C.21.4 Human<br />
ATATATATGCCGCGCG<br />
ATATAATATC</p>
</blockquote>
<p>I want to make separate files for sequences of category A, B and C from one single file. Kindly suggest some reading material for breaking this code. Thanks. The output should be three files, one for 'A', second file for Sequences with 'B' and third file for sequences with 'C'.</p>
| 0 | 2009-08-19T20:36:42Z | 1,302,561 | <p>I'm not sure exactly what you want the output to be, but it sounds like you need something like:</p>
<pre><code>#!/usr/bin/python
import re
# Open the input file
fhIn = open("input_file.txt", "r")
# Open the output files and store their handles in a dictionary
fhOut = {}
fhOut['A'] = open("sequence_a.txt", "w")
fhOut['B'] = open("sequence_b.txt", "w")
fhOut['C'] = open("sequence_c.txt", "w")
# Create a regexp to find the line naming the sequence
Matcher = re.compile(r'^Sequence (?P<sequence>[A-C])')
# Iterate through each line in the file
CurrentSequence = None
for line in fhIn:
# If the line is a sequence identifier...
m = Matcher.match(line)
if m is not None:
# Select the appropriate sequence from the regexp match
CurrentSequence = m.group('sequence')
# Uncomment the following two lines to skip blank lines
# elif len(line.strip()) == 0:
# pass
# Print out the line to the current sequence output file
# (change to else if you don't want to print the sequence titles)
if CurrentSequence is not None:
fhOut[CurrentSequence].write(line)
# Close all the file handles
fhIn.close()
fhOut['A'].close()
fhOut['B'].close()
fhOut['C'].close()
</code></pre>
<p>Completely untested though...</p>
| 0 | 2009-08-19T20:49:11Z | [
"python",
"file"
] |
python for firefox extensions? | 1,302,567 | <p>Can I use python in firefox extensions? Does it work?</p>
 | 23 | 2009-08-19T20:50:25Z | 1,302,574 | <p><strong>Yes</strong>, through an extension for Mozilla, Python Extension (pythonext).</p>
<p>Originally hosted in <a href="http://pyxpcomext.mozdev.org/" rel="nofollow">mozdev</a>, the PythonExt project has moved to Google Code; <strong>you can find it at <a href="http://code.google.com/p/pythonext/" rel="nofollow">PythonExt on Google Code</a></strong>.</p>
| 23 | 2009-08-19T20:51:38Z | [
"python",
"firefox",
"plugins",
"firefox-addon"
] |
What is the simplest way (in python) to print to a remote IPP/CUPS server or printer? | 1,302,650 | <p>I have a postscript file and want it to be printed on an IPP-capable device (or CUPS server). What is the minimal code and dependencies I could get away with to do that?</p>
<p>Using LPR or libcups gives me a lot of cross-platform dependencies. So my first approach was to implement a minimal subset of IPP (the protocol used by CUPS and many modern printers) since "it's only extended HTTP". But unfortunately an IPP client is a lot more code than a few lines, and so far I have found no IPP client implementation meant for just printing rather than managing a print server.</p>
<p>I would prefer a solution in Python, but would also be happy with something in another dynamic language.</p>
 | 6 | 2009-08-19T21:11:42Z | 1,302,659 | <p>You need to add the remote printer to CUPS:</p>
<pre><code>lpadmin -p printername -E -v //IPADDRESS/spool -m driver.ppd
</code></pre>
<p>where driver.ppd is the driver to print with</p>
<p>PS: this could also work for programmatic access, if the printer has been set up beforehand.</p>
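<p>Once the queue exists, submitting a job from Python can be as simple as shelling out to the standard <code>lp</code> client (a sketch; the printer name and file path are placeholders):</p>

```python
import subprocess

def print_file(path, printer='printername'):
    # Queue `path` on the given CUPS printer via the lp command-line client.
    subprocess.check_call(['lp', '-d', printer, path])
```

<p>This keeps the Python side free of libcups bindings at the cost of depending on the CUPS command-line tools.</p>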
| 0 | 2009-08-19T21:16:13Z | [
"python",
"printing",
"cups",
"ipp-protocol"
] |
What is the simplest way (in python) to print to a remote IPP/CUPS server or printer? | 1,302,650 | <p>I have a postscript file and want it to be printed on an IPP-capable device (or CUPS server). What is the minimal code and dependencies I could get away with to do that?</p>
<p>Using LPR or libcups gives me a lot of cross-platform dependencies. So my first approach was to implement a minimal subset of IPP (the protocol used by CUPS and many modern printers) since "it's only extended HTTP". But unfortunately an IPP client is a lot more code than a few lines, and so far I have found no IPP client implementation meant for just printing rather than managing a print server.</p>
<p>I would prefer a solution in Python, but would also be happy with something in another dynamic language.</p>
| 6 | 2009-08-19T21:11:42Z | 28,203,190 | <p>There's a python wrapper for CUPS <code>ipptool</code> available at github:</p>
<ul>
<li><a href="https://github.com/ezeep/pyipptool" rel="nofollow">https://github.com/ezeep/pyipptool</a></li>
<li><a href="https://www.ezeep.com/blog/pyipptool-released" rel="nofollow">https://www.ezeep.com/blog/pyipptool-released</a></li>
</ul>
<p>You might also want to check <a href="http://stackoverflow.com/questions/19232082/printing-using-ipp-without-drivers-ipp-client">this answer</a>.</p>
| 0 | 2015-01-28T21:57:15Z | [
"python",
"printing",
"cups",
"ipp-protocol"
] |
Convert a nested dataset to a flat dataset, while retaining enough data to convert it back to nested set | 1,302,653 | <p>Say I have a dataset like</p>
<pre><code>(1, 2, (3, 4), (5, 6), (7, 8, (9, 0)))
</code></pre>
<p>I want to convert it to a (semi) flat representation like,</p>
<pre><code>(
(1, 2),
(1, 2, 3, 4),
(1, 2, 5, 6),
(1, 2, 7, 8),
(1, 2, 7, 8, 9, 0),
)
</code></pre>
<p>If you use this, (taken from SO)</p>
<pre><code>def flatten(iterable):
for i, item in enumerate(iterable):
if hasattr(item, '__iter__'):
for nested in flatten(item):
yield nested
else:
yield item
</code></pre>
<p>this will convert it to a list like (after iterating):</p>
<pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>But I can't get the original back from this representation, while I can get the original back from the first (if every tuple has only 2 elements).</p>
| 0 | 2009-08-19T21:12:45Z | 1,302,780 | <p>How about using a different "flat" representation, one which can be converted back:</p>
<pre><code>[1, 2, '(', 3, 4, ')', '(', 5, 6, ')', '(', 7, 8, '(', 9, 0, ')', ')']
</code></pre>
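The round trip for this marker representation can be sketched like so (the function names are mine, and the <code>'('</code> / <code>')'</code> strings are sentinels, so this assumes they never occur as real data):

```python
def flatten_markers(seq):
    """Flatten a nested tuple, emitting '(' / ')' sentinels around sub-tuples."""
    for item in seq:
        if isinstance(item, tuple):
            yield '('
            for sub in flatten_markers(item):
                yield sub
            yield ')'
        else:
            yield item

def unflatten_markers(flat):
    """Invert flatten_markers with a stack of partially built tuples."""
    stack = [[]]
    for item in flat:
        if item == '(':
            stack.append([])          # start a new sub-tuple
        elif item == ')':
            finished = tuple(stack.pop())
            stack[-1].append(finished)  # attach it to its parent
        else:
            stack[-1].append(item)
    return tuple(stack[0])
```

A more robust variant would use unique sentinel objects instead of strings, at the cost of the flat form no longer being printable as shown above.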
| 0 | 2009-08-19T21:41:06Z | [
"python",
"algorithm"
] |
Convert a nested dataset to a flat dataset, while retaining enough data to convert it back to nested set | 1,302,653 | <p>Say I have a dataset like</p>
<pre><code>(1, 2, (3, 4), (5, 6), (7, 8, (9, 0)))
</code></pre>
<p>I want to convert it to a (semi) flat representation like,</p>
<pre><code>(
(1, 2),
(1, 2, 3, 4),
(1, 2, 5, 6),
(1, 2, 7, 8),
(1, 2, 7, 8, 9, 0),
)
</code></pre>
<p>If you use this, (taken from SO)</p>
<pre><code>def flatten(iterable):
for i, item in enumerate(iterable):
if hasattr(item, '__iter__'):
for nested in flatten(item):
yield nested
else:
yield item
</code></pre>
<p>this will convert it to a list like (after iterating):</p>
<pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
</code></pre>
<p>But I can't get the original back from this representation, while I can get the original back from the first (if every tuple has only 2 elements).</p>
| 0 | 2009-08-19T21:12:45Z | 1,302,817 | <p>This will give the example output. Don't know if that's really the best way of representing the model you want, though...</p>
<pre><code>def combineflatten(seq):
    items = tuple(item for item in seq if not isinstance(item, tuple))
    yield items
    for item in seq:
        if isinstance(item, tuple):
            for yielded in combineflatten(item):
                yield items + yielded

>>> tuple(combineflatten((1, 2, (3, 4), (5, 6), (7, 8, (9, 0)))))
((1, 2), (1, 2, 3, 4), (1, 2, 5, 6), (1, 2, 7, 8), (1, 2, 7, 8, 9, 0))
</code></pre>
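For the "convert it back" half of the question, a sketch of an inverse (my own helper, not part of the answer above): each row after the first is attached to the longest earlier row that is a prefix of it. Note that the prefix form loses where sub-tuples sat among the scalars — scalars are collected first — so this recovers an equivalent nesting, not necessarily the original ordering:

```python
def unflatten(rows):
    """Rebuild a nested structure from combineflatten-style prefix rows."""
    rows = list(rows)
    root = list(rows[0])
    nodes = {rows[0]: root}  # full prefix row -> mutable node being built
    for row in rows[1:]:
        # the parent is the longest earlier row that prefixes this one
        parent = max((k for k in nodes if row[:len(k)] == k), key=len)
        child = list(row[len(parent):])
        nodes[row] = child
        nodes[parent].append(child)

    def freeze(node):
        return tuple(freeze(x) if isinstance(x, list) else x for x in node)

    return freeze(root)
```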
| 2 | 2009-08-19T21:50:03Z | [
"python",
"algorithm"
] |
How can I convert a URL query string into a list of tuples using Python? | 1,302,688 | <p>I am struggling to convert a url to a nested tuple.</p>
<pre><code># Convert this string
str = 'http://somesite.com/?foo=bar&key=val'
# to a tuple like this:
[(u'foo', u'bar'), (u'key', u'val')]
</code></pre>
<p>I assume I need to be doing something like:</p>
<pre><code> url = 'http://somesite.com/?foo=bar&key=val'
url = url.split('?')
get = ()
for param in url[1].split('&'):
get = get + param.split('=')
</code></pre>
<p>What am I doing wrong? Thanks!</p>
| 8 | 2009-08-19T21:23:16Z | 1,302,696 | <p>I believe you are looking for the <a href="http://docs.python.org/library/urlparse.html"><code>urlparse</code></a> module.</p>
<blockquote>
<p>This module defines a standard
interface to break Uniform Resource
Locator (URL) strings up in components
(addressing scheme, network location,
path etc.), to combine the components
back into a URL string, and to convert
a "relative URL" to an absolute URL
given a "base URL."</p>
</blockquote>
<p>Here is an example:</p>
<pre><code>from urlparse import urlparse, parse_qsl
url = 'http://somesite.com/?foo=bar&key=val'
print parse_qsl(urlparse(url)[4])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[('foo', 'bar'), ('key', 'val')]
</code></pre>
<p>In this example I first use the <a href="http://docs.python.org/library/urlparse.html#urlparse.urlparse"><code>urlparse</code></a> function to parse the entire URL then I use the <a href="http://docs.python.org/library/urlparse.html#urlparse.parse%5Fqsl"><code>parse_qsl</code></a> function to break the querystring (the fifth element returned from <code>urlparse</code>) into a list of tuples.</p>
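As a side note (not part of the answer above), <code>parse_qs</code> returns a dict keyed by parameter name instead, which is handy when keys repeat; the try/except import covers both the Python 2 and Python 3 module names:

```python
try:  # Python 3 moved these into urllib.parse
    from urllib.parse import urlparse, parse_qsl, parse_qs
except ImportError:  # Python 2.6+
    from urlparse import urlparse, parse_qsl, parse_qs

url = 'http://somesite.com/?foo=bar&key=val&key=val2'
query = urlparse(url)[4]
pairs = parse_qsl(query)   # ordered list of (name, value) tuples
groups = parse_qs(query)   # dict collecting repeated names into lists
```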
| 26 | 2009-08-19T21:25:00Z | [
"python",
"url",
"parsing"
] |
How can I convert a URL query string into a list of tuples using Python? | 1,302,688 | <p>I am struggling to convert a url to a nested tuple.</p>
<pre><code># Convert this string
str = 'http://somesite.com/?foo=bar&key=val'
# to a tuple like this:
[(u'foo', u'bar'), (u'key', u'val')]
</code></pre>
<p>I assume I need to be doing something like:</p>
<pre><code> url = 'http://somesite.com/?foo=bar&key=val'
url = url.split('?')
get = ()
for param in url[1].split('&'):
get = get + param.split('=')
</code></pre>
<p>What am I doing wrong? Thanks!</p>
| 8 | 2009-08-19T21:23:16Z | 1,303,900 | <p>Andrew's answer was really informative and helpful. A cruder way to grab those params would be with a regular expression -- something like this:</p>
<pre><code>import re

re_param = re.compile(r'(?P<key>\w+)=(?P<value>\w+)')
url = 'http://somesite.com/?foo=bar&key=val'
params_list = re_param.findall(url)
</code></pre>
<p>Also, in your code it looks like you're trying to concatenate a list and tuple--</p>
<pre><code>for param in url[1].split('&'):
get = get + param.split('=')
</code></pre>
<p>You created <code>get</code> as a tuple, but <code>str.split</code> returns a list, so the concatenation raises a <code>TypeError</code>. Also, to keep each key/value pair together, append the pair as a nested tuple rather than concatenating its elements. Maybe this would fix your code:</p>
<pre><code>for param in url[1].split('&'):
    get = get + (tuple(param.split('=')),)
</code></pre>
| 0 | 2009-08-20T04:07:40Z | [
"python",
"url",
"parsing"
] |
What possible values does datetime.strptime() accept for %Z? | 1,302,701 | <p>Python's datetime.strptime() is documented as supporting a timezone in the %Z field. So, for example:</p>
<pre><code>In [1]: datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z")
Out[1]: datetime.datetime(2009, 8, 19, 14, 20, 36)
</code></pre>
<p>However, "UTC" seems to be the only timezone I can get it to support:</p>
<pre><code>In [2]: datetime.strptime('2009-08-19 14:20:36 EDT', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 EDT' does not match format '%Y-%m-%d %H:%M:%S %Z'
In [3]: datetime.strptime('2009-08-19 14:20:36 America/Phoenix', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 America/Phoenix' does not match format '%Y-%m-%d %H:%M:%S %Z'
In [4]: datetime.strptime('2009-08-19 14:20:36 -0700', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 -0700' does not match format '%Y-%m-%d %H:%M:%S %Z'
</code></pre>
<p>What format is it expecting for %Z? Or, how do I represent a timezone other than UTC?</p>
| 10 | 2009-08-19T21:26:21Z | 1,302,762 | <p>This is from the <code>time</code> module, but I'm almost certain it applies to <code>datetime</code>:</p>
<blockquote>
<p>Support for the %Z directive is based
on the values contained in tzname and
whether daylight is true. Because of
this, it is platform-specific except
for recognizing UTC and GMT which are
always known (and are considered to be
non-daylight savings timezones).</p>
</blockquote>
<p><a href="https://docs.python.org/library/time.html" rel="nofollow">https://docs.python.org/library/time.html</a></p>
<p>On my system:</p>
<pre><code>>>> import time
>>> time.tzname
('PST', 'PDT')
</code></pre>
<p>Using anything but these in datetime.strptime results in an exception. So, look to see what you have available on your machine.</p>
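A small sketch of that check — <code>parse_with_zone</code> is my own helper name, not a stdlib API; per the quoted docs, only <code>UTC</code>/<code>GMT</code> plus whatever this platform lists in <code>time.tzname</code> are likely to parse:

```python
import time
from datetime import datetime

# Zone names this platform's %Z is likely to accept
known_zones = set(time.tzname) | set(['UTC', 'GMT'])

def parse_with_zone(stamp):
    """Parse 'YYYY-mm-dd HH:MM:SS ZONE', failing early on unknown zone names."""
    zone = stamp.rsplit(' ', 1)[1]
    if zone not in known_zones:
        raise ValueError('%r is not a zone name %%Z will accept here' % zone)
    return datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S %Z')
```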
| 3 | 2009-08-19T21:36:54Z | [
"python",
"datetime"
] |
What possible values does datetime.strptime() accept for %Z? | 1,302,701 | <p>Python's datetime.strptime() is documented as supporting a timezone in the %Z field. So, for example:</p>
<pre><code>In [1]: datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z")
Out[1]: datetime.datetime(2009, 8, 19, 14, 20, 36)
</code></pre>
<p>However, "UTC" seems to be the only timezone I can get it to support:</p>
<pre><code>In [2]: datetime.strptime('2009-08-19 14:20:36 EDT', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 EDT' does not match format '%Y-%m-%d %H:%M:%S %Z'
In [3]: datetime.strptime('2009-08-19 14:20:36 America/Phoenix', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 America/Phoenix' does not match format '%Y-%m-%d %H:%M:%S %Z'
In [4]: datetime.strptime('2009-08-19 14:20:36 -0700', "%Y-%m-%d %H:%M:%S %Z")
ValueError: time data '2009-08-19 14:20:36 -0700' does not match format '%Y-%m-%d %H:%M:%S %Z'
</code></pre>
<p>What format is it expecting for %Z? Or, how do I represent a timezone other than UTC?</p>
| 10 | 2009-08-19T21:26:21Z | 1,302,793 | <p>I gather they are GMT, UTC, and whatever is listed in time.tzname.</p>
<pre><code>>>> for t in time.tzname:
... print t
...
Eastern Standard Time
Eastern Daylight Time
>>> datetime.strptime('2009-08-19 14:20:36 Eastern Standard Time', "%Y-%m-%d %H:%M:%S %Z")
datetime.datetime(2009, 8, 19, 14, 20, 36)
>>> datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z")
datetime.datetime(2009, 8, 19, 14, 20, 36)
>>> datetime.strptime('2009-08-19 14:20:36 GMT', "%Y-%m-%d %H:%M:%S %Z")
datetime.datetime(2009, 8, 19, 14, 20, 36)
</code></pre>
<p>These settings are machine-specific, of course, and yours will be different in all likelihood.</p>
| 9 | 2009-08-19T21:44:04Z | [
"python",
"datetime"
] |
Python thread dump | 1,302,991 | <p>Is there a way to get a thread dump from a running Python process?
Similar to kill -3 on a Java process.</p>
| 5 | 2009-08-19T22:35:59Z | 1,303,362 | <p>I haven't seen anything built-in, but I have seen <a href="http://bzimmer.ziclix.com/2008/12/17/python-thread-dumps/" rel="nofollow">a solution here</a> which can be exposed via an HTTP console. The solution iterates over all threads and outputs each stack.</p>
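A sketch of that idea (my own wiring, not the linked article verbatim) using <code>sys._current_frames()</code>, available since CPython 2.5; the SIGQUIT hookup is Unix-only and roughly matches Java's <code>kill -3</code>:

```python
import signal
import sys
import threading
import traceback

def dump_threads(signum=None, frame=None):
    """Print a stack trace for every live thread, kill -3 style."""
    names = dict((t.ident, t.name) for t in threading.enumerate())
    for thread_id, stack in sys._current_frames().items():
        print('--- Thread %s (id %s) ---' % (names.get(thread_id, '?'), thread_id))
        traceback.print_stack(stack, file=sys.stdout)

if hasattr(signal, 'SIGQUIT'):  # Unix only
    # kill -QUIT <pid> now produces a dump of all thread stacks
    signal.signal(signal.SIGQUIT, dump_threads)
```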
| 2 | 2009-08-20T00:31:18Z | [
"python"
] |
How to populate a list with items.count() from a queryset sorted by a datetime field | 1,302,999 | <p>I had a hard time formulating the title, so please edit it if you have a better one :)</p>
<p>I'm trying to display some statistics using the pygooglechart. And I am using Django to get the database items out of the database.</p>
<p>The database items have a datetime field which I want to "sort on". What I really want is to populate a list like this:</p>
<pre><code>data = [10, 12, 51, 50]
</code></pre>
<p>where each list item is the number (count) of database items within an hour. So let's say I do a query that gets all items from the last 72 hours; I want to collect the count for each hour into a list item. Anybody have a good way to do this?</p>
| 0 | 2009-08-19T22:37:48Z | 1,303,361 | <p><a href="http://docs.djangoproject.com/en/dev/topics/db/aggregation/" rel="nofollow">django aggregation</a> </p>
| 0 | 2009-08-20T00:30:58Z | [
"python",
"django",
"pygooglechart"
] |
How to populate a list with items.count() from a queryset sorted by a datetime field | 1,302,999 | <p>I had a hard time formulating the title, so please edit it if you have a better one :)</p>
<p>I'm trying to display some statistics using the pygooglechart. And I am using Django to get the database items out of the database.</p>
<p>The database items have a datetime field which I want to "sort on". What I really want is to populate a list like this:</p>
<pre><code>data = [10, 12, 51, 50]
</code></pre>
<p>where each list item is the number (count) of database items within an hour. So let's say I do a query that gets all items from the last 72 hours; I want to collect the count for each hour into a list item. Anybody have a good way to do this?</p>
| 0 | 2009-08-19T22:37:48Z | 1,304,749 | <p>Assuming you're running Django 1.1 or a fairly recent checkout, you can use the new <a href="http://docs.djangoproject.com/en/dev/topics/db/aggregation/" rel="nofollow">aggregation features</a>. Something like:</p>
<pre><code>from django.db.models import Count

counts = MyModel.objects.values('datetimefield').annotate(Count('datetimefield'))
</code></pre>
<p>This actually gets you a list of dictionaries:</p>
<pre><code>[{'datetimefield': <date1>, 'datetimefield__count': <count1>},
 {'datetimefield': <date2>, 'datetimefield__count': <count2>}, ...]
</code></pre>
<p>but it should be fairly easy to write a list comprehension to get the format you want.</p>
<p><strong>Edited after comment</strong>: If you're on 1.0.2, the most efficient thing to do is to fall back to raw SQL.</p>
<pre><code>from django.db import connection

cursor = connection.cursor()
cursor.execute(
    "SELECT `mydatetimefield`, COUNT(*) FROM `mymodel_table` "
    "GROUP BY `mydatetimefield`;"
)
counts = cursor.fetchall()
</code></pre>
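Either way, you still have to turn per-datetime counts into the fixed-length per-hour list the question asks for. A plain-Python bucketing sketch (the function and its defaults are mine, not Django API), given the datetime values of the matching items:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def hourly_counts(timestamps, hours=72, now=None):
    """Bucket datetimes into per-hour counts for the last `hours` hours.

    Returns counts ordered oldest hour first, with zeros for empty hours --
    the shape pygooglechart expects for its data series.
    """
    now = now or datetime.now()
    start = now.replace(minute=0, second=0, microsecond=0) - timedelta(hours=hours - 1)
    buckets = defaultdict(int)
    for ts in timestamps:
        buckets[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return [buckets[start + timedelta(hours=i)] for i in range(hours)]
```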
| 2 | 2009-08-20T08:25:01Z | [
"python",
"django",
"pygooglechart"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,040 | <p>I'm sure that there's a CRC32 implementation in Python, but that may be too short (8 hex digits). On the upside, it's very quick.</p>
<p>Found it, <a href="http://docs.python.org/library/binascii.html#binascii.crc32" rel="nofollow">binascii.crc32</a></p>
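A sketch of using it for cache filenames — fast, but only 32 bits and not cryptographic, so treat a false match as a cache miss, exactly as the question allows. The <code>.encode()</code> is the Python 3 spelling; in Python 2, <code>repr()</code> already yields a byte string:

```python
import binascii

def cache_name(obj):
    """8-hex-digit cache filename derived from repr(obj) via CRC32."""
    # & 0xffffffff normalizes the signed value binascii.crc32 can return
    checksum = binascii.crc32(repr(obj).encode('utf-8')) & 0xffffffff
    return '%08x.cache' % checksum
```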
| 3 | 2009-08-19T22:48:21Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,045 | <p>If you do have a collision, how are you going to tell that it actually happened?</p>
<p>If I were you, I would use hashlib to <code>sha1()</code> the <code>repr()</code>, and then just get a limited substring of it (first 16 characters, for example).</p>
<p>Unless you are talking about huge numbers of these objects, I would suggest that you just use the full hash. Then the opportunity for collision is so, so, so, so small, that you will never live to see it happen (likely).</p>
<p>Also, if you are dealing with <strong>that</strong> many files, I'm guessing that your caching technique should be adjusted to accommodate it.</p>
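A sketch of the truncation idea (my helper name, Python 3 spelling of <code>.encode()</code>; assumes <code>repr()</code> is stable for the objects being cached). 16 hex digits is 64 bits, which per the birthday bound below is plenty for a best-effort cache:

```python
import hashlib

def short_hash(obj, length=16):
    """First `length` hex digits of the SHA-1 of repr(obj)."""
    return hashlib.sha1(repr(obj).encode('utf-8')).hexdigest()[:length]
```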
| 1 | 2009-08-19T22:48:52Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,050 | <p>The builtin hash function of strings is fairly collision free, and also fairly short. It has <code>2**32</code> values, so it is fairly unlikely that you encounter collisions (if you use its abs value, it will have only <code>2**31</code> values).</p>
<p>You have been asking for the shortest hash function. That would certainly be</p>
<pre><code>def hash(s):
return 0
</code></pre>
<p>but I guess you didn't really mean it that way...</p>
| 26 | 2009-08-19T22:49:21Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,061 | <p>Short hashes mean you may get the same hash for two different files. The same may happen with big hashes too, but it's far rarer.
Maybe these file names should also vary based on other references, like microtime (unless the files may be created too quickly).</p>
| 0 | 2009-08-19T22:53:17Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,063 | <p>You can make any hash you like shorter by simply truncating it. md5 is always 32 hex digits, but an arbitrary substring of it (or any other hash) has the proper qualities of a hash: equal values produce equal hashes, and the values are spread around a bunch.</p>
| 7 | 2009-08-19T22:53:53Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,188 | <p>We use <code>hashlib.sha1(...).hexdigest()</code>, which produces even longer strings, for cache objects with good success. Nobody is actually looking at cache files anyway.</p>
| 1 | 2009-08-19T23:33:05Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique reprs, so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match, I notice. So if we don't have a cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
| 15 | 2009-08-19T22:43:51Z | 1,303,524 | <p>Considering your use case, if you don't have your heart set on using separate cache files and you are not too far down that development path, you might consider using the <code>shelve</code> module.</p>
<p>This will give you a persistent dictionary (stored in a single dbm file) in which you store your objects. Pickling/unpickling is performed transparently, and you don't have to concern yourself with hashing, collisions, file I/O, etc.</p>
<p>For the shelve dictionary keys, you would just use repr(obj) and let <code>shelve</code> deal with stashing your objects for you. A simple example:</p>
<pre><code>import shelve
cache = shelve.open('cache')
t = (1,2,3)
i = 10
cache[repr(t)] = t
cache[repr(i)] = i
print cache
# {'(1, 2, 3)': (1, 2, 3), '10': 10}
cache.close()
cache = shelve.open('cache')
print cache
# {'(1, 2, 3)': (1, 2, 3), '10': 10}
print cache[repr(10)]
# 10
</code></pre>
| 1 | 2009-08-20T01:37:17Z | [
"python",
"hash"
] |
Shortest hash in python to name cache files | 1,303,021 | <p>What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save <em>cache files</em> for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application).</p>
<p>So, if there is one collision we lose one cache file, but the collected savings of caching all the objects make the application startup much faster, so it does not matter much.</p>
<p>Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance?</p>
<p>Edit:
Use case is like this:
The data loader gets a new instance of a data-carrying object. Unique types have unique repr. so if a cache file for <code>hash(repr(obj))</code> exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match I notice. So if we don't have cache or have a false match, I instead init obj (reloading its data).</p>
<p><strong>Conclusions (?)</strong></p>
<p>The <code>str</code> hash in python may be good enough, I was only worried about its collision resistance. But if I can hash <code>2**16</code> objects with it, it's going to be more than good enough.</p>
<p>I found out how to take a hex hash (from any hash source) and store it compactly with base64:</p>
<pre><code># 'h' is a string of hex digits
bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2))
hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
</code></pre>
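<p>On Python 3, <code>bytes.fromhex</code> replaces the manual <code>chr</code>/<code>int</code> join above (a hedged sketch; the sample digest value is made up for illustration):</p>

```python
import base64

h = "deadbeef0102"                      # any hex digest string (illustrative value)
raw = bytes.fromhex(h)                  # Python 3 equivalent of the chr/int join
hashstr = base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")
```

<p>Each urlsafe-base64 character carries 6 bits, so 6 bytes of hash encode to exactly 8 filename-safe characters.</p>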
| 15 | 2009-08-19T22:43:51Z | 1,303,619 | <p>The <a href="http://en.wikipedia.org/wiki/Birthday%5Fparadox#Cast%5Fas%5Fa%5Fcollision%5Fproblem">birthday paradox</a> applies: given a good hash function, the expected number of hashes before a collision occurs is about sqrt(N), where N is the number of different values that the hash function can take. (The wikipedia entry I've pointed to gives the exact formula). So, for example, if you want to use no more than 32 bits, your collision worries are serious for around 64K objects (i.e., <code>2**16</code> objects -- the square root of the <code>2**32</code> different values your hash function can take). How many objects do you expect to have, as an order of magnitude?</p>
<p>Since you mention that a collision is a minor annoyance, I recommend you aim for a number of possible hash values that's roughly the square of the number of objects you'll have, or a bit less but not MUCH less than that.</p>
<p>You want to make a filename - is that on a case-sensitive filesystem, as typical on Unix, or do you have to cater for case-insensitive systems too? This matters because you aim for short filenames, but the number of bits per character you can use to represent your hash as a filename changes dramatically on case-sensitive vs insensitive systems.</p>
<p>On a case-sensitive system, you can use the standard library's <code>base64</code> module (I recommend the "urlsafe" version of the encoding, i.e. <a href="http://docs.python.org/library/base64.html#base64.urlsafe%5Fb64encode">this</a> function, as avoiding '/' characters that could be present in plain base64 is important in Unix filenames). This gives you 6 usable bits per character, much better than the 4 bits/char in hex.</p>
<p>Even on a case-insensitive system, you can still do better than hex -- use base64.b32encode and get 5 bits per character.</p>
<p>These functions take and return strings; use the <code>struct</code> module to turn numbers into strings if your chosen hash function generates numbers.</p>
<p>If you do have a few tens of thousands of objects I think you'll be fine with builtin hash (32 bits, so 6-7 characters depending on your chosen encoding). For a million objects you'd want 40 bits or so (7 or 8 characters) -- you can fold (xor, don't truncate;-) a sha256 down to a long with a reasonable number of bits, say 128 or so, and use the <code>%</code> operator to cut it further to your desired length before encoding.</p>
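<p>A sketch of that fold-then-encode idea (Python 3 syntax; the 6-byte/48-bit output size is an arbitrary illustrative choice):</p>

```python
import base64
import hashlib

def short_hash(text, n_bytes=6):
    """xor-fold a SHA-256 digest down to n_bytes, then urlsafe-base64 it."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    folded = bytearray(n_bytes)
    for i, b in enumerate(digest):
        folded[i % n_bytes] ^= b          # fold by xor, don't truncate
    return base64.urlsafe_b64encode(bytes(folded)).rstrip(b"=").decode("ascii")
```

<p>With <code>n_bytes=6</code> (48 bits) the result is 8 characters, and by the birthday estimate collisions only become likely around 2**24 objects.</p>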
| 33 | 2009-08-20T02:12:31Z | [
"python",
"hash"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,101 | <p>Here's a simple, cheap solution to get you started</p>
<pre><code>from subprocess import Popen, PIPE

p = Popen('ssh servername who', shell=True, stdout=PIPE)
output = p.communicate()[0]   # communicate() waits and drains the pipe, avoiding
                              # the deadlock that wait() + readlines() can hit
print output.splitlines(True)
</code></pre>
<p>returns (e.g.)</p>
<pre><code>['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
</code></pre>
<p>and for cpuinfo:</p>
<pre><code>p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
</code></pre>
| 2 | 2009-08-19T23:07:16Z | [
"python",
"networking",
"monitoring"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,109 | <p>I've been using <a href="http://sourceforge.net/projects/pexpect/" rel="nofollow">Pexpect</a>, which let's you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, <a href="http://bitbucket.org/inerte/proxpect/" rel="nofollow">Proxpect</a> - which haven't been updated in ages, but I digress...</p>
| 2 | 2009-08-19T23:08:31Z | [
"python",
"networking",
"monitoring"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,114 | <p>This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.</p>
<p>Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running <code>deploy-control httpd configtest</code> will check the apache configuration on all the remote servers.</p>
<pre><code>#!/usr/local/bin/python

import subprocess
import sys

# The user@host: for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
    ]

###################################################################################################

# Global Variables
Arg = None

# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])

ResultList = []

###################################################################################################

for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)

    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))

    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()

    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################

if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(1)   # non-zero exit: at least one host returned an error
else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(0)
</code></pre>
| 0 | 2009-08-19T23:09:26Z | [
"python",
"networking",
"monitoring"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,116 | <p>The <a href="http://pexpect.sourceforge.net/pexpect.html" rel="nofollow">pexpect</a> module can help you interface with ssh. More or less, here is what your example would look like.</p>
<pre><code>import pexpect

child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.expect('[$#] ')    # wait for the shell prompt
child.sendline('who')
child.expect('[$#] ')
output = child.before    # sendline() returns a byte count; the output is in .before
</code></pre>
| 1 | 2009-08-19T23:09:44Z | [
"python",
"networking",
"monitoring"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,117 | <p>Fabric is a simple way to automate some simple tasks like this, the version I'm currently using allows you to wrap up commands like so:</p>
<pre><code>run('whoami', fail='ignore')
</code></pre>
<p>you can specify config options (config.fab_user, config.fab_password) for each machine you need (if you want to automate username password handling).</p>
<p>More info on Fabric here:</p>
<p><a href="http://www.nongnu.org/fab/" rel="nofollow">http://www.nongnu.org/fab/</a></p>
<p>There is a new version which is more Pythonic - I'm not sure whether that is going to be better for you in this case... works fine for me at present...</p>
| 0 | 2009-08-19T23:09:52Z | [
"python",
"networking",
"monitoring"
] |
Interface with remote computers using Python | 1,303,047 | <p>I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).</p>
<p>For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and <code>who</code>, but how would I get this info into a script for manipulation? Something like,</p>
<pre><code>import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
</code></pre>
<p>Similarly for things like <code>cat /proc/cpuinfo</code> to get the processor information of a node. A starting point would be really great. Thanks.</p>
| 5 | 2009-08-19T22:49:09Z | 1,303,155 | <p>If your needs overgrow simple "<code>ssh remote-host.example.org who</code>" then there is an awesome python library, called <a href="http://en.wikipedia.org/wiki/RPyC" rel="nofollow">RPyC</a>. It has so called "classic" mode which allows to almost transparently execute Python code over the network with several lines of code. Very useful tool for trusted environments.</p>
<p>Here's an example from Wikipedia:</p>
<pre><code>import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")
# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)
remote_ls("/usr/bin")
</code></pre>
<p>If you're interested, there's <a href="http://rpyc.wikidot.com/tutorial:part1" rel="nofollow">a good tutorial on their wiki</a>.</p>
<p>But, of course, if you're perfectly fine with ssh calls using <code>Popen</code> or just don't want to run separate "RPyC" daemon, then this is definitely an overkill.</p>
| 1 | 2009-08-19T23:22:59Z | [
"python",
"networking",
"monitoring"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 1,304,636 | <p>I think the solution really depends on exactly what it is you want to do with the graph once you have managed to store it on disk/in database, and this is a little unclear in your question. However, a couple of things you might wish to consider are:</p>
<ul>
<li>if you just want to persist the graph without using any of the features or properties you might expect from an rdbms solution (such as ACID), then how about just pickling the objects into a flat file? Very rudimentary, but like I say, depends on exactly what you want to achieve.</li>
<li><a href="http://wiki.zope.org/ZODB/FrontPage" rel="nofollow">ZODB</a> is an object database for Python (a spin off from the Zope project I think). I can't say I've had much experience of it in a high performance environment, but bar a few restrictions does allow you to store Python objects natively.</li>
<li>if you wish to pursue RDF, there is an <a href="http://www.openvest.com/trac/wiki/RDFAlchemy" rel="nofollow">RDF Alchemy</a> project which might help to alleviate some of your concerns about converting from your graph to RDF structures and I think has Sesame as part of it's stack.</li>
</ul>
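<p>The flat-file option in the first bullet really is as rudimentary as it sounds; a minimal sketch with a made-up adjacency-list graph:</p>

```python
import pickle
import tempfile

graph = {"a": ["b", "c"], "b": ["c"], "c": []}   # toy adjacency-list graph

with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(graph, f)
    path = f.name

with open(path, "rb") as f:
    restored = pickle.load(f)
```

<p>No ACID, no concurrency, no queries - but for snapshot-style persistence of a modest in-memory graph it can be all you need.</p>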
<p>There are some other <a href="http://wiki.python.org/moin/PersistenceTools" rel="nofollow">persistence tools</a> detailed on the python site which may be of interest, however I spent quite a while looking into this area last year, and ultimately I found there wasn't a native Python solution that met my requirements. </p>
<p>The most success I had was using MySQL with a custom ORM, and I posted a couple of relevant links in an answer to <a href="http://stackoverflow.com/questions/1271124/does-mysql-5-have-procedures-for-managing-hierarchical-data/1271517#1271517">this question</a>. Additionally, if you want to contribute to an RDBMS project, when I spoke to someone from Open Query about <a href="http://openquery.com/products/graph-engine" rel="nofollow">a Graph storage engine for MySQL</a> they seemed interested in getting active participation in their project.</p>
<p>Sorry I can't give a more definitive answer, but I don't think there is one... If you do start developing your own implementation, I'd be interested to keep up-to-date with how you get on.</p>
| 3 | 2009-08-20T07:59:20Z | [
"python",
"graph",
"database",
"graph-databases"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 1,304,985 | <p>Hmm, maybe you should take a look at <a href="http://www.cubicweb.org" rel="nofollow">CubicWeb</a></p>
| 1 | 2009-08-20T09:22:03Z | [
"python",
"graph",
"database",
"graph-databases"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 1,372,443 | <p>Regarding Neo4j, did you notice the existing <a href="http://wiki.neo4j.org/content/Python" rel="nofollow">Python bindings</a>? As for the disk storage, take a look at <a href="http://www.mail-archive.com/user@lists.neo4j.org/msg01128.html" rel="nofollow">this thread</a> on the <a href="http://neo4j.org/community/list/" rel="nofollow">mailing list</a>.</p>
<p>For graphdbs in Python, the <a href="http://sourceforge.net/projects/hygdas/" rel="nofollow">Hypergraph Database Management System</a> project was recently started on SourceForge by <a href="http://maurice.vodien.com/" rel="nofollow">Maurice Ling</a>.</p>
| 1 | 2009-09-03T09:39:41Z | [
"python",
"graph",
"database",
"graph-databases"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 1,990,600 | <p>I have used both <a href="http://jena.sourceforge.net/">Jena</a>, which is a Java framework, and <a href="http://www.franz.com/agraph/allegrograph/">Allegrograph</a> (Lisp, Java, Python bindings). Jena has sister projects for storing graph data and has been around a long, long time. Allegrograph is quite good and has a free edition; I would suggest it because it is easy to install, free, fast and you could be up and going in no time. The power you would get from learning a little RDF and SPARQL may very well be worth your while. If you know SQL already then you are off to a great start. Being able to query your graph using SPARQL would yield some great benefits to you. Serializing to RDF triples would be easy, and some of the file formats are super easy (NT, for instance). I'll give an example. Let's say you have the following graph node-edge-node ids:</p>
<blockquote>
<pre><code>1 <- 2 -> 3
3 <- 4 -> 5
</code></pre>
</blockquote>
<p>these are already in subject-predicate-object form, so just slap some URI notation on it, load it in the triple store and query at will via SPARQL. Here it is in NT format:</p>
<blockquote>
<pre><code><http://mycompany.com#1> <http://mycompany.com#2> <http://mycompany.com#3> .
<http://mycompany.com#3> <http://mycompany.com#4> <http://mycompany.com#5> .
</code></pre>
</blockquote>
<p>Now query for all nodes two hops from node 1:</p>
<blockquote>
<pre><code>SELECT ?node
WHERE {
    <http://mycompany.com#1> ?p1 ?o1 .
    ?o1 ?p2 ?node .
}
</code></pre>
</blockquote>
<p>This would of course yield <code>&lt;http://mycompany.com#5&gt;</code>.</p>
<p>Another candidate would be <a href="http://www.mulgara.org/">Mulgara</a>, written in pure Java. Since you seem more interested in Python though I think you should take a look at Allegrograph first.</p>
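<p>The two-hop lookup that the SPARQL query performs can be mimicked over a plain list of triples in Python (a toy illustration, not a real triple store):</p>

```python
triples = [
    ("http://mycompany.com#1", "http://mycompany.com#2", "http://mycompany.com#3"),
    ("http://mycompany.com#3", "http://mycompany.com#4", "http://mycompany.com#5"),
]

def two_hops(start, triples):
    # objects reachable from `start` by following exactly two edges
    one_hop = {o for s, p, o in triples if s == start}
    return {o for s, p, o in triples if s in one_hop}
```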
| 5 | 2010-01-02T04:24:43Z | [
"python",
"graph",
"database",
"graph-databases"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 4,986,634 | <p>Greetings from your Serius Cybernetics Intelligent Agent!</p>
<p>Some useful links...</p>
<ul>
<li><p><a href="http://rads.stackoverflow.com/amzn/click/0596153813" rel="nofollow">Programming the Semantic Web</a></p></li>
<li><p><a href="http://semprog.com/content/" rel="nofollow">SEMANTIC PROGRAMMING</a></p></li>
<li><p><a href="http://www.rdflib.net/" rel="nofollow">RDFLib Python Library for RDF</a></p></li>
</ul>
| 2 | 2011-02-13T20:23:11Z | [
"python",
"graph",
"database",
"graph-databases"
] |
Graph databases and RDF triplestores: storage of graph data in python | 1,303,105 | <p>I need to develop a graph database in python (I would enjoy if anybody can join me in the development. I already have a bit of code, but I would gladly discuss about it).</p>
<p>I did my research on the internet. In Java, <a href="http://neo4j.org/">neo4j</a> is a candidate, but I was not able to find anything about actual disk storage. In Python, there are many <a href="http://wiki.python.org/moin/PythonGraphApi">graph data models</a> (see this pre-PEP proposal), but none of them satisfy my need to store and retrieve from disk.</p>
<p>I do know about triplestores, however. Triplestores are basically RDF databases, so a graph data model could be mapped to RDF and stored, but I am generally uneasy (mainly due to lack of experience) about this solution. One example is <a href="http://www.openrdf.org/">Sesame</a>. Fact is that, in any case, you have to convert from the in-memory graph representation to the RDF representation and vice versa, unless the client code wants to hack on the RDF document directly, which is mostly unlikely. It would be like handling DB tuples directly, instead of creating an object. </p>
<p>What is the state-of-the-art for storage and retrieval (<em>a la</em> DBMS) of graph data in python, at the moment? Would it make sense to start developing an implementation, hopefully with the help of someone interested in it, and in collaboration with the proposers for the Graph API PEP ? Please note that this is going to be part of my job for the next months, so my contribution to this eventual project is pretty damn serious ;)</p>
<p><strong>Edit</strong>: Found also <a href="http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/">directededge</a>, but it appears to be a commercial product</p>
| 16 | 2009-08-19T23:07:36Z | 6,726,392 | <p>Redland (http://librdf.org) is probably the solution you're looking for. It has Python bindings too.</p>
| 0 | 2011-07-17T19:56:52Z | [
"python",
"graph",
"database",
"graph-databases"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 1,303,252 | <h1>Python 2</h1>
<p>Use <code>isinstance(obj, basestring)</code> for an object-to-test <code>obj</code>.</p>
<p><a href="http://docs.python.org/library/functions.html#basestring">Docs</a>.</p>
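<p>As an illustrative sketch (my addition, not part of the original answer): the same check can be written so it also runs under Python 3, where <code>basestring</code> no longer exists. The fallback name <code>string_base</code> is my own choice.</p>

```python
# Cross-version variant of the isinstance check (assumption: you want one
# snippet that works on both Python 2 and Python 3).
try:
    string_base = basestring  # Python 2: covers both str and unicode
except NameError:
    string_base = str  # Python 3: str is the only text string type

print(isinstance("hello", string_base))     # True
print(isinstance(u"caf\xe9", string_base))  # True
print(isinstance(42, string_base))          # False
```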
| 181 | 2009-08-19T23:53:12Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 1,303,266 | <p>To check if an object <code>o</code> is a string type or a subclass of a string type:</p>
<pre><code>isinstance(o, basestring)
</code></pre>
<p>because both <code>str</code> and <code>unicode</code> are subclasses of <code>basestring</code>.</p>
<p>To check if the type of <code>o</code> is exactly <code>str</code>:</p>
<pre><code>type(o) is str
</code></pre>
<p>To check if <code>o</code> is an instance of <code>str</code> or any subclass of <code>str</code>:</p>
<pre><code>isinstance(o, str)
</code></pre>
<p>The above also work for Unicode strings if you replace <code>str</code> with <code>unicode</code>.</p>
<p>However, you may not need to do explicit type checking at all. "Duck typing" may fit your needs. See <a href="http://docs.python.org/glossary.html#term-duck-typing">http://docs.python.org/glossary.html#term-duck-typing</a>.</p>
<p>See also <a href="http://stackoverflow.com/questions/152580/whats-the-canonical-way-to-check-for-type-in-python">What's the canonical way to check for type in Python?</a></p>
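<p>To make the subclass distinction above concrete, a small sketch (the <code>MyStr</code> subclass is hypothetical, not from the answer):</p>

```python
# An exact-type check with `type(...) is str` rejects subclasses,
# while isinstance accepts them.
class MyStr(str):
    pass

s = MyStr("hello")
print(type(s) is str)        # False: s is a MyStr, not exactly str
print(isinstance(s, str))    # True: MyStr is a subclass of str
print(isinstance(s, MyStr))  # True
```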
| 92 | 2009-08-19T23:58:15Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 1,313,335 | <p>I might deal with this in the duck-typing style, like others mention. How do I know a string is really a string? Well, obviously by <em>converting</em> it to a string!</p>
<pre><code>def myfunc(word):
word = unicode(word)
...
</code></pre>
<p>If the arg is already a string or unicode type, <code>word</code> will hold its value unmodified. If the object passed implements a <code>__unicode__</code> method, that is used to get its unicode representation. If the object passed cannot be used as a string, the <code>unicode</code> builtin raises an exception.</p>
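<p>A rough Python 3 analogue of this coerce-on-entry pattern (my sketch, with a hypothetical <code>myfunc</code> body since the original is elided): <code>str</code> plays the role that <code>unicode</code> played in Python 2. One caveat: <code>str()</code> <em>converts</em> non-strings such as ints instead of rejecting them.</p>

```python
def myfunc(word):
    # Coerce on entry: after this line, word is guaranteed to be a str.
    # Note this converts non-string inputs rather than raising for them.
    word = str(word)
    return word.upper()

print(myfunc("hello"))  # HELLO
print(myfunc(42))       # 42  (the int was converted to the string "42")
```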
| 4 | 2009-08-21T17:50:16Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 14,504,067 | <p>You can test it by concatenating with an empty string:</p>
<pre><code>def is_string(s):
try:
s += ''
    except TypeError:
return False
return True
</code></pre>
<p><strong>Edit</strong>:</p>
<p>Correcting my answer after comments pointing out that this fails with lists</p>
<pre><code>def is_string(s):
return isinstance(s, basestring)
</code></pre>
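<p>A sketch of why the original <code>+=</code> version needed correcting (my illustration, using a renamed helper so it does not clash with the answer's <code>is_string</code>): <code>list.__iadd__</code> accepts any iterable, so appending an empty string to a list succeeds, and lists pass the check.</p>

```python
def is_string_concat(s):
    # The duck-typing check from the answer, before the edit.
    try:
        s += ''
    except TypeError:
        return False
    return True

print(is_string_concat("abc"))   # True
print(is_string_concat([1, 2]))  # True: the false positive for lists
print(is_string_concat(42))      # False
```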
| 0 | 2013-01-24T14:48:52Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 22,501,512 | <pre><code>isinstance(your_object, basestring)
</code></pre>
<p>will be True if your object is indeed a string type (note that <code>str</code> is a built-in name, not a reserved word).</p>
<p>My apologies: the correct answer is to use <code>basestring</code> instead of <code>str</code> so that it includes unicode strings as well, as has been noted above by one of the other responders.</p>
| 2 | 2014-03-19T09:35:26Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 23,692,403 | <h1>Python 2 and 3</h1>
<h3>(cross-compatible)</h3>
<p>If you want to check with no regard for Python version (2.x vs 3.x), use <a href="http://pythonhosted.org/six/"><strong><code>six</code></strong></a> (<a href="https://pypi.python.org/pypi/six/">PyPI</a>) and its <code>string_types</code> attribute:</p>
<pre><code>import six
if isinstance(obj, six.string_types):
print('obj is a string!')
</code></pre>
<p>Within <code>six</code> (a very light-weight single-file module), it's simply doing this:</p>
<pre><code>import sys
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
else:
string_types = basestring,
</code></pre>
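<p>If pulling in <code>six</code> as a dependency is not desired, the same two branches can be inlined directly (a sketch mirroring what <code>six</code> does internally):</p>

```python
import sys

# Inline equivalent of six.string_types (no external dependency).
if sys.version_info[0] >= 3:
    string_types = (str,)
else:
    string_types = (basestring,)  # basestring only exists on Python 2

print(isinstance("hello", string_types))   # True
print(isinstance(b"bytes", string_types))  # False on Python 3, True on 2
```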
| 37 | 2014-05-16T03:57:49Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 25,798,563 | <pre><code>if type(varA) == str or type(varB) == str:
print 'string involved'
</code></pre>
<p>From the edX online course MITx 6.00.1x, Introduction to Computer Science and Programming Using Python.</p>
| -3 | 2014-09-11T23:38:26Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 26,535,728 | <h1>Python 3</h1>
<p>In Python 3.x <code>basestring</code> is not available anymore, as <code>str</code> is the sole string type (with the semantics of Python 2.x's <code>unicode</code>).</p>
<p>So the check in Python 3.x is just:</p>
<pre><code>isinstance(obj_to_test, str)
</code></pre>
<p>This follows <a href="https://docs.python.org/2/library/2to3.html#2to3fixer-basestring">the fix</a> of the official <code>2to3</code> conversion tool: converting <code>basestring</code> to <code>str</code>.</p>
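<p>A short demonstration of the consequence (my addition): in Python 3, text (<code>str</code>) and binary data (<code>bytes</code>) are distinct types, so this check matches only text strings:</p>

```python
# isinstance(obj, str) in Python 3 matches text strings but not bytes.
print(isinstance("text", str))           # True
print(isinstance(b"raw", str))           # False: bytes are not str
print(isinstance("text", (str, bytes)))  # True, if either type is acceptable
```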
| 36 | 2014-10-23T19:19:58Z | [
"python",
"string",
"types",
"compatibility"
] |
How to find out if a Python object is a string? | 1,303,243 | <p>How can I check if a Python object is a string (either regular or Unicode)?</p>
| 195 | 2009-08-19T23:50:58Z | 26,963,019 | <p>For a nice duck-typing approach for string-likes that has the bonus of working with both Python 2.x and 3.x:</p>
<pre><code>def is_string(obj):
try:
obj + ''
return True
except TypeError:
return False
</code></pre>
<p><a href="http://stackoverflow.com/users/920374/wisefish">wisefish</a> was close with the duck-typing before he switched to the <code>isinstance</code> approach, except that <code>+=</code> has a different meaning for lists than <code>+</code> does.</p>
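<p>To make that last point concrete, a quick sketch exercising the same check (my illustration; the definition is repeated so the snippet is self-contained):</p>

```python
def is_string(obj):
    # Same check as in the answer: plain + (unlike +=) raises TypeError
    # when obj is a list, so lists do not slip through.
    try:
        obj + ''
        return True
    except TypeError:
        return False

print(is_string("abc"))   # True
print(is_string([1, 2]))  # False: list + str raises TypeError
print(is_string(3.14))    # False
```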
| 0 | 2014-11-16T22:51:39Z | [
"python",
"string",
"types",
"compatibility"
] |