Columns (in the order they appear in each record below), with type and observed value range:

Title                               stringlengths   15 to 150
A_Id                                int64           2.98k to 72.4M
Users Score                         int64           -17 to 470
Q_Score                             int64           0 to 5.69k
ViewCount                           int64           18 to 4.06M
Database and SQL                    int64           0 to 1
Tags                                stringlengths   6 to 105
Answer                              stringlengths   11 to 6.38k
GUI and Desktop Applications        int64           0 to 1
System Administration and DevOps    int64           1 to 1
Networking and APIs                 int64           0 to 1
Other                               int64           0 to 1
CreationDate                        stringlengths   23 to 23
AnswerCount                         int64           1 to 64
Score                               float64         -1 to 1.2
is_accepted                         bool            2 classes
Q_Id                                int64           1.85k to 44.1M
Python Basics and Environment       int64           0 to 1
Data Science and Machine Learning   int64           0 to 1
Web Development                     int64           0 to 1
Available Count                     int64           1 to 17
Question                            stringlengths   41 to 29k
Python-specific PATH environment variable?
22,772,608
4
2
47
0
python,shell
Yes, add it to your PYTHONPATH as you are doing, but you cannot invoke it with python foo.py; instead, use python -m foo.
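A minimal sketch of this approach; the directory /opt/tools and the module name foo are hypothetical stand-ins:

```python
# foo.py, saved in a folder such as /opt/tools (hypothetical path).
# With that folder on PYTHONPATH, e.g.
#   export PYTHONPATH="$PYTHONPATH:/opt/tools"
# the script can be run from anywhere as a module:
#   python -m foo
# (plain `python foo.py` would only work from inside /opt/tools)

def main():
    print("foo ran as a module")

if __name__ == "__main__":
    main()
```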
0
1
0
1
2014-03-31T21:29:00.000
1
1.2
true
22,772,554
1
0
0
1
Say I have a script script.py located in a specific folder in my system. This folder is not available on PATH. Assuming that I will always run script.py using python script.py, is there any way to run my script from anywhere on the system without having to modify PATH? I thought modifying PYTHONPATH would do it, but it doesn't. PYTHONPATH seems to only affect the module search path, and not the script search path. Is my understanding correct?
Starting up PySpark for using python with Spark in eclipse
23,485,718
6
1
6,907
0
python,apache-spark
I started a new Python project in PyDev, then went into Project -> Properties -> PyDev - PYTHONPATH -> External libraries. I added a "source path" entry for /path/to/spark/spark-0.9.1/python. This allowed PyDev to see all Spark-related code and provide autocomplete, etc. Hope this helps.
0
1
0
0
2014-04-01T11:49:00.000
2
1
false
22,785,010
0
0
0
1
How do I use Python for a Spark program in Eclipse? I've installed the PyDev plugin in Eclipse and installed Python on the system, but how do I use PySpark?
Outputting text to multiple terminals in Python
22,799,374
3
3
5,633
0
python,shell,terminal
I like @ebarr's answer, but a quick and dirty way to do it is to write to several files. You can then open multiple terminals and tail the files.
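A sketch of that quick-and-dirty approach; the log file names are hypothetical, and each file would be watched from its own terminal with tail -f:

```python
import sys

class MultiWriter(object):
    """Duplicate writes to several file-like objects."""
    def __init__(self, *streams):
        self.streams = streams

    def write(self, text):
        for stream in self.streams:
            stream.write(text)
            stream.flush()  # flush so `tail -f` shows lines immediately

parser_log = open("parser.log", "a")   # terminal 1: tail -f parser.log
chat_log = open("chat.log", "a")       # terminal 2: tail -f chat.log
sys.stdout = MultiWriter(sys.__stdout__, parser_log, chat_log)

print("this line appears in the main terminal and in both tailed files")
```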
0
1
0
0
2014-04-01T20:43:00.000
2
0.291313
false
22,796,476
1
0
0
1
(I am using Python and Arch Linux.) I am writing a simple AI in Python as a school project. Because it is a school project, and I would like to visibly demonstrate what it is doing, my intention is to have a different terminal window displaying printed output from each subprocess: one terminal showing how sentences are being parsed, one showing what pyDatalog is doing, one for the actual input-output chat, etc., possibly on two monitors. From what I know, which is not much, a couple of feasible ways to go about this are threading each subprocess and figuring out display from there, or writing/using a library which allows me to make and configure my own windows. My question is, then: are those the best ways, or is there an easy way to output to multiple terminals simultaneously? Also, if making my own windows is the best option (and I'm sorry if my terminology is wrong when I say 'making my own windows'; I mean building my own output areas in Python), I'm looking for which library I should use for that.
.py file from Linux to Windows: is it going to just work?
22,805,475
1
1
220
0
python,linux,python-idle
Depends on the script. Unless you use anything OS-specific, you are golden. In the standard library most of the modules are totally OS-agnostic, and for the rest the rule of thumb is: "if it is possible to provide the same functionality across *nix and Windows, it has probably been done". Python actually makes it pretty easy to write portable programs. Even file path manipulation is pretty portable if you do it right - os.path.sep instead of '/', os.path.join instead of string concatenation, etc. Notable exceptions are: sockets (Windows sockets are a bit different); multiprocessing (Windows does not have fork(), which may or may not be a problem); and, needless to say, things related to username, hostname and such. os and sys are a mixed bag - you should read the compatibility notes in the docs. Everything packaging- and distribution-related is also an exception.
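To illustrate the path-portability point, a small sketch contrasting the portable idioms with fragile string concatenation (the file names are hypothetical):

```python
import os

# Portable: os.path builds the right separator for the current OS.
config = os.path.join(os.path.expanduser("~"), "myapp", "settings.ini")

# Fragile: hard-codes '/' and the Unix-only HOME variable.
fragile = os.environ.get("HOME", "") + "/myapp/settings.ini"

print(config)
```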
0
1
0
0
2014-04-02T08:04:00.000
2
0.099668
false
22,805,118
1
0
0
1
I am using Python 2.7.5+ on my Linux Mint to write simple programs as .py files and running them in Konsole Terminal. These work fine on my computer, but I need to share them with a friend using Windows (IDLE I suppose) and wonder if these will work as they are without modification. The programs start with the usual #!/usr/bin/python, you know.
Simulate keyboard input linux
22,812,228
1
2
2,879
0
python,c++,linux,input,keyboard
The most generic solution is to use pseudo-terminals: you connect the slave (tty) side to the standard in and standard out of the program you want to monitor, and use the master (pty) side to read and write to it. Alternatively, you can create two pipes, which you connect to the standard in and standard out of the program to be monitored before doing the exec. This is much simpler, but the pipes look more like a file than a terminal to the program being monitored.
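A sketch of the simpler pipe variant; `cat` is only a stand-in for the program whose input you want to drive:

```python
import subprocess

# Attach pipes to the child's stdin/stdout before the exec;
# `cat` stands in for the program being monitored.
proc = subprocess.Popen(["cat"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate(b"simulated keystrokes\n")
print(out)  # everything the child wrote to its stdout
```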
0
1
0
1
2014-04-02T12:41:00.000
2
0.099668
false
22,811,844
0
0
0
1
I am working on a project to control my PC with a remote and an infrared receiver on an Arduino. I need to simulate keyboard input from a process on Linux that will listen to the Arduino output and simulate the corresponding keystrokes. I can develop it in Python or C++, but I think Python is easier. After much searching, I found many results for... Windows. Does anyone have a library for this? Thanks. EDIT: I found that /dev/input/event3 is my keyboard. I think writing to it will simulate the keyboard; I'm searching for how to do that.
python import module vs running script as subprocess.popen
22,840,850
0
1
903
0
python,import,subprocess
If you want to use functions from another script then you usually import the script. When the script is script.py you can write import script and use the functions that are defined in the script with script.function_in_the_script.
0
1
0
0
2014-04-03T12:51:00.000
2
0
false
22,838,333
1
0
0
2
Suppose I have a Python script with 4-5 functions, all called from a single function in the script. If I want the results after executing the script (i.e., use its functions from another script), I can make the script executable and use subprocess.Popen, or I can import these functions in another script. Which is the better way to do this?
python import module vs running script as subprocess.popen
22,878,579
1
1
903
0
python,import,subprocess
Which is the better way to do this? Use import unless you have to use subprocess.Popen to run the Python code. import uses sys.path to find the module, so you usually don't need to specify the path explicitly. Imported functions accept arguments and return results in the same process, so you don't need to serialize Python objects into bytes to send them to another process.
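A side-by-side sketch of the two approaches; script.py and its entry-point function name are hypothetical:

```python
# Option 1: import -- same process, real Python objects in and out.
import script                      # hypothetical module: script.py
result = script.run_all()          # hypothetical entry-point function

# Option 2: subprocess -- a second interpreter, bytes in and out.
import subprocess
output = subprocess.check_output(["python", "script.py"])
```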
0
1
0
0
2014-04-03T12:51:00.000
2
1.2
true
22,838,333
1
0
0
2
Suppose I have a Python script with 4-5 functions, all called from a single function in the script. If I want the results after executing the script (i.e., use its functions from another script), I can make the script executable and use subprocess.Popen, or I can import these functions in another script. Which is the better way to do this?
How to prevent an app from being killed in windows task manager?
22,850,707
2
1
1,098
0
c#,python
I think it may be because things like anti-virus software are hooked into kernel mode as drivers and can intercept user-mode input and intervene. The anti-virus may be hooked into the kernel APIs for process management and reject calls through the process APIs to kill a process with the same PID as itself. If this is the case, then the answer would be that no, you can't, as I highly doubt that C# can be run in kernel mode.
0
1
0
0
2014-04-03T23:05:00.000
1
1.2
true
22,850,591
1
0
0
1
I want to ask a repeated question: how to prevent someone from stopping an application with the Task Manager. I know it is possible: if you try to kill avastui.exe from the Task Manager, it says "the operation could not be completed, access denied", and this happens while the Avast service is on; when you stop the Avast service, you can kill the avastui.exe process. Does anyone have any idea how Avast does it? How can I do it in C# or Python? Thanks in advance.
Can we send a tuple as an argument to sys.exit() in python
22,871,157
2
1
991
0
python
Yes, you are correct. Passing a tuple will print the tuple to stderr and exit with an exit code of 1. You must pass None (or zero) to denote success. Note that this is a convention of shells and the like and is not strictly required. That being said, the conventions are in place for a very, very good reason.
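A sketch of a convention-friendly alternative: keep the exit code numeric and pass the log file name through stdout, which the master script can read alongside the return code (the log file name here is hypothetical):

```python
import sys

log_file = "testcase_42.log"       # hypothetical log created by the test
sys.stdout.write(log_file + "\n")  # master script reads this from the pipe
sys.exit(0)                        # 0 = pass, nonzero = fail

# Note: sys.exit(("FAIL", log_file)) would print the tuple to stderr
# and always exit with status 1, even for a passing test.
```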
0
1
0
1
2014-04-04T19:03:00.000
2
0.197375
false
22,871,051
0
0
0
1
I am running my scripts on Python 2.6. The requirement is as mentioned below. There are some 100 test scripts (all Python scripts) in one directory. I have to create one master Python script which will run all 100 test scripts one by one and then display whether each test case failed or not. Every script calls sys.exit() to finish its execution. Currently I read the sys.exit() value from the master script and determine from it whether the particular test case failed. But now there is a requirement change: I also have to display the log file name (log files are created when I run the scripts). So can I send a tuple as the argument (containing the status as well as the log file name) to sys.exit() instead of an integer value? I have read that if we pass an argument other than an integer, None is equivalent to passing zero, and any other object is printed to stderr and results in an exit code of 1. So if I pass a tuple as the argument, will the OS consider it a failure even in the success case, as I am not passing None? I am using subprocess.Popen() in my master script to run the scripts, and I am using format() to read the sys.exit() value.
pydev Google App run Path for project must have only one segment
23,118,828
8
4
1,009
0
eclipse,google-app-engine,python-2.7
This is clearly a bug, but there's a possible workaround: In a .py file in your project, right-click and go to "Run As." Then, select "Python Run" (not a custom configuration). Let it run and crash or whatever this particular module does. Now, go look at your run configurations - you'll see one for this run. You can customize it as if you had made it anew.
0
1
0
0
2014-04-05T05:40:00.000
1
1.2
true
22,877,052
0
0
1
1
I'm having trouble with the PyDev Google App run on Eclipse. I can't create a new run configuration and I get this error message: "Path for project must have only one segment." Any ideas about how to fix it? I am running Eclipse Kepler on Ubuntu 13.10.
Exchanging NDB Entities between two GAE web apps using URL Fetch
22,880,568
2
0
131
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb,urlfetch
You can use the NDB to_dict() method for an entity and use JSON to exchange the data. If it is a lot of data you can use a cursor. To exchange the entity keys, you can add the safe key to the dict.
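A minimal sketch of the sending side under those assumptions; the Item model and the helper name are hypothetical:

```python
import json
from google.appengine.ext import ndb

class Item(ndb.Model):                   # hypothetical model shared by both apps
    name = ndb.StringProperty()
    index = ndb.IntegerProperty()

def serialize_batch(entities):
    payload = []
    for entity in entities:
        d = entity.to_dict()
        d["key"] = entity.key.urlsafe()  # ship the safe key alongside the data
        payload.append(d)
    return json.dumps(payload)           # body of the HTTP response
```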
0
1
0
0
2014-04-05T10:53:00.000
2
0.197375
false
22,879,890
0
0
1
1
I am planning to exchange NDB Entities between two GAE web apps using URL Fetch. One Web app can initiate the HTTP POST Request with the entity model name, starting entity index number and number of entities to be fetched. Each entity would have an index number which would be incremented sequentially for new entities. To Send an Entity: Some delimiter could be added to separate different entities as well as to separate properties of an entity. The HTTP Response would have a variable (say "content") containing the entity data. Receiving Side Web APP: The receiver web app would parse the received data and store the entities and their property values by creating new entities and "put"ting them Both the web apps are running GAE Python and have the same models. My Questions: Is there any disadvantage with the above method? Is there a better way to achieve this in automated way in code? I intend to implement this for some kind of infrequent data backup design implementation
Clean retry in deferred.defer
22,900,378
0
0
165
0
python,google-app-engine
Just relaunch the task from the task with another deferred.defer call.
0
1
0
0
2014-04-06T21:04:00.000
2
0
false
22,900,026
0
0
1
2
I am using deferred.defer quite heavily to schedule tasks using push queues on App Engine. Sometimes I wish I had a clean way to signal a retry for a task without having to raise an Exception that generates a log warning. Is there a way to do this?
Clean retry in deferred.defer
22,905,035
4
0
165
0
python,google-app-engine
If you raise a deferred.SingularTaskFailure it will set an error HTTP status, but there won't be an exception in the log.
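A minimal sketch; the task body and the readiness check are hypothetical:

```python
from google.appengine.ext import deferred

def process_item(item_id):
    if not is_ready(item_id):           # hypothetical readiness check
        # Sets an error HTTP status so the task queue retries the task,
        # but without logging a full exception traceback.
        raise deferred.SingularTaskFailure()
    do_work(item_id)                    # hypothetical actual work

# deferred.defer(process_item, 42)
```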
0
1
0
0
2014-04-06T21:04:00.000
2
1.2
true
22,900,026
0
0
1
2
I am using deferred.defer quite heavily to schedule tasks using push queues on App Engine. Sometimes I wish I had a clean way to signal a retry for a task without having to raise an Exception that generates a log warning. Is there a way to do this?
How to find the keys being evicted from memcache?
23,104,270
2
2
220
0
memcached,python-memcached
Not possible AFAIK, but a really good (and simple) solution is to modify your memcached library and do a print (or whatever you want) in the delete and multidelete methods. You can then get the keys that are being deleted (both by your app and by the library itself). I hope that helps
0
1
0
1
2014-04-07T11:21:00.000
1
1.2
true
22,910,946
0
0
0
1
Is there any inbuilt way, or a hack, by which I can know which key is being evicted from memcache? One solution is polling for all possible keys inserted into memcache (e.g. get_multi), but that is inefficient and certainly not implementable for a large number of keys. The functionality does not need to run in production, only during some benchmarking and optimization runs.
Launch multiple process of an app on mac osx
23,134,724
2
3
511
0
python,macos,applescript,py2app,platypus
This is not really a py2app problem, but is caused by the way the platform works: when a user tries to open a file that's associated with an application that is already running, the system doesn't start a second instance of the application but sends the already running application an event to tell it to open the new file. To handle multiple files you should implement some kind of GUI event loop (using PyObjC, Tk, ...) that can be used to receive the OS X events that are sent when a user tries to open a file for an already running application.
0
1
0
0
2014-04-07T19:40:00.000
1
0.379949
false
22,921,549
1
0
0
1
I am using Python 2.7 on Mac OS X 10.9 to create an app. The app takes a file name as an argument, opens the file, and keeps monitoring it for changes until the file is closed. It works fine for a single file. I used py2app and Platypus to convert the Python .py code into an app. The limitation is: once an instance (process) of the app is started (by clicking on a file to open it), the file opens, but I am not able to open two files at a time, i.e. to launch two instances of the app. Through the terminal it is possible to launch multiple instances of the app. So what should I do to open multiple files at a time by clicking on multiple files through this app?
Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage
22,990,376
0
0
2,069
0
python,plugins,intellij-idea,jetbrains-ide
You can't use Python plugin with Idea Community edition, sorry. It requires IntelliJ IDEA Ultimate.
0
1
0
1
2014-04-08T12:08:00.000
2
0
false
22,936,567
0
0
0
1
I'm using IDEA CE 13.1.1 and tried to install the Python plugin version 3.4.Beta.135.1 from file, because my development PC has no access to the internet for security reasons. But I get the following warning and the plugin does not get activated: "Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage". I searched for these plugins in the repository but did not find them, only references in other plugin details that depend on them. What are they really called? How can I find them? Thanks
How to detect 'live' files during filesystem backup
22,947,202
0
1
49
0
python,database,file-io,cross-platform,backup
You could look for the file being closed and archive it then. The pyinotify library allows you to watch given files or directories for a number of events, including IN_CLOSE_WRITE, which lets you detect files that were closed after being written to.
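A minimal sketch with pyinotify, assuming Linux (the watched path is hypothetical); IN_CLOSE_WRITE fires when a file opened for writing is closed:

```python
import pyinotify

class Archiver(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # A writer just closed this file: a safe moment to back it up.
        print("archive candidate: " + event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch("/path/to/backup", pyinotify.IN_CLOSE_WRITE, rec=True)
pyinotify.Notifier(wm, Archiver()).loop()
```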
0
1
0
0
2014-04-08T15:53:00.000
1
0
false
22,942,091
0
0
0
1
I'm writing a Python-based service that scans a specified drive for files changes and backs them up to a storage service. My concern is handling files which are open and being actively written to (primarily database files). I will be running this cross-platform so Windows/Linux/OSX. I do not want to have to tinker with volume shadow copy services. I am perfectly happy with throwing a notice to the user/log that a file had to be skipped or even retrying a copy operation x number of times in the event of an intermittent write lock on a small document or similar type of file. Successfully copying out a file in an inconsistent state and not failing would certainly be a Bad Thing(TM). The users of this service will be able to specify the path(s) they want backed-up so I have to be able to determine at runtime what to skip. I am thinking I could just identify any file which has a read/write handle and try to obtain exclusive access to it during the archival process, but I think this might be too intrusive(?) if the user was actively using the system. Ideas?
celery tasks, workers and queues organization
22,985,609
0
3
898
0
python,multithreading,asynchronous,rabbitmq,celery
In a similar setup, I decided to go with specific queues for different tasks, and then I can decide which worker listens on which queue (which can also be changed dynamically !).
0
1
0
0
2014-04-09T09:43:00.000
2
0
false
22,958,634
0
0
0
2
I have some independent tasks which I am currently putting into different/independent workers. To make this easy to understand I will walk you through an example. Let's say I have three independent tasks, namely sleep, eat, and smile. A task may need to work under different Celery configurations. So, I think, it is better to separate each of these tasks into different directories with different workers. Some tasks may be required to work on different servers. I am planning to add some more tasks in the future, and each of them will be implemented by different developers. Given these conditions, there is more than one worker associated with each individual task. Now, here is the problem and my question. When I start three smile tasks, one of them is fetched by smile's worker and carried out. But the next task is fetched by eat's worker and never carried out. So, what is the accepted, most common pattern? Should I send each task to a different queue, with each worker listening to its own queue?
celery tasks, workers and queues organization
24,001,208
1
3
898
0
python,multithreading,asynchronous,rabbitmq,celery
The answer depends on a couple of things that should be taken into consideration: Should the order of commands be preserved? If so, the best approach is placing some sort of command pattern as a serialized message, so each fetched/consumed message can be executed in order in a single place in your application. If it's not an issue for you, you can play with a topic exchange, publishing different message types on a single exchange and having different workers receive the messages by a predefined pattern. This, by the way, will let you easily add another task, let's say "drink", without changing a line in the already existing transport topology/already existing workers. Are you planning to scale queues among different machines to increase throughput? In case you have very intense task traffic (in terms of frequency) it may be worth creating a different queue for each task type, so that later when you grow you can place each one on a different node in the Rabbit cluster.
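A sketch of the queue-per-task-type layout in Celery; the broker URL and task names are hypothetical, and task_routes is the modern Celery config key (older releases used CELERY_ROUTES):

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost//")  # hypothetical broker URL

# One queue per task type; workers subscribe only to what they handle.
app.conf.task_routes = {
    "tasks.sleep": {"queue": "sleep"},
    "tasks.eat":   {"queue": "eat"},
    "tasks.smile": {"queue": "smile"},
}

@app.task
def smile():
    return ":)"

# A worker dedicated to one task type:
#   celery -A tasks worker -Q smile
```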
0
1
0
0
2014-04-09T09:43:00.000
2
1.2
true
22,958,634
0
0
0
2
I have some independent tasks which I am currently putting into different/independent workers. To make this easy to understand I will walk you through an example. Let's say I have three independent tasks, namely sleep, eat, and smile. A task may need to work under different Celery configurations. So, I think, it is better to separate each of these tasks into different directories with different workers. Some tasks may be required to work on different servers. I am planning to add some more tasks in the future, and each of them will be implemented by different developers. Given these conditions, there is more than one worker associated with each individual task. Now, here is the problem and my question. When I start three smile tasks, one of them is fetched by smile's worker and carried out. But the next task is fetched by eat's worker and never carried out. So, what is the accepted, most common pattern? Should I send each task to a different queue, with each worker listening to its own queue?
Celery periodic tasks: testing by modifying system time
23,090,632
1
1
283
0
python,celery
The solution for me was to restart redis after the time update, and also restart celerybeat. That combination seems to work.
0
1
0
1
2014-04-09T17:08:00.000
1
1.2
true
22,969,365
0
0
0
1
I'm trying to test out some periodic tasks I'm running in Celery, which are supposed to run at midnight of the first day of each month. To test these, I have a cron job running every few minutes which bumps the system time up to a few minutes before midnight on the last day of the month. When the clock strikes midnight (every few minutes), the tasks are not run. All the times are UTC, and celery is set to UTC mode. Celery itself is working fine, I can run the tasks manually. What might be going on here? Also, how does celery keep track of the system time for its scheduling, how does it handle a system time update? Could it be that celery's time and the system time get out of sync somehow? This is Celery 3.1.0 with redis as broker/backend
How to check which ports are connected to host in Mininet using open vswitch and a Pox controller?
43,923,014
0
3
2,310
0
python,pox,openflow
I know this question is old, but you can do this using the host_tracker module. Have a look at the host_tracker module and the gephi_topo module under misc to see the code that extracts such information from the PacketIn event.
0
1
0
0
2014-04-10T04:44:00.000
1
0
false
22,978,833
0
0
0
1
I am trying to write a POX controller using Python. The environment is set up using Mininet and the switch type is ovsk (Open vSwitch). For each individual switch, some of the ports are connected to hosts, some are connected to other peer switches, and some might be connected to the controller or routers. I can use "sh ovs-ofctl show" in Mininet to get the OpenFlow port number mapping with the interface name. My question is: in the POX Python code, how can I check which ports on a switch are connected to hosts and which ones are connected to peer switches, controllers or routers?
What OS interrupt comes from closing a terminal tab?
22,993,707
0
1
365
0
python,windows,interrupt
Python does not seem to have an exception for this case. The closest would be SystemExit, however that does not actually capture the interrupt you're looking for. Windows seems to actually send Ctrl+C before killing the process when you close a terminal, however capturing KeyboardInterrupt doesn't seem to work either. At this point you might want to look into the signal module.
0
1
0
0
2014-04-10T16:04:00.000
2
0
false
22,993,244
0
0
0
1
I presume closing a terminal window (or a terminal window embedded in an IDE) sends some kind of OS interrupt signal to the process running in the terminal. How can I find out what this signal is? I am looking for a way to capture the interrupt, run some clean up, and then abort. I am using Python and Windows.
Easiest way to store a single timestamp in appengine
23,025,323
0
0
651
0
python,google-app-engine
Another way to solve this, that I found, is to use memcache. It's super easy, though it should probably be noted that memcache can be cleared at any time, so NDB is probably a better solution. Set the timestamp: memcache.set("timestamp", current_timestamp). Then, to read the timestamp: memcache.get("timestamp").
0
1
0
0
2014-04-10T23:36:00.000
2
0
false
23,000,998
0
0
1
1
I am running a Python script on Google's App Engine. The script is very basic. Every time the script runs I need it to update a timestamp SOMEWHERE so I can record and keep track of when the script last ran. This will allow me to do logic based on when the script last ran, etc. At the end of the script I'll update the timestamp to the current time. Using Google's NDB seems to be overkill for this, but it also seems to be the only way to store ANY data in App Engine. Is there a better/easier way to do what I want?
How to use tornado as both a socket server and web server?
23,031,157
3
2
1,209
0
python,sockets,web,tornado
You can start multiple servers that share an IOLoop within the same process. Your HTTPServer could listen on one port, and the TCPServer could listen on another.
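A minimal sketch of both servers sharing one IOLoop; the ports and the raw-socket protocol are hypothetical:

```python
import tornado.ioloop
import tornado.web
from tornado.tcpserver import TCPServer

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello over HTTP")

class RawServer(TCPServer):               # hypothetical socket protocol
    def handle_stream(self, stream, address):
        stream.write(b"hello over TCP\n")

app = tornado.web.Application([(r"/", MainHandler)])
app.listen(8880)                          # web server on one port
RawServer().listen(8881)                  # socket server on another
tornado.ioloop.IOLoop.current().start()   # one shared loop drives both
```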
0
1
1
0
2014-04-12T10:07:00.000
1
1.2
true
23,028,941
0
0
0
1
I know the httpserver module in Tornado is implemented on top of the tcpserver module, so I can write a socket server based on Tornado. But how can I write a server that is both a socket server and a web server? For example, if I want to implement a chat app: a user can log in either through a browser or through a client program. The browser user can send messages to the client user through the back-end server. So the back-end server is both a web and a socket server.
Installing Leap Motion sdk into Enthought SDK
24,131,195
0
0
671
0
python,python-2.7,enthought,leap-motion
Try this: put the four files into one folder. Right-click on Sample.py until it says "Open with" and gives some choices. Select Python Launcher.app (2.7.6) - this version of Python Launcher must match the Mac's built-in Python version. If your version of LeapPython.so is constructed correctly, it should run.
0
1
0
0
2014-04-13T01:48:00.000
2
0
false
23,038,209
1
0
0
1
I am trying to install the Leap Motion SDK into Enthought Canopy. The page called Hello World on Leap Motion mentions I need to put these four files - Sample.py, Leap.py, LeapPython.so and libLeap.dylib - into my "current directory". I don't know how to find my current directory. I have tried several things, including typing "python Sample.py" into the terminal, which tells me: /Users/myname/Library/Enthought/Canopy_64bit/User/Resources/Python.app/Contents/MacOS/Python: can't open file 'Sample.py': [Errno 2] No such file or directory. I've tried to put the 4 files in the MacOS folder, but it still gives me this error. Any suggestions would be greatly appreciated.
Shared Server: Python Script run under UNIX Shell vs HTTP
23,041,133
0
0
132
0
python,shell,dreamhost,pycrypto
Your web server does not read your .bash_profile.
0
1
0
1
2014-04-13T09:31:00.000
1
0
false
23,041,079
0
0
0
1
I have a Python script on my Dreamhost shared server. When I access my script via SSH (using the UNIX shell) my script executes fine and is able to import the PyCrypto module Crypto.Cipher. But if I access my script via HTTP using my website's URL, the script fails when it goes to import the PyCrypto module Crypto.Cipher. It gives the error ImportError: No module named Crypto.Cipher. Do you know what might be causing this weird error? And how I can fix it? Some important information: - I have installed a custom version of Python on my shared server. It's just Python 2.7 with PyCrypto and easy_install installed. - I am certain that the script is running under Python 2.7 and not Dreamhost's default 2.6 version. I know this because the script prints sys.version_info(major=2, minor=7, micro=0, releaselevel='final', serial=0) both in the UNIX shell and over HTTP. - I installed PyCrypto manually (using tar, and running setup.py) as opposed to using easy_install or pip. - I have edited my .bash_profile's PATH variable correctly (well, I believe I have done it correctly because the script runs under Python 2.7 not 2.6). Any advice would be extremely helpful.
Shutting down a Python environment orderly
23,045,672
2
2
66
0
python,multithreading,exit
That depends on what you mean by "orderly". Even if you don't have any non-daemonic threads, if you call sys.exit() from main thread, the other threads will not complete in an "orderly" fashion. There's no guarantee they will clean up after themselves. The only really clean way to do it is for the main thread to signal the other threads they should complete and abort (e.g. by setting a flag or an Event which they check periodically), wait for them to complete (by joining them), and then return from its main function.
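A sketch of that signal-and-join pattern with a threading.Event; the work and cleanup functions are hypothetical:

```python
import threading

stop = threading.Event()

def worker():
    while not stop.is_set():
        do_unit_of_work()        # hypothetical periodic work
    clean_up()                   # hypothetical per-thread cleanup

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
try:
    run_main_logic()             # hypothetical main-thread work
finally:
    stop.set()                   # signal every worker to finish up
    for t in threads:
        t.join()                 # wait for an orderly shutdown
```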
0
1
0
0
2014-04-13T16:53:00.000
1
1.2
true
23,045,524
1
0
0
1
The obvious ways I can think of to make a Python environment exit, is either sys.exit(), or os._exit(). However, sys.exit() doesn't work outside the main thread, and os._exit() doesn't run shutdown handlers (e.g. those registered via atexit.register). Also, when there are non-daemon threads running, just exiting the main thread (as might be effected through thread.interrupt_main, for instance) won't make the rest of the environment shut down, either. Is there a way to make Python exit from another thread than the main thread, which runs shutdown handlers?
How can I make the "python" command in terminal, run python3 instead of python2?
23,048,869
4
22
75,788
0
python,terminal
Sounds like you have Python 2 and 3 installed and your python command points at Python 2, so unless a version is specified it uses that. If you are using Python I would suggest setting up a virtual environment (virtualenv) for each project, which means you could run whatever version you'd like in that project and keep all dependencies contained.
0
1
0
0
2014-04-13T21:39:00.000
6
0.132549
false
23,048,756
1
0
0
3
I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered. When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python". Is there a way to run Python 3 just using "python"? It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it.
How can I make the "python" command in terminal, run python3 instead of python2?
41,886,126
0
22
75,788
0
python,terminal
On Raspbian Linux, in the terminal, I just run it by typing python3 file.py, or just python file.py for Python 2.
0
1
0
0
2014-04-13T21:39:00.000
6
0
false
23,048,756
1
0
0
3
I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered. When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python". Is there a way to run Python 3 just using "python"? It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it.
How can I make the "python" command in terminal, run python3 instead of python2?
32,158,988
14
22
75,788
0
python,terminal
If you are using Linux, add the following to ~/.bashrc: alias python=python3. Restart the shell and type python; python3 should start instead of python2.
0
1
0
0
2014-04-13T21:39:00.000
6
1
false
23,048,756
1
0
0
3
I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered. When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python". Is there a way to run Python 3 just using "python"? It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it.
Compile pymunk on mac OS X
23,200,199
0
1
118
0
python,chipmunk,pymunk
Try to go to the folder where setup.py is first, and then do python setup.py install. As you have noticed, it assumes that you run it from the same folder where it's located.
0
1
0
1
2014-04-14T21:37:00.000
1
1.2
true
23,070,922
1
0
0
1
I have downloaded the pymunk module on my computer. When I typed "python setup.py install" in the terminal, it said "no such file or directory", so I typed in the complete path of setup.py instead, and it still could not run, since the references to other files in the code of setup.py (like README.txt) are not complete paths; the terminal said "no such file or directory". Sorry, I'm a Python newbie. Can someone tell me how I can fix it? Thanks!
Curses can_change_color() always returns False
23,091,381
2
2
701
0
python,colors,ncurses,curses,xterm
can_change_color() actually reports whether colors can be remapped, via init_color() -- an uncommon capability -- not whether colors can be used at all, via init_pair(). To check for that basic color capability, what you want is has_colors(). init_color(), on the terminals where it works, lets you do things like tweak the exact shade of blue used -- or make the terminal's idea of "blue" show up as something else entirely.
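A small sketch showing the two checks side by side:

```python
import curses

def main(stdscr):
    curses.start_color()
    if curses.has_colors():            # basic color support: common
        curses.init_pair(1, curses.COLOR_WHITE, curses.COLOR_RED)
        stdscr.addstr(0, 0, "ALERT", curses.color_pair(1))
    if curses.can_change_color():      # palette remapping: rare
        # redefine "blue" as a lighter shade (RGB components scaled 0-1000)
        curses.init_color(curses.COLOR_BLUE, 300, 300, 1000)
    stdscr.getch()

curses.wrapper(main)
```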
0
1
0
0
2014-04-15T13:34:00.000
1
1.2
true
23,085,308
0
0
0
1
I wrote a little-more-than-throwaway monitoring script in Python which uses ncurses and color to display some values which update frequently, but which are hardly ever of interest. To alert me to significant changes, I set things up so that when these values get into the realm of being interesting, the text changes from black-on-white to white-on-red. This works fine on my Linux (openSuSE 12.2) box, but on Solaris 10 curses.can_change_color() always returns False, no matter what I have tried. On both platforms, I am using the same version of Python (2.7.2) and ncurses (5.7). I have a number of terminal emulators available to me (gnome-terminal, xterm, rxvt). All are capable of displaying my shell prompt in red, so I know they support color. I've tried setting TERM to a number of xterm variants, including xtermc, xterm-color, rxvt, rxvt-16color. Some of those terminal names aren't in the default location, so I occasionally also have to set TERMINFO to point at a terminfo capability database. I'm thus sure the entries I desire are found. The Python curses.can_change_color() function is just a thin wrapper around the ncurses library routine of the same name. Why is it always returning False?
What is best way to save data with appengine/HTML5/JavaScript/Python combo?
23,128,324
0
0
133
0
javascript,html,google-app-engine,python-2.7
My thanks to all of you for taking time to respond. Each response was useful in its own way. The AJAX/jQuery approach looks a promising route for me, so many thanks for the link on that. I'll stop equivocating, stick with Python rather than try Go, and start working through the tutorials and courses. Gary
0
1
0
0
2014-04-15T13:42:00.000
4
0
false
23,085,522
0
0
1
1
I want to build an application with an HTML5 interface that persists data using Google App Engine, and I could do with some advice to avoid spending a ton of time going down the wrong path. What is puzzling me is the interaction between HTML5, JavaScript/jQuery and Python. Let's say I have designed an HTML5 site. I believe I can use prompts and forms to collect data entered by users. I also know that I can use JavaScript to grab that data and keep it in the form of objects... I need objects for reasons I'll not go into. But when I look at the App Engine example, it has HTML form information embedded in the Python code, which is what is used to store the data in the cloud Datastore. This raises the following questions in my mind: do I simply use Python to get user-entered information? How does Python interact with separately described HTML5/CSS forms and prompts? Does JavaScript/jQuery play any role with respect to data? Are forms and prompts the best way to capture user data, or is there a better alternative? As background: it is a while since I programmed, but I have used HTML and CSS a fair bit. I did the JavaScript and jQuery courses at Codecademy. I was considering using Go, which looks funky, but "experimental" worries me and I cannot find a good IDE such as devTable. I can do the Python course at Codecademy pretty quickly if I need it; I think I may need to understand its object syntax. I appreciate this is basic stuff, but if I can get my plumbing sorted, I doubt that I'll have to ask too many more really stupid questions. Gary
how to specify version of python to run a script?
51,430,363
3
4
28,056
0
python
You know, you can start Python with the py launcher and a specific version: py -<version>. To run a script with a specific interpreter version, start it like this: py -<version> yourscript.py
0
1
0
0
2014-04-15T15:36:00.000
4
0.148885
false
23,088,338
1
0
0
1
I'm learning python now using a mac which pre-installed python 2.7.5. But I have also installed the latest 3.4. I know how to choose which interpreter to use in command line mode, ie python vs python 3 will bring up the respective interpreter. But if I just write a python script with this header in it "#!/usr/bin/python" and make it executable, how can I force it to use 3.4 instead of 2.7.5? As it stands, print sys.version says: 2.7.5 (default, Aug 25 2013, 00:04:04) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
Writing a python wrapper for librsync with ctypes. How should I compile librsync to work on all systems?
23,261,836
0
0
366
0
python,c++,wrapper,librsync
Generally, no, it's not possible to build a C library that will work smoothly across platforms. Some BSD systems can run Linux applications (and maybe vice versa) so you could build the whole thing in that way, but it would require shipping a Linux Python.
0
1
0
0
2014-04-16T18:08:00.000
1
1.2
true
23,116,965
0
0
0
1
So I'm writing an application in python which requires librsync for more efficient file transfers. I want my librsync wrapper to work so that if librsync is already installed on the system it will use that, but otherwise try to use a version shipped with my application. The wrapper currently works on linux with librsync already installed and I also managed to compile librsync to a DLL that works with the wrapper on windows. When I compile it on linux to a .so file I can move it to other linux systems and it will work, but when I try to use it on FreeBSD I get an "invalid file layout" error. I'm wondering, is it possible to compile librsync to a library file that would work cross-platform? (or only on all *NIX systems) Also if you think there's a better way to do this please let me know.
Where should I put portaudio so that Pyaudio can find it
23,233,317
2
2
976
0
python,makefile,gentoo,portaudio,pyaudio
Finally I found the source of the problem. Somehow portaudio installs itself to /usr/local/, but the robot I'm working on uses the folders in /usr, i.e. /usr/lib and /usr/include, not /usr/local/lib etc. Putting the libraries in /usr/lib, and also manually transferring some portaudio libs you can find in the Python site-packages folder, solved the problem.
0
1
0
1
2014-04-18T01:45:00.000
1
1.2
true
23,146,168
1
0
0
1
Working on Gentoo (on the Nao robot), which has no make and no gcc on it, it is really hard for me to install portaudio. I managed to put PyAudio in the right location so that Python can detect it, but whenever I try "import pyaudio" it asks me to install portaudio first. I have a virtual machine running Gentoo emulating the robot, where gcc and make are available. I could compile portaudio on that machine, but after copying its contents to the robot I cannot run make install. Where should I put each library file exactly so that PyAudio can find it? Thanks
Make python script to run forever on Amazon EC2
23,166,196
17
13
8,060
0
python,amazon-web-services,ssh,amazon-ec2
You can run the program using the nohup command, so that even when the SSH session closes your program continues running. Eg: nohup python yourscriptname.py & For more info you can check the man page for it using man nohup.
0
1
0
1
2014-04-19T05:16:00.000
2
1
false
23,166,158
0
0
1
1
I have a python script that basically runs forever and checks a webpage every second and notifies me if any value changes. I placed it on an AWS EC2 instance and ran it through ssh. The script was running fine when I checked after half an hour or so after I started it. The problem is that after a few hours when I checked again, the ssh had closed. When I logged back in, there was no program running. I checked all running processes and nothing was running. Can anyone teach me how to make it run forever (or until I stop it) on AWS EC2 instances? Thanks a lot. Edit: I used the Java SSH Client provided by AWS to run the script
Automatic SignOut in OpenERP 7 during System Shutdown
23,171,711
-1
0
183
0
python,windows,openerp,openerp-7
"Can we make it in such a way that when we shut down the system it signs out automatically without user interference?" There is no need to log off the users. HTTP is a transactional protocol: everything is done once the client has made a request. After any client request the system is always in a clean state. There is no state in the clients that must be flushed to the server before switching off. When you shut down and start up the OpenERP server again, all clients lose their "session", and if they make a new request they will be redirected to the login page. Of course, this can be annoying when users have started to fill in a form (still in the browser), send the request, and then get redirected to the login page because there is no longer a valid session.
0
1
0
0
2014-04-19T05:41:00.000
1
-0.197375
false
23,166,349
0
0
1
1
To sign in/sign out of OpenERP 7 we have to log into OpenERP and click on the icon at the top right, just beside the "Compose New Message" icon. Now most users forget to sign out of the ERP. Can we make it so that when we shut down the system it signs out automatically, without user interference, just like a Windows service? Is there any way to do that? Please help me out.
Replacing Python 2.7.5 with Python 3.4 on OS X 10.9.2
23,376,739
0
1
312
0
python
I simply replaced the executable link in my IDE from "/usr/bin/python" to "/Library/Frameworks/Python.framework/Versions/3.4/bin".
0
1
0
0
2014-04-19T05:45:00.000
1
0
false
23,166,386
1
0
0
1
I have Python 2.7.5 running on OS X 10.9.2. I downloaded the Python installer "python-3.4.0-macosx10.6.dmg" from python.org. After the installation, I still get 2.7.5 when querying python -V. I am not sure what I need to do to replace 2.7.5 with 3.4 besides installing python-3.4.0-macosx10.6.dmg.
Launch command line from pycharm with environment variables set
51,773,003
0
2
1,608
0
python,installation,pycharm
If you go to Run -> Edit Configurations in PyCharm, this will let you set CLI arguments, and there are also a couple of different PYTHONPATH-related fields (Add content roots to PYTHONPATH, Add source roots to PYTHONPATH). You can also right-click a folder under the Project menu and check Mark as Sources Root, which I believe adds that directory to the PYTHONPATH at PyCharm script run-time. Also, like metsfan said, you could create a batch file to populate your Windows PYTHONPATH environment variables prior to running anything in the new environment. I believe PyCharm will inherit those.
0
1
0
0
2014-04-19T17:04:00.000
1
0
false
23,172,943
1
0
0
1
Is it possible to launch a command line prompt from PyCharm that already has all the environment variables (i.e. PYTHONPATH) set for my project's custom environment?
read README in setup.py
23,174,731
9
9
3,112
0
python,setuptools,distutils,setup.py
To manually include files in a distribution do the following: set include_package_data = True Create a MANIFEST.in file that has a list of include <glob> lines for each file you want to include from the project root. You can use recursive-include <dirname> <glob> to include from sub-directories of the project root. Unfortunately the documentation for this stuff is really fragmented and split across the Python distutils, setuptools, and old distribute docs so it can be hard to figure out what you need to do.
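A sketch of the pieces together; the package name is hypothetical, and MANIFEST.in sits next to setup.py:

```python
# setup.py -- hypothetical package; MANIFEST.in contains the single line:
#   include README.md
import os
from setuptools import setup

here = os.path.dirname(os.path.abspath(__file__))
readme = os.path.join(here, "README.md")
long_description = (open(readme).read() if os.path.exists(readme)
                    else "short fallback description")

setup(
    name="mypackage",
    version="0.1",
    long_description=long_description,
    include_package_data=True,   # honor MANIFEST.in for package data
)
```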
0
1
0
1
2014-04-19T19:30:00.000
1
1.2
true
23,174,516
1
0
0
1
So, I want the long_description of my setup script to be the contents of my README.md file. But when I do this, the installation of the source distribution fails, since python setup.py sdist does not copy the readme file. Is there a way to let distutils.core.setup() include the README.md file with the sdist command so that the installation will not fail? I have tried a little workaround where I default to some shorter text when the README.md file is not available, but I actually want not only PyPI to get the contents of the readme file, but also the user who installs the package.
How do I get xhtml2pdf working on GAE?
23,335,617
0
0
113
0
python,google-app-engine,app.yaml,xhtml2pdf
I got it now! Don't use XHTML2PDF - use ReportLab on its own instead.
0
1
0
0
2014-04-20T16:17:00.000
1
0
false
23,184,702
0
0
1
1
I am new to GAE, web dev and python, but am working my way up. I have been trying to get xhtml2pdf working on GAE for some time now but have had no luck. I have downloaded various packages but keep getting errors of missing modules. These errors vary depending on what versions of these packages and dependencies I use. I have even tried using the xhtml2pdf "required dependency" versions. I know xhtml2pdf used to be hosted on GAE according to a stackoverflow post from 2010, but I don't know if this is the case anymore. Have they replaced it with something else that the GAE team think is better? I have also considered that the app.yaml is preventing my app from running. As soon as I try importing the pisca module, my app stops. Could anyone please give me some direction on how to get this working? In the sense of how to install these packages with dependencies and where they should be placed in my project folder (note that I am using Windows). And what settings I would need to add to my app.yaml file.
Accessing localhost from windows browser
23,191,551
1
1
1,043
0
python,cygwin,bottle
Since you get a connection refused error, the best I can think of is that this is a browser issue. Try editing the LAN settings on your Chrome browser to bypass proxy server for local address.
0
1
1
0
2014-04-21T05:16:00.000
1
1.2
true
23,191,241
0
0
0
1
I am running Python 2.7 + Bottle on Cygwin and I wanted to access a sample webpage from Chrome. I am unable to access the website running on http://localhost:8080/hello, but when I do a curl within Cygwin I am able to access it. Error message when accessing through Chrome: "Connection refused. Description: Connection refused." Please let me know how I can access my local Bottle website running inside Cygwin from a Windows browser.
Run daemon server or shell command?
23,227,250
2
0
206
0
python,linux,go
Python, being an interpreted language, requires the system to load the interpreter each time a script is run from the command line. On my particular system, after disk caching, it takes the system 20ms to execute a script with import string (which is plausible for your use case). If you're processing a lot of information and can't submit it all at once, you should consider setting up a daemon to avoid this kind of overhead. On the other hand, a daemon is more complex to write and test, so you should probably see if a script suits your needs before optimizing prematurely. There's no answer to your question that fits every possible case; ultimately, you always have to measure the performance with your data and on your system.
0
1
0
1
2014-04-22T18:04:00.000
1
1.2
true
23,227,044
0
0
0
1
I need to validate phone numbers and there is a very good python library that will do this. My stack however is Go and I'm really not looking forward to porting a very large library. Do you think it would be better to use the python library by running a shell command from within the Go codebase or by running a daemon that I then have to communicate with somehow?
Run Python script in Vagrant
29,586,100
2
1
1,918
0
python,ssh,virtual-machine,vagrant,vagrantfile
You have two options: you can go the classic route of using the shell provisioner (config.vm.provision "shell", inline: $script in your Vagrantfile) and run the Python script from within that provisioning script. Also, all files are pushed to /tmp; you could possibly use this to run your Python script.
0
1
0
1
2014-04-22T23:35:00.000
2
0.197375
false
23,232,172
0
0
0
2
This is a dumb question but please help me. Q: How do I run a Python script that is saved on my local machine? After vagrant up and vagrant ssh, I do not see any Python files in the VM. What if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim. How would you run a Python script in Vagrant ssh?
Run Python script in Vagrant
23,232,231
2
1
1,918
0
python,ssh,virtual-machine,vagrant,vagrantfile
On your guest OS there will be a folder under / called /vagrant/; it contains all the files and directories under the directory on your host machine that holds the Vagrantfile. If you put your scripts in that folder they will be shared with the VM. Additionally, if you are using Chef as your provisioner, you can use a script resource to run external scripts during the provisioning step.
0
1
0
1
2014-04-22T23:35:00.000
2
1.2
true
23,232,172
0
0
0
2
This is a dumb question but please help me. Q: How do I run a Python script that is saved on my local machine? After vagrant up and vagrant ssh, I do not see any Python files in the VM. What if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim. How would you run a Python script in Vagrant ssh?
How do I open a folder in Sublime Text 2 and have the inner directory show up on the side, like in Brackets?
23,233,193
0
0
39
0
python,ide,sublimetext2,sublimetext
Never mind... View - Side Bar - Show Side Bar
0
1
0
0
2014-04-23T00:54:00.000
1
0
false
23,232,933
0
0
0
1
I have tried Add Folder to a Project but no sidebar shows up.
Emacs IPython doesn't see the same environment variables as IPython from the terminal
24,714,702
0
1
143
0
python,bash,emacs
The issue was unrelated to ipython. I was starting Emacs from my desktop environment menu (GNOME on CentOS 6), rather than from the terminal. Doing the latter resolved the issue.
0
1
0
0
2014-04-23T20:33:00.000
1
1.2
true
23,254,639
1
0
0
1
I just noticed that my IPython (as called by run-python against my variable python-shell-interpreter) doesn't see all my environment variables, but IPython called from bash in the terminal does. I am exporting MYVAR in both .bash_profile and .bashrc. When I evaluate os.getenv('MYVAR') in the terminal IPython, it works, but inside of Emacs nothing shows up. Why would it be different in Emacs?
on google app engine are how are StructuredProperties updated?
23,277,351
1
0
153
0
google-app-engine,python-2.7,google-cloud-datastore,datamodel
StructuredProperty values belong to the entity that contains them, so your assumption that updating a single StructuredProperty will invalidate the memcache entry is correct. LocalStructuredProperty has the same behavior; the difference is that each property of a LocalStructuredProperty is stored in an opaque binary form - the datastore has no idea about the structure of a LocalStructuredProperty. (There is probably a deserialization cost attached to these properties, but that depends a lot on the amount of data they contain, I imagine.) By contrast, StructuredProperty actually makes its child properties available for query indexing in most cases, allowing you to perform complicated lookups. Keep in mind that you should be calling put() on the containing entity, not on each StructuredProperty or LocalStructuredProperty, so you should be seeing a single RPC call for updating that parent entity regardless of how many repeated properties exist. I would advise using a StructuredProperty that contains ndb.IntegerProperty(repeated=True), rather than making 'parallel lists' of integers and floats - that adds more complexity to your Python model, and is exactly the behavior that ndb.StructuredProperty strives to replace.
0
1
0
0
2014-04-24T09:38:00.000
1
1.2
true
23,265,183
0
0
1
1
I am considering ways of organizing data for my application. One data model I am considering would entail having entities where each entity could contain up to roughly 100 repeated StructuredProperties. The StructuredProperties would be mostly read and updated only very infrequently. My question is - if I update any of those StructuredProperties, will the entire entity get deleted from Memcache and will the entire entity be reread from the ndb? Or is it just the single StructuredProperty that will get reread? Is this any different with LocalStructuredProperty? More generally, how are StructuredProperties organized internally? In situations where I could use multiple Float or Int properties - and I am using a StructuredProperty instead just to make my model more readable - is this a bad idea? If I am reading an entity with 100 StructuredProperties will I have to make 100 rpc calls or are the properties retrieved in bulk as part of the original entity?
how to ping two virtual hosts connected to two different virtual switches created in mininet under two different remote controllers
34,110,247
1
1
548
0
python
You have to program the controllers to configure the switches in the following way: if s1 gets a packet whose destination IP address equals IP(h2), the action set should be outport = the port that connects to s2, and the same vice versa. If s1 gets a packet destined for h1, push it through the port that connects to h1. Do the same with s2. Considering that this solution abstract is pretty straightforward, it is possible that you have not considered programming the controller in the first place. The first thing that would help is going through a small tutorial on a simple (built-in) controller such as POX. The controller code can be overwhelming in the beginning, but it really gets quite simple once you get the pattern of the code! I know I'm answering a little late, but I hope it helps other people who are looking for similar solutions.
0
1
0
0
2014-04-26T09:42:00.000
1
0.197375
false
23,309,116
0
0
0
1
Suppose I have created a virtual network in Mininet through a Python script. The network consists of two remote controllers (c1, c2) and two switches (s1, s2): s1 is under the control of c1, s2 is under the control of c2, and s1 and s2 are connected to each other. There are two hosts (h1, h2): h1 is connected to s1, h2 is connected to s2. When I run h1 ping h2, it shows "destination host unreachable". Please let me know why it is not pinging. Topology: c1 controls s1, c2 controls s2, s1------s2 are linked, with h1 under s1 and h2 under s2.
Python pseudo service
23,311,785
1
0
48
0
python,service
Communication with daemons is usually done via signals. You can use user-defined signals (e.g. SIGUSR1/SIGUSR2), or SIGSTOP and SIGCONT, to pause and continue your daemon.
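A sketch of the user-defined-signal route: SIGUSR1 toggles a pause flag that the main loop checks (the work function is hypothetical):

```python
import signal
import time

state = {"paused": False}   # mutable so the handler can flip it

def toggle_pause(signum, frame):
    state["paused"] = not state["paused"]

signal.signal(signal.SIGUSR1, toggle_pause)  # kill -USR1 <pid> toggles

while True:                 # the sensor's main loop
    if not state["paused"]:
        read_and_process()  # hypothetical: read data, emit db output
    time.sleep(1)
```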
0
1
0
0
2014-04-26T13:00:00.000
1
1.2
true
23,311,233
1
0
0
1
I am writing a Python 'sensor'. The sensor spawns two children: one reads in data, and the other processes the data and outputs it in db format. I need to run it in the background with the ability to start and stop it, pretty much as a service/daemon. I've looked at various options: daemonizing, init scripts, etc. The problem is I need more than just start, stop, restart and status; I also want to add a 'pause' option. I am thinking that an init script would be the best option, adding start, stop, restart, status and pause cases, but how would I implement the pause functionality? Thanks
Hash Mapping and ECMP
69,093,482
0
0
460
0
python,hash,routing
Typical algorithms split the traffic into semi-even groups of N packets, where N is the number of ECMP links. So if the packet sizes differ, or if some "streams" have more packets than others, the overall traffic rates will not be even. Some algorithms factor for this. Breaking up or moving streams is bad (for many reasons). ECMP can be tiered - at layers 1, 2, 3 and above, or at different physical points. Typically, the source and destination IP addresses and the protocol/port are used to define each stream. Sometimes it is configurable. Publishing the details can create DoS and "IP" (intellectual property) vulnerabilities. Using the same algorithm at different "tiers" with certain numbers of links at each tier can lead to "polarization" (some links getting no traffic); to address this, a configurable or random input can be added to the algorithm. BGP ECMP requires the IGP cost to be the same, or else routing loops can happen (link/info @ cisco). Multicast adds more issues (link/info @ cisco). There are 3 basic types (link/info @ cisco). This is a deep subject.
0
1
0
1
2014-04-27T03:43:00.000
2
0
false
23,319,138
0
0
0
1
I would like to know how ECMP and hash mapping are used in load balancing or routing of a TCP packet. Any help with links, examples or papers would be really useful. Sorry for the inconvenience, as I am completely new to this type of scenario. Thanks for your time and consideration.
Mutating ndb repeated property
23,324,452
1
2
440
0
python,google-app-engine,app-engine-ndb
There is no automatic way of doing this. You need to perform queries for all kinds that could hold the key and then delete the references in code (a sketch follows). If there could be a lot of them and/or it could take a long time, you might want to consider using a task.
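A sketch of the manual cleanup, using the Department/Employee model names from the question (an equality filter on a repeated KeyProperty matches membership):

# Remove all Department references to an Employee, then delete it.
from google.appengine.ext import ndb

def delete_employee(emp_key):
    for dept in Department.query(Department.employees == emp_key):
        dept.employees.remove(emp_key)
        dept.put()
    emp_key.delete()

If departments and the employee can be updated concurrently, wrapping each department update in a transaction would be safer.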
0
1
0
0
2014-04-27T09:47:00.000
2
0.099668
false
23,321,825
0
0
1
1
I have two classes, Department and Employee. Department has a property declared as employees = ndb.KeyProperty(kind=Employee, repeated=True). The problem is, when I delete the entity whose key is held in the employees list, the entity is deleted from the Employee datastore, but the list in the Department datastore remains the same (with the key of the deleted employee still in it). How do I make sure that when the Employee is deleted, all references to it in the Department datastore are deleted as well?
Python OSX $ which Python gives /Library/Frameworks/Python.framework/Versions/2.7/bin/python
40,758,241
0
10
35,517
0
python,macos,python-2.7,twisted
I too was getting an ImportError: No module named xxx even though I did pip install xxx and pip2 install xxx. pip2.7 install xxx worked for me; this installed it in the Python 2.7 directory.
0
1
0
0
2014-04-27T21:10:00.000
4
0
false
23,329,034
1
0
0
1
Hello, I'm trying to run Twisted along with Python, but Python cannot find Twisted. I did run $ pip install twisted successfully, but it is still not available: ImportError: No module named twisted.internet.protocol. It seems that most people get /usr/local/bin/python from $ which python, but I get /Library/Frameworks/Python.framework/Versions/2.7/bin/python. Could this be the issue? If so, how can I change the PATH env?
unicodedecodeerror 'ascii' codec error in wxPython
25,013,664
0
1
992
0
python,unix,wxpython,robotframework
I think the problem is that the file contains UTF-8, not ASCII, and Robot Framework appears to be expecting ASCII text. ASCII text only contains values in the range 0-127, so when the ascii codec sees the byte 0xC3 it throws an error. (If the text were using the Western European Windows 8-bit encoding, 0xC3 would be Ã. If it were using the MacRoman encoding, 0xC3 would be √. In fact, it is the first of two bytes which define a single character in the range of most of the interesting accented characters.) Somehow, you need to teach Robot Framework to use the correct encoding; the snippet below shows the difference.
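A small Python 2 demonstration of the failure and the fix (the bytes shown are "ÕÜ" encoded as UTF-8, matching the characters in the question):

data = '\xc3\x95\xc3\x9c'     # UTF-8 bytes for "ÕÜ"
# data.decode('ascii')        # raises: 'ascii' codec can't decode byte 0xc3 ...
text = data.decode('utf-8')   # works: u'\xd5\xdc'
print(text)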
0
1
0
1
2014-04-28T06:07:00.000
1
0
false
23,333,669
0
0
0
1
I am getting the error unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128) while performing the operation mentioned below. I have a program that reads files from a remote machine (Ubuntu) using the grep and cat commands to fetch values, and stores the value in a variable via the Robot Framework built-in export-command keyword from the client. I am using the following versions: Robot Framework 2.8.11, RIDE 0.55, PuTTY 0.63, Python 2.7.3. I open an SSH session on a Linux machine, and on that machine there is a file in which the data has accented characters, e.g. Õ Ü Ô Ý. While reading the text from the file containing accented characters using the grep and cat commands, I am facing this issue: unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128). Thank you.
Installing cairo for python 3.3 on redhat 6
23,352,643
0
0
263
0
python-3.x,redhat,cairo,pycairo,rhel6
RHEL 6 is clearly out of date. Of course it can be done by bringing RHEL 6 up to date, downloading and compiling your own 3.x kernel with all that's needed to meet the requirements of pycairo 1.10... BUT it would be easier and nicer to install a more modern Linux distribution, which goes nicely with an old computer. Linux Mint 16 (Petra) provides a distro with relaxed requirements and window managers in i386 mode. I don't see any point in trying to get up-to-date code running on such an old OS version; any replacement hardware you can get hold of on eBay will do better than that. Cheers, Christian
0
1
0
0
2014-04-28T22:22:00.000
1
0
false
23,352,195
1
0
0
1
I am trying to install pycairo 1.10 for Python 3.3 on redhat 6. There are no packages in the official repo, and when I try building it myself it says glibc is out of date. I have the latest glibc from the official the repo, and am somewhat hesitant to go on updating it through other means. Are there any other packages that can help, or is there some way to get this working with an older version (we have tried back to cairo 1.8).
memcache.get returns wrong object (Celery, Django)
24,082,360
6
9
2,113
0
python,django,caching,memcached,celery
Solved it finally. Celery has a dynamic scaling feature: it is capable of adding/killing workers according to load, and it does this by forking an existing worker. Open sockets and files are copied to the forked process, so both processes share them, which leads to a race condition where one process reads the response intended for another. Simply put, it's possible that one process reads a response intended for a second one, and vice versa. from django.core.cache import cache: this object stores a pre-connected memcached socket. Don't use it when your process could be dynamically forked, and don't use stored connections, pools and the like -- OR store them under the current PID, and check the PID each time you access the cache (sketch below).
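A hedged sketch of the per-PID guard described above (get_cache is the pre-Django-1.7 API that the question itself uses; the cache alias "cache_entry" comes from the question):

# Re-create the cache client whenever the current PID differs from the
# one that originally opened the connection.
import os
from django.core.cache import get_cache

_pid = None
_cache = None

def safe_cache():
    global _pid, _cache
    if _cache is None or _pid != os.getpid():
        _cache = get_cache("cache_entry")  # fresh socket in this process
        _pid = os.getpid()
    return _cache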
0
1
0
0
2014-04-29T07:54:00.000
2
1
false
23,358,787
0
0
1
1
Here is what we have currently: we're trying to get a cached Django model instance; the cache key includes the name of the model and the instance id. Django's standard memcached backend is used. This procedure is part of a common procedure used very widely, not only in Celery. Sometimes (randomly and/or very rarely) cache.get(key) returns the wrong object: either an int or a different model instance; even same-model-different-id cases have appeared. We catch this by checking correspondence of the model name & id against the cache key. The bug appears only in the context of three of our Celery tasks, and never reproduces in the Python shell or in other Celery tasks. UPD: it appears under long-running, CPU/RAM-intensive tasks only. The cache stores the correct value (we checked that manually at the moment the bug appeared). Calling the same task again with the same arguments might not reproduce the issue, although the probability is much higher, so bug appearances tend to "group" in the same period of time. Restarting Celery solves the issue for a random period of time (minutes - weeks). *NEW* This isn't connected with memory overflow; we always have at least 2 GB of free RAM when this happens. *NEW* We have cache_instance = cache.get_cache("cache_entry") in static code. During investigation, I found that at the moment the bug happens cache_instance.get(key) returns the wrong value, although get_cache("cache_entry").get(key) on the next line returns the correct one. This means either the bug disappears too quickly or for some reason the cache_instance object got corrupted. Isn't the cache instance object returned by Django's cache thread-safe? *NEW* We logged a very strange case: as another wrong object from cache, we got a model instance without its id set. This means the instance was never saved to the DB and therefore couldn't have been cached (I hope). *NEW* At least one MemoryError was logged these days. I know, all of this sounds like some sort of magic... Really, any ideas on how that's possible or how to debug this would be very appreciated. PS: My current assumption is that this is connected with multiprocessing: the cache instance is created in static code before the worker process forks, which would lead to all workers sharing the same socket (does that sound plausible?)
AWS Elastic Beanstalk (eb) installation in Ubuntu 14.04: command not found
43,012,792
0
5
10,002
0
python,linux,ubuntu,amazon-web-services,ubuntu-14.04
In Ubuntu, add the directory to your PATH like this: export PATH=$PATH:/usr/local/lib/python2.7/site-packages/. It worked for me after doing this, because the eb folder is present inside the mentioned folder.
0
1
0
0
2014-04-29T11:23:00.000
4
0
false
23,363,287
0
0
0
2
I'm trying to install the AWS eb command line interface on Ubuntu 14.04. I downloaded the .zip file and extracted it into a folder. If I go to the folder where eb is (/home/roberto/app/AWS-ElasticBeanstalk-CLI-2.6.1/eb/linux/python2.7) and run it, I get: eb: command not found. Same if I do it with the python3 path.
AWS Elastic Beanstalk (eb) installation in Ubuntu 14.04: command not found
42,512,402
1
5
10,002
0
python,linux,ubuntu,amazon-web-services,ubuntu-14.04
I think all you have to do is upgrade awsebcli by running: pip install --upgrade awsebcli
0
1
0
0
2014-04-29T11:23:00.000
4
0.049958
false
23,363,287
0
0
0
2
I'm trying to install the AWS eb command line interface on Ubuntu 14.04. I downloaded the .zip file and extracted it into a folder. If I go to the folder where eb is (/home/roberto/app/AWS-ElasticBeanstalk-CLI-2.6.1/eb/linux/python2.7) and run it, I get: eb: command not found. Same if I do it with the python3 path.
Running cfx program for Firefox Add-on SDK on Windows
24,506,049
0
0
497
0
python,firefox-addon,firefox-addon-sdk
Just install Python 2.6 instead of Python 2.7. When I tried with Python 2.7 I also got the same error. Then I removed Python 2.7 and installed Python 2.6, and everything worked fine.
0
1
0
0
2014-04-29T15:34:00.000
4
0
false
23,369,108
0
0
0
1
I have downloaded the Add-on SDK and executed activate. Python 2.7 is installed, the PATH variable is configured properly, and .py files can run from anywhere. However, when I try to execute cfx (from the Far command prompt, using the full path), I get the message: 'python' is not recognized as an internal or external command. How do I make it run?
Make a Mac .App run a script on running of said .App
23,379,138
0
1
930
0
python,macos,shell,.app
If you're using AppleScript, just save it as a bundle: in the save dialog, change the format drop-down from "script" to "bundle". After that, click on the bundle icon in AppleScript Editor and drag the script into the bundle folder you want. To run it, put in your run command and drag the script that you placed in the bundle folders earlier into the directory slot of the run command. I cannot give you anything exact, because I am not on my Mac, but I am giving you the best I know.
0
1
0
0
2014-04-30T03:13:00.000
2
0
false
23,379,033
0
0
0
2
I have a script (shell, chmod-ed to 755; the Python is in the script, meaning it is not run from an outside .py file) that is executable, and it works when I run it. How can I make a .app that executes said script when it is launched? I have a simple .app that has this structure: APPNAME.app>Contents>MacOS>script. This does not run. Is there any way I can piggyback a script onto another application, The Powder Toy, for example? I'm not new to OS X, I just don't have root privileges and can't install Xcode. Remember, I can't install anything from source or use setup scripts, effectively annihilating py2app as an option. EDIT: This answer is courtesy of mklement0. Automator lets you choose the environment to run your script, type it in, and bundle it into a .app, removing the need for a shell script.
Make a Mac .App run a script on running of said .App
23,379,360
1
1
930
0
python,macos,shell,.app
Run Automator and create a new Application project. Add a Run Shell Script action. In the Shell: list, select the interpreter of choice; /usr/bin/python in this case. Paste the contents of your Python script into the action and save the *.app bundle.
0
1
0
0
2014-04-30T03:13:00.000
2
1.2
true
23,379,033
0
0
0
2
I have a script (shell, chmod-ed to 755; the Python is in the script, meaning it is not run from an outside .py file) that is executable, and it works when I run it. How can I make a .app that executes said script when it is launched? I have a simple .app that has this structure: APPNAME.app>Contents>MacOS>script. This does not run. Is there any way I can piggyback a script onto another application, The Powder Toy, for example? I'm not new to OS X, I just don't have root privileges and can't install Xcode. Remember, I can't install anything from source or use setup scripts, effectively annihilating py2app as an option. EDIT: This answer is courtesy of mklement0. Automator lets you choose the environment to run your script, type it in, and bundle it into a .app, removing the need for a shell script.
Using rabbitmq with twisted
23,382,374
3
2
545
0
python,rabbitmq,twisted
If your server does not receive packets often, it will not improve much -- you only gain some tiny overhead on inter-server communication. Still, it is a very good design idea, because it scales well: once you finally get many packets, you can just add another instance of the data-processing server.
0
1
0
1
2014-04-30T07:20:00.000
1
1.2
true
23,381,990
0
0
0
1
I am developing a TCP/IP server whose purpose is to receive packets from clients, parse them, do some computation (on the data arriving in each packet) and store the result in a database. Until now, everything has been done by a single server application written using Twisted Python. Now I have come across RabbitMQ, so my question is: is it possible, and will it lead to better performance, if my Twisted server application just receives the packets from clients and passes them to another C++ application using RabbitMQ? The C++ application will in turn parse the packets, do the computation on them, etc. Everything will be done on a single server.
Usage: fab mapfile [options] Please specify a valid map file
23,403,551
0
0
92
0
python,fabric
Looks like you have your /usr/local/bin directory set to 777, i.e. world-writable. This is very bad; lock it down to 755 and owned by root at the very least.
0
1
0
0
2014-04-30T16:35:00.000
1
0
false
23,393,532
0
0
0
1
When I try to run a simple deploy, I get this error message: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin13/rbconfig.rb:212: warning: Insecure world writable dir /usr/local/bin in PATH, mode 040777 Usage: fab mapfile [options] Please specify a valid map file. I am not familiar with Fabric. I am on a Mac OS environment...
how to prevent the window from self-close when building program in sublime
23,440,406
0
0
302
0
python,sublimetext2
Just add raw_input("Press ENTER to exit") and the script will "pause" until you press a key. You should be able to add this line anywhere and as often as needed. (In Python 3, use input() instead of raw_input().)
0
1
0
0
2014-05-03T04:49:00.000
2
0
false
23,440,391
0
0
0
1
I am learning Python, using Sublime Text 2 as the dev environment. When I code "hello world" and build it, the "cmd" window appears and disappears in a moment. I want to make the output stay visible, but I don't know how. Help me, thank you.
Long-running Openshift Cron
23,485,693
4
3
629
0
python,cron,flask,openshift,nohup
I'm lazy. Cut and paste :) I have been told 5 minutes is the limit for the free accounts. That includes all background processes. I asked a similar question here on SO.
0
1
0
1
2014-05-05T16:40:00.000
1
1.2
true
23,477,570
0
0
1
1
I have a long-running daily cron job on OpenShift; it takes a couple of hours to run. I've added nohup and I'm running it in the background. It still seems to time out at the default 5 minutes (it works correctly up to that point). I'm receiving no errors and it works perfectly fine locally.
nohup python ${OPENSHIFT_REPO_DIR}wsgi/manage.py do_something >> \
${OPENSHIFT_DATA_DIR}do_something_data.log 2> \
${OPENSHIFT_DATA_DIR}do_something_error.log &
Any suggestions are appreciated.
Forcing a python program to interpret using python 3
23,479,112
0
1
161
0
python-2.7,python-3.x,windows-7
To make sure that you are always running python 3, you can modify your Windows PATH Environment variable to include the python 3 directory, and remove the python 2 directory.
0
1
0
0
2014-05-05T18:14:00.000
2
0
false
23,479,021
1
0
0
1
I have Python 2.7.6 and Python 3.4 installed on a Windows 7 machine. When I open the Windows command prompt and type python, Python 2.7.6 starts by default. I have a Python script which I want to compile (or interpret, officially speaking) using Python 3.4. Is there a command to use Python 3.4 from the C:\ prompt, or a way to make 3.4 the default Python interpreter? Thanks
EOFError with multiprocessing Manager
25,519,766
6
5
5,086
0
python,multiprocessing
For me the error was actually that my receiving process had thrown an exception and terminated, and so the sending process was receiving an EOFError, meaning that the interprocess communication pipeline had closed.
0
1
0
0
2014-05-05T21:20:00.000
1
1
false
23,482,115
1
0
0
1
I have a bunch of clients connecting to a server via 0MQ. I have a Manager queue used for a pool of workers to communicate back to the main process on each client machine. On just one client machine having 250 worker processes, I see a bunch of EOFError's almost instantly. They occur at the point that the put() is being performed. I would expect that a lot of communication might slow everything down, but that I should never see EOFError's in internal multiprocessing logic. I'm not using gevent or anything that might break standard socket functionality. Any thoughts on what could make puts to a Manager queue start raising EOFError's?
Starting a python script on boot (startx) with an absolute path, in which there are relative paths
23,492,856
1
0
751
0
python,path,absolute-path
You could change your current working directory inside the script before you start using your relative paths: use os.chdir("absolute path to where your script lives"), as in the snippet below.
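A minimal, self-locating version of this fix, placed at the top of frontend.py, so the hard-coded path isn't needed:

# Switch the working directory to the script's own folder before any
# relative paths (image.png, sibling scripts, ...) are used.
import os
os.chdir(os.path.dirname(os.path.abspath(__file__)))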
0
1
0
1
2014-05-06T10:48:00.000
2
1.2
true
23,492,589
0
0
0
2
I realise this question may already exist, but the answers I've found haven't worked and I have a slightly different setup. I have a Python file /home/pi/python_games/frontend.py that I am trying to start when LXDE loads, by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart. It doesn't run and there are no error messages. When trying to run python /home/pi/python_games/frontend.py, Python complains about not being able to find the files that are loaded using relative paths, e.g. /home/pi/python_games/image.png is referred to as image.png. Obviously one solution would be to give these resources absolute paths, but the Python program also calls other Python programs in its directory that also use relative paths, and I don't want to go changing them all. Anyone got any ideas? Thanks Tom
Starting a python script on boot (startx) with an absolute path, in which there are relative paths
23,496,772
0
0
751
0
python,path,absolute-path
Rather than change your current working directory, in your frontend.py script you could use the value of the predefined __file__ module attribute, which will be the absolute pathname of the script file, to determine absolute paths to the other files in the same directory. Functions in the os.path module, such as split() and join(), will make doing this fairly easy.
0
1
0
1
2014-05-06T10:48:00.000
2
0
false
23,492,589
0
0
0
2
I realise this question may already exist, but the answers I've found haven't worked and I have a slightly different setup. I have a Python file /home/pi/python_games/frontend.py that I am trying to start when LXDE loads, by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart. It doesn't run and there are no error messages. When trying to run python /home/pi/python_games/frontend.py, Python complains about not being able to find the files that are loaded using relative paths, e.g. /home/pi/python_games/image.png is referred to as image.png. Obviously one solution would be to give these resources absolute paths, but the Python program also calls other Python programs in its directory that also use relative paths, and I don't want to go changing them all. Anyone got any ideas? Thanks Tom
apt-get installing older version of packages (Ubuntu)
23,498,202
2
2
3,744
0
python,ubuntu,pip,virtualenv,packages
apt-get update updates the package list from the Ubuntu package catalog, which has nothing to do with upstream versions. LTS in Ubuntu stands for Long Term Support, which means that after a certain point in time they will only release security-related bugfixes for the packages. In general, major versions of packages will not change inside a major Ubuntu release, to make sure backwards compatibility is kept. So if the only thing you can do is apt-get update, you have 2 options: find a PPA that provides fresher versions of the packages you need, add it and repeat the update/install exercise; or find those packages elsewhere, download them in .deb format and install them.
0
1
0
0
2014-05-06T14:53:00.000
2
1.2
true
23,498,086
1
0
0
1
I'm trying to install pip and virtualenv on a server (running Ubuntu 12.04.4 LTS) to which I have access, but I can only do it with sudo apt-get install (school politics). The problem is that although I have run the sudo apt-get update command to update the package list, I think it keeps installing old versions. After doing sudo apt-get install python-pip python-virtualenv, I run pip --version and get 1.0, and virtualenv --version gives 1.7.1.2. These two versions are quite old (pip is already at 1.5.5 and virtualenv at 1.11.5). I read that the problem is that the package list is not up to date, but the command sudo apt-get update should solve this -- apparently not. How can I solve this? Thanks a lot!
How to install cabot in linux
23,514,046
0
0
1,740
0
python,github
.sh is a shell script, you can just execute it. ./setup.sh
0
1
0
0
2014-05-07T09:31:00.000
3
0
false
23,513,981
0
0
0
2
I have downloaded and unzipped cabot (a Python tool) on my Linux system, but I don't know how to install it. In the cabot folder there is a setup.sh file, but when I run build or install it does not work. So what should I do?
How to install cabot in linux
23,514,107
0
0
1,740
0
python,github
It's an ".sh" file right? Then to run the same what you have to do is :- 1)Open Terminal 2)Change directory to file location 3) run the following command. sh setup.sh
0
1
0
0
2014-05-07T09:31:00.000
3
0
false
23,513,981
0
0
0
2
I have downloaded and unzipped cabot (a Python tool) on my Linux system, but I don't know how to install it. In the cabot folder there is a setup.sh file, but when I run build or install it does not work. So what should I do?
How to delete some file with crontab in linux
23,515,529
1
0
374
0
python,linux
If you are using logrotate for log rotation then it has options to remove old files; if not, you could run something as simple as this once a day in your cron: find /path/to/log/folder -mtime +5 -type f -exec rm {} \; Or, to match a pattern in the filename (using ls -l here just to preview the matches): find . -mtime +5 -type f -name '*.log' -exec ls -l {} \; A Python equivalent is sketched below. For /var/log/syslog, why not set up logrotate to rotate daily and then use its options to remove anything older than 5 days? Other options involve parsing the log file, keeping certain parts and removing other bits, which involves writing to another file and back; with live log files this can cause other issues, such as requiring a service restart to re-open the log files. So the best option would be logrotate for the syslog.
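Since the question is tagged python, here is the same cleanup as a small Python script that cron could run daily (the folder path and the 5-day cutoff are illustrative):

# Delete files older than 5 days from a log directory.
import os
import time

LOG_DIR = '/path/to/log/folder'
cutoff = time.time() - 5 * 24 * 3600

for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)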
0
1
0
1
2014-05-07T10:26:00.000
1
1.2
true
23,515,224
0
0
0
1
I have two questions about using a crontab file in Linux: 1. I am using a service; when it runs, a new log file is created every day in a log directory. I want to delete all files in that log directory that are older than 5 days. 2. I want to delete all the information older than 5 days in a log file (/var/log/syslog). I don't know how to do that with crontab in Linux. Please help me! Thanks in advance!
Use homebrew to install applications to virtual enviornment
23,530,352
2
0
55
0
python,macos,homebrew
Use pip inside of the virtualenv and it will isolate the packages to just that virtualenv. Each virtualenv has a local version of pip and will install the packages locally.
0
1
0
0
2014-05-07T23:36:00.000
2
0.197375
false
23,530,254
1
0
0
1
Is there any way I can use Homebrew to install packages (like numpy or matplotlib) into isolated virtual environments created using virtualenv, without having the packages installed system-wide?
How to stop tornado web app when there is no connection?
23,540,966
0
0
53
0
python,websocket,tornado
If you will only ever have one connection, you can call IOLoop.current().stop() from your WebSocketHandler's on_close method. If you need more than one, you can increment a counter in open(), decrement it in on_close(), and stop the IOLoop when it reaches zero; a sketch follows.
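A minimal sketch of the counting approach (Tornado 3.x API assumed; a real version might add a grace timeout via IOLoop.current().add_timeout before stopping):

import tornado.websocket
from tornado.ioloop import IOLoop

class Handler(tornado.websocket.WebSocketHandler):
    connections = 0

    def open(self):
        Handler.connections += 1

    def on_close(self):
        Handler.connections -= 1
        if Handler.connections == 0:
            IOLoop.current().stop()  # no clients left: shut down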
0
1
0
0
2014-05-08T03:49:00.000
1
0
false
23,532,399
0
0
0
1
I want to start a temporary web app when a connection comes in, and stop it when there is no connection and it times out. I use the Python Tornado WebSocketHandler; can anyone help? An example would be helpful.
Handle multiple HTTP connections and a heavy blocking function like SSH
23,543,454
0
1
289
0
python,multithreading,concurrency,tornado,gevent
If by ssh you mean actually running the ssh process, try tornado.process.Subprocess (sketch below). If you want something more integrated into the server, Twisted has an asynchronous SSH implementation. If you're using something like paramiko, threads are probably your best bet.
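A hedged sketch of the Subprocess route (Tornado 3.x callback-style API; host/command are placeholders): the ssh child runs without blocking the IOLoop, and the callback fires with the collected output.

from tornado.process import Subprocess

def run_ssh(host, command, callback):
    # Spawn ssh; stdout is exposed as an async pipe stream.
    proc = Subprocess(["ssh", host, command], stdout=Subprocess.STREAM)
    proc.stdout.read_until_close(callback)

# usage: run_ssh("user@remote", "long_command", handle_result)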
0
1
0
0
2014-05-08T09:24:00.000
1
0
false
23,537,731
0
0
0
1
Scenario: my server/application needs to handle multiple concurrent requests, and for each request the server opens an SSH link to another machine, runs a long command and sends the result back. 1 HTTP request comes → server starts 1 SSH connection → waits a long time → sends back the SSH result as the HTTP response. This should happen simultaneously for > 200 HTTP and SSH connections in real time. Solution: the server has just one task to do, that is, open an SSH connection for each HTTP request and keep the connection open. I can't even write the code in an asynchronous way, as there is just one task to do: SSH. The IOLoop will get blocked for each request. Callbacks and deferred functions don't provide an advantage, as the SSH task runs for a long time. Threading sounds like the only way out alongside an event-driven technique. I have been going through Tornado examples in Python, but none suit my particular need: Tornado with Twisted, gevent/eventlet, pseudo-multithreading, Python threads, established HTTP servers like Apache. Environment: Ubuntu 12.04, high RAM & network speed. Note: I am bound to use Python for coding, and please stay within my scenario. Opening multiple SSH links while keeping HTTP connections open sounds like all-async work, but I want it to look like a synchronous activity.
Server host for python raw sockets use
23,548,201
0
1
287
0
python,sockets,udp,ip,openshift
That will not work on OpenShift, we only offer two kinds of external ports for use, http/https and ws/wss
0
1
1
0
2014-05-08T17:03:00.000
2
0
false
23,548,149
0
0
0
1
I built a Python UDP hole puncher using raw sockets, and I wonder whether there is a service, or an option to use an external server on the web (like a dedicated server), that will host and run this program. OpenShift was something I considered, but it did not work because it uses Apache as a proxy and therefore it's impossible to use raw sockets for the connection. I'd prefer a free solution. Thanks a lot
django-celery infrastructure over multiple servers, broker is redis
23,846,005
3
1
2,804
0
python,django,architecture,celery
Celery actually makes this pretty simple, since you're already putting the tasks on a queue. All that changes with more workers is that each worker takes whatever's next on the queue - so multiple workers can process at once, each on their own machine. There are three parts to this, and you've already got one of them:
1. Shared storage, so that all machines can access the same files
2. A broker that can hand out tasks to multiple workers - redis is fine for that
3. Workers on multiple machines
Here's how you set it up: the user uploads a file to the front-end server, which stores it in your shared storage (e.g. S3, Samba, NFS, whatever) and stores the reference in the database. The front-end server then kicks off a celery task to process the file, e.g.:
def my_view(request):
    # ... deal with storing the file
    file_in_db = store_file(request)
    my_process_file_task.delay(file_in_db.id)  # use PK of DB record
    # do rest of view logic...
On each processing machine, run celery-worker:
python manage.py celery worker --loglevel=INFO -Q default -E
Then as you add more machines, you'll have more workers and the work will be split between them. Key things to ensure: you must have shared storage, or this gets much more complicated; and every worker machine must have the right Django/Celery settings to be able to find the redis broker and the shared storage (e.g. S3 bucket, keys etc).
0
1
0
0
2014-05-08T20:24:00.000
2
0.291313
false
23,551,808
0
0
1
2
Currently we have everything set up on a single cloud server: the database server, Apache, Celery, Redis (serving as a broker for Celery and for some other tasks), etc. Now we are thinking of breaking the main components apart onto separate servers, e.g. a separate database server, separate storage for media files, and web servers behind load balancers. The reason is not to pay for one heavy server, and to use load balancers to create servers on demand to reduce cost and improve overall speed. I am really confused about Celery only: has anyone ever used Celery on multiple production servers behind load balancers? Any guidance would be appreciated. Consider one small use case, which is currently how it is done on a single server (the confusion is how this can be done when we use multiple servers): a user uploads an abc.pptx file -> a reference is stored in the database -> the file is stored on the server disk; a task (convert document to PDF) is created and goes into the Redis (broker) queue; Celery, which is running on the same server, picks the task from the queue, reads the file, converts it to PDF using software called docsplit, creates a folder on the server disk (which will be used as static content later on) and puts the PDF file, its thumbnail, the plain text and the original file there. Considering the above use case, how can you set up multiple web servers which can perform the same functionality?
django-celery infrastructure over multiple servers, broker is redis
23,552,055
4
1
2,804
0
python,django,architecture,celery
What will strongly simplify your processing is some shared storage accessible from all cooperating servers. With such a design, you may distribute the work among more servers without worrying about which server will do the next processing step. Using AWS S3 (or similar) cloud storage: if you can use cloud storage like AWS S3, use that. In case you have your servers running at AWS too, you do not pay for traffic within the same region, and transfers are quite fast. The main advantage is that your data are available from all the servers under the same bucket/key name, so you do not have to bother about who is processing which file, as all have shared storage on S3. Note: if you need to get rid of old files, you may even set up a policy on the given bucket, e.g. to delete files older than 1 day or 1 week. Using other types of shared storage: there are more options, such as a central Samba file server, FTP, Google storage (very similar to AWS S3), Swift (from OpenStack), etc. For small files you could even use Redis, but such solutions are, for good reasons, rather rare.
0
1
0
0
2014-05-08T20:24:00.000
2
0.379949
false
23,551,808
0
0
1
2
Currently we have everything set up on a single cloud server: the database server, Apache, Celery, Redis (serving as a broker for Celery and for some other tasks), etc. Now we are thinking of breaking the main components apart onto separate servers, e.g. a separate database server, separate storage for media files, and web servers behind load balancers. The reason is not to pay for one heavy server, and to use load balancers to create servers on demand to reduce cost and improve overall speed. I am really confused about Celery only: has anyone ever used Celery on multiple production servers behind load balancers? Any guidance would be appreciated. Consider one small use case, which is currently how it is done on a single server (the confusion is how this can be done when we use multiple servers): a user uploads an abc.pptx file -> a reference is stored in the database -> the file is stored on the server disk; a task (convert document to PDF) is created and goes into the Redis (broker) queue; Celery, which is running on the same server, picks the task from the queue, reads the file, converts it to PDF using software called docsplit, creates a folder on the server disk (which will be used as static content later on) and puts the PDF file, its thumbnail, the plain text and the original file there. Considering the above use case, how can you set up multiple web servers which can perform the same functionality?
How integrate a websocket between tornado and uwsgi?
23,577,862
1
0
728
0
python,websocket,tornado,uwsgi
The uWSGI tornado loop engine is no more than a proof of concept. You could try to use it, but native uWSGI websockets support, or having nginx route requests to both uWSGI and tornado, are for sure better (and more solid) choices.
0
1
0
0
2014-05-09T14:38:00.000
1
1.2
true
23,567,368
0
0
1
1
I'm working on a little chat project and I use the Tornado websocket for communication between the web browser and the server; everything works fine so far. But I was working with Tornado's integrated web framework, and now I want to configure my app to run on a web server with nginx and uWSGI. I read that to integrate Tornado and uWSGI I have to run the Tornado application in WSGI mode, but that way the asynchronous methods are not supported. So I ask: what is the best way to integrate a Tornado websocket with uWSGI? Or should I run the Tornado websocket separately from the rest of my app and configure it in nginx?
Thread vs Event Loop - network programming (language agnostic)
23,571,618
0
4
1,730
0
python,multithreading,events,twisted
The connection scheme is also important in the choice. How many concurrent connections do you expect? How long will a client stay connected? If each connection is tied to a thread, many concurrent connections or very long-lasting connections (as with websockets) will choke the system. For these scenarios an event-loop-based solution will be better. When the connections are short and the heavy processing comes after the disconnection, the two models are roughly equivalent.
0
1
0
0
2014-05-09T15:39:00.000
3
0
false
23,568,682
1
0
0
1
I am writing a simple daemon to receive data from N many mobile devices. The device will poll the server and send the data it needs as simple JSON. In generic terms, the server will receive the data and then "do stuff" with it. I know this topic has been beaten a bunch of times, but I am having a hard time understanding the pros and cons. Would threads or events (think Twisted in Python) work better for this situation as far as concurrency and scalability are concerned? The event model seems to make more sense, but I wanted to poll you guys. Data comes in -> process data -> wait for more data. What if the "do stuff" was something very computationally intensive? What if the "do stuff" was very IO-intensive (such as inserting into a database)? Would this block the event loop? What are the pros and drawbacks of each approach?
How do I get IDLE to find NLTK?
23,572,642
1
1
2,634
0
python,nltk,python-idle
Supplementing the answer above: when you install Python packages, they will install under the default version of Python you are using. Since the module imports fine in Python 2.7.6, make sure that you aren't using the Python 3 version of IDLE.
0
1
0
0
2014-05-09T19:27:00.000
2
0.099668
false
23,572,471
1
0
0
1
Programming noob here. I'm on Mac OS 10.5.8. I have Python 2.7.6 and have installed NLTK. If I run Python from Terminal, I can "import nltk" with no problem. But if I open IDLE (either from Terminal or by double-clicking on the application) and try the same thing there, I get an error message, "ImportError: No module named nltk". I assume this is a path problem, but what exactly should I do? The directory where I installed NLTK is "My Documents/Python/nltk-2.0.4". But within this there are various other directories called build, dist, etc. Which of these is the exact directory that IDLE needs to be able to find? And how do I add that directory to IDLE's path?
mezzanine shop is not shown if behind a virtualhost
23,596,414
1
0
66
0
python,django,apache,virtualhost,mezzanine
It sounds like you've misconfigured the "sites" section. Under "sites" in the admin interface, you'll find each configured site. Each of the pages in a site is related to one of these, and matched to the host name you use in the browser. You'll probably find there's only one site record configured, and its domain doesn't match your production host that you're accessing the site via. If you update it, it should resolve everything.
0
1
0
0
2014-05-11T08:56:00.000
1
1.2
true
23,590,741
0
0
1
1
I developed my first shop using Mezzanine. If I run it with python manage.py runserver 0.0.0.0:8000 it works well, but if I try to put an Apache virtual host in front of it, the result I get is awful: I only see the home page, not the other ones. I checked the generated HTML, and it looks very different. I think it's a problem with the Mezzanine configuration, maybe with the configured sites, but I am not able to understand what I have to change. Can you please give me a hint?
Adding Path in GIT for KIVY
23,609,691
0
1
256
0
python,git,kivy
This value is not typically something you would include in a git repo because it is specific to your system. Other people using your repo may have Kivy installed somewhere else, or be on an entirely different OS where that path does not exist.
0
1
0
0
2014-05-11T20:54:00.000
1
0
false
23,597,816
0
0
0
1
Create a file named 'py.ini' and place it either in your user's application data directory or in 'C:\Windows'. It will contain the path used to start Kivy. I put my Kivy installation at 'C:\utils\kivy', so my copy says: [commands] kivy="c:\utils\kivy\kivy.bat" (You could also add commands to start other script interpreters, such as Jython or IronPython.) So my question is: what commands are supposed to be used in Git to add this path into a variable "kivy"? Or is it even supposed to be a variable? And in Git, to get the script working, one uses "source /c/.../Kivy-1.8.0-py2.7-win32/kivyenv.sh", but to add the path they said to use "C:...\kivy.bat". Why does "/c/" change to "C:", and why is it 'kivy.bat' and not 'kivyenv.sh'? Thank you.
GAE: how to quantify Frontend Instance Hours usage?
23,612,408
4
0
1,362
0
python,google-app-engine
There's no 100% sure way to assess the number of frontend instance hours. An instance can serve more than one request at a time. In addition, the algorithm of the scheduler (the system that starts the instances) is not documented by Google. Depending on how demanding your code is, I think you can expect a standard F1 instance to hold up to 5 requests in parallel, that's a maximum. 2 is a safer bet. My recommendation, if possible, would be to simulate standard interaction on your website with limited number of users, and see how the number of instances grow, then extrapolate. For example, let's say you simulate 100 requests per minute during 2 hours, and you see that GAE spawns 5 instances for that, then you can extrapolate that a continuous load of 3000 requests per minute would require 150 instances during the same 2 hours. Then I would double this number for safety, and end up with an estimate of 300 instances.
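The extrapolation described above, written out as plain arithmetic (the numbers are the illustrative ones from the text, not measurements):

observed_rate = 100        # requests/minute during the simulation
observed_instances = 5     # instances GAE spawned for that load
target_rate = 3000         # expected requests/minute in production

estimate = observed_instances * (target_rate / float(observed_rate))
safe_estimate = 2 * estimate         # apply the safety factor
print(estimate, safe_estimate)       # 150.0 300.0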
0
1
0
0
2014-05-12T13:44:00.000
1
1.2
true
23,610,748
0
0
1
1
We are developing a Python server on Google App Engine that should be capable of handling incoming HTTP POST requests (around 1,000 to 3,000 per minute in total). Each of the requests will trigger some datastore write operations. In addition, we will write a web client as a human-usable interface for displaying and analysing the stored data. First we are trying to estimate GAE usage, to have at least an approximation of the costs we would have to cover in the future based on the number of requests. As for datastore write operations and data storage size, it is fairly easy to come up with an approximate number, though it is not so obvious for the frontend and backend instance hours. As far as I understand, each time a request comes in, an instance is started, which then runs for 15 minutes; if a request comes in within these 15 minutes, the same instance is used. And now it is getting a bit tricky, I think: if two requests come in at the very same time (which is not so odd with 3,000 requests per minute), does Google fire up another instance, so that Google would count an additional (at least) 0.15 instance hours? Also I am not quite sure how a web client that constantly performs read operations on the datastore in order to display and analyse data would increase the instance hours. Does anyone know a reliable way of counting instance hours and creating meaningful estimations? We would use that information to know how expensive it would be to run an application on GAE in comparison to just ordering a web server.
Making a Script that deletes its own folder when it finishes
23,622,356
1
0
123
0
python,path,directory,delete-directory
Yes, you can do this. The script will be loaded into memory when it runs, so it can delete its parent directory (and therefore itself) directly from the script without any issues. Just use shutil.rmtree rather than os.rmdir, because os.rmdir can't remove a directory that isn't empty. Here's a one-liner that will do it (be careful running this in a directory with stuff you don't want deleted!): shutil.rmtree(os.path.dirname(os.path.realpath(__file__))). The full snippet with its imports follows.
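The same one-liner in complete, runnable form (again: this deletes the whole folder the script lives in):

import os
import shutil

# Remove the directory containing this very script, including the script.
shutil.rmtree(os.path.dirname(os.path.realpath(__file__)))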
0
1
0
0
2014-05-13T03:22:00.000
2
1.2
true
23,622,285
0
0
0
2
Is it possible to make a script delete its own folder? Clearly there are some issues here, but I feel like it might be possible. Essentially, what I'm trying to create is a script that, once it's finished doing its thing, will delete its own folder and a few other files. But I'm mainly having an issue with trying to tell it to delete itself as a method of closing itself -- as a final action, or in a sense telling the PC to do one more action after it has closed itself. Any help would be greatly appreciated :) TL;DR: can an application tell Windows to run a command after it closes itself?
Making a Script that deletes its own folder when it finishes
23,622,353
1
0
123
0
python,path,directory,delete-directory
You mean it would delete the entire folder, including the script itself? Why not invoke the script with a cmd or batch file (on Windows; bash script on *nix) that would first execute the script, and then do whatever cleanup you want to do afterwards? The wrapper file could live in another directory so it would not also get deleted.
0
1
0
0
2014-05-13T03:22:00.000
2
0.099668
false
23,622,285
0
0
0
2
Is it possible to make a script delete its own folder? Clearly there are some issues here, but I feel like it might be possible. Essentially, what I'm trying to create is a script that, once it's finished doing its thing, will delete its own folder and a few other files. But I'm mainly having an issue with trying to tell it to delete itself as a method of closing itself -- as a final action, or in a sense telling the PC to do one more action after it has closed itself. Any help would be greatly appreciated :) TL;DR: can an application tell Windows to run a command after it closes itself?
pythonw.exe stopped working, can't run IDLE or any .py files
23,644,079
0
0
676
0
python,crash,installation
It turns out uninstalling Blender, uninstalling Python, reinstalling Python and then reinstalling Blender (optional) did the trick. Blender 2.70 has some Python 3.4 libraries bundled with it on installation, but I'm guessing that for some reason having an older version of Python already installed caused conflicts, dependency issues or the like. I'd love to know the details of why, but for now I'm just glad I can run Python scripts again.
0
1
0
0
2014-05-13T05:58:00.000
1
0
false
23,623,763
1
0
0
1
This issue doesn't pertain to any code exactly. I think my installation (Python 3.3.5) became corrupted somehow. I've tried uninstalling and reinstalling, as well as repairing, but nothing has worked. It's been a while since I last ran any Python code or did anything involving Python, so I can't say I accidentally messed up my own install. The only thing I can think of which might be an issue is the installation and update of Blender to 2.7.
Using TotalOrderPartitioner in Hadoop streaming
23,644,971
0
0
909
0
python,hadoop
Did not try, but taking the example with KeyFieldBasedPartitioner and simply replacing: -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner with -partitioner org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner Should work.
0
1
0
0
2014-05-14T02:11:00.000
2
0
false
23,644,545
0
1
0
1
I'm using python with Hadoop streaming to do a project, and I need the similar functionality provided by the TotalOrderPartitioner and InputSampler in Hadoop, that is, I need to sample the data first and create a partition file, then use the partition file to decide which K-V pair will go to which reducer in the mapper. I need to do it in Hadoop 1.0.4. I could only find some Hadoop streaming examples with KeyFieldBasedPartitioner and customized partitioners, which use the -partitioner option in the command to tell Hadoop to use these partitioners. The examples I found using TotalOrderPartitioner and InputSampler are all in Java, and they need to use the writePartitionFile() of InputSampler and the DistributedCache class to do the job. So I am wondering if it is possible to use TotalOrderPartitioner with hadoop streaming? If it is possible, how can I organize my code to use it? If it is not, is it practical to implement the total partitioner in python first and then use it?
App Engine: Difference between NDB and Datastore
23,646,875
5
2
568
1
python,django,google-app-engine
In simple words, these are two versions of the datastore API: db being the older version and ndb the newer one. The difference is in the models; in the datastore itself these are the same thing. NDB provides advantages like handling caching (memcache) itself, and ndb is faster than db, so you should definitely go with ndb. To use the ndb datastore, just use ndb.Model when defining your models; a minimal example follows.
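To make this concrete, a minimal NDB model definition and round-trip (model and property names are illustrative):

from google.appengine.ext import ndb

class Greeting(ndb.Model):
    content = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

key = Greeting(content="hello").put()  # write; NDB also caches in memcache
greeting = key.get()                   # read, served from cache when possible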
0
1
0
0
2014-05-14T04:19:00.000
1
1.2
true
23,645,572
0
0
1
1
I have been going through the Google App Engine documentation (Python) and found two different types of storage: the NDB Datastore and the DB Datastore. Both quota limits (free) seem to be the same, and their database design too. However, NDB automatically caches data in memcache! I am actually wondering when to use which storage. What are the general practices regarding this? Can I completely rely on NDB and ignore DB? How should it be done? I have been using Django for a while, and read that in Django-nonrel the JOIN operations can somehow be done in NDB, while the rest of the storage is used in DB! Why is that? Both storages are schemaless and use pretty much the same design. How is it that someone can tweak JOIN in NDB and not in DB?
Ctrl+s does not always save a file on IDLE
41,288,341
1
1
1,776
0
python-idle
I've found that using characters such as æ, ø, å -- even when they are commented out -- makes the Python 2.7 IDLE refuse to save the file.
0
1
0
0
2014-05-14T06:49:00.000
2
0.099668
false
23,647,516
0
0
0
2
Sometimes I can save a script by using Ctrl+S in IDLE, but other times it fails to save for no reason, as I haven't changed anything. Is this a bug of some kind? It's starting to irritate me. Platform: Windows 7 Professional
Ctrl+s does not always save a file on IDLE
34,439,578
2
1
1,776
0
python-idle
It may be some character. I faced the same trouble; then I removed certain characters, such as "Ç", and it worked fine.
0
1
0
0
2014-05-14T06:49:00.000
2
0.197375
false
23,647,516
0
0
0
2
Sometimes I can save a script by using Ctrl+S in IDLE, but other times it fails to save for no reason, as I haven't changed anything. Is this a bug of some kind? It's starting to irritate me. Platform: Windows 7 Professional
python or pythonw in creating a cross-platform standalone GUI app
23,660,787
0
1
933
0
python,user-interface,cross-platform
Firstly, you should always use .pyw for GUIs. Secondly, you could convert it to an .exe if you want people without Python to be able to use your program. The process is simple; the hardest part is downloading one of these: for Python 2.x, py2exe; for Python 3.x, cx_Freeze. You can simply google instructions on how to use them if you decide to go down that path (a minimal cx_Freeze sketch follows). Also, if you're using messageboxes in your GUI, it won't work; you will have to create windows/toplevels instead.
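For orientation, a minimal cx_Freeze setup.py sketch (script and package names are illustrative); it is built with "python setup.py build":

from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="0.1",
    description="Example frozen app",
    executables=[Executable("main.py")],
)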
1
1
0
0
2014-05-14T15:40:00.000
2
0
false
23,659,248
0
0
0
2
I am developing a simple standalone, graphical application in python. My development has been done on linux but I would like to distribute the application cross-platform. I have a launcher script which checks a bunch of environment variables and then sets various configuration options, and then calls the application with what amounts to python main.py (specifically os.system('python main.py %s'% (arg1, arg2...)) ) On OS X (without X11), the launcher script crashed with an error like Could not run application, need access to screen. A very quick google search later, the script was working locally by replacing python main.py with pythonw main.py. My question is, what is the best way to write the launcher script so that it can do the right thing across platforms and not crash? Note that this question is not asking how to determine what platform I am on. The solution "check to see if I am on OS X, and if so invoke pythonw instead" is what I have done for now, but it seems like a somewhat hacky fix because it depends on understanding the details of the windowing system (which could easily break sometime in the future) and I wonder if there is a cleaner way. This question does not yet have a satisfactory answer.
python or pythonw in creating a cross-platform standalone GUI app
23,659,489
0
1
933
0
python,user-interface,cross-platform
If you save the file as main.pyw, it should run the script without opening up a new cmd/terminal. Then you can run it as python main.pyw
1
1
0
0
2014-05-14T15:40:00.000
2
0
false
23,659,248
0
0
0
2
I am developing a simple standalone, graphical application in python. My development has been done on linux but I would like to distribute the application cross-platform. I have a launcher script which checks a bunch of environment variables and then sets various configuration options, and then calls the application with what amounts to python main.py (specifically os.system('python main.py %s'% (arg1, arg2...)) ) On OS X (without X11), the launcher script crashed with an error like Could not run application, need access to screen. A very quick google search later, the script was working locally by replacing python main.py with pythonw main.py. My question is, what is the best way to write the launcher script so that it can do the right thing across platforms and not crash? Note that this question is not asking how to determine what platform I am on. The solution "check to see if I am on OS X, and if so invoke pythonw instead" is what I have done for now, but it seems like a somewhat hacky fix because it depends on understanding the details of the windowing system (which could easily break sometime in the future) and I wonder if there is a cleaner way. This question does not yet have a satisfactory answer.
Writing to file limitation on line length
23,697,040
0
0
1,868
0
python,python-2.7
m = s.readline() has \n at the end of the line. Then you're doing .format(i, m, values), which writes m in the middle of the string. I leave it as an exercise to the reader to find out what happens when you write such a line to a file. :-) (hint: m = s.readline().rstrip('\n'))
0
1
0
0
2014-05-16T13:23:00.000
1
1.2
true
23,696,185
1
0
0
1
I've been trying to write lines to a file based on specific file names from the same directory, a search for the file names in another log file (given as input), and the modified date of the files. The output is limiting me to under 80 characters per line.
def getFiles(flag, file):
    if (flag == True):
        file_version = open(file)
        if file_version:
            s = mmap.mmap(file_version.fileno(), 0, access=mmap.ACCESS_READ)
            file_version.close()
    file = open('AllModules.txt', 'wb')
    for i, values in dict.items():
        # search keys in version file
        if (flag == True):
            index = s.find(bytes(i))
            if index > 0:
                s.seek(index + len(i) + 1)
                m = s.readline()
                line_new = '{:>0} {:>12} {:>12}'.format(i, m, values)
                file.write(line_new)
                s.seek(0)
        else:
            file.write(i + '\n')
    file.close()

if __name__ == '__main__':
    dict = {}
    for file in os.listdir(os.getcwd()):
        if os.path.splitext(file)[1] == '.psw' or os.path.splitext(file)[1] == '.pkw':
            time.ctime(os.path.getmtime(file))
            dict.update({str(os.path.splitext(file)[0]).upper(): time.strftime('%d/%m/%y')})
    if (len(sys.argv) > 1):
        if os.path.exists(sys.argv[1]):
            getFiles(True, sys.argv[1])
    else:
        getFiles(False, None)
The output is always like: BW_LIB_INCL 13.1 rev. 259 [20140425 16:28] 16/05/14. The interpretation of the data is correct, but the formatting is not, as the time is put on the next line (not on the same one). This is happening to all the lines of my new file. Could someone give me a hint?
Making Python 3.3.3 scripts executable?
23,704,780
0
3
293
0
python,executable,py2exe
On most *nix systems it is sufficient to put #!/usr/bin/python as the first line of the main script and then chmod +x /path/to/script.py.
0
1
0
0
2014-05-16T21:21:00.000
2
0
false
23,704,737
1
0
0
1
I've googled and googled, and everything I've seen has directed me to py2exe. I've looked at it and downloaded the latest version, but it says I have to have Python 2.6 to use it! Does this mean I have to use Python 2.6 rather than 3.3.3, or is there an alternative to py2exe? Edit: Thanks! I can now use cx_Freeze, but is there a way I can compile it further so I don't have to run it from a different folder? Or should I create a batch file calling the .exe from the command line and convert the batch file to an executable?
Bluetooth Socket no incoming connection
24,003,966
0
1
689
0
python,sockets,bluetooth,connection,bluez
Setting the class of device in my program in the first place did not work, as it got reset. To make the HIDServer work on BlueZ, I had to set the class of device right before waiting for connections. I cannot say why it gets reset, but I know it does. Maybe somebody else can tell why.
0
1
0
1
2014-05-20T09:51:00.000
1
0
false
23,756,453
0
0
0
1
I have developed a HIDServer (a Bluetooth keyboard) with Python on my computer. There are 2 server sockets (PSM 0x11 and 0x13) listening for connections. When I try to connect my iPhone to my computer, I receive an incoming connection (as can be seen in hcidump), but somehow the connection is terminated by the remote host. My sockets never get to accept a client connection. Can you help me please? hcidumps:
After starting my program:
HCI Event: Command Complete (0x0e) plen 4 Write Extended Inquiry Response (0x03|0x0052) ncmd 1 status 0x00
When trying to connect the iPhone:
HCI Event: Connect Request (0x04) plen 10 bdaddr 60:D9:C7:23:96:FF class 0x7a020c type ACL
HCI Event: Command Status (0x0f) plen 4 Accept Connection Request (0x01|0x0009) status 0x00 ncmd 1
HCI Event: Connect Complete (0x03) plen 11 status 0x00 handle 11 bdaddr 60:D9:C7:23:96:FF type ACL encrypt 0x00
HCI Event: Command Status (0x0f) plen 4 Read Remote Supported Features (0x01|0x001b) status 0x00 ncmd 1
HCI Event: Read Remote Supported Features (0x0b) plen 11 status 0x00 handle 11 Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
HCI Event: Command Status (0x0f) plen 4 Read Remote Extended Features (0x01|0x001c) status 0x00 ncmd 1
HCI Event: Read Remote Extended Features (0x23) plen 13 status 0x00 handle 11 page 1 max 2 Features: 0x07 0x00 0x00 0x00 0x00 0x00 0x00 0x00
HCI Event: Command Status (0x0f) plen 4 Remote Name Request (0x01|0x0019) status 0x00 ncmd 1
HCI Event: Remote Name Req Complete (0x07) plen 255 status 0x00 bdaddr 60:D9:C7:23:96:FF name 'iPhone'
HCI Event: Command Complete (0x0e) plen 10 Link Key Request Reply (0x01|0x000b) ncmd 1 status 0x00 bdaddr 60:D9:C7:23:96:FF
HCI Event: Encrypt Change (0x08) plen 4 status 0x00 handle 11 encrypt 0x01
HCI Event: Disconn Complete (0x05) plen 4 status 0x00 handle 11 reason 0x13 Reason: Remote User Terminated Connection
RabbitMQ: What Does Celery Offer That Pika Doesn't?
27,367,747
21
23
12,748
0
python,rabbitmq,celery,task-queue,pika
I'm going to add an answer here because this is the second time today someone has recommended Celery when it is not needed, based on this answer I suspect. The difference between a distributed task queue and a broker is that a broker just passes messages -- nothing more, nothing less. Celery recommends using RabbitMQ as the default broker for IPC and places adapters on top of it to manage tasks/queues with daemon processes. While this is useful, especially for distributed tasks where you need something generic very quickly, it's just a construct for the publisher/consumer process. For actual tasks where you have a defined workflow that you need to step through, and where you must ensure message durability based on your specific needs, you'd be better off writing your own publisher/consumer than relying on Celery. Obviously you still have to do all of the durability checking etc. With most web-related services one doesn't control the actual "work" units but rather passes them off to a service, so it makes little sense to use a distributed task queue unless you're hitting some arbitrary API call limit based on IP/geographical region or account number... or something along those lines. So using Celery doesn't stop you from having to write or deal with state code or management of workflow etc.; it exposes AMQP in a way that makes it easy for you to avoid writing the constructs of publisher/consumer code. In short: if you need a simple task queue to chew through work and you aren't really concerned about the nuances of performance, the intricacies of durability in your workflow, or the actual publish/consume processes, Celery works. If you are just passing messages to an API or service you don't actually control, sure, you could use Celery, but you could just as easily whip up your own publisher/consumer with Pika in a couple of minutes (a bare-bones sketch follows). If you need something robust or something that adheres to your own durability scenarios, write your own publish/consume code like everyone else.
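To illustrate "a couple of minutes", here is a bare-bones publisher/consumer with Pika against a local RabbitMQ (pre-1.0 Pika API, matching the era of the question; the queue name is arbitrary):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="tasks")

# publish one message
ch.basic_publish(exchange="", routing_key="tasks", body="do this work")

# consume one message (basic_get returns (None, None, None) when empty)
method, header, body = ch.basic_get(queue="tasks")
if method:
    print(body)
    ch.basic_ack(method.delivery_tag)  # acknowledge so it is removed
conn.close()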
0
1
0
1
2014-05-20T17:50:00.000
2
1
false
23,766,658
0
0
0
1
I've been working on getting some distributed tasks working via RabbitMQ. I spent some time trying to get Celery to do what I wanted and couldn't make it work. Then I tried using Pika and things just worked, flawlessly, and within minutes. Is there anything I'm missing out on by using Pika instead of Celery?
py2exe executable containing private password
23,772,838
1
0
213
0
python,py2exe
An exe compiled by py2exe isn't compiled in the same sense as a c/c++ application is. When you run py2exe's setup command, it collects your dependencies and packages them together. Depending on the options supplied, it can create an archive file that contains the .py[odc] files that comprise your app, but they are still on the user system. They can be accessed, decompiled, inspected, or modified. What a user does with your code once they have it is out of your hands. You should not deploy sensitive information, passwords, private keys, or anything else that might cause damage in the "wrong" hands.
0
1
0
0
2014-05-20T23:45:00.000
1
1.2
true
23,771,759
1
0
0
1
In my setup, py2exe bundles all the dependency modules into a zip, and I can see them on the deployed machine (*.pyo). My script windows_app.py is specified in setup.py as setup(windows=["windows_app.py"]). However, I do not see windows_app.pyo anywhere on the deployed box (is this correct?). I do see "windows_app.exe", though, which is expected. My question is: can I keep my private password in windows_app.py (which goes into windows_app.exe) and assume it is a better place, since the .pyo files are easily decompilable?