| Column | Type | Range / lengths |
|---|---|---|
| Title | string | lengths 15–150 |
| A_Id | int64 | 2.98k–72.4M |
| Users Score | int64 | -17–470 |
| Q_Score | int64 | 0–5.69k |
| ViewCount | int64 | 18–4.06M |
| Database and SQL | int64 | 0–1 |
| Tags | string | lengths 6–105 |
| Answer | string | lengths 11–6.38k |
| GUI and Desktop Applications | int64 | 0–1 |
| System Administration and DevOps | int64 | 1–1 |
| Networking and APIs | int64 | 0–1 |
| Other | int64 | 0–1 |
| CreationDate | string | lengths 23–23 |
| AnswerCount | int64 | 1–64 |
| Score | float64 | -1–1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 1.85k–44.1M |
| Python Basics and Environment | int64 | 0–1 |
| Data Science and Machine Learning | int64 | 0–1 |
| Web Development | int64 | 0–1 |
| Available Count | int64 | 1–17 |
| Question | string | lengths 41–29k |
Python vs Perl for portability?
| 25,508,481
| 1
| 0
| 254
| 0
|
python,perl
|
Every modern Linux or Unix-like system comes with both installed; which you choose is a matter of taste. Just pick one.
I would say that whatever you choose, you should try to write easily-readable code in it, even if you're the only one who will be reading that code. Now, Pythonists will tell you that writing readable code is easier in Python, which may be true, but it's certainly doable in Perl as well.
| 0
| 1
| 0
| 1
|
2014-08-26T14:20:00.000
| 2
| 0.099668
| false
| 25,508,177
| 0
| 0
| 0
| 2
|
I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl.
|
Python vs Perl for portability?
| 25,508,315
| 0
| 0
| 254
| 0
|
python,perl
|
This is more personal, but I have always used Python and will keep using it until something shows me it's better to use another language. It's simple, very extensible, robust, fast, and supported by a lot of users.
The version of Python doesn't matter much since you can update it, and most OSes ship Python 2.5 or later, which expands compatibility a lot. Perl is also included in Linux and Mac OS, though.
I think Python will serve you well, but if you like Perl and have always worked with it, just use Perl and don't complicate your life.
| 0
| 1
| 0
| 1
|
2014-08-26T14:20:00.000
| 2
| 0
| false
| 25,508,177
| 0
| 0
| 0
| 2
|
I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl.
|
Error installing pycrypto in mac 10.9.6
| 25,542,008
| 0
| 0
| 494
| 0
|
python,pycrypto
|
Got it working after installing Xcode.
pycrypto was installed by default once Xcode was installed, and Fabric is working now.
(I should have mentioned in the question that I am new to Mac.)
| 0
| 1
| 0
| 1
|
2014-08-27T06:25:00.000
| 1
| 0
| false
| 25,520,264
| 1
| 0
| 0
| 1
|
I am trying to install 'fabric'. I tried 'pip install fabric' and the installation fails when it tries to install 'pycrypto'.
I see it is fetching version 2.6.1. I tried installing lower versions and I get the same error.
'sudo easy_install fabric' also throws the same error.
I also have the gmplib installed. I have the lib file in these places
/usr/lib/libgmp.dylib
/usr/local/lib/libgmp.dylib
pip install fabric
Requirement already satisfied (use --upgrade to upgrade): fabric in /Library/Python/2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): paramiko>=1.10.0 in /Library/Python/2.7/site-packages/paramiko-1.14.1-py2.7.egg (from fabric)
Downloading/unpacking pycrypto>=2.1,!=2.4 (from paramiko>=1.10.0->fabric)
Downloading pycrypto-2.6.1.tar.gz (446kB): 446kB downloaded
Running setup.py (path:/private/tmp/pip_build_root/pycrypto/setup.py) egg_info for package pycrypto
Requirement already satisfied (use --upgrade to upgrade): ecdsa in /Library/Python/2.7/site-packages/ecdsa-0.11-py2.7.egg (from paramiko>=1.10.0->fabric)
Installing collected packages: pycrypto
Running setup.py install for pycrypto
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for __gmpz_init in -lgmp... yes
checking for __gmpz_init in -lmpir... no
checking whether mpz_powm is declared... yes
checking whether mpz_powm_sec is declared... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for inttypes.h... (cached) yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking for stdint.h... (cached) yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking wchar.h usability... yes
checking wchar.h presence... yes
checking for wchar.h... yes
checking for inline... inline
checking for int16_t... yes
checking for int32_t... yes
checking for int64_t... yes
checking for int8_t... yes
checking for size_t... yes
checking for uint16_t... yes
checking for uint32_t... yes
checking for uint64_t... yes
checking for uint8_t... yes
checking for stdlib.h... (cached) yes
checking for GNU libc compatible malloc... yes
checking for memmove... yes
checking for memset... yes
configure: creating ./config.status
config.status: creating src/config.h
building 'Crypto.PublicKey._fastmath' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -Wall -Wstrict-prototypes -Wshorten-64-to-32 -fwrapv -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/ -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/_fastmath.c -o build/temp.macosx-10.9-intel-2.7/src/_fastmath.o
src/_fastmath.c:83:13: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = p->ob_size;
~ ~~~^~~~~~~
src/_fastmath.c:86:10: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = -p->ob_size;
~ ^~~~~~~~~~~
src/_fastmath.c:113:49: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
int size = (mpz_sizeinbase (m, 2) + SHIFT - 1) / SHIFT;
~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~
src/_fastmath.c:1310:12: warning: implicit conversion loses integer precision: 'unsigned long' to 'unsigned int' [-Wshorten-64-to-32]
offset = mpz_get_ui (mpz_offset);
~ ^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/gmp.h:840:20: note: expanded from macro 'mpz_get_ui'
#define mpz_get_ui __gmpz_get_ui
^
src/_fastmath.c:1360:10: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1373:27: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
rounds = mpz_get_ui (n) - 2;
~ ~~~~~~~~~~~~~~~^~~
src/_fastmath.c:1433:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
9 warnings generated.
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
2 warnings generated.
This is the error I get when I execute 'fab':
Traceback (most recent call last):
File "/usr/local/bin/fab", line 5, in
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in
working_set.require(requires)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require
needed = self.resolve(parse_requirements(requirements))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: pycrypto>=2.1,!=2.4
|
google app engine datastore
| 25,538,591
| 1
| 0
| 28
| 0
|
database,google-app-engine,python-2.7
|
Assuming that you are talking about an entity that you have on your local machine but not on App Engine once you deploy the app: your local datastore is for testing purposes only and nothing from it will be deployed to GAE. You will need to re-create all datastore data once your app is deployed if it wasn't there already.
| 0
| 1
| 0
| 0
|
2014-08-27T18:15:00.000
| 1
| 0.197375
| false
| 25,534,295
| 0
| 0
| 1
| 1
|
I have a property which is a blobstore.BlobReferenceProperty(). In the local datastore viewer it appears, but when I deploy the application, in the Google datastore the value is '{}', and when I click on an entity it shows that property as an unknown property. Can anyone help me?
|
Python + cx_Oracle : Unable to acquire Oracle environment handle
| 27,795,948
| 2
| 4
| 6,803
| 1
|
python,oracle
|
If Python finds more than one OCI.DLL file in the PATH (even if the copies are identical), it will throw this error; your PATH statement looks like it may expose more than one. You can manipulate the PATH inside your script to constrain where Python looks for the supporting Oracle files, which may be your only option if you have to run several Oracle versions/clients locally.
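A minimal sketch of that PATH manipulation, done before cx_Oracle is imported. The client directory and the name-based filtering heuristic are assumptions for illustration; adapt both to your install:

```python
import os

# Hypothetical Oracle client directory we want cx_Oracle to load its
# OCI.DLL from; adjust to your install.
WANTED_CLIENT = r"C:\Oracle\product\10.1.0\Client_1\bin"

def constrain_path(path, wanted, sep=os.pathsep):
    """Put `wanted` first and drop other entries that look like Oracle
    client directories (a simple name heuristic, assumed for illustration),
    so only one copy of OCI.DLL is visible on the PATH."""
    entries = [p for p in path.split(sep)
               if p and "oracle" not in p.lower()
               and "instantclient" not in p.lower()]
    return sep.join([wanted] + entries)

# Must happen *before* the import below, or the wrong DLL may be found:
os.environ["PATH"] = constrain_path(os.environ.get("PATH", ""), WANTED_CLIENT)
# import cx_Oracle
```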
| 0
| 1
| 0
| 0
|
2014-08-28T07:10:00.000
| 1
| 0.379949
| false
| 25,542,787
| 0
| 0
| 0
| 1
|
Background.
My OS is Win7 64bit.
My Python is 2.7 64bit from python-2.7.8.amd64.msi
My cx_Oracle is 5.0 64bit from cx_Oracle-5.0.4-10g-unicode.win-amd64-py2.7.msi
My Oracle client is 10.1 (I don't know whether it is 32- or 64-bit, but SQL*Plus is 10.1.0.2.0).
Database is
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
ORACLE_HOME variable added per haki's reply:
C:\Oracle\product\10.1.0\Client_1\
That did not work; the problem persists.
I then pointed ORACLE_HOME at the Instant Client from instantclient-basic-win64-10.2.0.5.zip:
C:\instantclient_10_2\
C:\Users\PavilionG4>sqlplus Lee/123@chstchmp
Error 6 initializing SQL*Plus
Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
So SQL*Plus will not let me use that Oracle home.
I set ORACLE_HOME back to
C:\Oracle\product\10.1.0\Client_1\
PATH variable
C:\Program Files (x86)\Seagate Software\NOTES\
C:\Program Files (x86)\Seagate Software\NOTES\DATA\
C:\Program Files (x86)\Java\jdk1.7.0_05\bin
C:\Oracle\product\10.1.0\Client_1\bin
C:\Oracle\product\10.1.0\Client_1\jre\1.4.2\bin\client
C:\Oracle\product\10.1.0\Client_1\jre\1.4.2\bin
C:\app\PavilionG4\product\11.2.0\dbhome_1\bin
C:\app\PavilionG4\product\11.2.0\client_2\bin
c:\Program Files (x86)\AMD APP\bin\x86_64
c:\Program Files (x86)\AMD APP\bin\x86
C:\Windows\system32
C:\Windows
C:\Windows\System32\Wbem
C:\Windows\System32\WindowsPowerShell\v1.0\
c:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static
C:\Users\PavilionG4\AppData\Local\Smartbar\Application\
C:\PROGRA~2\IBM\SQLLIB\BIN
C:\PROGRA~2\IBM\SQLLIB\FUNCTION
C:\Program Files\gedit\bin
C:\Kivy-1.7.2-w32
C:\Program Files (x86)\ZBar\binj
C:\Program Files (x86)\Java\jdk1.7.0_05\bin
C:\Program Files\MATLAB\R2013a\runtime\win64
C:\Program Files\MATLAB\R2013a\bin
C:\Python27
TNS is :
C:\Oracle\product\10.1.0\Client_1\NETWORK\ADMIN\tnsnames.ora
REPORT1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.28.128.110)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = REPORT1)
)
)
f1.py shows me error
import cx_Oracle
ip = '172.25.25.42'
port = 1521
SID = 'REPORT1'
dns_tns = cx_Oracle.makedsn(ip,port,SID)
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cursor = connection.cursor()
connection.close()
Error
Traceback (most recent call last):
File "f1.py", line 6, in
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
Questions
1. How do I acquire the Oracle environment handle?
I have searched the web; unfortunately nothing I found hits my problem at all.
2. How can I let Python use another Oracle client without impacting the existing one?
|
IPython Notebook crashes Macbook Pro every day around noon
| 25,556,599
| 0
| 0
| 63
| 0
|
ipython,ipython-notebook
|
Just a guess, but maybe there is a feature turned on in IPython that is calling home for updates or something, and it's running at that time? Check whether it has such a feature, turn it off, and see if that helps.
EDIT: see my comment below; I don't think this is an IPython-related issue.
| 0
| 1
| 0
| 0
|
2014-08-28T19:24:00.000
| 1
| 0
| false
| 25,556,512
| 1
| 0
| 0
| 1
|
This is an odd problem I am having to which I have no solution!
Every day, at around noon -- sometimes closer to 1pm -- my computer locks up. It only does so if I am running an IPython Notebook kernel.
I am running Mavericks on a MBPr 2013.
Has anyone else had this issue or related?
How can I investigate further?
Thanks.
|
Running Python from Atom
| 69,060,186
| 0
| 86
| 209,849
| 0
|
python,atom-editor
|
There is a package called "platformio-ide-terminal" that allows you to run code from Atom with Ctrl + Shift + B. That's the only package you need (Windows).
| 0
| 1
| 0
| 1
|
2014-08-30T18:23:00.000
| 6
| 0
| false
| 25,585,500
| 0
| 0
| 0
| 2
|
In Sublime, we have an easy and convenient way to run Python, or almost any language for that matter, using ⌘ + b (or ctrl + b),
where the code runs in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionality with GitHub's Atom editor?
|
Running Python from Atom
| 57,858,735
| 3
| 86
| 209,849
| 0
|
python,atom-editor
|
To run a Python file on Mac:
Open the preferences in the Atom IDE; to open the preferences press 'command + ,'
(⌘ + ,).
Click Install in the preferences to install packages.
Search for the package "script" and click Install.
Now open the Python file (with the .py extension) you want to run and press 'control + r' (^ + r).
| 0
| 1
| 0
| 1
|
2014-08-30T18:23:00.000
| 6
| 0.099668
| false
| 25,585,500
| 0
| 0
| 0
| 2
|
In Sublime, we have an easy and convenient way to run Python, or almost any language for that matter, using ⌘ + b (or ctrl + b),
where the code runs in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionality with GitHub's Atom editor?
|
How to find modified files in Python
| 25,587,191
| 1
| 0
| 1,192
| 0
|
python,python-2.7,inotify
|
There are several ways to detect changes in files. Some are easier to
fool than others. It doesn't sound like this is a security issue; more
like good faith is assumed, and you just need to detect changes without
having to outwit an adversary.
You can look at timestamps. If files are not renamed, this is a good way
to detect changes. If they are renamed, timestamps alone wouldn't
suffice to reliably tell one file from another. os.stat will tell you
the time a file was last modified.
You can look at inodes, e.g., ls -li. A file's inode number may change
if changes involve creating a new file and removing the old one; this is
how emacs typically changes files, for example. Try changing a file
with the standard tool your organization uses, and compare inodes before
and after; but bear in mind that even if it doesn't change this time, it
might change under some circumstances. os.stat will tell you inode
numbers.
You can look at the content of the files. cksum computes a small CRC
checksum on a file; it's easy to beat if someone wants to. Programs such
as sha256sum compute a secure hash; it's infeasible to change a file
without changing such a hash. This can be slow if the files are large.
The hashlib module will compute several kinds of secure hashes.
If a file is renamed and changed, and its inode number changes, it would
be potentially very difficult to match it up with the file it used to
be, unless the data in the file contains some kind of immutable and
unique identifier.
Think about concurrency. Is it possible that someone will be changing a
file while the program runs? Beware of race conditions.
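The checks described above (mtime and inode via os.stat, a secure hash via hashlib) can be sketched together in a few lines; treat this as a minimal illustration rather than a full solution:

```python
import hashlib
import os

def snapshot(path):
    """Collect the three signals discussed above for one file:
    last-modified time, inode number, and a SHA-256 content hash."""
    st = os.stat(path)
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {"mtime": st.st_mtime, "inode": st.st_ino,
            "sha256": digest.hexdigest()}
```

Comparing two snapshots then tells you whether the content changed (hash), whether the file was replaced rather than edited in place (inode), or was merely touched (mtime).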
| 0
| 1
| 0
| 0
|
2014-08-30T21:40:00.000
| 4
| 1.2
| true
| 25,586,996
| 1
| 0
| 0
| 3
|
I want to monitor a folder and see if any new files are added, or existing files are modified. The problem is, it's not guaranteed that my program will be running all the time (so, inotify based solutions may not be suitable here). I need to cache the status of the last scan and then with the next scan I need to compare it with the last scan before processing the files.
What are the alternatives for achieving this in Python 2.7?
Note1: Processing of the files is expensive, so I'm trying to process the files that are not modified in the meantime. So, if the file is only renamed (as opposed to a change in the contents of the file), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added.
|
How to find modified files in Python
| 25,587,068
| 1
| 0
| 1,192
| 0
|
python,python-2.7,inotify
|
I would probably go with some kind of sqlite solution, such as writing out the last polling time.
Then on each poll, sort the files by last modified time (mtime) and take all the ones whose mtime is greater than your previous poll (that value coming from sqlite, or from some kind of file if you insist on not having a db).
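A minimal sketch of that sqlite-backed polling idea using only the standard library; the function name and the one-row table layout are assumptions for illustration:

```python
import os
import sqlite3
import time

def files_modified_since(directory, db_path):
    """Return files in `directory` whose mtime is newer than the last
    recorded poll, then store the current time back in a tiny sqlite
    table so the next run picks up where this one left off."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS poll (last REAL)")
    row = conn.execute("SELECT last FROM poll").fetchone()
    last = row[0] if row else 0.0  # first run: everything counts as new
    now = time.time()
    changed = [name for name in os.listdir(directory)
               if os.path.getmtime(os.path.join(directory, name)) > last]
    conn.execute("DELETE FROM poll")
    conn.execute("INSERT INTO poll VALUES (?)", (now,))
    conn.commit()
    conn.close()
    return sorted(changed)
```

Keep the database file outside the watched directory, or it will report itself on every poll.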
| 0
| 1
| 0
| 0
|
2014-08-30T21:40:00.000
| 4
| 0.049958
| false
| 25,586,996
| 1
| 0
| 0
| 3
|
I want to monitor a folder and see if any new files are added, or existing files are modified. The problem is, it's not guaranteed that my program will be running all the time (so, inotify based solutions may not be suitable here). I need to cache the status of the last scan and then with the next scan I need to compare it with the last scan before processing the files.
What are the alternatives for achieving this in Python 2.7?
Note1: Processing of the files is expensive, so I'm trying to process the files that are not modified in the meantime. So, if the file is only renamed (as opposed to a change in the contents of the file), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added.
|
How to find modified files in Python
| 25,587,181
| 1
| 0
| 1,192
| 0
|
python,python-2.7,inotify
|
Monitoring for new files isn't hard -- just keep a list or database of inodes for all files in the directory. A new file will introduce a new inode. This will also help you avoid processing renamed files, since inode doesn't change on rename.
The harder problem is monitoring for file changes. If you also store file size per inode, then obviously a changed size indicates a changed file and you don't need to open and process the file to know that. But for a file that has (a) a previously recorded inode, and (b) is the same size as before, you will need to process the file (e.g. compute a checksum) to know if it has changed.
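The inode-plus-size bookkeeping described above can be sketched as two small functions; this is a minimal illustration (names made up), not a complete monitor:

```python
import os

def scan(directory):
    """Map inode -> (name, size, mtime) for every regular file, so a
    renamed file (same inode) can be told apart from a new one."""
    state = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            state[st.st_ino] = (name, st.st_size, st.st_mtime)
    return state

def changed_inodes(old, new):
    """Inodes that are new, or whose (size, mtime) pair differs from the
    last scan; files that were merely renamed keep their inode and match."""
    return [ino for ino, meta in new.items()
            if ino not in old or old[ino][1:] != meta[1:]]
```

Files flagged here (same inode, same size, but the contents might still differ) would then go through the checksum step.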
| 0
| 1
| 0
| 0
|
2014-08-30T21:40:00.000
| 4
| 0.049958
| false
| 25,586,996
| 1
| 0
| 0
| 3
|
I want to monitor a folder and see if any new files are added, or existing files are modified. The problem is, it's not guaranteed that my program will be running all the time (so, inotify based solutions may not be suitable here). I need to cache the status of the last scan and then with the next scan I need to compare it with the last scan before processing the files.
What are the alternatives for achieving this in Python 2.7?
Note1: Processing of the files is expensive, so I'm trying to process the files that are not modified in the meantime. So, if the file is only renamed (as opposed to a change in the contents of the file), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added.
|
Need help downloading bucket from Google Cloud
| 26,923,909
| 0
| 1
| 775
| 0
|
python,download,cloud,bucket,gsutil
|
The first thing you need to know is that the gsutil tool works only with Python version 2.7 or lower on Windows.
Once you have the correct Python version, follow these steps if you are a Windows user:
Open a command prompt and switch to your gsutil directory using:
-- cd\
-- cd gsutil
Once you are in the gsutil directory, execute the following command:
python gsutil config -b
This will open a link in a browser requesting access to your Google account. Make sure you are logged into Google with the account you want to use for Cloud Storage, and grant access.
Once done, this will give you a KEY (authorization code). Copy that key and paste it back into your command prompt. Hit Enter, and it will then ask you for a PROJECT-ID.
Now navigate to your cloud console and provide the PROJECT-ID.
If successful, this will create a .boto file in c:\users.
Now you are ready to access your private buckets from the console. For this, use the following command: C:\python27>python c:\gsutil\gsutil cp -r gs://your_bucket_id/path_to_file path_to_save_files
| 0
| 1
| 0
| 0
|
2014-09-01T14:46:00.000
| 2
| 0
| false
| 25,608,336
| 0
| 0
| 1
| 1
|
My computer crashed and I need to download everything I stored on the Google Cloud. I am not a computer tech and I can't seem to find a way to download whole buckets from Google Cloud.
I have tried to follow the instructions given in the Google help docs. I have downloaded and installed Python and I downloaded gsutil and followed the instructions to put it in my c:\ drive (I can see it there). When I go to the command prompt and type cd \gsutil the next prompt says "c:\gsutil>" but I'm not sure what to do with that.
When I type "gsutil config" it says "file 'c:\gsutil\gsutil.py", line 2 SyntaxError: encoding problem utf8".
When I type "python gsutil" (which the instructions said would give me a list of commands) it says "'python' is not recognized as an internal or external command, operable program or batch file" even though I did the full installation process for Python.
Someone suggested a more user-friendly program called Cloudberry Explorer which I downloaded and installed, but the list of sources I can set up does not include Google Cloud.
Can anyone help?
|
'Listening' for a file in Python
| 25,617,956
| -1
| 2
| 6,642
| 0
|
python
|
You can use Twisted; its reactor is much better than an infinite loop. You can call reactor.callLater(myTime, myFunction), and when myFunction gets called you can adjust myTime and schedule another callback with the same callLater() API.
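If you would rather not take on the Twisted dependency, the schedule from the question (check every 5 minutes, give up after about an hour) is also a short standard-library loop; a minimal sketch, with the function name made up:

```python
import os
import time

def wait_for_file(path, interval=300, timeout=3600):
    """Check every `interval` seconds whether `path` exists, giving up
    after `timeout` seconds; returns True if the file appeared."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

Cron would start the script at 3 AM, and the script would begin with something like wait_for_file('/path/to/input.dat') (a placeholder path) before doing the database updates.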
| 0
| 1
| 0
| 1
|
2014-09-02T07:08:00.000
| 5
| -0.039979
| false
| 25,617,706
| 0
| 0
| 0
| 1
|
I have a python script that does some updates on my database.
The files that this script needs are saved in a directory at around 3AM by some other process.
So I'm going to schedule a cron job to run daily at 3AM; but I want to handle the case if the file is not available exactly at 3AM, it could be delayed by some interval.
So I basically need to keep checking whether the file of some particular name exists every 5 minutes starting from 3AM. I'll try for around 1 hour, and give up if it doesn't work out.
How can I achieve this sort of thing in Python?
|
Maintaining jobs history in apscheduler
| 25,633,337
| 1
| 2
| 472
| 0
|
python,scheduler,apscheduler
|
If you want such extra functionality, add the appropriate event listeners to the scheduler to detect when a job is added and when it is modified. In the event listener, get the job from the scheduler and store it wherever you want. The jobs are serializable, by the way.
| 0
| 1
| 0
| 0
|
2014-09-02T10:18:00.000
| 1
| 0.197375
| false
| 25,621,035
| 0
| 0
| 1
| 1
|
I am using apscheduler to schedule my scrapy spiders. I need to maintain history of all the jobs executed. I am using mongodb jobstore. By default, apscheduler maintains only the details of the currently running job. How can I make it to store all instances of a particular job?
|
Porting Python on Windows using pywin32/excel to Linux on Vagrant Machine
| 25,629,595
| 2
| 0
| 850
| 1
|
python,linux,excel,vagrant,pywin32
|
The short answer is, you can't. WINE does not expose a bottled Windows environment's COM registry out to linux—and, even if it did, pywin32 doesn't build on anything but Windows.
So, here are some options, roughly ordered from the least amount of change to your code and setup to the most:
Run both your Python script and Excel under real Windows, inside a real emulator.
Run both your Python script and Excel under WINE.
Write or find a library that does expose a bottled Windows environment's COM registry out to Linux.
Write or find a cross-platform DCOM library that presents a win32com-like API, then change your code to use that to connect to the bottled Excel remotely.
Rewrite your code to script Excel indirectly by, e.g., sshing into a Windows box and running minimal WSH scripts.
Rewrite your code to script LibreOffice or whatever you prefer instead of Excel.
Rewrite your code to process Excel files (or CSV or some other interchange format) directly instead of scripting Excel.
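As a tiny illustration of the last option, this is what processing an interchange-format export directly looks like with the standard library's csv module instead of scripting Excel; the file layout and function name are hypothetical:

```python
import csv

def read_rows(path):
    """Read a CSV export of the workbook into a list of dicts keyed by
    the header row; no Excel, COM, or pywin32 involved."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```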
| 0
| 1
| 0
| 0
|
2014-09-02T17:56:00.000
| 1
| 1.2
| true
| 25,629,462
| 0
| 0
| 0
| 1
|
I have written an extensive python package that utilizes excel and pywin32.
I am now in the progress of moving this package to a linux environment on a Vagrant machine.
I know there are "emulator-esque" software packages (e.g. WINE) that can run Windows applications and look-a-likes for some Windows applications (e.g. Excel to OpenOffice).
However, I am not seeing the right path to take in order to get my pywin32/Excel dependent code written for Windows running in a Linux environment on a Vagrant machine. Ideally, I would not have to alter my code at all and just do the appropriate installs on my Vagrant machine.
Thanks
|
port conflict between Hadoop and python
| 25,667,671
| 0
| 0
| 430
| 0
|
python,hadoop,port,conflict
|
It appears Cloudera had installed Python 2.7. This was removed and replaced with Python 3.2.
The $ jps command on Hadoop now returns the expected results, including NameNode.
| 0
| 1
| 0
| 0
|
2014-09-03T16:03:00.000
| 2
| 0
| false
| 25,648,888
| 0
| 0
| 0
| 1
|
I am installing Hadoop 2.5.0 on a Ubuntu 12.04 cluster, 64-bit. At the end of the instructions I type $ jps on the master node and do not get a NameNode. I checked the Hadoop logs and found:
BindException error stating :9000 is already in use.
$ netstat -a -t --numeric-ports -p | grep :9000 returns that python is listening on this port. It appears I need to move python 2.7 to another port. How do I move python?
Followed the command below, the pid=2346.
$ ps -p 2346
PID TTY TIME CMD
2346 ? 01:28:13 python
Tried second command:
$ ps -lp 2346
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 2346 1 0 80 0 - 332027 poll_s ? 01:28:30 python
more detail:
$ ps -Cp 2346
PID TTY STAT TIME COMMAND
2346 ? Ssl 88:34 /usr/lib/cmf/agent/build/env/bin/python /usr/lib/cmf/agent/src/cmf/agent.py --package_dir /usr/lib/cmf
It appears a failed Cloudera Hadoop distribution installation has not been removed. It installed python 2.7 automatically. Not sure what else is automatically running. Will attempt to uninstall python 2.7.
|
Start Tkinter in background
| 25,689,654
| 0
| 0
| 184
| 0
|
python-2.7,tkinter
|
I am using Linux Mint. To make a program not show up in the foreground (i.e., be hidden behind all of the other windows), use root.lower(), as mentioned in the comments. However, note (and this seems to happen on multiple platforms) that root.lower() will not change the focus of the window. Therefore, even after .lower(), if you run the script and press [alt] + [F4], for example, the Tkinter window that was just opened (even though you cannot see it) will be closed.
I also noticed that it is prudent to place root.lower() after any attribute calls on the Tkinter root. For example, if you use root.attributes("-zoomed", True) to expand the window, be sure to place root.lower() after root.attributes(...); it did not work for me when I put root.lower() before root.attributes(...).
| 1
| 1
| 0
| 0
|
2014-09-03T16:04:00.000
| 1
| 1.2
| true
| 25,648,912
| 0
| 0
| 0
| 1
|
Comically enough, I was really annoyed when tkinter windows opened in the background on Mac. However, now I am on Linux, and I want tkinter to open in background.
I don't know how to do this, and when I google how to do it, all I can find are a lot of angry Mac users who can't get tkinter to open in the foreground.
I should note that I am using python2.7 and thus Tkinter not tkinter (very confusing).
|
How to write file to RAM on Linux
| 25,654,779
| 4
| 2
| 5,650
| 0
|
python,linux,proc,tmpfs
|
Pass /proc/self/fd/1 as the filename to the child program. All of the writes to /proc/self/fd/1 will actually go to the child program's stdout. Use subprocess.Popen(), et al, to capture the child's stdout.
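A minimal sketch of that trick on Linux. The child below is a stand-in for the unmodifiable program: whatever it writes to /proc/self/fd/1 lands on its stdout, which the parent captures through the subprocess pipe, so nothing touches the disk:

```python
import subprocess
import sys

# Stand-in for the program that insists on writing to a named file;
# it just writes a fixed string to the path given as its argument.
child_src = "import sys; open(sys.argv[1], 'w').write('payload')"

# Hand it /proc/self/fd/1 as the "file". In the child, that path reopens
# its own stdout, which subprocess has attached to a pipe back to us.
result = subprocess.run(
    [sys.executable, "-c", child_src, "/proc/self/fd/1"],
    capture_output=True, text=True, check=True)
print(result.stdout)  # prints: payload
```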
| 0
| 1
| 0
| 0
|
2014-09-03T21:22:00.000
| 3
| 0.26052
| false
| 25,653,881
| 0
| 0
| 0
| 1
|
A program that I cannot modify writes its output to a file provided as an argument. I want the output to go to RAM so I don't have to do unnecessary disk IO.
I thought I can use tmpfs and "trick" the program to write to that, however not all Linux distros use tmpfs for /tmp, some mount tmpfs under /run (Ubuntu) others under /dev/shm (RedHat).
I want my program to be as portable as possible and I don't want to create tmpfs file systems on the user's system if I can avoid it.
Obviously I can do df | grep tmpfs and use whatever mount that returns, but I was hoping for something a bit more elegant.
Is it possible to write to a pseudo terminal or maybe to /proc somewhere?
|
Can't stop web server in Google App Engine Launcher
| 33,986,803
| 0
| 12
| 981
| 0
|
python,google-app-engine,python-2.7
|
I face this issue too. It has to do with the application you are running. If you are sure it runs fine, it may be overburdening the server in some way. I strongly recommend logging the relevant aspects of your code so any issue shows up in the log console. Hope this helps.
| 0
| 1
| 0
| 0
|
2014-09-04T14:48:00.000
| 4
| 0
| false
| 25,668,522
| 0
| 0
| 1
| 3
|
I am running the development web server in Google App Engine Launcher without any trouble.
But I can't stop it successfully. When I press the Stop button, nothing happens.
Nothing is added to the logs after pressing Stop.
After that I can't close the launcher either.
The only way to close the launcher is through Task Manager.
However, when I run dev_appserver.py myapp via cmd, it stops successfully with Ctrl+C.
By the way, I am behind a proxy.
|
Can't stop web server in Google App Engine Launcher
| 28,325,931
| 0
| 12
| 981
| 0
|
python,google-app-engine,python-2.7
|
I think your server crashed, maybe because you overloaded it, or maybe there's an internal error that could be solved by re-installing the web server.
| 0
| 1
| 0
| 0
|
2014-09-04T14:48:00.000
| 4
| 0
| false
| 25,668,522
| 0
| 0
| 1
| 3
|
I am running the development web server in Google App Engine Launcher without any trouble.
But I can't stop it successfully. When I press the Stop button, nothing happens.
Nothing is added to the logs after pressing Stop.
After that I can't close the launcher either.
The only way to close the launcher is through Task Manager.
However, when I run dev_appserver.py myapp via cmd, it stops successfully with Ctrl+C.
By the way, I am behind a proxy.
|
Can't stop web server in Google App Engine Launcher
| 27,682,317
| 0
| 12
| 981
| 0
|
python,google-app-engine,python-2.7
|
This is just a suggestion, but I think if you overloaded the server by repeatedly pinging the IP, you could crash the webserver.
| 0
| 1
| 0
| 0
|
2014-09-04T14:48:00.000
| 4
| 0
| false
| 25,668,522
| 0
| 0
| 1
| 3
|
I am running the development web server in Google App Engine Launcher without any trouble.
But I can't stop it successfully. When I press the Stop button, nothing happens.
Nothing is added to the logs after pressing Stop.
After that I can't close the launcher either.
The only way to close the launcher is through Task Manager.
However, when I run dev_appserver.py myapp via cmd, it stops successfully with Ctrl+C.
By the way, I am behind a proxy.
|
I just downloaded Aptana Studio 3 in Windows 8.1 Pro with Java SDK
| 26,286,915
| 0
| 0
| 576
| 0
|
python
|
Try: C:\Users\\appdata\roaming\appcelerator\
That is where I found it; I had the same problem. I also just typed "aptana" into the search input field and let the system do its thing.
| 0
| 1
| 0
| 0
|
2014-09-05T10:26:00.000
| 1
| 0
| false
| 25,683,758
| 0
| 0
| 1
| 1
|
But I can't find where it is installed. It isn't even listed in the Start menu, and it can't be found in Program Files (64-bit) or Program Files (x86). I repaired the installation, but there is still no way to find it.
|
Windows Task Scheduler not allowing python subprocess.popen
| 52,074,147
| 0
| 3
| 1,475
| 0
|
python,windows,batch-file,scheduled-tasks
|
Same situation: task -> batch script -> Python process -> subprocess(es), but on Windows Server 2012.
I worked around the problem by providing the absolute path to the script/exe in subprocess.Popen.
I verified the environment variables available inside the Python process and the script/exe is in a directory on the PATH. Still, Windows gives a FileNotFoundError unless I provide the absolute path.
Strangely, calling the same script/exe from the batch script is possible without providing its absolute path.
| 0
| 1
| 0
| 0
|
2014-09-05T16:55:00.000
| 2
| 0
| false
| 25,690,621
| 0
| 0
| 0
| 1
|
I noticed a rather interesting problem the other day.
I have a Windows scheduled task on Windows Server 2008 RT. This task runs a batch file which runs a Python script I've built. Within this Python script there is a subprocess.Popen call that runs several other batch files. For the past couple of days I've noticed that the task runs successfully but the secondary batch files do not. I know the Python script ran successfully from the logs it creates, and all the files it makes for the secondary batch files to use are there. However, the completed files are not.
If I just run the batch file by itself everything works perfectly. Does Microsoft's task scheduler not allow a program to open additional batch files and is there a workaround for this?
|
How to access system display memory / frame buffer in a Java program?
| 25,699,358
| 0
| 1
| 886
| 0
|
java,python,memory,vnc
|
directly access system display memory on Linux
You can't. Linux is a memory-protected, virtual-address-space operating system. The kernel does give you access to the graphics memory through some node in /dev, but that's not how you normally implement this kind of thing.
Also in Linux you're normally running a display server like X11 (or in the future something based on the Wayland protocol) and there might be no system graphics memory at all.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screen shot), convert it into RAW format, compress it and store it in an ArrayList.
That's exactly how it's done. Use the display system's method to capture the screen; it's the only reliable way to do this. Note that if conversion or compression is your bottleneck, you'd have the same bottleneck fetching from graphics memory as well.
| 0
| 1
| 0
| 0
|
2014-09-06T10:28:00.000
| 1
| 1.2
| true
| 25,699,308
| 0
| 0
| 1
| 1
|
I am trying to create my own VNC client and would like to know how to directly access system display memory on Linux? So that I can send it over a Socket or store it in a file locally.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screenshot), convert it into RAW format, compress it and store it in an ArrayList.
But, I find this method a bit too resource heavy. So, was searching for alternatives.
Please, let me also know if there are other ways for the same (using Java or Python only)?
|
Running C in A Browser
| 51,140,813
| 13
| 6
| 8,593
| 0
|
javascript,python,c,google-app-engine,browser
|
Old question but for those that land in here in 2018 it would be worth looking at Web Assembly.
| 0
| 1
| 0
| 0
|
2014-09-07T17:58:00.000
| 3
| 1
| false
| 25,713,194
| 0
| 0
| 1
| 1
|
I've spent days of research over the seemingly simple question: is it possible to run C code in a browser at all? Basically, I have a site set up in Appengine that needs to run some C code supplied by (a group of trusted) users and run it, and return the output of the code back to the user. I have two options from here: I either need to completely run the code in the browser, or find some way to have Python run this C code without any system calls.
I've seen mixed responses to my question. I've seen solutions like Emscripten, but that doesn't work because I need the LLVM code to be produced in the browser (I cannot run compilers in App Engine). I've tried various techniques, including scraping the output page on codepad.org, but my output is so large (~20,000 lines of about 60 characters each) that services like codepad.org trim it due to a timeout. My last resort is to set up my own server to serve requests from my App Engine site, but that seems a little extreme.
The code supplied by my users will be very simple C. There are no I/O or system operations called by their code. Unfortunately, I probably cannot simply use a find/replace operation in their code to translate it to Javascript, because they may use structures like multidimensional arrays or maybe even classes.
I'm fine with limiting my users to one cross-platform browser, e.g. Chrome or Firefox. Can anyone help me find a solution to this question? I've been baffled for days.
|
GAE multi-module, multi-language application on localhost
| 26,067,953
| 2
| 4
| 503
| 0
|
java,python,google-app-engine,module,google-cloud-datastore
|
You might be able to do something similar by using "appscale" (an open source project that could be able to help you, if you setup Virtual Box and load the image on it). Look at community.appscale.com
Another way (mind you, this is tricky) would be to :
1- deploy your python as a standalone project on localhost:9000
2- deploy your java as a standalone project on localhost:8000
3- Change your python and java code so that when they are in Dev, they hit the right localhost (java hits localhost:9000 and python hits localhost:8000)
4- Try, like @tx802 suggested, to specify a path to local_db.
I am not sure either method works, but I figure they are both worth trying at the very least.
| 0
| 1
| 0
| 0
|
2014-09-09T13:51:00.000
| 1
| 0.379949
| false
| 25,746,419
| 0
| 0
| 1
| 1
|
I have a multi-module GAE Application that is structured like this:
a Python27 module, that is a regular web application. This Python app uses the Datastore API. Regular, boring web app.
a Java module (another web application) that hooks on the Datastore calls (calls made by the Python web app), and displays aggregated data about the recorded Datastore calls.
I have been able to deploy this application on the GAE cloud, and everything works fine.
However, problems arise when I want to run my application on localhost.
The Python module must be started using the Python SDK. The Java module must be started using the Java SDK.
However, the 2 SDK's do not seem to share the same datastore (I believe the 2 SDKs write/read to separate files on disk).
It seems to me that the 2 SDK's also differ in the advancement of the Development Console implementation.
The Python SDK sports a cleaner, more "recent-looking" Development Console (akin to the new console.developers.google.com console) than the Java SDK, which has the old-looking version of the Development Console (akin to the old appspot.com console)
So my question is, is there a way to boot 2+ modules (in different languages: Python, Java) that share the same Datastore files? That'd be nice, since it would allow the Java module to hook on the Python Datastore calls, which does not seem to be possible at the moment.
|
System Call in Python via MINGW32 on Windows
| 25,780,451
| 1
| 0
| 1,827
| 0
|
python,windows,wget,system-calls,mingw32
|
There's no such thing as running "under MinGW". You probably mean under MSYS, a Unix emulation environment for Windows. MSYS makes things look like Unix, but you're still running everything under Windows. In particular, MSYS maps /bin to the drive and directory where you installed MSYS. If you installed MSYS to C:\MSYS, then your MSYS /bin directory is really C:\MSYS\bin.
When you add /bin to your MSYS PATH environment variable, MSYS searches the directory C:\MSYS\bin. When you add /bin to the Windows PATH environment using the command SETX, Windows will look in the \bin directory of the current drive.
Presumably your version of Python is the standard Windows port. Since it's a normal Windows application, it doesn't interpret the PATH environment variable the way you're expecting it to. With /bin in the path, it will search the \bin directory of the current drive. Since wget is in C:\MSYS\bin, not \bin of the current drive, you get an error when trying to run it from Python.
Note that if you run a Windows command from the MSYS shell, MSYS will automatically convert its PATH to a Windows compatible format, changing MSYS pathnames into Windows pathnames. This means you should be able to get your Python script to work by running Python from the MSYS shell.
| 0
| 1
| 0
| 0
|
2014-09-10T23:44:00.000
| 1
| 1.2
| true
| 25,776,832
| 0
| 0
| 0
| 1
|
I am trying to figure out a way to call wget from my python script on a windows machine. I have wget installed under /bin on the machine. Making a call using the subprocess or os modules seems to raise errors no matter what I try. I'm assuming this is related to the fact that I need to route my python system call through minGW so that wget is recognized.
Does anyone know how to handle this?
Thanks
|
Appengine: Query only a subset of the data?
| 25,779,584
| 1
| 0
| 37
| 0
|
python,google-app-engine,google-cloud-datastore
|
You should add an indexed property to each Datastore entity to query on.
For example, you could create a "hash" property containing the date (in ms since epoch) modulo 15 minutes (in ms).
Then you just have to query with a filter saying hash == 0, or rather a random value between 0 and 15 minutes (in ms).
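A small sketch of the bucketing arithmetic this answer describes; the property name "hash" and the variable names are illustrative:

```python
import random

FIFTEEN_MIN_MS = 15 * 60 * 1000  # 900000 ms per bucket

def bucket_offset(timestamp_ms):
    # Offset of a point inside its 15-minute window.  At write time, store
    # this value in an indexed "hash" property on the entity.
    return timestamp_ms % FIFTEEN_MIN_MS

# At query time, filter on hash == probe (0, or a random offset) so that
# roughly one point per 15-minute window matches.
probe = random.randrange(FIFTEEN_MIN_MS)
print(bucket_offset(FIFTEEN_MIN_MS * 3 + 1234))
```

Note that filtering on an exact millisecond offset may match no point in a given window; widening the filter to a small range around the probe value is one way to make a hit more likely.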
| 0
| 1
| 0
| 0
|
2014-09-11T03:30:00.000
| 1
| 0.197375
| false
| 25,778,586
| 0
| 0
| 1
| 1
|
My users can supply a start and end date and my server will return a list of points between those two dates.
However, there are too many points between each hour and I am interested to pick only one random point per every 15 minutes.
Is there an easy to do this in Appengine?
|
Forcing a GUI application to run as root
| 25,791,380
| 1
| 0
| 969
| 0
|
python,macos,root,pyinstaller
|
Running the installer as root will have no effect when you later start the application itself as a normal user.
Try sudo python /path/to/script.py instead.
If that works, then put this into a shell script and run that to start the app as root from now on (and the people who know MacOS can probably tell you how you can create a nice icon for the script).
WARNING Doing this makes your system vulnerable to attacks. If you do this on your own Mac, that's fine. If you're developing a product that you're selling to other people, then you need to revisit your design since it's severely broken.
| 0
| 1
| 0
| 0
|
2014-09-11T15:15:00.000
| 2
| 0.099668
| false
| 25,791,196
| 0
| 0
| 0
| 1
|
I'm getting ready to deploy an app on OS X. This is the first time I've written an application on this platform which requires root permissions to run properly, so I need that functionality integrated for every startup attempt.
The application itself is written in Python 2.7, and then compiled to binary using PyInstaller. So far, I've tried:
Running PyInstaller using sudo pyinstaller -w --icon=/path/to/icon /path/to/script.py
Invoking the PyInstaller command using sudo su
I don't know what else to try at this point. Is it something that could be achieved using symlinks?
|
How to find out from where is a Python script called?
| 25,974,955
| 0
| 1
| 497
| 0
|
python,bash,shell
|
I combined two things:
ran automated tests with the old and new versions of Python and compared results
used snakefood to track the dependencies and ran the parent scripts
Thanks for the os.walk and os.getppid suggestions; however, I didn't want to write/use any additional code.
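For reference, a rough stdlib-only version of the os.walk suggestion mentioned above: scan a directory tree for .py files whose import lines mention a given module (function and variable names here are illustrative):

```python
import os
import re

def find_importers(root, module):
    # Return paths of .py files under root whose import lines mention
    # the given module name.
    pattern = re.compile(r"^\s*(?:import|from)\s+" + re.escape(module) + r"\b")
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    if any(pattern.match(line) for line in f):
                        hits.append(path)
    return hits
```

This only catches textual import statements, not dynamic imports, so it is a first pass rather than a complete dependency analysis.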
| 0
| 1
| 0
| 1
|
2014-09-11T16:09:00.000
| 2
| 1.2
| true
| 25,792,285
| 0
| 0
| 0
| 1
|
I need to test whether several .py scripts (all part of a much bigger program) work after updating Python. The only thing I have is their paths. Is there an intelligent way to find out which other scripts call them? Brute-force grepping wasn't as good as I expected.
|
Python ctypes error GOMP_critical_end when loading library
| 25,822,184
| 4
| 2
| 1,110
| 0
|
python,c,openmp,ctypes,intel-mkl
|
Having -fopenmp while compiling enables OpenMP support and introduces in the resultant object file references to functions from the GNU OpenMP run-time support library libgomp. You should then link your shared object (a.k.a. shared library) against libgomp in order to tell the run-time linker to also load libgomp (if not already loaded via some other dependency) whenever your library is used so that it could resolve all symbols.
Linking against libgomp can be done in two ways:
If you use GCC to also link the object files and produce the shared object, just give it the -fopenmp flag.
If you use the system linker (usually that's ld), then give it the -lgomp option.
A word of warning for the second case: if you are using GCC that is not the default system-wide one, e.g. you have multiple GCC versions installed or use a version that comes from a separate package or have built one yourself, you should provide the correct path to libgomp.so that matches the version of GCC.
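If relinking the shared object isn't an option, one hedged workaround on the Python side is to preload libgomp with RTLD_GLOBAL before loading the under-linked library, so the GOMP_* symbols are already resolvable; the library name at the end is a placeholder:

```python
import ctypes
import ctypes.util

# Preloading libgomp with RTLD_GLOBAL exposes its symbols
# (GOMP_critical_end etc.) to any library loaded afterwards, sidestepping
# the missing link-time dependency.
gomp_path = ctypes.util.find_library("gomp")
if gomp_path is not None:
    ctypes.CDLL(gomp_path, mode=ctypes.RTLD_GLOBAL)
# mylib = ctypes.CDLL("./mylib.so")  # hypothetical -fopenmp library
```

This is a workaround, not a fix; linking the shared object against libgomp as described above is the cleaner solution.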
| 0
| 1
| 0
| 0
|
2014-09-11T19:59:00.000
| 1
| 1.2
| true
| 25,795,944
| 1
| 0
| 0
| 1
|
I have a library that I compiled with gcc using -fopenmp and linking to libmkl_gnu_thread.a.
When I try to load this library using ctypes I get the error message
undefined symbol: GOMP_critical_end
Compiling this without openmp and linking to libmkl_sequential.a instead of gnu_thread, the library works fine, but I'd rather not have to build different versions in order to support Python.
How do I fix this error? Do I need to build python from source with openmp support? I'd like to avoid this since users don't want to have to build their own python to use this software.
I'm using python2.7.6.
|
Python deployment with third-party libraries
| 25,810,294
| 1
| 0
| 452
| 0
|
python,windows,python-2.7,deployment,exe
|
The executable creation packages should be able to grab 3rd party packages if they're installed. Sometimes you have to specify what to include if the library abuses Python's importing system or it's not a "pure Python" package. For example, I would sometimes have to specifically include lxml to get py2exe to pick it up properly.
The py2exe project for Python 2 hasn't been updated in quite a long time, so I would certainly recommend one of the alternatives: PyInstaller, cx_Freeze or bbfreeze.
I have only seen issues with MSVCP90.dll when using non pure Python packages, such as wxPython. Normally you can add that in your setup.py to include it. If that doesn't work, then you could also add it using an installer utility like NSIS. Or you may just have to state in your README that your app depends on Microsoft's C++ redistributable and include a link to it.
| 0
| 1
| 0
| 0
|
2014-09-12T00:26:00.000
| 1
| 1.2
| true
| 25,798,916
| 1
| 0
| 0
| 1
|
I want to deploy an executable (.exe) from my python2.7 project with everything included. I have seen pyinstaller and py2exe but the problem is that my project uses a lot of third-party packages that are not supported by default. What is the best choice for such cases? Is there any other distribution packager that could be used?
Thank you
|
Exporting cpython AST symbols on Windows
| 25,815,355
| 0
| 0
| 33
| 0
|
python,visual-studio,cpython
|
OK, I figured it out
I was using VS 2013 while Python's build system was designed for VS 2010.
I ended up retargeting everything for 2013 (including a small modification to the tix makefile) and it compiled with all non-static symbols (AST and all) as expected.
Python.org's official pre-built Windows libraries still seem to omit the AST symbols. I don't mind building Python from source myself, but I think the official builds should package the whole shebang.
| 0
| 1
| 0
| 0
|
2014-09-12T03:56:00.000
| 1
| 0
| false
| 25,800,481
| 1
| 0
| 0
| 1
|
I'm writing a C application that makes use of Python's AST API to transform Python code expressions before emitting bytecode. I've been a longtime POSIX developer (currently OS X), but I wish learn how to port my projects to Windows as well.
I'm using the static libraries (.lib) generated by build.bat in Python's PCBuild directory. The trouble with these libraries is they somehow skip over the symbols in Python/Python-ast.c as well as Python/asdl.c. I need these APIs for their AST constructors, but I'm not sure how to get Visual Studio to export them.
Do I need to add __declspec(dllexport) for static libraries?
EDIT: I do not have this problem with static libraries generated on POSIX platforms
|
update user installed packages with pip
| 25,808,077
| 0
| 3
| 2,767
| 0
|
python,pip
|
I would suggest creating virtual environment if it is possible for you.
You would just use sudo apt-get install python-virtualenv to install virtualenv, then enter the folder where you store your Python projects and type virtualenv venv into the terminal. After that, you can activate it like this: source venv/bin/activate.
What it does is create an almost full copy of Python (some libraries are just linked to save space); everything you do after activating affects only that copy, not the global environment. Therefore you can install any set of libraries using pip, update them, etc., and you won't change anything outside of the virtual environment. But don't forget to activate it before you do anything.
| 0
| 1
| 0
| 0
|
2014-09-12T09:37:00.000
| 3
| 0
| false
| 25,805,200
| 1
| 0
| 0
| 1
|
I'm using a bunch of python packages for my research that I install in my home directory using the --user option of pip. There are also some packages that were installed by the package manager of my distribution for other things. I would like to have a pip command that only upgrade the packages I installed myself with the --user option.
I tried the recommended pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U, but this seems to work only with virtualenvs. pip freeze --local shows packages that are installed both for my user and system-wide.
Is there a way to upgrade only the packages installed locally for my user?
|
ipython notebook error with tornado
| 25,821,699
| 0
| 0
| 478
| 0
|
tornado,ipython-notebook
|
I think you are starting Python as a user that does not have access to the Python source files. Try starting the Python application as root.
| 0
| 1
| 0
| 0
|
2014-09-13T09:04:00.000
| 1
| 0
| false
| 25,821,590
| 1
| 0
| 0
| 1
|
When I try to use cmd to open IPython Notebook, I get this error:
error:tornado.access:500 GET /static/base/images/favicon.ico?v=4e6c6be5716444f7ac7b902e7f388939 (::1) 150.00ms referer=None
I tried reinstalling Python 2.7.8, Python(x,y) and Chrome, but it still fails.
Can anyone have any idea to help me fix that?
|
Is python code platform independent?
| 25,826,816
| 4
| 4
| 8,517
| 0
|
python,linux,windows,cross-platform
|
The Python bytecode itself is not platform-dependent, assuming a full Python VM implementation.
There are specific modules and functions that are only available on certain platforms, therefore Python source code can be made platform-dependent if it uses these. The documentation specifies if a name is only available on a restricted subset of platforms, so avoiding these will go far to make it platform-independent.
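Path handling is one concrete place where a script stays portable or quietly stops being so; a small sketch:

```python
import os.path

# Portable: os.path picks the right separator on each OS, so the same
# source runs unchanged on Linux and Windows.
path = os.path.join("data", "results.csv")

# Not portable: names such as os.fork(), fcntl or pwd exist only on Unix;
# the documentation flags them with an "Availability" note.
print(path)
```

Avoiding the platform-restricted names, as the answer says, is most of the work of keeping Python source platform-independent.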
| 0
| 1
| 0
| 0
|
2014-09-13T15:24:00.000
| 1
| 1.2
| true
| 25,824,658
| 1
| 0
| 0
| 1
|
Let's assume Python code written and tested on a Linux system with Python 2.7.1. It uses only the default Python libraries, such as os, itertools, tkinter, csv, and collections.
If we take this code and run it with Python 2.7.1 on a Windows system, will it work fine?
|
Prevent creating new child process using subprocess in Python
| 25,854,761
| 1
| 1
| 972
| 0
|
python,bash,subprocess,parent-child,popen
|
Why not just create a shell script with all the commands you need to run, then just use a single subprocess.Popen() call to run it? If the contents of the commands you need to run depend on results calculated in your Python script, you can just create the shell script dynamically, then run it.
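A small sketch of that approach; the commands and the variable are illustrative. Because all the commands land in one script run by one shell, state set up by earlier commands is visible to later ones:

```python
import os
import subprocess
import tempfile

# Write the commands into a single script so they share one shell, and
# therefore the variables set by earlier commands.
commands = [
    "GREETING=hello",
    'echo "$GREETING world"',
]
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("\n".join(commands) + "\n")
    script = f.name
try:
    result = subprocess.check_output(["sh", script]).decode().strip()
finally:
    os.remove(script)
print(result)
```

Generating the script dynamically, as the answer suggests, also lets the command list depend on values computed earlier in the Python program.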
| 0
| 1
| 0
| 0
|
2014-09-15T18:42:00.000
| 3
| 0.066568
| false
| 25,854,722
| 1
| 0
| 0
| 1
|
I need to run a lot of bash commands from Python. For the moment I'm doing this with
subprocess.Popen(cmd, shell=True)
Is there any solution to run all these commands in the same shell? subprocess.Popen opens a new shell at every execution and I need to set up all the necessary variables at every call, in order for cmd command to work properly.
|
Can Apache be used as a front end for Django and Tornado at the same time?
| 25,861,972
| 1
| 0
| 534
| 0
|
python,django,apache,tornado,wsgi
|
You would be better off to use nginx as a front end proxy on port 80 and have it proxy to both Apache/mod_wsgi and Tornado as backends on their own ports. Apache/mod_wsgi will actually benefit from this as well if everything is setup properly as nginx will isolate Apache from slow HTTP clients allowing Apache to perform better with fewer resources.
| 0
| 1
| 0
| 0
|
2014-09-16T02:16:00.000
| 2
| 0.099668
| false
| 25,859,704
| 0
| 0
| 1
| 1
|
I have Apache set up as a front end for Django and it's working fine. I also need to handle web sockets so I have Tornado running on port 8888. Is it possible to have Apache be a front end for Tornado so I don't have to specify the 8888 port?
My current /etc/apache2/sites-enabled/000-default.conf file is:
WSGIDaemonProcess myappiot python-path=/home/ubuntu/myappiot/sw/www/myappiot:/usr/local/lib/python2.7/site-packages
WSGIProcessGroup myappiot
WSGIScriptAlias / /home/ubuntu/myappiot/sw/www/myappiot/myappiot/wsgi.py
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
|
APScheduler store job in custom database of mongodb
| 25,898,609
| 1
| 0
| 708
| 1
|
mongodb,python-2.7,apscheduler
|
Simply give the MongoDB jobstore a different "database" argument. The API documentation for this job store seems not to be included in what is available on Read the Docs, but you can inspect the source and see how it works.
| 0
| 1
| 0
| 0
|
2014-09-17T07:01:00.000
| 1
| 0.197375
| false
| 25,884,242
| 0
| 0
| 0
| 1
|
I want to store jobs in MongoDB using Python and have them run on a schedule at specific times.
I googled and found that APScheduler will do this. I downloaded the code and tried to run it.
It schedules the job correctly and runs it, but it stores the job in the apscheduler database in MongoDB; I want to store the job in my own database.
Can you please tell me how to store the job in my own database instead of the default one?
|
In Python, how can I run a module that's not in my path?
| 26,268,411
| 1
| 1
| 1,132
| 0
|
python,pycharm,pythonpath
|
There are multiple ways to solve this.
In PyCharm, go to Run/Edit Configurations, add the environment variable PYTHONPATH set to $PYTHONPATH: and hit Apply. The problem with this approach is that the imports will still show as unresolved, but the code will run fine because Python knows where to find your modules at run time.
If you are using Mac or a Unix system, use the command export PYTHONPATH=$PYTHONPATH:. On Windows, you will have to add the directory to the PYTHONPATH environment variable.
This is as plarke suggested.
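A runtime alternative that sidesteps PYTHONPATH entirely is to extend sys.path before importing; the directory here is a placeholder:

```python
import os
import sys

# Putting the module's directory at the front of sys.path lets "import"
# find it regardless of the current working directory or PYTHONPATH.
extra_dir = os.path.abspath("path/to/your/modules")  # placeholder path
if extra_dir not in sys.path:
    sys.path.insert(0, extra_dir)
```

This only affects the running process, which makes it handy for quick experiments in the PyCharm shell, though environment-variable or run-configuration changes are the cleaner long-term fix.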
| 0
| 1
| 0
| 0
|
2014-09-18T00:24:00.000
| 3
| 0.066568
| false
| 25,902,367
| 1
| 0
| 0
| 1
|
I'm using PyCharm, and in the shell I can't run a file that isn't in the current directory. I know how to change directories in the terminal, but I can't run files from other folders. How can I fix this? I'm on a Mac, using 2.7.8. Thanks!
|
Python: Is original source code viewable when converting a python script to .exe file using py2exe?
| 25,904,445
| 2
| 0
| 143
| 0
|
python,text,exe,py2exe
|
Py2Exe packages your script with a standalone Python interpreter, but under normal circumstances won't "compile" it.
Viewing the source code for a Py2Exe package executable would be trivial.
| 0
| 1
| 0
| 0
|
2014-09-18T04:47:00.000
| 1
| 1.2
| true
| 25,904,333
| 1
| 0
| 0
| 1
|
I made a Python script that saves its output into a .text file. The .text file's contents are scrambled, but the Python script I made can unscramble the text. If I use py2exe to make the script an .exe file, will others be able to see the script it uses to unscramble the text if they have a copy of the .exe file?
|
After turning into OSX app, Python subprocess can't call external console command
| 25,923,581
| 1
| 0
| 559
| 0
|
python,macos,subprocess
|
The cause of the application halt turned out to be not the subprocess.Popen call, but a call to mktemp that created a temporary file inside the *.app folder, where a Mac app is definitely not permitted to write by default. After commenting this out, the code runs just fine. I'll make a note of this and remind myself not to create temp files inside the *.app folder again!
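The corresponding fix is to let Python pick a writable location, e.g. via the tempfile module (the prefix below is illustrative):

```python
import os
import tempfile

# tempfile targets the system temp directory, which is writable even when
# the .app bundle itself is not; the file is cleaned up automatically.
with tempfile.NamedTemporaryFile(prefix="myapp-") as tmp:
    tmp.write(b"scratch data")
    tmp.flush()
    tmp_dir = os.path.dirname(tmp.name)
print(tmp_dir)
```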
| 0
| 1
| 0
| 0
|
2014-09-18T16:35:00.000
| 1
| 1.2
| true
| 25,917,996
| 0
| 0
| 0
| 1
|
I am developing a GUI application using Kivy that in turn it will call an external console program from Python script using subprocess.Popen and capture its stderr output live. Finally, it works (thanks to SO for this!). I package the application using Pyinstaller, in which it produce an *.app that contains the executable resided in Contents\MacOS. If I run this executable directly from within Terminal, it runs well. The stderr output can be capture live. But, if I try to run the *.app directly either using open command from Terminal or double click its *.app icon from Finder, the call to subprocess.Popen simply halt.
I am not sure about this, but is there any restriction on an OSX app about how it can execute external program?
|
Command line interface application with background process
| 25,936,008
| 0
| 0
| 59
| 0
|
python,linux
|
If you put something in the background, then it's no longer connected to the current shell (or the terminal). So you would need the background process to open a socket so the command line part could send it the command.
In the end, there is no way around creating a new connection to the server every time you start the command line process and close the connection when the command line process exits.
The only alternative is to use the readline module to simulate the command line inside of your script. That way, you can open the connection, use readline to ask for any number of commands to send to the server. Plus you need an "exit" command which terminates the command line process (which also closes the server connection).
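A bare-bones sketch of that interactive-loop option using the stdlib cmd module (which uses readline when available); the socket calls are stubbed out, and the command names are illustrative:

```python
import cmd

class Client(cmd.Cmd):
    # One process would open the server connection once, then accept
    # commands interactively until "exit" closes the connection and quits.
    prompt = "myapp> "

    def do_send(self, line):
        print("sending:", line)  # stub for a send over the open socket

    def do_exit(self, line):
        return True  # returning True ends cmdloop()

# Client().cmdloop() would start the interactive session; here one command
# is driven programmatically just to show the shape.
done = Client().onecmd("exit")
```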
| 0
| 1
| 0
| 0
|
2014-09-19T14:02:00.000
| 1
| 0
| false
| 25,935,799
| 1
| 0
| 0
| 1
|
I'm new to Python. I'm trying to write an application with a command-line interface. The main application communicates with a server over TCP. I want it to run in the background so I won't have to connect to the server every time I use the interface. What is the proper approach to such a problem?
I don't want the interface to be an infinite loop. I would like to use it like this:
my_app.py command arguments.
Please note that I have no problems with writing interface (I'm using argparse library right now) but don't know what architecture would suit me best and how to implement it in python.
|
Redhat Python 2.7.6 installation on virtualenv
| 25,943,276
| 0
| 0
| 1,684
| 0
|
python,linux,installation,virtualenv,redhat
|
With Linux you don't need to worry about where to install files; the OS takes care of that for you. Google "CentOS yum" and read the yum docs on how to install everything. You probably already have Python 2.7 installed; to check, just open a terminal and type python. This will start the Python interpreter and display the version. The next step is to see if pip and virtualenv are installed: simply type each command at the prompt (exit python first). If you get something to the effect of "command not found", then you need to install them. Install pip with yum and virtualenv with pip. Once everything is installed, you just need to make your virtual environment, e.g. virtualenv name_of_directory; if the directory doesn't exist, virtualenv will create it. And now you're done.
| 0
| 1
| 0
| 0
|
2014-09-19T22:18:00.000
| 1
| 1.2
| true
| 25,943,156
| 1
| 0
| 0
| 1
|
In what order should I install things? My goal is to have Python 2.7.6 running in a virtualenv for a project at work. I am working in a VirtualBox machine running CentOS 6.5.
What folders should I be operating in to install things? I have never used Linux before today, and was just kind of thrust into this task of getting a program running that requires Python 2.7.6 and a bunch of packages for it. Thanks in advance if you can give me command-line entries. I have created about 3 VirtualBox machines and deleted them because I installed things in the wrong order. Please let me know how things should be installed, with command-line entries if possible.
|
How do I push a subprocess.call() output to terminal and file?
| 25,974,983
| 0
| 1
| 1,563
| 0
|
python,linux,subprocess,dd
|
The solution that worked for me was subprocess.call(["ddrescue $0 $1 | tee -a drclog", in_file_path, out_file_path], shell=True).
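An alternative sketch that avoids shell=True entirely: pipe the child's stderr through Python and "tee" each line to both the terminal and a log file. This is an assumption about what the tee pipeline is doing, not the accepted answer's exact mechanism:

```python
import subprocess
import sys

def run_and_tee(argv, logfile_path):
    """Run a command, echoing its stderr to the terminal while also
    appending it to a log file (a pure-Python 'tee')."""
    with open(logfile_path, "a") as log:
        proc = subprocess.Popen(argv, stderr=subprocess.PIPE,
                                universal_newlines=True)
        for line in proc.stderr:      # yields lines as the child emits them
            sys.stdout.write(line)
            log.write(line)
        proc.stderr.close()
        return proc.wait()

# hypothetical usage, mirroring the ddrescue call in the question:
# run_and_tee(["ddrescue", in_file_path, out_file_path], "drclog")
```

Because the argument list is passed without a shell, file paths containing spaces or shell metacharacters need no quoting.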
| 0
| 1
| 0
| 0
|
2014-09-21T19:35:00.000
| 2
| 0
| false
| 25,963,074
| 0
| 0
| 0
| 1
|
I have subprocess.call(["ddrescue", in_file_path, out_file_path], stdout=drclog). I'd like this to display the ddrescue output in the terminal as it's running and also write the output to the file drclog. I've tried using subprocess.call(["ddrescue", in_file_path, out_file_path], stdout=drclog, shell=True), but that gives me an input error into ddrescue.
|
"Text file busy" error for the mapper in a Hadoop streaming job execution
| 26,011,654
| 0
| 0
| 317
| 0
|
python,hadoop,mapreduce,streaming
|
Can you please try stopping all the daemons using 'stop-all' first and then rerun your MR job after restarting the daemons (using 'start-all')?
Let's see if it helps!
| 0
| 1
| 0
| 0
|
2014-09-21T20:21:00.000
| 1
| 1.2
| true
| 25,963,463
| 0
| 1
| 0
| 1
|
I have an application that creates text files with one line each and dumps it to hdfs.
This location is in turn being used as the input directory for a hadoop streaming job.
The expectation is that the number of mappers will be equal to the "input file split" count, which equals the number of files in my case. Somehow, not all of the mappers are being triggered, and I see a weird issue in the streaming output dump:
Caused by: java.io.IOException: Cannot run program "/mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411140750872_0001/container_1411140750872_0001_01_000336/./CODE/python_mapper_unix.py": error=26, Text file busy
"python_mapper.py" is my mapper file.
Environment Details:
A 40 node aws r3.xlarge AWS EMR cluster [No other job runs on this cluster]
When this streaming jar is running, no other job is running on the cluster, hence none of the external processes should be trying to open the "python_mapper.py" file
Here is the streaming jar command:
ssh -o StrictHostKeyChecking=no -i hadoop@ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar -files CODE -file CODE/congfiguration.conf -mapper CODE/python_mapper.py -input /user/hadoop/launchidlworker/input/1 -output /user/hadoop/launchidlworker/output/out1 -numReduceTasks 0
|
script timed out before returning headers openshift
| 26,024,110
| 2
| 1
| 684
| 0
|
python,openshift,wsgi,timed
|
The solution was to remove the cartridge and install Python 2.6.
| 0
| 1
| 0
| 0
|
2014-09-22T03:09:00.000
| 3
| 1.2
| true
| 25,966,195
| 0
| 0
| 0
| 2
|
I have a Python application (webservice) hosted on OpenShift but, a few days ago, the app stopped working. The log points to "[error] script timed out before returning headers" and I can't solve this.
Someone can help me?
|
script timed out before returning headers openshift
| 25,966,254
| 0
| 1
| 684
| 0
|
python,openshift,wsgi,timed
|
Please log in to your openshift account and check whether your application and cartridges are up and running.
| 0
| 1
| 0
| 0
|
2014-09-22T03:09:00.000
| 3
| 0
| false
| 25,966,195
| 0
| 0
| 0
| 2
|
I have a Python application (webservice) hosted on OpenShift but, a few days ago, the app stopped working. The log points to "[error] script timed out before returning headers" and I can't solve this.
Someone can help me?
|
dev_appserver.py doesn't load appengine_config.py
| 25,990,536
| 3
| 5
| 698
| 0
|
python,google-app-engine
|
I had the same problem before. Solved by changing the loading method in app.yaml to wsgi, for example, from:
script: my_app/main.py
To:
script: my_app.main.application
Let me know if it works for you.
| 0
| 1
| 0
| 0
|
2014-09-23T08:06:00.000
| 1
| 1.2
| true
| 25,990,036
| 0
| 0
| 1
| 1
|
I have an App Engine app running locally using dev_appserver.py. In the app directory I have the standard appengine_config.py that is supposed to execute on every request made to the app. In the past it used to execute the module, but suddenly it stopped doing it.
Another app running on the same machine works fine.
I checked with Process Monitor to see if the file is loaded from another location, but it's not (I can see the other app's file being loaded).
Any ideas why appengine_config.py is not executed?
|
Encrypted and secure docker containers
| 26,134,653
| 6
| 58
| 34,645
| 0
|
python,security,encryption,docker
|
Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why aren't you using a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it isolated, encrypted filesystems and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option.
| 0
| 1
| 0
| 0
|
2014-09-24T00:35:00.000
| 6
| 1
| false
| 26,006,727
| 0
| 0
| 0
| 1
|
We all know situations when you cannot go open source and freely distribute software - and I am in one of these situations.
I have an app that consists of a number of binaries (compiled from C sources) and python code that wraps it all into a system. This app used to work as a cloud solution so users had access to app functions via network but no chance to touch the actual server where binaries and code are stored.
Now we want to deliver the "local" version of our system. The app will be running on PCs that our users will physically own. We know that everything could be broken, but at least want to protect the app from possible copying and reverse-engineering as much as possible.
I know that docker is a wonderful deployment tool so I wonder: it is possible to create encrypted docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on docker?
|
Hard times with IDLE
| 26,088,770
| 0
| 0
| 309
| 0
|
python,python-idle,python-2.5
|
The fact that Windows changed the right-click context menu for .py files has nothing to do with IDLE, and probably nothing to do with Python either. You are not the first to have this problem. You can potentially restore 'Edit with IDLE', but without directly editing the registry (an expert option) I only know how to do so on XP. You might also be able to fix it by going back to a restore point before it changed, but you would lose all updates since then, so I would not do that.
I am surprised that re-installing did not restore it. The line was once gone for me, too, and was restored by a recent install.
I have Win7. I just now tried 'Open with', navigated to 3.4 idlelib, and selected idle.bat (the .py files were not offered as a choice). The .py file opened in an Idle editor just fine. It is now a permanent option for Open with, without having to navigate.
Idle has gotten perhaps 150 patches since 2.5. Even if you have to edit programs to run on 2.5, I strongly recommend installing a current version of Python and Idle.
I have no idea what your comment "the programs still can't find anything associated with it, like Tkinter for example" means.
| 0
| 1
| 0
| 0
|
2014-09-24T17:38:00.000
| 2
| 0
| false
| 26,023,136
| 1
| 0
| 0
| 2
|
So I've been working with Python on my computer for about the last 2 months with no issues. Just recently however, something went wrong with IDLE. I am running python 2.5
I used to be able to right-click and select "Edit with IDLE" for a python program. That option no longer is available. When I try "open with" and navigate to the idlelib in python, I can select idle.bat, idle.py, or idle.py (no console). I've tried each option and each fails to open and returns an error that either it is not a valid Win32 application or that "Windows cannot find idle.pyw"
I am able to open IDLE on its own and use the open function in IDLE to open files, but can't open files directly using IDLE as I could before.
There was formerly the White background icon with the python logo, which is now replace by windows' logo for no program (white square, blue and red dots). I have tried to repair-install and unistall-re-install both with no success. There is no firewall or antivirus, and it was installed with permissions for all users.
Any help is much appreciated, this has been maddeningly difficult to figure out.
|
Hard times with IDLE
| 26,025,133
| 0
| 0
| 309
| 0
|
python,python-idle,python-2.5
|
The native IDLE that comes with Python on Windows is problematic at times, so you could uninstall and reinstall it as a solution, open it from its directory instead of a shortcut, or get another IDE. I recommend the Ninja IDE, which looks nice and light, or if you're on Linux you could just use vim from the terminal.
Also, if it's really necessary, try upgrading your Python version and IDE. I think the IDE included for Windows looks like a modified emacs, to be honest.
| 0
| 1
| 0
| 0
|
2014-09-24T17:38:00.000
| 2
| 0
| false
| 26,023,136
| 1
| 0
| 0
| 2
|
So I've been working with Python on my computer for about the last 2 months with no issues. Just recently however, something went wrong with IDLE. I am running python 2.5
I used to be able to right-click and select "Edit with IDLE" for a python program. That option no longer is available. When I try "open with" and navigate to the idlelib in python, I can select idle.bat, idle.py, or idle.py (no console). I've tried each option and each fails to open and returns an error that either it is not a valid Win32 application or that "Windows cannot find idle.pyw"
I am able to open IDLE on its own and use the open function in IDLE to open files, but can't open files directly using IDLE as I could before.
There was formerly the White background icon with the python logo, which is now replace by windows' logo for no program (white square, blue and red dots). I have tried to repair-install and unistall-re-install both with no success. There is no firewall or antivirus, and it was installed with permissions for all users.
Any help is much appreciated, this has been maddeningly difficult to figure out.
|
how to change file extensions in windows 8? i tried and it will not recognize the change
| 26,028,450
| 1
| 1
| 5,230
| 0
|
python,windows-8
|
I created a file called myfile.txt. It showed up in Explorer as myfile.txt. If you just see myfile (no extension), then go to Folder Options, Advanced, and uncheck "Hide extensions for known file types".
I right-clicked myfile.txt, selected Rename, and Windows selected just "myfile", not ".txt". I changed the selection to "txt", overwrote it with "py" and hit Enter. Windows popped up a message warning that I was changing the extension. I clicked OK and the file was renamed.
An alternate approach is to open a command prompt, cd to the directory and use "move" to change the name.
Yet another option, if you are doing this from a text editor, click "Save As", change the save dialog's "Save As Type" drop-down to All Files, change the name to .py and hit OK. You end up with a .txt and a .py.
| 0
| 1
| 0
| 0
|
2014-09-24T23:41:00.000
| 1
| 0.197375
| false
| 26,028,253
| 1
| 0
| 0
| 1
|
How do I change file extensions in Windows 8? I tried and my system will not recognize the change.
I tried changing from .txt to .py for Python so I can use the file in IDLE.
|
Python Homebrew Error with Pip
| 50,116,996
| 0
| 0
| 140
| 0
|
python,pip,homebrew
|
A Linux command must be in a bin directory, so you have no problem. pip should be stored in /usr/local/bin/pip, not in /usr/local/share/python/pip.
| 0
| 1
| 0
| 0
|
2014-09-25T23:20:00.000
| 2
| 0
| false
| 26,049,674
| 1
| 0
| 0
| 2
|
I installed homebrew and
which python
gives /usr/local/bin/python. However, when I type which pip I get /usr/local/bin/pip and not the desired /usr/local/share/python/pip
How do I fix this?
|
Python Homebrew Error with Pip
| 26,049,920
| 0
| 0
| 140
| 0
|
python,pip,homebrew
|
For my python 2.7.6 installation:
My pip is installed to: /usr/local/bin/pip
What makes you think it should be installed to /usr/local/share/python/pip?
Are you seeing a problem when you try and use pip install (whatever)?
| 0
| 1
| 0
| 0
|
2014-09-25T23:20:00.000
| 2
| 0
| false
| 26,049,674
| 1
| 0
| 0
| 2
|
I installed homebrew and
which python
gives /usr/local/bin/python. However, when I type which pip I get /usr/local/bin/pip and not the desired /usr/local/share/python/pip
How do I fix this?
|
Running Python GUI apps on C9.io
| 26,051,264
| 1
| 0
| 686
| 0
|
python-2.7,wxpython
|
I don't know if Cloud9 supports it but normally to run a remote GUI application you would have ssh forward the X11 communication over the ssh connection via a tunnel. So basically the application is running on the remote system and it is communicating with a local X11 server which provides you with the display and handling of the mouse and keyboard.
If you run ssh with the -X parameter then it will attempt to set up the X11 tunnel and set $DISPLAY in the remote shell, so any GUI applications you run there will know how to connect to the X11 tunnel. Be aware, however, that this is something that can be turned off on the remote end, so ultimately it is up to Cloud9 whether they will allow you to do this.
| 1
| 1
| 0
| 0
|
2014-09-26T00:53:00.000
| 1
| 0.197375
| false
| 26,050,414
| 0
| 0
| 0
| 1
|
Does anyone know if it is possible to run python-gui apps, like wxPython, on a c9.io remote server? I have my home server set up with c9 via SSH, and no issues logging in and running apps in the terminal on the VM. However, when I try to run GUI apps, I get the following error message.
Unable to access the X Display, is $DISPLAY set properly?
After searching and searching, I can't seem to find a guide or anything in the docs that detail how to set $DISPLAY in the script. X display is installed and active on my server, but I don't know how to configure the c9 script to access it properly. Any assistance would be appreciated!
|
Return value from Python script to Jenkins variable
| 26,064,841
| 0
| 3
| 6,670
| 0
|
python,jenkins
|
There's no easy way I know of to store the "variable" in jenkins. Your best bet is to use some other place to store this MAC address. A file would be a good place, but it probably needs to be on a shared fileserver somewhere.
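A minimal sketch of the file-based handoff (the file name is an assumption; in practice it would live somewhere both build steps, or a shared fileserver, can reach):

```python
def save_mac(mac, path):
    """Checkout step: persist the MAC address where a later build
    step (or another machine, via a shared filesystem) can read it."""
    with open(path, "w") as f:
        f.write(mac.strip() + "\n")

def load_mac(path):
    """Later step: read the MAC address back into a variable."""
    with open(path) as f:
        return f.read().strip()

# checkoutDevice.py would call save_mac(...), and the follow-up
# script would call load_mac(...) on the same path.
```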
| 0
| 1
| 0
| 0
|
2014-09-26T01:24:00.000
| 2
| 0
| false
| 26,050,645
| 0
| 0
| 0
| 1
|
I have a Jenkins job which executes a Python script (checkoutDevice.py) via shell.
The checkoutDevice script connects to an inventory server and checks out an available unit; the unit's MAC address is then available to return to the Jenkins job.
I would like to return unit's MAC address from Python script to Jenkins job, so Jenkins job can pass that MAC address to another Python script.
a. How would I store the unit's MAC address in a Jenkins environment variable so I can pass it to another Python script in the same job?
b. Another solution I am looking at is to write the MAC address to a text file during execution of the checkoutDevice script, then have Jenkins read that MAC address from the text file into a variable and pass it to another Python script.
|
Get free space of a directory in linux
| 39,438,068
| 0
| 2
| 854
| 0
|
c#,python,linux
|
For Linux you will find Statvfs in Mono.Unix.Native, in the Mono.Posix assembly.
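On the Python side, the os.statvfs approach the question mentions boils down to a few lines; the same f_bavail/f_frsize fields are what Mono's Statvfs exposes to C#:

```python
import os

def free_bytes(folder):
    # statvfs is POSIX-only; f_bavail counts blocks available to
    # unprivileged users, f_frsize is the fundamental block size
    st = os.statvfs(folder)
    return st.f_bavail * st.f_frsize

# e.g. free_bytes("/") -> free space, in bytes, on the root filesystem
```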
| 0
| 1
| 0
| 0
|
2014-09-26T08:04:00.000
| 1
| 0
| false
| 26,054,851
| 0
| 0
| 0
| 1
|
I have a question. In my C# application I need to get the free space of a directory. According to my research, GetDiskFreeSpaceEx is the proper call, and it works for me on Windows XP. Now I'm wondering if it works the same on a Linux system. I wrote my C# program following a Python one, and for this function the developer of the Python one handled 2 cases: on Windows, "ctypes.windll.kernel32.GetDiskFreeSpaceExW" is used, while otherwise "os.statvfs(folder)" is used.
I did some more research but haven't found anything saying if GetdiskfreespaceEx could be used for linux. Anyone could tell me that? If no, any ways to get free disk space for linux in c#? Thanks in advance!
|
Does using Python on OS X expose me to Shellshock?
| 26,074,861
| 0
| 1
| 61
| 0
|
security,python-2.7,osx-mavericks,shellshock-bash-bug
|
Using python does not increase your risk of being exposed to shellshock.
The best way to protect your computer is to make sure your computer has up-to-date patches and updates installed.
Microsoft, Apple and various Linux distributions have all released updates/patches which are supposed to fix the problem.
| 0
| 1
| 0
| 1
|
2014-09-26T11:50:00.000
| 1
| 0
| false
| 26,058,888
| 0
| 0
| 0
| 1
|
Various outlets, along with Apple, are assuring OS X users that they are not at particular risk from the Shellshock bash exploit. However, I use Python frequently on my system and wonder if that would increase my risk; and whether there is anything I can do to mitigate it (short of installing a different bash).
I use Apple's (2.7.5) Python and bash, on OS X 10.9.5.
|
How can I eliminate the need of the command 'python' when running a python file in terminal?
| 26,069,546
| 0
| 0
| 85
| 0
|
python,bash,shell,python-2.7,command
|
sharkbyte,
It's easy to insert '#!/usr/bin/env python' at the top of all your Python files. Just run this sed command in the directory where your Python files live:
sed -i '1 i\#! /usr/bin/env python\n' *.py
The -i option tells sed to do an in-place edit of the files, the 1 means operate only on line 1, and the i\ is the insert command. I put a \n at the end of the insertion text to make the modified file look nicer. :)
If you're paranoid of stuffing up, copy some files to an empty directory first & do a test run.
Also, you can tell sed to make a backup of each original file, eg:
sed -i.bak '1 i\#! /usr/bin/env python\n' *.py
for MS style, or
sed -i~ '1 i\#! /usr/bin/env python\n' *.py
for the usual Linux style.
| 0
| 1
| 0
| 1
|
2014-09-26T17:28:00.000
| 2
| 0
| false
| 26,065,129
| 0
| 0
| 0
| 1
|
I would like to run Python files from the terminal without having to use the command $ python, but I would still like to keep the ability of using $ python to enter the Python interpreter. For example, if I had a file named 'foo.py', I could use $ foo.py rather than $ python foo.py to run the file.
How can I do this? Would I need to change the bash file or the paths? And is it possible to have both commands available, so I can use both $ foo.py and $ python foo.py?
I am using ubuntu 14.04 LTS and my terminal/shell uses a '.bashrc' file. I have multiple versions of python installed on my computer, but when running a python file I want the default version to be the latest version of 2.7.x. If what I am asking is not possible or not recommended, I want to at least shorten the command $ python to $ py.
Thank you very much for any help!
|
centos 6.5 python 2.7 can not find pygtk that python 2.6 sees fine
| 26,068,565
| 1
| 1
| 1,310
| 0
|
python,python-2.7,pygtk,centos6.5,tryton
|
Unfortunately packages that are installed with one minor version of Python are not able to be used with another minor version (as an example, version 2.7.8 is major version 2, minor version 7, micro version 8). Different micro versions are compatible with one another, so packages installed with 2.7.3 will work with 2.7.8, for example. So, while it may seem redundant, anything that you have with 2.6 you'll have to reinstall with 2.7 in order to work with it under 2.7. This is due to changes in the ABI from version to version, and other "under the hood" differences.
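A quick way to see this in action: each interpreter reports its own version and its own package directory, which is why packages installed under 2.6 are invisible to 2.7 (this is a generic illustration, not specific to pygtk):

```python
import sys
import sysconfig

# Each minor version keeps a separate site-packages directory, e.g.
# .../python2.6/site-packages vs .../python2.7/site-packages, so a
# package installed by one interpreter is not seen by the other.
print("running %d.%d" % sys.version_info[:2])
print("packages live in:", sysconfig.get_paths()["purelib"])
```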
| 0
| 1
| 0
| 0
|
2014-09-26T21:07:00.000
| 1
| 1.2
| true
| 26,068,324
| 1
| 0
| 0
| 1
|
I have installed python 2.7 beside 2.6 on a CentOS 6.5 os. The particular application I want to install needs 2.7, but it also needs pygtk (as well as other stuff). If I start an interpreter with 2.6, it imports pygtk fine. But if I start an interpreter with 2.7 it can not find what it needs [pygtk].
There are plenty of helpful posts that address installing duplicate versions of python on CentOS 6, but could someone please help me make the python 2.7 find the other stuff [pygtk]?
Why else would I want to install python 2.7 beside python 2.6 on CentOS if I didn't want to use a bunch of the standard things in both?
|
How to include file in wing-IDE
| 26,086,990
| 0
| 1
| 95
| 0
|
python,wing-ide
|
Open the file in Wing and then select Evaluate File in Python Shell from the Source menu. After that, functions/etc defined in the file can be accessed from the shell. You do need to redo that if you've edited the file.
Or you may want to use the Active Range feature that's new in Wing 5.0.9: Select a range of code, and press the + icon in the Python Shell (top right) to make it the active range. Then you can easily reevaluate that range in the Python Shell by pressing the cog icon (or bind a key to the command python-shell-evaluate-active-range).
Another option is to set a breakpoint, debug to it, and then use the Debug Probe, which is a shell that lets you interact with the runtime state in the currently selected debug stack frame.
| 0
| 1
| 0
| 0
|
2014-09-28T02:55:00.000
| 1
| 0
| false
| 26,081,164
| 1
| 0
| 0
| 1
|
I have a file called tweet.py located on my desktop which contains numerous functions that I would like to use in the wing-IDE. How do I include the file so I can use the functions in the python shell? I looked online but did not find anything to help me. Thanks guys. I'm using ubuntu 14.04, if that helps.
|
Execute script as a specific user
| 26,106,567
| 0
| 2
| 1,406
| 0
|
python,windows,python-2.7
|
Check if you are the specific user and, if not, do not run the Python script. On Unix this can be done by checking the uid; on Windows you can use wmic useraccount get name,sid to get a user's security identifier, or in a batch file use %USERNAME%.
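A portable sketch of that check in Python itself; the required account name here is a hypothetical example:

```python
import getpass

REQUIRED_USER = "fileproc"   # hypothetical account the script must run as

def running_as(required_user):
    """True when the current process belongs to `required_user`;
    getpass.getuser() works on both Windows and Unix."""
    return getpass.getuser() == required_user

# at the top of the processing script one might do:
# if not running_as(REQUIRED_USER):
#     raise SystemExit("must be run as %s" % REQUIRED_USER)
```

Note this only refuses to run as the wrong user; actually switching users on Windows requires launching the process with different credentials (e.g. via runas), which has no os.setuid equivalent.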
| 0
| 1
| 0
| 0
|
2014-09-29T18:27:00.000
| 2
| 0
| false
| 26,106,430
| 0
| 0
| 0
| 1
|
I have a python script that does some file processing and need to run as a specific user. Seems on Unix this can be done by using os.setuid. How do I do that in python on windows?
|
Changing default environment (default folder) in Canopy on Mac OSX
| 26,110,135
| 0
| 0
| 1,380
| 0
|
python,osx-mavericks,enthought,canopy
|
Quit Canopy
Delete the file ~/.canopy/locations.cfg. You can do this by opening a Terminal window and typing:
rm ~/.canopy/locations.cfg
Restart Canopy
You'll be prompted again for your environment path.
| 0
| 1
| 0
| 0
|
2014-09-29T19:49:00.000
| 2
| 1.2
| true
| 26,107,730
| 1
| 0
| 0
| 2
|
I want to change the default environment (the folder where my Canopy files are stored), which I previously set to 'Documents' folder. But now I want to change the folder. If I just delete the folders, Canopy creates them again automatically, indicating that somewhere inside its logs it has saved the default location address, and I want to change this default location address.
|
Changing default environment (default folder) in Canopy on Mac OSX
| 33,323,045
| 1
| 0
| 1,380
| 0
|
python,osx-mavericks,enthought,canopy
|
You can change the root path for the file browser by modifying the variable "root_paths" in the file ./canopy/preferences.ini under the section [file_browser]
| 0
| 1
| 0
| 0
|
2014-09-29T19:49:00.000
| 2
| 0.099668
| false
| 26,107,730
| 1
| 0
| 0
| 2
|
I want to change the default environment (the folder where my Canopy files are stored), which I previously set to 'Documents' folder. But now I want to change the folder. If I just delete the folders, Canopy creates them again automatically, indicating that somewhere inside its logs it has saved the default location address, and I want to change this default location address.
|
Removing padding from UDP packets in python (Linux)
| 26,326,206
| 2
| 0
| 1,669
| 0
|
python,sockets,udp,padding
|
This is pretty much impossible without playing around with the Linux drivers. This isn't the best answer, but it should guide anyone else looking to do this in the right direction.
Type
sudo ethtool -d eth0
to see if your driver has pad short packets enabled.
| 0
| 1
| 1
| 0
|
2014-09-30T05:33:00.000
| 1
| 1.2
| true
| 26,113,419
| 0
| 0
| 0
| 1
|
I am trying to remove null padding from UDP packets sent from a Linux computer. Currently it pads the size of the packet to 60 bytes.
I am constructing a raw socket using AF_PACKET and SOCK_RAW. I created everything from the ethernet frame header, ip header (in which I specify a packet size of less than 60) and the udp packet itself.
I send over a local network and the observed packet in wireshark has null padding.
Any advice on how to overcome this issue?
|
use dev_appserver.py for specific file in a directory
| 26,136,777
| 1
| 0
| 52
| 0
|
python,google-app-engine,cmd
|
dev_appserver doesn't "run a file" at all. It launches the GAE dev environment, and routes requests using the paths defined in app.yaml just like the production version.
If you need to route your requests to a specific Python file, you should define that in app.yaml.
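For example, a minimal app.yaml routing fragment might look like the following (the module and WSGI object names are assumptions for illustration):

```yaml
# requests are routed by these url patterns; dev_appserver never
# "picks a file" from the directory on its own
handlers:
- url: /.*
  script: main.app   # module `main`, WSGI application object `app`
```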
| 0
| 1
| 0
| 0
|
2014-10-01T07:53:00.000
| 1
| 1.2
| true
| 26,136,643
| 0
| 0
| 1
| 1
|
In order to start Google App Engine through the cmd I use dev_appserver.py --port=8080, but I can't figure out how it picks a file to run from the current directory.
My question is: is there an argument specifying which file, of all the possible files in the directory, the server should run?
|
WLST - force stop application
| 26,177,888
| 1
| 1
| 3,907
| 0
|
java,python,weblogic,weblogic-10.x,wlst
|
Try again without the options. Just the stopApplication(appName).
This is what the admin console does, kills all existing sessions and drags it to prepared state. You are trying to stop it gradually and hence the delay.
You had mentioned, "when one of this application fail to deploy for X/Y reason, I just want to force stop this application and to pass to the other one."
If an application fails to deploy, you should not have to stop it. If the app runs, it's successful, correct?
| 0
| 1
| 0
| 0
|
2014-10-01T08:06:00.000
| 1
| 1.2
| true
| 26,136,844
| 0
| 0
| 1
| 1
|
I am currently working with WebLogic, and I deploy several applications on my WebLogic server. Sadly, when one of these applications fails to deploy for some reason, I just want to force stop it and move on to the next one.
I've already looked into the WLST docs and couldn't find what I am searching for.
Here is the function I use :
stopApplication(applicationName, gracefulProductionToAdmin="true", gracefulIgnoreSessions="true")
It takes about 5 minutes to stop application this way. When I stop application through Administration Console (force stop actually) it takes about 5 seconds to stop application.
So is there any way to force stop application through WLST script?
Thanks
|
Cannot mix incompatible Qt library (version 0x40801) with this library (version 0x40805)
| 26,186,264
| 0
| 4
| 3,076
| 0
|
python,qt,anaconda
|
Qt aggressively tries to find other Qt installations on the system. It is likely finding the one installed by your Linux distribution. You probably can't remove that one, as several programs that ship with the distribution likely use Qt, but is it possible to update it to the same version?
| 1
| 1
| 0
| 0
|
2014-10-02T06:26:00.000
| 2
| 0
| false
| 26,155,448
| 0
| 0
| 0
| 1
|
I have installed Anaconda 2.0.1 on a KDE desktop. When I run python and try to view all the installed modules, I get the message "Cannot mix incompatible Qt library (version 0x40801) with this library (version 0x40805)".
Can I fix the problem if I uninstall the Qt library (version 0x40801)?
How do I do that?
Or if someone has another suggestion, please help me.
Thanks very much
|
How do I duplicate an appengine app
| 26,194,692
| 1
| 0
| 59
| 0
|
python,google-app-engine
|
You could create a new application, use Datastore Admin to copy your entities to the new application's Datastore, then re-deploy your application. Is there anything else that needs duplicating?
| 0
| 1
| 0
| 0
|
2014-10-04T00:47:00.000
| 2
| 0.099668
| false
| 26,188,548
| 0
| 0
| 1
| 2
|
I want to backup my python app and restore it to a different app on Appengine. In the Application Setting Page, under Duplicate Applications, I add a new application identifier.
When I click the Duplicate Application button, I get this error: "The developer does not own the app id being forked".
Further research indicates that this seems to be a bug, but that a workaround is to send an email invitation to the other email addresses in my Google account to add them.
I am able to send those emails from the Permissions screen by clicking a button and inserting the email address. When I click the link in the email that is sent, it opens My Applications, listing all my apps, instead of a confirmation of my response. It appears to open the wrong page.
In the Permissions page, the email address still shows Pending after about 10 hours.
Is there a simple way to duplicate an application?
|
How do I duplicate an appengine app
| 26,194,002
| 1
| 0
| 59
| 0
|
python,google-app-engine
|
Do you have more than one google account? I have found that app engine does unexpected things when you are logged into more than one google account at a time.
I suggest logging into only a single google account and trying the operations again.
| 0
| 1
| 0
| 0
|
2014-10-04T00:47:00.000
| 2
| 0.099668
| false
| 26,188,548
| 0
| 0
| 1
| 2
|
I want to backup my python app and restore it to a different app on Appengine. In the Application Setting Page, under Duplicate Applications, I add a new application identifier.
When I click the Duplicate Application button, I get this error: "The developer does not own the app id being forked".
Further research indicates that this seems to be a bug, but that a workaround is to send an email invitation to the other email addresses in my Google account to add them.
I am able to send those emails from the Permissions screen by clicking a button and inserting the email address. When I click the link in the email that is sent, it opens My Applications, listing all my apps, instead of a confirmation of my response. It appears to open the wrong page.
In the Permissions page, the email address still shows Pending after about 10 hours.
Is there a simple way to duplicate an application?
|
How to create Celery custom logging
| 26,271,816
| 0
| 0
| 368
| 0
|
python,celery
|
When you say you want different log files for each worker, do you mean one per worker node or one per pool worker process?
If it is for each node, that is already supported. Do celery worker --help for more info.
-f LOGFILE, --logfile=LOGFILE
Path to log file. If no logfile is specified, stderr
is used.
If you are using supervisord for running your workers, you can use stdout_logfile and stderr_logfile
If you want separate logfile for each pool worker process, can you explain why? Note that pool worker processes will keep changing if maxtasksperchild is set to limit the number of tasks executed by a process. You will need to figure out how you want to relate each worker process to a log file in this case.
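If you take the supervisord route, a hedged sketch of such a config follows (the program name, app name, and paths are illustrative, not taken from your setup; `%%h` is `%h` escaped for supervisord's own `%` expansion):

```ini
[program:worker1]
command=celery worker -A proj -n worker1@%%h --logfile=/var/log/celery/worker1.log
stdout_logfile=/var/log/celery/worker1.out.log
stderr_logfile=/var/log/celery/worker1.err.log
autorestart=true
```

One section like this per worker node gives each node its own log files.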
| 0
| 1
| 0
| 0
|
2014-10-04T10:03:00.000
| 1
| 1.2
| true
| 26,191,698
| 0
| 0
| 0
| 1
|
I have a custom python logging working. I want to build a celery custom logging, based on workers. I went through the Docs but couldn find a hint. Anyone can suggest me one such method to do so?
|
Executing/Running python script on ubuntu server
| 26,196,763
| 0
| 0
| 2,380
| 0
|
python,ubuntu,vps
|
Running a script on the server is the same as running locally.
python script.py
You may check if you have python installed first using which python (should return the python location)
If not, get it using sudo apt-get install python
after that, go to www directory and run it.
| 0
| 1
| 0
| 1
|
2014-10-04T19:06:00.000
| 1
| 1.2
| true
| 26,196,165
| 0
| 0
| 0
| 1
|
I've uploaded my .html chat file on my Ubuntu VPS, all that remains is to execute/run the python script pywebsock.py which will run the python server. I've uploaded the pywebsock.py to /bin/www and now I want to run it but I have no idea where to start.
When I run the pywebsock.py on my desktop it opens up a terminal saying "waiting connection".
This is what I've done so far to try and run it:
Downloaded Putty
Downloaded WinSCP
Installed version of Python according to .py (2.7)
Any ideas?
|
SimpleGUICS2Pygame Installing on Linux
| 26,200,815
| 0
| 0
| 88
| 0
|
python,linux,codeskulptor
|
It looks like an ordinary python package, so just:
pip install --user SimpleGUICS2Pygame
If that gives errors, post them.
| 0
| 1
| 0
| 0
|
2014-10-05T07:55:00.000
| 3
| 0
| false
| 26,200,726
| 0
| 0
| 0
| 1
|
Python.
I've exhausted my research on this topic. I have it running on Windows just fine, but I can't figure out a way to install it on Linux.
How do I install it on Linux?
|
Send a string from windows to vmware-ubuntu over socket using python
| 26,802,000
| 1
| 2
| 2,245
| 0
|
python,linux,sockets,python-2.7,vmware
|
When you use NAT, the host machine has no way to directly contact the client machine. All you can do is use port forwarding to tell VMware that all traffic directed to the designated ports on the host is to be delivered to the client. This is intended for installing a server on the client machine that can be accessed from outside the host machine.
If you want to test network operation between the host and the client, you should configure a host-only adapter on the client machine. It is a virtual network between the host and the client machine(s) (more than one client can share the same host-only network, of course with different addresses).
I generally configure 2 network adapters on my client machines :
one NAT to give the client machine an access to the open world
one host-only to have a private network between host and clients and allow them to communicate with any protocol on any port
You can also use a bridged interface on the client. In this mode, the client machine has an address on the same network as the external network of the host: it combines both previous modes.
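Whichever adapter you use, a server bound to 127.0.0.1 inside the guest is unreachable from outside by definition. A minimal Python sketch of a listener that is reachable over the host-only (or bridged) network — the function names and the default port are just illustrative:

```python
import socket

def make_listener(host="0.0.0.0", port=50000):
    # Binding to 0.0.0.0 listens on every interface (NAT, host-only,
    # bridged) instead of only the guest's own loopback.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def recv_one_message(srv):
    # Accept a single connection and return whatever bytes it sends.
    conn, _addr = srv.accept()
    try:
        return conn.recv(1024)
    finally:
        conn.close()
```

From the Windows side, `nc <guest-host-only-ip> 50000` would then reach the loop instead of the guest's loopback.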
| 0
| 1
| 1
| 0
|
2014-10-07T09:28:00.000
| 2
| 0.099668
| false
| 26,232,798
| 0
| 0
| 0
| 1
|
I am trying to send a string from the windows to the linux vmware on the same machine.
I did the following:
- opened a socket on 127.0.0.1 port 50000 on the linux machine and reading the socket in a while loop. My programming language is python 2.7
- send a command using nc ( netcat ) on 127.0.0.1 port 50000 from the windows machine ( using cygwin ).
However, I don't receive any command on the Linux machine, although the command sent from Windows/Cygwin is successful.
I am using NAT ( sharing the hosts IP address ) on the VMWARE Machine.
Where could be the problem?
|
Run a Ruby or Python script from iruby or ipython notebook?
| 33,757,780
| 0
| 3
| 435
| 0
|
python,ruby,ipython,ipython-notebook
|
require './your_program' works well for me
| 0
| 1
| 0
| 1
|
2014-10-09T00:38:00.000
| 1
| 0
| false
| 26,268,586
| 1
| 0
| 0
| 1
|
Is there a way to run a Ruby program with iruby? I want to run a script instead of entering my code in iruby notebook console.
I assume that iruby would be the same with ipython.
|
Handling SIGINT (ctrl+c) in script but not interpreter?
| 26,269,376
| 0
| 0
| 47
| 0
|
python,multithreading,multiprocessing,ipython,interpreter
|
One option is to set a variable (e.g. environment variable, commandline option) when debugging.
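As a minimal sketch of that idea (the `WORKER_DEBUG` variable name is arbitrary, chosen here just for illustration), the worker can skip installing its SIGINT handler when a debug flag is set, leaving Ctrl+C with its normal interpreter behaviour:

```python
import os
import signal

def install_cleanup_handler(cleanup, env_flag="WORKER_DEBUG"):
    """Install a SIGINT handler that runs `cleanup`, unless the given
    environment variable is set (e.g. when debugging interactively).

    Returns True if the handler was installed, False if skipped.
    """
    if os.environ.get(env_flag):
        return False  # keep the default Ctrl+C (KeyboardInterrupt)

    def handler(signum, frame):
        cleanup()

    signal.signal(signal.SIGINT, handler)
    return True
```

Export `WORKER_DEBUG=1` before launching the interpreter and Ctrl+C will no longer terminate the workers' cleanup paths during development.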
| 0
| 1
| 0
| 1
|
2014-10-09T02:27:00.000
| 1
| 0
| false
| 26,269,366
| 1
| 0
| 0
| 1
|
I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear currently written text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"?
|
how to update python 2.7 to python 3 in linux?
| 36,884,696
| 4
| 4
| 20,066
| 0
|
python
|
Python 2 and 3 can safely be installed together. They install most of their files in different locations. So if the prefix is /usr/local, you'll find the library files in /usr/local/lib/pythonX.Y/ where X.Y are the major and minor version numbers.
The main point of contention is the python executable itself, which is generally a symbolic link.
Currently it seems most operating systems still use Python 2 as the default, which means that python is a symbolic link to python2. This is also recommended in the Python documentation.
It is best to leave it like that for now. Some programs in your distribution may depend on this, and might not work with Python 3.
So install Python 3 (3.5.1 is the latest version at this time) using your favorite package manager or compiling it yourself. And then use it by starting python3 or by putting #!/usr/bin/env python3 as the first line in your Python 3 scripts and making them executable (chmod +x <file>).
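For instance, a minimal sketch of a script pinned to Python 3 this way, regardless of where the bare `python` symlink points:

```python
#!/usr/bin/env python3
# The shebang above selects python3 explicitly, so this script keeps
# working even while the system's `python` still means Python 2.
import sys

print("Running Python %d.%d" % sys.version_info[:2])
```

Save it, run `chmod +x script.py`, and invoke it as `./script.py`.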
| 0
| 1
| 0
| 0
|
2014-10-09T16:04:00.000
| 2
| 0.379949
| false
| 26,282,986
| 1
| 0
| 0
| 1
|
My OS is CentOS 7.0. Its embedded Python version is 2.7, and I want to update it to Python 3.4.
when input the
print sys.path
output is:
['', '/usr/lib/python2.7/site-packages/setuptools-5.8-py2.7.egg',
'/usr/lib64/python27.zip', '/usr/lib64/python2.7',
'/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk',
'/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7/site-packages',
'/usr/lib64/python2.7/site-packages/gtk-2.0',
'/usr/lib/python2.7/site-packages']
So, if I download the python 3.7, then ./configure , make , make install. Will it override all the python-related files ? Or if I use the
./configure --prefix=***(some path)
then is it safe to remove all the old python files or directory?
In a word, hope someone gives me instructions about how to update to python 3 on linux. Thanks a lot.
|
Running a python script several times at the same time?
| 26,297,007
| 0
| 0
| 70
| 0
|
python,python-2.7,cmd
|
It's totally possible and will work fine; each invocation gets its own independent Python process.
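One practical caveat, with a hypothetical helper: if the instances write results to disk, give each its own output file so concurrent runs don't clobber each other. The naming scheme below is just one illustrative option.

```python
import os

def output_path(basename="results"):
    # Key the output file to this process's PID so several instances
    # of the same script never contend for one file.
    return "{0}_{1}.txt".format(basename, os.getpid())
```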
| 0
| 1
| 0
| 0
|
2014-10-10T10:11:00.000
| 1
| 0
| false
| 26,296,993
| 1
| 0
| 0
| 1
|
I have got a python (2.7) script and I would like to start it in multiple CMDs at the same time. Is this possible or would the script crash?
|
odoo mod_wsgi schedulers not working
| 26,330,930
| 0
| 0
| 246
| 0
|
python,apache,deployment,openerp,mod-wsgi
|
The schedulers don't work when running through WSGI because your Odoo instances are just workers. AFAIK you run a standalone instance bound to a 127.0.0.1 port and let that instance run your scheduled tasks.
| 0
| 1
| 0
| 0
|
2014-10-10T15:18:00.000
| 1
| 0
| false
| 26,302,718
| 0
| 0
| 1
| 1
|
When I deploy openerp/odoo using mod_wsgi, I found my schedulers stop working, can any one help how can I get my cron/schedulers working. If I deploy it using mod_proxy it will solve the issue but I want to deploy using mod_wsgi.
|
Best way to display real-time data from an SSH accesible file on webpage
| 26,307,516
| 1
| 1
| 856
| 0
|
php,python,wordpress,ssh,webpage
|
Have a program monitor the file, either locally or via SSH. Have that program push updates into your web backend, via HTTP API or such.
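One sketch of the monitoring side's glue code — note the comma-separated `timestamp,temperature` line format here is an assumption about the logger's output, not something stated in the question:

```python
def parse_reading(line):
    """Split one assumed 'timestamp,celsius' log line into a
    (timestamp, float) pair ready to be pushed to the web backend."""
    timestamp, temp = line.strip().split(",")
    return timestamp, float(temp)
```

Each parsed reading could then be POSTed to the web backend's HTTP API with urllib2 or requests.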
| 0
| 1
| 0
| 1
|
2014-10-10T19:58:00.000
| 2
| 0.099668
| false
| 26,307,163
| 0
| 0
| 1
| 1
|
I have a temperature logger that measures the records temperature values at specified time intervals. Currently I push these to a google spreadsheet but would like to display the values automatically on a web-page.
I have no experience with anything to do with web-pages, except setting up a few Wordpress sites but am reasonably comfortable with C++, Python, Matlab and Java.
A complicating factor is that the machine is in a VPN, so that access it via SSH I need to join the VPN.
I suspect the best way is to have a Python script that periodically sends an up-to-date file to the web server via FTP, and then some script on the server that plots this.
My initial though was to use Python via something like CGI to read the data and create a plot on the server. However, I have no idea what the best approach on the server side would be. Is it worth to learn some PHP? Or should I write a Java Applet? Or CGI is the way to go?
Thank you for your help
|
google app engine dev_appserver.py file not found
| 39,908,883
| 1
| 2
| 1,087
| 0
|
python,google-app-engine,python-2.7
|
On my PC it is found under this directory:
C:\Users\Bob\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine
I would assume that on Apple OS it will be similar based on where you decided to install the Cloud SDK.
| 0
| 1
| 0
| 0
|
2014-10-12T11:28:00.000
| 1
| 0.197375
| false
| 26,324,636
| 0
| 0
| 1
| 1
|
I have searched through many Python issues and none seem to help me with mine, or I just don't understand them enough to resolve the issue. Basically, I am trying to learn Python, so in the process I have installed several versions on my Mac and had many version issues, which so far I have been able to resolve by looking things up. However, I am now at a part in Head First Python that needs to install Google App Engine. The book is a good bit out of date, so I installed the latest App Engine and made the symbolic links, but when I run the application the browser is greyed out. I have seen many references to dev_appserver.py in my long search to resolve this. I cannot find this file anywhere on my machine, so I presume I have an issue with my install of Python 2.7. I have re-installed, then uninstalled and re-installed, Python 2.7 over and over, but still cannot find the dev_appserver.py file. Does anyone have a concrete way to ensure dev_appserver.py will be installed? Thanks in advance from a seriously frustrated Python beginner.
|
Right path for python on OSX
| 26,328,952
| 0
| 0
| 92
| 0
|
python,macos
|
Things that are in System are there for a reason: because they're used by the system. You should not change things in there unless you know what you're doing, and even then not unless you have a very good reason. Library is the right place for software you install for your own use.
| 0
| 1
| 0
| 0
|
2014-10-12T19:01:00.000
| 1
| 1.2
| true
| 26,328,818
| 1
| 0
| 0
| 1
|
On my OSX 10.6.8 there was an old version of python installed and it was in:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I downloaded and installed a newer version from the official website and it went to:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I was just wondering. Which one is the correct path? And
Should I move this installation into /System/?
|
How can I pip freeze and get only pip --user installs, no system installs?
| 69,124,029
| 1
| 2
| 669
| 0
|
python,pip
|
It's fairly easy in recent versions of pip (the PR in the other answer is now part of pip).
pip freeze --user
This will output a list of packages currently installed to the user's site-packages.
| 0
| 1
| 0
| 0
|
2014-10-14T12:55:00.000
| 2
| 0.099668
| false
| 26,361,405
| 1
| 0
| 0
| 1
|
I have dutifully uninstalled all the Python packages I installed with sudo pip install and installed them with pip --user install instead. Yay me :)
On Ubuntu, I know I can find the relevant binaries at /home/<USERNAME>/.local/bin and the packages themselves at /home/<USERNAME>/.local/lib/python2.7/site-packages ... but navigating there is not as simple as good old pip freeze.
How can I pip freeze and get only the packages I installed with pip --user install rather than all the Python packages, including those installed via apt?
|
Windows unkillable process
| 54,080,629
| 0
| 1
| 1,170
| 0
|
python,python-3.x,windows-8,admin,exe
|
So, I know this question is like 5 years old, but you can make it practically unkillable even to the admin. Make the program run as admin, so only admins can kill it. Then make a loop that constantly kills consent.exe (consent.exe is the UAC pop-up). To kill the process you need to be an admin, but you can't become one, because you can't accept the UAC prompt. There is a catch, though: if you disable UAC via the UAC settings, you can become an admin and kill the process, which can serve as a kind of failsafe.
| 0
| 1
| 0
| 0
|
2014-10-15T04:28:00.000
| 2
| 0
| false
| 26,374,480
| 1
| 0
| 0
| 2
|
For the last several weeks, I have been making a parental controls program (just for my friend and myself), in Python (it's what I know). I used CX_Freeze to get the .exe, and it works wonderfully. Everything is great... But I need a way to make the process unkillable to standard users. (just standard users. I want admins to be able to kill this easily if need be.)
I was pursuing a method in which my .exe was turned into a windows service, thereby making it "SYSTEM" and unkillable to standard users. So far, the service cannot kill a process by using taskkill /im, and cannot create required setup .txt files.
Since that method appears to be failing, I thought I would ask if anyone knows of a way to make a process untouchable to standard users? I'm not entirely sure what professional parental controls/keyloggers/virus protection software uses to keep the user from killing the process, but perhaps something like that?
|
Windows unkillable process
| 26,374,614
| 0
| 1
| 1,170
| 0
|
python,python-3.x,windows-8,admin,exe
|
Making it a service is probably the right way to go - because it is the best way to automatically launch a process with some admin rights.
I think the reason that your service wasn't able to kill other processes was due to the account used to run the service under.
A service can run as system, local service, network service, or a specified account/password.
The idea here is to limit services to just be allowed to access what they need. A 'network service' service has very little access to local resources, while "local service" won't have network access. MSDN has all the details.
I don't remember offhand the details of exactly how you specify this during service registration, but I think it is pretty straightforward.
You will notice also that there is a checkbox for "Allow service to interact with the desktop".
Normally you don't want a service to directly control any UI because a UI is very hard to defend from attack - other processes could send messages which cause mischief in the service - potentially allowing them to hack the system.
I think that for your purposes, using a specific login for an admin account will suffice.
In "services.msc", it is simple enough to select your service, and enter a username/password for the service to run in.
| 0
| 1
| 0
| 0
|
2014-10-15T04:28:00.000
| 2
| 0
| false
| 26,374,480
| 1
| 0
| 0
| 2
|
For the last several weeks, I have been making a parental controls program (just for my friend and myself), in Python (it's what I know). I used CX_Freeze to get the .exe, and it works wonderfully. Everything is great... But I need a way to make the process unkillable to standard users. (just standard users. I want admins to be able to kill this easily if need be.)
I was pursuing a method in which my .exe was turned into a windows service, thereby making it "SYSTEM" and unkillable to standard users. So far, the service cannot kill a process by using taskkill /im, and cannot create required setup .txt files.
Since that method appears to be failing, I thought I would ask if anyone knows of a way to make a process untouchable to standard users? I'm not entirely sure what professional parental controls/keyloggers/virus protection software uses to keep the user from killing the process, but perhaps something like that?
|
The time to queue tasks in celery bottlenecks my application - how to parallelize .delay()?
| 26,732,448
| 0
| 0
| 349
| 0
|
python,parallel-processing,rabbitmq,celery,django-celery
|
If queuing up your tasks takes longer than the tasks themselves, increase the scope of each task so it operates on N files at a time. Instead of queuing up 1000 tasks for 1000 files, queue up 10 tasks that operate on 100 files each.
Make your task take a list of files rather than a single file as input. Then, when you loop through your list of files, you can step through it 100 at a time.
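A sketch of the chunking: only the helper below is concrete; `process_files` in the commented usage is a stand-in name for your actual Celery task.

```python
def chunked(items, size=100):
    """Yield consecutive chunks of at most `size` items, so that a
    single task invocation handles many files at once."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage with a Celery task named process_files:
# for chunk in chunked(s3_file_list, 100):
#     process_files.delay(chunk)   # 10 queuing calls instead of 1000
```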
| 0
| 1
| 0
| 0
|
2014-10-15T23:00:00.000
| 1
| 0
| false
| 26,393,540
| 0
| 0
| 1
| 1
|
I'm having a major problem in my celery + rabbitmq app where queuing up my jobs is taking longer than the time for my workers to perform jobs. No matter how many machines I spin up, my queuing time will always overtake my task time.
This is because I have one celery_client script on one machine doing all the queuing (calling task.delay()) sequentially. I am iterating through a list of files stored in S3. How can I parallelize the queuing process? I imagine this is a widespread basic problem, yet I cannot find a solution.
EDIT: to clarify, I am calling task.delay() inside a for loop that iterates through a list of S3 files (of which there are a huge amount of small files). I need to get the result back to me so I can return it to the client, so for this reason I iterate through a list of results after the above to see if the result is completed -- if it is, I append it to a result file.
Some solutions I can think of immediately is some kind of multi threaded support in my for loop, but I am not sure whether .delay() would work with this. Is there no built in celery support for this problem?
EDIT2 More details: I am using one queue in my celeryconfig -- my tasks are all the same.
EDIT3: I came across "chunking", where you can group a lot of small tasks into one big one. Not sure if this can help out my problem, as although I can transform a large number of small tasks into a small number of big ones, my for loop is still sequential. I could not find much information in the docs.
|
How to use python virtual environment in another computer
| 26,399,797
| 22
| 15
| 10,942
| 0
|
python,virtualenv
|
You should not. The other computer can have a different operating system, other packages or package versions installed, so copying the files will not work.
The point of a virtual environment is to be able to replicate it everywhere you need it.
Make a script which installs all necessary dependencies from a requirements.txt file and use it.
Use pip freeze > requirements.txt to get the list of all python packages installed. Then install the dependencies in another virtual environment on another computer using pip install -r requirements.txt.
If you want the exact environment, including system packages, on another computer, use Docker.
| 0
| 1
| 0
| 0
|
2014-10-16T08:36:00.000
| 3
| 1.2
| true
| 26,399,754
| 1
| 0
| 0
| 1
|
I have created a virtual environment by using virtualenv pyenv on my Linux system. Now I want to use the virtual environment on another computer. Can I directly copy the virtual environment and use it on another computer? Or do I need to do something to set it up?
|
How can you message a background process from mod_python?
| 26,405,300
| 1
| 0
| 76
| 0
|
python,background-process,interprocess,mod-python
|
You can use a task queue such as Celery with a Redis or RabbitMQ broker.
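For the queue-up-to-a-hundred-and-drop behaviour the question describes, here is a minimal sketch. Note it uses an in-process worker thread rather than the single separate daemon asked about; a true separate process would follow the same pattern over a socket or multiprocessing.Queue. All names here are illustrative.

```python
import threading
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

def start_log_worker(write_entry, maxsize=100):
    """Start a background thread draining log requests.

    `write_entry` is whatever function inserts one row into the
    database. Returns an enqueue() that never blocks the web request:
    it drops the entry when the queue already holds `maxsize` items.
    """
    q = queue.Queue(maxsize=maxsize)

    def worker():
        while True:
            entry = q.get()
            if entry is None:       # sentinel: shut down
                break
            write_entry(entry)
            q.task_done()

    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

    def enqueue(entry):
        try:
            q.put_nowait(entry)
            return True
        except queue.Full:
            return False            # drop instead of blocking

    return enqueue, q
```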
| 0
| 1
| 0
| 1
|
2014-10-16T13:02:00.000
| 1
| 0.197375
| false
| 26,405,171
| 0
| 0
| 0
| 1
|
We're running a Linux server running Apache2 with mod_python. One mod_python script inserts an entry into a database logging table. The logging table is large and can be a point of disk-write contention, or it can be temporarily unavailable during database maintenance. We would like to spin off the logging as an asynchronous background task, so that the user request can complete before the logging is done.
Ideally there would be a single background process. The web handler would pass its log request to the background process. The background process would write log entries to the database. The background process would queue up to a hundred requests and drop requests when its queue is full.
Which techniques are available in Python to facilitate communicating with a background process?
|
Live reading / writing to a subprocess stdin/stdout
| 26,413,875
| 1
| 2
| 439
| 0
|
python,io,subprocess
|
I think you'll be fine (carefully) ignoring the warnings and using Popen.stdin etc. yourself. Just be sure to process the streams line by line and iterate through them on a fair schedule so as not to fill up any buffers. A relatively simple (and inefficient) way of doing this in Python is using separate threads for the three streams. That's how Popen.communicate does it internally. Check out its source code to see how.
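A small Python 3 sketch of the threaded approach (the function and its filter are illustrative, not a complete wrapper): a writer thread feeds the child's stdin while the main thread reads its stdout, so neither pipe buffer can fill up and deadlock.

```python
import subprocess
import threading

def run_filtered(cmd, lines, transform):
    """Feed `lines` to the child's stdin from a writer thread while the
    main thread reads and transforms its stdout lines."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            universal_newlines=True)

    def feed():
        for line in lines:
            proc.stdin.write(line)
        proc.stdin.close()          # EOF tells the child we're done

    writer = threading.Thread(target=feed)
    writer.start()
    out = [transform(line) for line in proc.stdout]
    writer.join()
    proc.wait()
    return out
```

A full wrapper would add a third thread for stderr, exactly as Popen.communicate does.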
| 0
| 1
| 0
| 0
|
2014-10-16T20:43:00.000
| 2
| 1.2
| true
| 26,413,572
| 1
| 0
| 0
| 2
|
I want to make a Python wrapper for another command-line program.
I want to read Python's stdin as quickly as possible, filter and translate it, and then write it promptly to the child program's stdin.
At the same time, I want to be reading as quickly as possible from the child program's stdout and, after a bit of massaging, writing it promptly to Python's stdout.
The Python subprocess module is full of warnings to use communicate() to avoid deadlocks. However, communicate() doesn't give me access to the child program's stdout until the child has terminated.
|
Live reading / writing to a subprocess stdin/stdout
| 26,413,852
| 1
| 2
| 439
| 0
|
python,io,subprocess
|
Disclaimer: This solution likely requires that you have access to the source code of the process you are trying to call, but may be worth trying anyways. It depends on the called process periodically flushing its stdout buffer which is not standard.
Say you have a process proc created by subprocess.Popen. proc has attributes stdin and stdout. These attributes are simply file-like objects. So, in order to send information through stdin you would call proc.stdin.write(). To retrieve information from proc.stdout you would call proc.stdout.readline() to read an individual line.
A couple of caveats:
When writing to proc.stdin via write() you will need to end the input with a newline character. Without a newline character, your subprocess will hang until a newline is passed.
In order to read information from proc.stdout you will need to make sure that the command called by subprocess appropriately flushes its stdout buffer after each print statement and that each line ends with a newline. If the stdout buffer does not flush at appropriate times, your call to proc.stdout.readline() will hang.
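A Python 3 sketch of that write/flush/readline cycle. The child program below is a stand-in that just echoes each line back after flushing (`-u` forces unbuffered output); a real target process would need to flush similarly.

```python
import subprocess
import sys

# Stand-in child: echoes 'seen <line>' for each input line.
CHILD = (
    "while True:\n"
    "    try:\n"
    "        line = input()\n"
    "    except EOFError:\n"
    "        break\n"
    "    print('seen ' + line)\n"
)

def start_child():
    return subprocess.Popen([sys.executable, "-u", "-c", CHILD],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            universal_newlines=True)

def ask(proc, text):
    proc.stdin.write(text + "\n")   # the newline terminates the input
    proc.stdin.flush()              # push it past our own buffer
    return proc.stdout.readline()   # blocks until the child answers
```

Without the newline (first caveat) or the child-side flush (second caveat), the readline() call would hang.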
| 0
| 1
| 0
| 0
|
2014-10-16T20:43:00.000
| 2
| 0.099668
| false
| 26,413,572
| 1
| 0
| 0
| 2
|
I want to make a Python wrapper for another command-line program.
I want to read Python's stdin as quickly as possible, filter and translate it, and then write it promptly to the child program's stdin.
At the same time, I want to be reading as quickly as possible from the child program's stdout and, after a bit of massaging, writing it promptly to Python's stdout.
The Python subprocess module is full of warnings to use communicate() to avoid deadlocks. However, communicate() doesn't give me access to the child program's stdout until the child has terminated.
|
Executable read permissions for "Program Files (x86)"
| 26,456,745
| 0
| 0
| 1,108
| 0
|
python,windows,permissions,file-permissions
|
If I understand correctly, you want your program to both run and edit files in its current folder. Programs that users invoke run with the user's credentials by default.
If you want to prevent users from editing those application config files there are a few tricks:
Wrap your app in a DOS batch file. In that batch file use "runas" to start your app using a different account that has both execute and write permissions to those configs. Ensure the invoking user does not have write permissions. That should solve your problem.
Instead of flat text files for configs, how about using SQLite? Or encrypt the file. Either way, the result is the same: the user may be able to open the file but won't know what they are looking at using a typical text editor.
| 0
| 1
| 0
| 1
|
2014-10-19T23:29:00.000
| 1
| 0
| false
| 26,456,543
| 0
| 0
| 0
| 1
|
I made a python program, and froze it to make an executable. The only problem I can see is that it cannot read/write the contents of several support files. I know that this is a permission error because the Program Files (x86) folder is protected. I would prefer to keep my supporting files in the same folder as my executable, so that the users cannot alter them, and so my python program can look locally for them.
I have tried changing the permissions, but I'm not sure which one controls whether my executable can read/write to the local folder.
|
Send Parameter to a Python Script Running at Background From PHP
| 26,473,069
| 1
| 0
| 1,520
| 0
|
php,python,python-2.7,background-process,python-daemon
|
The obvious answer here is to either:
Run analyze.py once per filename, instead of running it as a daemon.
Pass analyze.py a whole slew of filenames at startup, instead of passing them one at a time.
But there may be a reason neither obvious answer will work in your case. If so, then you need some form of inter-process communication. There are a few alternatives:
Use the Python script's standard input to pass it data, by writing to it from the (PHP) parent process. (I'm not sure how to do this from PHP, or even if it's possible, but it's pretty simple from Python, sh, and many other languages, so …)
Open a TCP socket, Unix socket, named pipe, anonymous pipe, etc., giving one end to the Python child and keeping the other in the PHP parent. (Note that the first one is really just a special case of this one—under the covers, standard input is basically just an anonymous pipe between the child and parent.)
Open a region of shared memory, or an mmap-ed file, or similar in both parent and child. This probably also requires sharing a semaphore that you can use to build a condition or event, so the child has some way to wait on the next input.
Use some higher-level API that wraps up one of the above—e.g., write the Python child as a simple HTTP service (or JSON-RPC or ZeroMQ or pretty much anything you can find good libraries for in both languages); have the PHP code start that service and make requests as a client.
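A sketch of the first (stdin) option on the Python side — `serve` and the blank-line shutdown convention are illustrative choices, not part of any existing protocol:

```python
import sys

def serve(stream, analyze):
    """Read one filename per line from `stream` (sys.stdin when run as
    a daemon) and run `analyze` on each; a blank line ends the loop."""
    results = []
    for line in stream:
        name = line.strip()
        if not name:
            break
        results.append(analyze(name))
    return results

# Daemon entry point would be:  serve(sys.stdin, analyze_file)
# with the PHP parent writing filenames to this process's stdin pipe.
```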
| 0
| 1
| 0
| 1
|
2014-10-20T18:53:00.000
| 2
| 1.2
| true
| 26,472,868
| 0
| 0
| 0
| 1
|
I have a python script (analyze.py) which takes a filename as a parameter and analyzes it. When it is done with analysis, it waits for another file name. What I want to do is:
Send file name as a parameter from PHP to Python.
Run analyze.py in the background as a daemon with the filename that came from PHP.
I can post the parameter from PHP as a command line argument to Python but I cannot send parameter to python script that already runs at the background.
Any ideas?
|
How to install libxml2-dev libxslt-dev on Mac os
| 45,097,599
| 0
| 8
| 10,943
| 0
|
python,macos,pip,libxml2
|
I had the same problem and installing the Command Line Tools fixed the problem for me.
I just wanted to note that calling xcode-select -p
and getting the output /Applications/Xcode.app/Contents/Developer
does not tell you whether the Xcode Command Line Tools are installed (as stated in the comments on the question)!
For me it returned the same output but xcode-select --install started the installation. After the installation xcode-select --install printed xcode-select: error: command line tools are already installed, use "Software Update" to install updates
So to check if command line tools are installed better use xcode-select --install.
| 0
| 1
| 0
| 0
|
2014-10-20T19:13:00.000
| 2
| 0
| false
| 26,473,197
| 0
| 0
| 0
| 2
|
I have installed both libxml2 and libxslt with homebrew, but it doesn't want to install libxml2-dev or libxslt-dev:
Error: No available formula for libxml2-dev
I have pip, port, and all I could found. I even installed the Xcode command line tools,
but with no luck. What is the way to install libxml2-dev & libxslt-dev on Mac 10.10?
|
How to install libxml2-dev libxslt-dev on Mac os
| 26,490,857
| 11
| 8
| 10,943
| 0
|
python,macos,pip,libxml2
|
Try adding STATIC_DEPS, like this
STATIC_DEPS=true sudo pip install lxml
| 0
| 1
| 0
| 0
|
2014-10-20T19:13:00.000
| 2
| 1.2
| true
| 26,473,197
| 0
| 0
| 0
| 2
|
I have installed both libxml2 and libxslt with homebrew, but it doesn't want to install libxml2-dev or libxslt-dev:
Error: No available formula for libxml2-dev
I have pip, port, and all I could found. I even installed the Xcode command line tools,
but with no luck. What is the way to install libxml2-dev & libxslt-dev on Mac 10.10?
|
Python psutil collect process resources usage on Mac OS X
| 26,627,865
| 0
| 0
| 605
| 0
|
python,shell,subprocess,psutil
|
If the process "dies" or gets reaped, how are you supposed to interact with it? Of course you can't, because it's gone. If, on the other hand, the process is a zombie, you might be able to extract some info off of it, like the parent PID, but not CPU or memory stats.
| 0
| 1
| 0
| 0
|
2014-10-21T02:58:00.000
| 1
| 0
| false
| 26,478,208
| 0
| 0
| 0
| 1
|
Apparently I can't get the process resources usage in Mac OS X with psutil after the process got reaped, i.e. after p.wait() where p is a psutil.Popen() instance. So for example, if I try ps.cpu_times().system where ps is a psutil.Process() instance, I get a raise of no such process. What are the other options for measuring the resources usage in a mac (elapsed time, memory and cpu usage)?
|
Run specific django manage.py commands at intervals
| 26,487,761
| 1
| 0
| 368
| 0
|
python,django,django-chronograph
|
I would suggest configuring cron to run your command at specific times/intervals.
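A hedged sketch of the crontab entry (every path here is illustrative; pointing at the virtualenv's python ensures your installed packages, chronograph included, are found):

```
# Run the management command every 5 minutes
*/5 * * * * /home/ubuntu/venv/bin/python /home/ubuntu/project/manage.py some_command
```

Install it with `crontab -e` on the EC2 instance.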
| 0
| 1
| 0
| 0
|
2014-10-21T13:14:00.000
| 3
| 0.066568
| false
| 26,487,648
| 0
| 0
| 1
| 2
|
I need to run a specific manage.py commands on an EC2 instance every X minutes. For example: python manage.py some_command.
I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py but on runserver it keeps telling me No module named chronograph.
Is there something I'm missing to get this running? And after running how do I get manage.py commands to run using chronograph?
Edit: It's installed in the EC2 instance's virtualenv.
|
Run specific django manage.py commands at intervals
| 26,488,221
| 0
| 0
| 368
| 0
|
python,django,django-chronograph
|
First, install it by running pip install django-chronograph.
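After installing, make sure 'chronograph' is listed in INSTALLED_APPS — a sketch of the settings fragment (the other entries are placeholders for whatever your project already lists):

```python
# settings.py (fragment) -- app list is illustrative; keep your own entries.
INSTALLED_APPS = (
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "chronograph",  # the package installed by `pip install django-chronograph`
)
```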
| 0
| 1
| 0
| 0
|
2014-10-21T13:14:00.000
| 3
| 0
| false
| 26,487,648
| 0
| 0
| 1
| 2
|
I need to run a specific manage.py commands on an EC2 instance every X minutes. For example: python manage.py some_command.
I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py but on runserver it keeps telling me No module named chronograph.
Is there something I'm missing to get this running? And after running how do I get manage.py commands to run using chronograph?
Edit: It's installed in the EC2 instance's virtualenv.
|
Python dual install
| 26,502,745
| 0
| 1
| 395
| 0
|
windows,python-2.7,python-3.x
|
Create a python2.bat and a python3.bat file somewhere on your PATH (they could go in your main Python folder). Each file contains only the location of the relevant python.exe followed by %*, e.g.
C:\Programs\Python26\python.exe %*
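For instance, the pair of files might look like this (the install paths are assumptions — point them at wherever your two interpreters actually live):

```shell
rem --- python2.bat ---
@C:\Python27\python.exe %*

rem --- python3.bat ---
@C:\Python34\python.exe %*
```

The leading @ suppresses command echoing, and %* forwards all arguments, so python2 myscript.py --flag behaves like calling the interpreter directly.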
| 0
| 1
| 0
| 0
|
2014-10-21T21:19:00.000
| 2
| 0
| false
| 26,496,539
| 1
| 0
| 0
| 1
|
How do I set permanent paths for both Python 2 and 3 in the command prompt such that I can invoke either every time I open the command window, i.e. python2 for the Python 2 interpreter or python3 for the Python 3 interpreter?
|
How to Accept Command Line Arguments With Python Using <
| 26,496,759
| 0
| 0
| 299
| 0
|
python,shell,terminal
|
You can use the sys module's stdin attribute as a file-like object.
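A short sketch of that idea (the function and variable names are made up for illustration):

```python
import sys

def open_input(argv, default=sys.stdin):
    """Return the file named in argv[1] if one was given, else stdin.

    Supports both `python scriptname.py input.txt` and
    `python scriptname.py < input.txt`.
    """
    if len(argv) > 1:
        return open(argv[1], "r")
    return default  # sys.stdin already behaves like a read-only file

# In the real script:
# stuffFile = open_input(sys.argv)
```

Either way the rest of the script iterates over the returned object identically, since stdin and an opened file share the same read interface.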
| 0
| 1
| 0
| 0
|
2014-10-21T21:30:00.000
| 3
| 0
| false
| 26,496,708
| 1
| 0
| 0
| 2
|
Is it possible to run a python script and feed in a file as an argument using <? For example, my script works as intended using the following command python scriptname.py input.txt and the following code stuffFile = open(sys.argv[1], 'r').
However, what I'm looking to do, if possible, is use this command line syntax: python scriptname.py < input.txt. Right now, running that command gives me only one argument, so I likely have to adjust my code in my script, but am not sure exactly how.
I have an automated system processing this command, so it needs to be exact. If that's possible with a Python script, I'd greatly appreciate some help!
|