Is there a simple way to move up one directory in Python using a single line of code? Something similar to `cd ..` on the command line.
```
>>> import os
>>> print os.path.abspath(os.curdir)
C:\Python27
>>> os.chdir("..")
>>> print os.path.abspath(os.curdir)
C:\
```
Using [**`os.chdir`**](https://docs.python.org/3/library/os.html#os.chdir) should work:

```
import os
os.chdir('..')
```
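As a quick sanity check, the round trip can be scripted; this is a small sketch (Python 3 print syntax, whereas the answers above use Python 2):

```python
import os

start = os.getcwd()
os.chdir("..")                  # equivalent of `cd ..`
parent = os.getcwd()

# The new working directory is the parent of where we started
# (unless we started at the filesystem root, where `cd ..` is a no-op).
assert parent in (os.path.dirname(start), start)

os.chdir(start)                 # go back to where we began
print(parent)
```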
Moving up one directory in Python
[ "python", "directory" ]
I have a query below, and it works.

```
$query_for_cat3 = "SELECT sum, candidate_no, @curRank := @curRank + 1 AS rank
                   FROM (SELECT SUM(score) / 5 sum, candidate_no
                         FROM SCORE
                         WHERE category_no = '$category_no1'
                         GROUP BY candidate_no) a,
                        (SELECT @curRank := 0) r
                   ORDER BY sum DESC, candidate_no DESC
                   LIMIT 5";
```

What I need to do is to combine this query with another table, named candidates, which has columns candidate\_no and candidate\_name. I want to produce the candidate\_name corresponding to each candidate\_no. Please help. Thanks.
Is this something you are looking for?

```
SELECT temp.candidate_no, C.candidate_name, temp.sum, temp.rank
FROM candidates C
INNER JOIN (SELECT sum, candidate_no, @curRank := @curRank + 1 AS rank
            FROM (SELECT SUM(score) / 5 sum, candidate_no
                  FROM SCORE
                  WHERE category_no = '$category_no1'
                  GROUP BY candidate_no) a,
                 (SELECT @curRank := 0) r
            ORDER BY sum DESC, candidate_no DESC
            LIMIT 5) AS temp
    ON C.candidate_no = temp.candidate_no
```
You can do a join like this:

```
SELECT table1.field, table2.field
FROM table1
LEFT JOIN table2 ON table1.field = table2.field
```
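If it helps to see the join shape in isolation, here is a tiny self-contained sketch using Python's sqlite3 module (the score values and names are invented for the demo, and the MySQL-specific `@curRank` session-variable ranking is omitted since sqlite doesn't support it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE score (candidate_no INTEGER, score REAL);
    CREATE TABLE candidates (candidate_no INTEGER, candidate_name TEXT);
    INSERT INTO score VALUES (1, 80), (1, 90), (2, 70);
    INSERT INTO candidates VALUES (1, 'Alice'), (2, 'Bob');
""")

# Aggregate per candidate in a subquery, then INNER JOIN on
# candidate_no to pull the name in from the candidates table.
rows = con.execute("""
    SELECT c.candidate_name, t.total
    FROM candidates c
    INNER JOIN (SELECT candidate_no, SUM(score) / 5 AS total
                FROM score GROUP BY candidate_no) t
        ON c.candidate_no = t.candidate_no
    ORDER BY t.total DESC
""").fetchall()
print(rows)   # [('Alice', 34.0), ('Bob', 14.0)]
```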
Combining 2 tables to produce 1 output - SQL
[ "mysql", "sql" ]
I have a text file created by a Fortran program run on SL6. It looks like:

> 1
>
> ```
> R - Z binning n. 1 "1 " , generalized particle n. 223
> R coordinate: from 0.0000E+00 to 1.1000E+02 cm, 110 bins ( 1.0000E+00 cm wide)
> Z coordinate: from -2.9000E+02 to 2.9000E+02 cm, 290 bins ( 2.0000E+00 cm wide)
> axis coordinates: X = 0.0000E+00, Y = 0.0000E+00 cm
> Data follow in a matrix A(ir,iz), format (1(5x,1p,10(1x,e11.4)))
>
> accurate deposition along the tracks requested
> this is a track-length binning
>  3.0406E-01 2.3565E-02 1.0664E-02 7.2081E-03 5.2534E-03 4.8756E-03 4.5011E-03 4.2792E-03 4.1801E-03 3.9648E-03
>  3.9108E-03 3.8301E-03 3.7256E-03 3.6330E-03 3.5912E-03 3.5461E-03 3.4579E-03 3.4813E-03 3.4395E-03 3.3868E-03
> And so on for 6000 lines...
> ```

I want to read all the numbers into a list of lists, so I have to skip the first nine lines, but Python is not recognizing the endlines despite opening as 'rU'. Just as a test, this code:

```
f = open(file, 'rU')
print f.readlines(2)
```

outputs a single list, with the '\n's read as part of the strings:

> ['1\n', ' R - Z binning n. 1 "1 " , generalized particle n. 223\n', ' R coordinate: from 0.0000E+00 to 1.1000E+02 cm, 110 bins ( 1.0000E+00 cm wide)\n', ' Z coordinate: from -2.9000E+02 to 2.9000E+02 cm, 290 bins ( 2.0000E+00 cm wide)\n', ' axis coordinates: X = 0.0000E+00, Y = 0.0000E+00 cm\n', ' Data follow in a matrix A(ir,iz), format (1(5x,1p,10(1x,e11.4)))\n', '\n', ' accurate deposition along the tracks requested\n', ' this is a track-length binning\n', ' 3.0406E-01 2.3565E-02 1.0664E-02 7.2081E-03 5.2534E-03 4.8756E-03 4.5011E-03 4.2792E-03 4.1801E-03 3.9648E-03 \n', ' 3.9108E-03 3.8301E-03 3.7256E-03 3.6330E-03 3.5912E-03 3.5461E-03 3.4579E-03 3.4813E-03 3.4395E-03 3.3868E-03\n', ...and so on for the rest of the data lines...]

Any assistance on this is greatly appreciated!
This is behaving as expected. From the [docs](http://docs.python.org/2/library/stdtypes.html#file.readlines):

> **readlines**: Read until EOF using readline() and return a list containing the lines thus read. If the optional sizehint argument is present, instead of reading up to EOF, whole lines totalling approximately sizehint bytes (possibly after rounding up to an internal buffer size) are read. Objects implementing a file-like interface may choose to ignore sizehint if it cannot be implemented, or cannot be implemented efficiently.
>
> **readline**: Read one entire line from the file. A trailing newline character is kept in the string (but may be absent when a file ends with an incomplete line).

If you don't want the trailing newline, `strip` on each line will get rid of it. If you just want to skip past some opening lines, there are a number of ways to do that. Personally I'm partial to [itertools.islice](http://docs.python.org/2/library/itertools.html#itertools.islice):

```
for line in itertools.islice(infile, 9, None):
    print line
```
`readlines` doesn't specify that newlines will be removed; they are kept as part of each line. You can easily fix this, though:

```
lines = [line.strip() for line in f.readlines()]
```
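A small end-to-end sketch of the skip-and-parse step, using an in-memory stand-in for the real file so it runs standalone (here only two header lines instead of the question's nine, and just two numbers per row):

```python
import io
import itertools

# Stand-in for the real file: header lines, then numeric rows.
fake = io.StringIO(
    "1\n"
    " R - Z binning n.  1\n"
    " 3.0406E-01  2.3565E-02\n"
    " 3.9108E-03  3.8301E-03\n"
)

data = []
for line in itertools.islice(fake, 2, None):   # skip the 2 header lines
    data.append([float(tok) for tok in line.split()])

print(data)   # [[0.30406, 0.023565], [0.0039108, 0.0038301]]
```

`float` handles Fortran-style `E` notation directly, so no extra parsing is needed for the data rows.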
Open with 'rU' not recognizing endlines
[ "python", "readlines" ]
Can I select specific columns by column number in SQL? Something like:

```
SELECT columns(0), columns(3), columns(5), columns(8)
FROM TABLE
```
You have to use dynamic SQL to do this:

```
DECLARE @strSQL AS nvarchar(MAX)
DECLARE @strColumnName AS nvarchar(255)
DECLARE @iCounter AS integer
DECLARE @curColumns AS CURSOR

SET @iCounter = 0
SET @strSQL = N'SELECT '

SET @curColumns = CURSOR FOR
(
    SELECT * FROM
    (
        SELECT TOP 99999 COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'T_Markers'
          AND ORDINAL_POSITION < 4
        ORDER BY ORDINAL_POSITION ASC
    ) AS tempT
)

OPEN @curColumns
FETCH NEXT FROM @curColumns INTO @strColumnName

WHILE @@FETCH_STATUS = 0
BEGIN
    -- PRINT @strColumnName
    IF @iCounter = 0
        SET @strSQL = @strSQL + N' [' + @strColumnName + N'] '
    ELSE
        SET @strSQL = @strSQL + N' ,[' + @strColumnName + N'] '
    SET @iCounter = @iCounter + 1
    FETCH NEXT FROM @curColumns INTO @strColumnName
END

CLOSE @curColumns
DEALLOCATE @curColumns

SET @strSQL = @strSQL + N' FROM T_Markers '
PRINT @strSQL
```
```
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'myTable' AND ORDINAL_POSITION = 3
```

This statement returns the metadata for the third column of your table. You would then need to write a Transact-SQL statement like:

```
DECLARE @columnname nvarchar(100), @sql nvarchar(500)

SELECT @columnname = COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'myTable' AND ORDINAL_POSITION = 3

SET @sql = 'SELECT ' + @columnname + ' FROM myTable'
EXEC (@sql)
```
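If the goal is just "give me the Nth column" and you control the client code, a portable alternative is to select everything and pick the column by position from the cursor metadata. A sketch using Python's sqlite3 (the table and data are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (a INTEGER, b TEXT, c REAL)")
con.execute("INSERT INTO my_table VALUES (1, 'x', 2.5)")

cur = con.execute("SELECT * FROM my_table")
col_index = 2                             # third column, zero-based
col_name = cur.description[col_index][0]  # column name from cursor metadata
values = [row[col_index] for row in cur.fetchall()]
print(col_name, values)   # c [2.5]
```

`cursor.description` is part of the standard DB-API, so the same idea works with other database drivers as well.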
sql server select column by number
[ "sql", "sql-server", "sql-server-2008" ]
I am having trouble installing lxml on my Mac. This is the error I get when building it with `pip install lxml`:

> /private/var/folders/9s/s5hl5w4x7zjdjkdljw9cnsrm0000gn/T/pip-build-khuevu/lxml/src/lxml/includes/etree\_defs.h:9:10: fatal error: 'libxml/xmlversion.h' file not found

I have installed libxml2 with brew:

```
brew install libxml2
brew link libxml2 --force
```

I'm new to Mac. On Ubuntu, this would mean the libxml2-dev package must be installed.

Updated: here is the pip.log:

```
"~/.pip/pip.log" 124L, 8293C
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/req.py", line 1185, in install
    requirement.install(install_options, global_options, *args, **kwargs)
  File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/req.py", line 592, in install
    cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
  File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/util.py", line 662, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -c "import setuptools;__file__='/private/var/folders/9s/s5hl5w4x7zjdjkdljw9cnsrm0000gn/T/pip-build-khuevu/lxml/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/9s/s5hl5w4x7zjdjkdljw9cnsrm0000gn/T/pip-nsV0iT-record/install-record.txt --single-version-externally-managed failed with error code 1 in /private/var/folders/9s/s5hl5w4x7zjdjkdljw9cnsrm0000gn/T/pip-build-khuevu/lxml
```

Any idea? Thanks a lot.
It turns out xmlversion.h was not on the compiler's include path, even though libxml2 was installed. Modifying the C_INCLUDE_PATH environment variable fixed the error for me:

```
C_INCLUDE_PATH=/usr/local/Cellar/libxml2/2.9.1/include/libxml2:$C_INCLUDE_PATH
```
If you are running Mavericks with Xcode installed, you can also use: ``` export C_INCLUDE_PATH=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include/libxml2:$C_INCLUDE_PATH ```
Fail to install lxml in MacOS 10.8.4
[ "python", "macos", "lxml", "libxml2" ]
I have the following problem. First of all, I have a list of tuples like the following:

```
[(1,2),(2,3),(4,5),(5,6),(5,7),(5,8),(6,9),(6,10),(7,11),(12,14)]
```

To make things easier, let's say that the first number in each tuple 'controls' the second (for those who are familiar with dependency parsing, the first number represents the index of the head, while the second is the index of the dependent).

Now I want to create a function that takes as arguments an `int` and the list above. The function has to look for all the tuples that have the integer argument as their first number and return the second numbers. The function should then recursively take each of these second numbers, look for the tuples where they appear as first numbers, and return the second numbers. This should go on until no other second numbers can be retrieved.

I will use an example to explain it better: let's say that the function takes as input the number 5. The tuples having 5 as first number are `(5,6),(5,7),(5,8)`; as a first result the function should then take 6, 7, 8 and append them to a `list`. Now the function should consider 6, 7, 8, look for the tuples where they appear as first numbers (`(6,9),(6,10),(7,11)`) and return the second numbers (9, 10, 11). Since 8 does not appear as first number in any of the tuples, its journey ends at this stage. The final list returned should then be `[6,7,8,9,10,11]`.

I have tried something like this:

```
def foo(start_index, lista_tuples, list_to_return=list()):
    indeces = [x[1] for x in lista_tuples if x[0] == start_index]
    list_to_return.extend(indeces)
    for index in indeces:
        foo(index, lista_tuples, list_to_return)
    return list_to_return
```

but it doesn't work. Can someone help me?
In your code you always iterate over all the "second values" you found. This can generate infinite recursion. To avoid it, remove from `indeces` all the values that are already in `list_to_return`:

```
def foo(start_index, lista_tuples, list_to_return=list()):
    indeces = [x[1] for x in lista_tuples if x[0] == start_index]
    new_values = list(set(indeces) - set(list_to_return))
    list_to_return.extend(indeces)
    for index in new_values:
        foo(index, lista_tuples, list_to_return)
    return list_to_return
```

The double conversion list->set->list is a little overkill, but it took three seconds to write down :D

**EDIT**: In fact, you should actually use a set. This will avoid duplicates.
```
>>> L = [(1,2),(2,3),(4,5),(5,6),(5,7),(5,8),(6,9),(6,10),(7,11),(12,14)]
>>> def foo(start, L, answer=None):
...     if answer is None:
...         answer = []
...     answer += [i[1] for i in L if i[0]==start]
...     for i in (i[1] for i in L if i[0]==start):
...         foo(i, L, answer)
...     return answer
...
>>> print foo(5, L)
[6, 7, 8, 9, 10, 11]
```
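For comparison, the same traversal can be written without recursion (and without the mutable-default-argument pitfall in the question's version) by treating the tuples as edges of a graph and walking it breadth-first:

```python
def descendants(start, pairs):
    """Collect every second element reachable from `start` by
    repeatedly following (first, second) pairs."""
    found = []
    seen = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop(0)
        for a, b in pairs:
            if a == node and b not in seen:
                seen.add(b)
                found.append(b)
                frontier.append(b)
    return found

L = [(1, 2), (2, 3), (4, 5), (5, 6), (5, 7), (5, 8),
     (6, 9), (6, 10), (7, 11), (12, 14)]
print(descendants(5, L))   # [6, 7, 8, 9, 10, 11]
```

The `seen` set also makes the function safe on cyclic input, where the naive recursive version would loop forever.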
Recursive functions on a list of tuples
[ "python", "recursion", "tuples" ]
I have to find a pair of students who take exactly the same classes, from a table that has `studentID` and `courseID`:

```
studentID | courseID
    1     |    1
    1     |    2
    1     |    3
    2     |    1
    3     |    1
    3     |    2
    3     |    3
```

The query should return `(1, 3)`. The result also should not have duplicate rows such as `(1,3)` and `(3,1)`.
Given sample data:

```
CREATE TABLE student_course (
    student_id integer,
    course_id  integer,
    PRIMARY KEY (student_id, course_id)
);

INSERT INTO student_course (student_id, course_id) VALUES
(1, 1), (1, 2), (1, 3),
(2, 1),
(3, 1), (3, 2), (3, 3);
```

## Use array aggregation

One option is to use a CTE to join on the ordered lists of courses each student is taking:

```
WITH student_coursearray(student_id, courses) AS (
    SELECT student_id, array_agg(course_id ORDER BY course_id)
    FROM student_course
    GROUP BY student_id
)
SELECT a.student_id, b.student_id
FROM student_coursearray a
INNER JOIN student_coursearray b ON (a.courses = b.courses)
WHERE a.student_id > b.student_id;
```

`array_agg` is actually part of the SQL standard, as is the `WITH` common-table expression syntax. Neither is supported by MySQL, so you'll have to express this a different way if you want to support MySQL.

## Find missing course pairings per-student

Another way to think about this would be "for every student pairing, find out if one is taking a class the other is not". This lends itself to a `FULL OUTER JOIN`, but it's pretty awkward to express. You have to determine the pairings of student IDs of interest, then for each pairing do a full outer join across the set of classes each takes. If there are any null rows then one took a class the other didn't, so you can use that with a `NOT EXISTS` filter to exclude such pairings.
That gives you this monster:

```
WITH student_id_pairs(left_student, right_student) AS (
    SELECT DISTINCT a.student_id, b.student_id
    FROM student_course a
    INNER JOIN student_course b ON (a.student_id > b.student_id)
)
SELECT left_student, right_student
FROM student_id_pairs
WHERE NOT EXISTS (
    SELECT 1
    FROM (SELECT course_id FROM student_course
          WHERE student_id = left_student) a
    FULL OUTER JOIN (SELECT course_id FROM student_course
                     WHERE student_id = right_student) b
        ON (a.course_id = b.course_id)
    WHERE a.course_id IS NULL OR b.course_id IS NULL
);
```

The CTE is optional and may be replaced by a `CREATE TEMPORARY TABLE AS SELECT ...` or whatever if your DB doesn't support CTEs.

## Which to use?

I'm very confident that the array approach will perform better in all cases, particularly because for a really large data set you can take the `WITH` expression, create a temporary table from the query instead, *add an index on `(courses, student_id)` to it* and do crazy-fast equality searching that'll well and truly pay off the cost of the index creation time. You can't do that with the subquery joins approach.
```
SELECT courses, group_concat(studentID)
FROM (SELECT studentID,
             group_concat(courseID ORDER BY courseID) AS courses
      FROM Table1
      GROUP BY studentID) abc
GROUP BY courses
HAVING courses LIKE ('%,%');
```

***[fiddle](http://www.sqlfiddle.com/#!2/f59f8/6)***
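The grouping idea behind both answers can be checked quickly in plain Python: students whose course sets are identical end up under the same hashable key, and any key with two or more students yields the wanted pairs:

```python
from collections import defaultdict
from itertools import combinations

# (studentID, courseID) rows from the question.
rows = [(1, 1), (1, 2), (1, 3), (2, 1), (3, 1), (3, 2), (3, 3)]

courses_by_student = defaultdict(set)
for student, course in rows:
    courses_by_student[student].add(course)

# Group student ids by their (frozen) course set, then pair them up.
groups = defaultdict(list)
for student, courses in courses_by_student.items():
    groups[frozenset(courses)].append(student)

pairs = [pair
         for members in groups.values()
         for pair in combinations(sorted(members), 2)]
print(pairs)   # [(1, 3)]
```

Sorting each group before taking `combinations` guarantees that only `(1, 3)` is emitted, never the duplicate `(3, 1)`.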
Find pair of students who take exactly the same classes
[ "mysql", "sql", "postgresql", "aggregate-functions", "relational-division" ]
Must be missing something obvious, but why does this simple loop fail to modify the list items?

```
for artist in artists:
    artist = artist.replace(': ', '')
```

```
artists = [': Terence Trent D Arby', ": Guns N' Roses", ': Sinead O Connor' ...]
```
The loop control variable is just a local variable referencing the elements of the list. If you re-assign that variable to any other object, it no longer references the original object in the list. So assigning `artist` to another object doesn't make the corresponding reference in the `list` point to the new object as well.

To do what you want, create a new list with the modified values and assign it to the original list reference. A list comprehension is useful here:

```
artists = [artist.replace(': ', '') for artist in artists]
```
The following statement just rebinds `artist` to the changed value; it does not change the list:

```
artist = artist.replace(': ', '')
```

Try the following code, which uses a list comprehension:

```
artists = [artist.replace(': ', '') for artist in artists]
```
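If in-place modification of the original list object is actually required (for example, because other code holds a reference to it), assigning back through the index with `enumerate` works too:

```python
artists = [': Terence Trent D Arby', ": Guns N' Roses", ': Sinead O Connor']

# Assign through the index so the list object itself is updated.
for i, artist in enumerate(artists):
    artists[i] = artist.replace(': ', '')

print(artists)   # ['Terence Trent D Arby', "Guns N' Roses", 'Sinead O Connor']
```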
Modifying list items with for loop
[ "python", "loops", "iteration" ]
I am basically writing a simple function in which the user enters a sentence (**strng**), a letter (**letter**) and another letter (**replace**) to replace the first letter with. Here's what I have:

```
def letter_replace(strng, letter, replace):
    replace = str(replace)
    for char in strng:
        if char == letter.upper() or char == letter.lower():
            strng.replace(char, replace)
            return strng
        else:
            return "Sorry, the letter could not be replaced."
```

I can't figure out why this won't work. Sorry if it's a completely obvious mistake, I am fairly new to Python. Thanks
Strings are immutable; `replace()` returns a new string and does not change the original in place, so you need to assign the result and return that:

```
>>> def letter_replace(strng, letter, replace):
...     replace = str(replace)
...     for char in strng:
...         if char == letter.upper() or char == letter.lower():
...             strng = strng.replace(char, replace)
...             return strng  # or just do: return strng.replace(char, replace)
...         else:
...             return "Sorry, the letter could not be replaced."
...
>>> letter_replace('abc', 'a', 'f')
'fbc'
```
```
strng.replace(char, replace)
```

This does the replacement, creating a new string, and then *throws away* the changed string because you don't assign it to a variable. Since you're just going to return it anyway, you can simply write:

```
return strng.replace(char, replace)
```
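The point about `str.replace` returning a new object rather than mutating the original is easy to verify directly:

```python
s = "banana"
t = s.replace("a", "o")   # builds and returns a brand-new string

print(t)   # bonono
print(s)   # banana  -- the original string is unchanged
```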
Python - Replacing letters in a string?
[ "python" ]
I'm playing with recursion, and ended up with this:

```
n = 0

def func(f):
    print("setting p to 1")  # NEW
    p = 1  # local var
    global n
    n += 1
    print(n)
    if n > 5:
        print("returning")
        return
    print("calling f(f) with id() of " + str(id(f)))
    f(f)
    print("incrementing p")  # NEW
    p += 1
    print("p = " + str(p))

print(str(id(func)))
func(func)
```

OK, my question is: if `f` always has the same id (which it does), and is therefore always the same object (I bet that's where I'm wrong somehow), shouldn't `p` be the same `p` and therefore be incremented past 2? Instead it's treating each `p` as if it were local to a different object.

Output:

```
178374636
1
calling f(f) with id() of 178374636
2
calling f(f) with id() of 178374636
3
calling f(f) with id() of 178374636
4
calling f(f) with id() of 178374636
5
calling f(f) with id() of 178374636
6
returning
p = 2
p = 2
p = 2
p = 2
p = 2
```

New output with the NEW lines added:

```
178374572
setting p to 1
1
calling f(f) with id() of 178374572
setting p to 1
2
calling f(f) with id() of 178374572
setting p to 1
3
calling f(f) with id() of 178374572
setting p to 1
4
calling f(f) with id() of 178374572
setting p to 1
5
calling f(f) with id() of 178374572
setting p to 1
6
returning
incrementing p
p = 2
incrementing p
p = 2
incrementing p
p = 2
incrementing p
p = 2
incrementing p
p = 2
```
`p` is a local variable, and that has nothing to do with the fact that `func` always has the same id. Each call to the function creates a new stack frame with new instances of its local variables.
It seems you have a misconception about how functions and local variables work. You are correct that `f` is always the same object, but that doesn't imply that `p` keeps its value when you keep calling `f`. A local variable is local to one particular execution of a function, not to the function itself. Take a simple function like this: ``` def plus1(x): y = x + 1 return y ``` `plus1` doesn't "contain" a value for `x` or `y`. If it did, what value would it have before I called the function? Instead the data defining `plus1` is a set of instructions for what to do when it is given a value for `x`. It only contains `x` as a way of referring to the argument value (which it hasn't been given yet), and `y` as a way of referring to a value it will create during execution. When you actually call `plus1(5)`, then the code of `plus1` is executed with `x` bound to `5`. But that binding is only relevant inside that particular call of the function, and once the call is done the value is thrown away. At any given time there could be 0, 1, or any other number of calls to a function currently being executed, and each will have its own local variable bindings. Since your function calls itself (indirectly), this does in fact happen in your program. Before you call `func` there are 0 "versions" of `p` in existence. Then there are 1, 2, 3, 4, 5, and finally 6 versions (the 6th one is never printed, because `func` returns when `n > 5`). This then drops back to 5, 4, 3, 2, 1, 0 versions. That's how local variables work, and why Python complains that you have to assign a local variable before you can read it. Outside of a particular call it's meaningless to ask for the value of `p`, because there could be zero or many values which might be called `p`. That means calling `func` also can't start from `p` as already acted on by other calls, because which `p` should it start from?
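The "one set of locals per call" behaviour can be reproduced in a few lines: every frame increments its own fresh `p`, so each one reports 2, never 3 or more:

```python
frames = []

def f(n):
    p = 1               # a new local `p` in every call's stack frame
    if n < 3:
        f(n + 1)        # the inner call gets its *own* p
    p += 1
    frames.append(p)    # each frame reports its own p == 2

f(0)
print(frames)   # [2, 2, 2, 2]
```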
why does p never go over 2
[ "python", "python-3.x" ]
I have a Pygame program for a Pong-like game, and it looks pretty awesome right now, but there are a few things I can't figure out.

The first problem is with the sides. I've set the bat to move around using the left and right arrow keys and stop when it reaches the side (using positioning), but if you press the left key on the left side or the right key on the right side, it will go outside of the playing area by a little bit.

The second problem, like the first, is about the sides too. I've set the ball to move around and bounce off the walls (using positioning), but if the bat is on the left or right side and I am holding the corresponding key, the ball will go right through the wall/ceiling!

The final problem is that I've set the ball to speed up over time (using velocity). If you were good at the game, it would speed up until the ball was so fast it had the power to break right through the bat.

This is my code:

```
import pygame, time, sys, random
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((600, 500))
pygame.display.set_caption("Pong Squash")

def gameplay1():
    global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, font2, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball, points, points_count, new_points, your_score1, space1, esc1, munrosmall, new_positionx, beep_list
    game_over_display = "game_over1.png"
    lives = "lives.png"
    points = "points.png"
    lives_remaining = "lives_remaining.png"
    your_score = "score_intro.png"
    space = "space.png"
    esc = "esc.png"
    munrosmall = "munrosmall.ttf"
    beep1 = "beep1.wav"
    beep2 = "beep2.wav"
    beep3 = "beep3.wav"
    beep4 = "beep4.wav"
    game_over1 = pygame.image.load(game_over_display).convert()
    lives1 = pygame.image.load(lives).convert()
    points1 = pygame.image.load(points).convert()
    lives_remaining1 = pygame.image.load(lives_remaining).convert()
    your_score1 = pygame.image.load(your_score).convert()
    space1 = pygame.image.load(space).convert()
    esc1 = pygame.image.load(esc).convert()
    font1 = pygame.font.Font((munrosmall), 40)
    font2 = pygame.font.Font(None, 40)
    white = 255, 255, 255
    black = 0, 0, 0
    green = 0, 250, 0
    yellow = 255, 255, 0
    points = 0
    points_count = font1.render(str(points), True, white)
    lives_count = font1.render(('3'), True, white)
    position_x = 175
    position_y = 375
    velocity_x1 = 0
    velocity_y1 = 0
    position1 = 275
    position2 = 150
    velocity1 = 2
    velocity2 = 2
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
            elif event.type == KEYDOWN:
                if event.key == pygame.K_SPACE:
                    if waiting:
                        waiting = False
                        reset_ball()
                if event.key == pygame.K_LEFT:
                    velocity_x1 = (velocity_x1 - 3)
                elif event.key == pygame.K_RIGHT:
                    velocity_x1 = (velocity_x1 + 3)
            elif event.type == KEYUP:
                if event.key == K_LEFT or event.key == K_RIGHT:
                    velocity_x1 = 0
        screen.fill((0, 0, 0))
        color = 255
        width = 0
        position = position_x, position_y, 250, 25
        position_x += velocity_x1
        position_y += velocity_y1
        position1 += velocity1
        position2 += velocity2
        player = pygame.draw.rect(screen, (color, color, color), position, width)
        ball = pygame.draw.rect(screen, (color, color, color), (position1, position2, 15, 15), width)
        if player.colliderect(ball):
            velocity2 = -velocity2
            beep_list = [beep1, beep2, beep3, beep4]
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        if position_x > 350 or position_x < 0:
            velocity_x1 = 0
        elif position1 > 575 or position1 < 0:
            velocity1 = -velocity1
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        elif position2 < 0:
            velocity2 = -velocity2
            velocity2 += 0.1
            points += 100
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
            pygame.display.update()
        elif position2 > 365:
            new_points = points
            newposition_x = position_x
            change_level1()
        screen.blit(lives1, (450, 455))
        screen.blit(points1, (0, 459))
        screen.blit(lives_count, (560, 453))
        points_count = font1.render(str(points), True, white)
        screen.blit(points_count, (150, 456))
        pygame.display.update()

def gameplay2():
    global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball, points, points_count, new_points, your_score1, munrosmall, newposition_x, beep_list
    game_over_display = "game_over1.png"
    lives = "lives.png"
    points = "points.png"
    lives_remaining = "lives_remaining.png"
    beep1 = "beep1.wav"
    beep2 = "beep2.wav"
    beep3 = "beep3.wav"
    beep4 = "beep4.wav"
    game_over1 = pygame.image.load(game_over_display).convert()
    lives1 = pygame.image.load(lives).convert()
    points1 = pygame.image.load(points).convert()
    lives_remaining1 = pygame.image.load(lives_remaining).convert()
    white = 255, 255, 255
    black = 0, 0, 0
    green = 0, 250, 0
    yellow = 255, 255, 0
    points_count = font1.render(str(new_points), True, white)
    lives_count = font1.render(('2'), True, white)
    velocity_x1 = 0
    velocity_y1 = 0
    position1 = 275
    position2 = 150
    velocity1 = 2
    velocity2 = 2
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
            elif event.type == KEYDOWN:
                if event.key == pygame.K_SPACE:
                    if waiting:
                        waiting = False
                        reset_ball()
                if event.key == pygame.K_LEFT:
                    velocity_x1 = (velocity_x1 - 3)
                elif event.key == pygame.K_RIGHT:
                    velocity_x1 = (velocity_x1 + 3)
            elif event.type == KEYUP:
                if event.key == K_LEFT or event.key == K_RIGHT:
                    velocity_x1 = 0
        screen.fill((0, 0, 0))
        color = 255
        width = 0
        position = position_x, position_y, 250, 25
        position_x += velocity_x1
        position1 += velocity1
        position2 += velocity2
        player = pygame.draw.rect(screen, (color, color, color), position, width)
        ball = pygame.draw.rect(screen, (color, color, color), (position1, position2, 15, 15), width)
        if player.colliderect(ball):
            velocity1 = -velocity1
            velocity2 = -velocity2
            beep_list = [beep1, beep2, beep3, beep4]
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        if position_x > 350 or position_x < 0:
            velocity_x1 = 0
        elif position1 > 575 or position1 < 0:
            velocity1 = -velocity1
            beep_list = [beep1, beep2, beep3, beep4]
            beep = random.shuffle((beep_list))
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        elif position2 < 0:
            velocity2 = -velocity2
            velocity2 += 0.1
            new_points += 100
            beep_list = [beep1, beep2, beep3, beep4]
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
            pygame.display.update()
        elif position2 > 365:
            change_level2()
        screen.blit(lives1, (450, 455))
        screen.blit(points1, (0, 459))
        screen.blit(lives_count, (560, 453))
        points_count = font1.render(str(new_points), True, white)
        screen.blit(points_count, (150, 456))
        pygame.display.update()

def gameplay3():
    global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball, new_points, new_points2, your_score1, munrosmall, newposition_x, beep_list
    game_over_display = "game_over1.png"
    lives = "lives.png"
    points = "points.png"
    lives_remaining = "lives_remaining.png"
    beep1 = "beep1.wav"
    beep2 = "beep2.wav"
    beep3 = "beep3.wav"
    beep4 = "beep4.wav"
    game_over1 = pygame.image.load(game_over_display).convert()
    lives1 = pygame.image.load(lives).convert()
    points1 = pygame.image.load(points).convert()
    lives_remaining1 = pygame.image.load(lives_remaining).convert()
    white = 255, 255, 255
    black = 0, 0, 0
    green = 0, 250, 0
    yellow = 255, 255, 0
    points_count = font1.render(str(new_points), True, white)
    lives_count = font1.render(('1'), True, white)
    velocity_x1 = 0
    velocity_y1 = 0
    position1 = 275
    position2 = 150
    velocity1 = 2
    velocity2 = 2
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
            elif event.type == KEYDOWN:
                if event.key == pygame.K_SPACE:
                    if waiting:
                        waiting = False
                        reset_ball()
                if event.key == pygame.K_LEFT:
                    velocity_x1 = (velocity_x1 - 3)
                elif event.key == pygame.K_RIGHT:
                    velocity_x1 = (velocity_x1 + 3)
            elif event.type == KEYUP:
                if event.key == K_LEFT or event.key == K_RIGHT:
                    velocity_x1 = 0
        screen.fill((0, 0, 0))
        color = 255
        width = 0
        position = position_x, position_y, 250, 25
        position_x += velocity_x1
        position1 += velocity1
        position2 += velocity2
        player = pygame.draw.rect(screen, (color, color, color), position, width)
        ball = pygame.draw.rect(screen, (color, color, color), (position1, position2, 15, 15), width)
        if player.colliderect(ball):
            velocity1 = -velocity1
            velocity2 = -velocity2
            beep_list = [beep1, beep2, beep3, beep4]
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        if position_x > 350 or position_x < 0:
            velocity_x1 = 0
        elif position1 > 575 or position1 < 0:
            velocity1 = -velocity1
            beep = random.shuffle(beep_list)
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
        elif position2 < 0:
            velocity2 = -velocity2
            velocity2 += 0.1
            new_points += 100
            pygame.mixer.music.load((beep_list[1]))
            pygame.mixer.music.play()
            pygame.display.update()
        elif position2 > 365:
            game_over()
        screen.blit(lives1, (450, 455))
        screen.blit(points1, (0, 459))
        screen.blit(lives_count, (560, 453))
```
points_count = font1.render(str(new_points), True, white) screen.blit(points_count, (150, 456)) pygame.display.update() def change_level1(): global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball gameplay2() pygame.display.update() def change_level2(): global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball gameplay3() pygame.display.update() def reset_ball(): global game_over_display, lives, points, lives_remaining, game_over1, lives1, points1, lives_remaining1, font1, white, black, green, yellow, lives_number, lives_count, position_x, position_y, velocity_x1, velocity_y1, position1, position2, velocity1, velocity2, color, width, position, player, ball velocity1 = 2 velocity2 = 2 pygame.display.update() def game_over(): global game_over1, your_score1 pygame.display.flip() screen.fill(black) screen.blit(game_over1, (200,100)) while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() elif event.type == KEYDOWN: if event.key == K_ESCAPE: pygame.quit() sys.exit() elif event.key == K_RETURN: gameplay1() pygame.display.flip() time.sleep(1) screen.blit(your_score1, (100, 175)) pygame.display.flip() time.sleep(1) points_count = font1.render(str(new_points), True, white) screen.blit(points_count, (400, 170)) pygame.display.flip() time.sleep(1) screen.blit(space1, (115, 250)) screen.blit(esc1, (175, 300)) pygame.display.flip() gameplay1() ``` I really need help with these things. Thanks in advance! 
UPDATE: I kind of fixed the final problem by decreasing the velocity added every time the ball hits the back wall, but I still need help with the first two. I don't understand how velocity_x1 is in any way related to velocity1. Is it something with the wall position, or is it the collision? Please help!
I fixed it! I just had to add a few lines of code:

```
if position_x > 350:
    position_x = 350
    velocity_x1 = 0
if position_x < 0:
    position_x = 0
    velocity_x1 = 0
```

`position_x` and `velocity_x1` are the bat's position and velocity. Telling the system that the bat absolutely can't go past those positions (clamping with `position_x = 0` and `position_x = 350`) fixed both bugs.
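The same clamp can be written as a small reusable helper (a sketch; the 0 and 350 limits are taken from the question's own boundary checks):

```
def clamp(value, low, high):
    """Pin value inside [low, high]."""
    return max(low, min(high, value))

# Bat pushed past the right edge stops at the edge; inside the field it is untouched.
print(clamp(400, 0, 350), clamp(-20, 0, 350), clamp(175, 0, 350))  # 350 0 175
```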
You need more iterations on collision detection. P.S. Box2D is a big library with 2D physics, Euler's equations, etc. It has Python bindings.
Positioning and Velocity Problems in Pygame
[ "", "python", "position", "pygame", "" ]
I have a single table where accepted images, rejected images, and changed images are all stored in one column, with an action column where action 2 = accepted images, action 3 = rejected images, and action 4 = changed images. I am able to run the query below for "Accepted Images", but I also want a separate column for rejected images and am not able to get that. I am new to MySQL; please help me. How can I get a separate column for rejected images?

```
select message, date(datetime) as dateonly ,count(message) from
customer_1.audit_trail where message in ('Accepted Images') group by
dateonly order by dateonly asc limit 100;

message, dateonly, count(message)
"Accepted Images",2007-08-07, 79
"Accepted Images",2007-08-08,52
```
Your query above is wrong: you may get an arbitrary `message` value, because your GROUP BY contains only `dateonly`. Also note that `COUNT` counts every non-NULL value (including the `else 0` branch), so use `SUM` over the CASE expression instead:

```
select date(`datetime`) as dateonly,
sum(case when message = 'Accepted Images' then 1 else 0 end) as accepted_image_count,
sum(case when message = 'Rejected Images' then 1 else 0 end) as rejected_image_count
from
customer_1.audit_trail where message in ('Accepted Images','Rejected Images') group by
dateonly order by dateonly asc limit 100;
```
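As a quick sanity check of the conditional-aggregation idea (a sketch with Python's sqlite3 and made-up rows, since the real MySQL table isn't available; note the SUM over the CASE expression, because a plain COUNT counts any non-NULL value, including 0):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE audit_trail (message TEXT, datetime TEXT)")
con.executemany("INSERT INTO audit_trail VALUES (?, ?)", [
    ('Accepted Images', '2007-08-07'),
    ('Accepted Images', '2007-08-07'),
    ('Rejected Images', '2007-08-07'),
])
row = con.execute("""
    SELECT date(datetime) AS dateonly,
           SUM(CASE WHEN message = 'Accepted Images' THEN 1 ELSE 0 END) AS accepted,
           SUM(CASE WHEN message = 'Rejected Images' THEN 1 ELSE 0 END) AS rejected
    FROM audit_trail
    GROUP BY dateonly
""").fetchone()
print(row)  # ('2007-08-07', 2, 1)
```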
If it doesn't need to be in a separate column (which is not really clear from your description), then you can do this:

```
select message, date(datetime) as dateonly, count(message) 
from customer_1.audit_trail 
group by message, dateonly 
order by dateonly asc limit 100;
```

Otherwise, you'll have to pivot the message column.
how can I retrieve data from single table using group by clause
[ "", "mysql", "sql", "group-by", "" ]
I'm just trying to append new tweets that come in to a new line in a file... So far nothing I'm trying works on OS X Python.

```
class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        print status.text
        with open("myNewFile", "a") as file:
        file.write('\n')
        file.write("\n" + status.text + "\n")
        file.write('\n')
```

Any ideas?
You have an issue with indentation:

```
with open("myNewFile", "a") as file:
    file.write('\n')
    file.write("\n" + status.text + "\n")
    file.write('\n')
```

If you want to be inside the `with` context, you should indent the following three lines to the right. Further, you can use `format()` to prepare the string you want to write, for efficiency and readability:

```
import os

with open("myNewFile", "a") as file:
    file.write('{0}{0} {1} {0}{0}'.format(os.linesep, status.text))
    #file.write('\n')
    #file.write("\n" + status.text + "\n")
    #file.write('\n')
```

Note the `os.linesep` to insert an OS-independent new line :). You can also write two `linesep` by repeating them twice (multiply the string by 2):

```
file.write('{0} {1} {0}'.format(os.linesep * 2, status.text))
```

Which is cleaner.
Your indentation is wrong in your `with` statement

```
class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        print status.text
        with open("myNewFile", "a") as file:
            file.write('\n') #move this over 1 indentation
            file.write("\n" + status.text + "\n") #move this over 1 indentation
            file.write('\n') #move this over 1 indentation
```

Also try `'\r\n'` instead of just `'\n'` because UNIX handles newlines differently than Windows.

Another option is to open the file with [universal newline support](http://docs.python.org/2/library/functions.html#open) like this

```
with open("myNewFile", "U") as file:
```

Note that "U" mode is deprecated in 3.x since it is the default

**edit 2** It seems that your newline characters are showing up in the output. See [this related question](https://stackoverflow.com/questions/10420337/new-line-and-tab-characters-in-python-on-mac)
How to write to a new line every time in python?
[ "", "python", "io", "" ]
I have the following directory structure

```
outdir
 |--lib
 |--- __init__.py
 |--- abc.py
 |--indir
 |--- __init__.py
 |---- import_abc.py
```

How do I import `lib` in `import_abc.py`? When I try to import lib in `import_abc.py` I get the following error:

```
Traceback (most recent call last):
  File "import_abc.py", line 1, in <module>
    import lib
ImportError: No module named lib
```
Add a `__init__.py` file in `outdir` and then do: ``` #import_abc.py import sys sys.path.append('/home/monty/py') #path to directory that contains outdir from outdir.lib import abc ```
Take a minute to look at what you're trying to achieve: you want to import the module abc.py which is a part of the package lib, so in order to import it correctly you need to specify in which package it is: ``` from lib import abc ``` or ``` import lib.abc as my_import ``` On a side note, have a look at [the Python Tutorial chapter on modules](http://docs.python.org/2/tutorial/modules.html). Given what @Noelkd commented, I forgot to tell you about PYTHONPATH which is an environment variable that contains the folder where Python will look for the module (like the Java CLASSPATH). You need to put your root folder inside the PYTHONPATH to avoid doing some fiddling with sys.path.append. In Bash: ``` export PYTHONPATH=<abs. path to outdir> ``` For example, I've put outdir in my Desktop: ``` export PYTHONPATH=~/Desktop/outdir ``` And the import works like a charm. You can find some great explanations of the import mechanism and PYTHONPATH in [this blog post](http://www.stereoplex.com/blog/understanding-imports-and-pythonpath). N.B.: If you're using an IDE (PyDev for example), generally it will set up the PYTHONPATH automatically for each project. But if you want to do it all by yourself, you will need to set the PYTHONPATH environment variable.
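A self-contained way to try the `sys.path` approach without touching the real project (a sketch; the module is named `abc_mod` here because `abc` would shadow a standard-library module):

```
import importlib
import os
import sys
import tempfile

# Recreate the outdir/lib layout in a throwaway directory.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'lib'))
open(os.path.join(root, 'lib', '__init__.py'), 'w').close()
with open(os.path.join(root, 'lib', 'abc_mod.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.insert(0, root)          # what setting PYTHONPATH would do for you
mod = importlib.import_module('lib.abc_mod')
print(mod.VALUE)  # 42
```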
Python : how to import module in other module
[ "", "python", "import", "packaging", "" ]
I'm trying to figure out how to split a string, keeping the delimiters, except when the delimiter is followed by a space. I seem to be most of the way there, except that the character immediately following the delimiter is retained with the delimiter. What I have so far is the following: ``` >>> s='\nm222 some stuff \n more stuff' >>> re.split('(\n[^ ])',s) ['', '\nm', '222 some stuff \n more stuff'] ``` The result i need is ``` ['', '\n', 'm222 some stuff \n more stuff'] ``` What am I missing here? Thanks for the help.
Use a negative lookahead: ``` >>> s='\nm222 some stuff \n more stuff' >>> re.split(r'(\n(?! ))', s) ['', '\n', 'm222 some stuff \n more stuff'] ``` Your code, ``` re.split('(\n[^ ])',s) ``` Doesn't work because `(\n[^ ])` puts the "not a space" character in the same capturing group as `\n`, giving you `\nm`. `(\n(?! ))` avoids consuming the "not a space" character, placing it in the next capturing group but still using it to split. You can read more about lookaheads on the [python regex documentation page](http://docs.python.org/2/library/re.html).
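Running both patterns side by side makes the difference concrete: the character class consumes the `m`, while the lookahead does not:

```
import re

s = '\nm222 some stuff \n more stuff'

# Character class: the non-space character is pulled into the delimiter group.
print(re.split('(\n[^ ])', s))    # ['', '\nm', '222 some stuff \n more stuff']

# Negative lookahead: only the newline itself is the delimiter.
print(re.split(r'(\n(?! ))', s))  # ['', '\n', 'm222 some stuff \n more stuff']
```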
Use `\n(?! )`. This is a [negative lookahead](http://www.regular-expressions.info/lookaround.html) This will ensure the `\n` is not followed by a space --- If you wanted, you could even use `\n(?!\s)`. `\s` includes a variety of whitespace characters like * `' '` (a single space) * `\t` (tab) * `\n` (newline) * `\r` (carriage return)
Split on delimiter except when followed by space
[ "", "python", "regex", "" ]
I have two tables in my Access-database. They look something like this: ``` Table1 +--------------+----------+----------+----------+ | Kabelnummer | Column1 | Column2 | Column3 | +--------------+----------+----------+----------+ | 1 | x | x | x | +--------------+----------+----------+----------+ | 2 | x | x | x | +--------------+----------+----------+----------+ | 3 | x | x | x | +--------------+----------+----------+----------+ | 4 | x | x | x | +--------------+----------+----------+----------+ table2 +--------------+----------+----------+----------+ | Kabelnummer | Column1 | Column2 | Column3 | +--------------+----------+----------+----------+ | 1 | x | x | x | +--------------+----------+----------+----------+ | 2 | x | x | x | +--------------+----------+----------+----------+ | 3 | x | x | x | +--------------+----------+----------+----------+ | 4 | x | x | x | +--------------+----------+----------+----------+ ``` I need a query that gives me 1 table with the data from table1 added to the data from table2: ``` TableTotal +--------------+----------+----------+----------+ | Kabelnummer | Column1 | Column2 | Column3 | +--------------+----------+----------+----------+ | 1 | x | x | x | +--------------+----------+----------+----------+ | 2 | x | x | x | +--------------+----------+----------+----------+ | 3 | x | x | x | +--------------+----------+----------+----------+ | 4 | x | x | x | +--------------+----------+----------+----------+ | 1 | x | x | x | +--------------+----------+----------+----------+ | 2 | x | x | x | +--------------+----------+----------+----------+ | 3 | x | x | x | +--------------+----------+----------+----------+ | 4 | x | x | x | +--------------+----------+----------+----------+ ``` The names "Column1", "Column2" and "Column3" are the same in both tables
``` SELECT * FROM Table1 UNION SELECT * FROM table2; ```
The question asks for non-distinct values, while the current answers provide distinct values. The method below keeps non-distinct values:

```
SELECT *
FROM Table1

UNION ALL

SELECT *
FROM table2;
```

This is also often more efficient than the `UNION` method, particularly with large data sets (it does not have to compute the distinct rows).
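The distinct-vs-non-distinct difference is easy to check with an in-memory SQLite database (a sketch with hypothetical two-row tables standing in for the Access ones):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE Table1 (Kabelnummer INT, Column1 TEXT);
CREATE TABLE Table2 (Kabelnummer INT, Column1 TEXT);
INSERT INTO Table1 VALUES (1, 'x'), (2, 'x');
INSERT INTO Table2 VALUES (1, 'x'), (2, 'x');
""")
union_rows = con.execute(
    "SELECT * FROM Table1 UNION SELECT * FROM Table2").fetchall()
union_all_rows = con.execute(
    "SELECT * FROM Table1 UNION ALL SELECT * FROM Table2").fetchall()
print(len(union_rows), len(union_all_rows))  # 2 4  (UNION removed the duplicates)
```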
MS-Access: Merge two tables "below" each other
[ "", "sql", "ms-access", "ms-access-2007", "" ]
I am trying to generate a list of 7 letter words in python that satisfy the following conditions:

1. Words are fixed length (7 letters)
2. Words are only uppercase (I'm using ascii_uppercase)
3. Words are of the form ?D?R?T?, where each ? acts as a placeholder for a letter.

Hence the following are valid examples:

* ADPRETE
* BDFRUTW
* JDQRQTA
* ZDZRZTZ
* QDQRQTQ

I'm using the following piece of code, but wondering how to generate words meeting the 3rd criterion. Any help/pointers would be awesome!

```
import string
from itertools import product

for n in range(7, 8):
    for arr in product(string.ascii_uppercase, repeat=n):
        print ''.join(arr)
```
Generic solution. Just create a mask, and it will do the rest for you :) ``` from string import ascii_uppercase from itertools import product def gen_words(mask): replace = mask.count('?') mask = mask.replace('?', '{}') for letters in product(ascii_uppercase, repeat=replace): yield mask.format(*letters) ``` Example: ``` >>> list(gen_words('?Z')) ['AZ', 'BZ', 'CZ', 'DZ', 'EZ', 'FZ', 'GZ', 'HZ', 'IZ', 'JZ', 'KZ', 'LZ', 'MZ', 'NZ', 'OZ', 'PZ', 'QZ', 'RZ', 'SZ', 'TZ', 'UZ', 'VZ', 'WZ', 'XZ', 'YZ', 'ZZ'] ```
``` import string import itertools as IT for arr in IT.product(string.ascii_uppercase, repeat=4): print ''.join('{}D{}R{}T{}'.format(*arr)) ```
Generate list of 7 letter words meeting specific conditions - Python
[ "", "python", "list", "words", "alphabet", "" ]
``` UPDATE base_resources SET value = value + ? WHERE resource_id = ? AND base_id = ? ``` I have this query. Is it possible to somehow add a max value for the the value? i.e. The new value of "value" cannot be higher than x, if it is, set it to x. For example: If the value entered is (1500) and the max value allowed (that conditional I want to add in my query is) 1000, the value will be 1000 instead. How can I do that?
Something like this:

```
UPDATE base_resources 
SET value = least(value + ?, 1000)
WHERE resource_id = ? AND base_id = ?
```
Maybe `case` syntax? ``` UPDATE base_resources SET value = case when value + ? > X then X else value + ? end WHERE resource_id = ? AND base_id = ? ```
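Both answers implement the same capping rule; as a plain function (cap of 1000 taken from the question's example) it is just:

```
def capped_add(value, delta, cap=1000):
    """Add delta to value, but never exceed cap."""
    return min(value + delta, cap)

print(capped_add(800, 1500))  # 1000 -- clamped to the maximum
print(capped_add(100, 200))   # 300  -- under the cap, normal addition
```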
Updating a double value, but not higher than x value?
[ "", "mysql", "sql", "" ]
I'm making a density plot with matplotlib and I would also like to get a rug plot under it. A good example of making a density plot is here [How to create a density plot in matplotlib?](https://stackoverflow.com/questions/4150171/how-to-create-a-density-plot-in-matplotlib), but I couldn't find any good example for a rug plot. In R it can be done easily with rug(data).
You can find an example [here](https://gist.github.com/rgommers/1534517)!

```
ax = fig.add_subplot(111)

ax.plot(x1, np.zeros(x1.shape), 'b+', ms=20)  # rug plot
x_eval = np.linspace(-10, 10, num=200)
ax.plot(x_eval, kde1(x_eval), 'k-', label="Scott's Rule")
ax.plot(x_eval, kde2(x_eval), 'r-', label="Silverman's Rule")
```

Seems to be the core of it!
You can plot markers at each datapoint. ``` from scipy import stats import numpy as np import matplotlib.pyplot as plt sample = np.hstack((np.random.randn(30), np.random.randn(20)+5)) density = stats.kde.gaussian_kde(sample) fig, ax = plt.subplots(figsize=(8,4)) x = np.arange(-6,12,0.1) ax.plot(x, density(x)) ax.plot(sample, [0.01]*len(sample), '|', color='k') ``` ![enter image description here](https://i.stack.imgur.com/rFHKM.png)
how to make rug plot in matplotlib
[ "", "python", "matplotlib", "plot", "" ]
I get an error message when I run the following query in `MSSQL Server 2005`. The error message is `Incorrect syntax near ','`. I think the query is OK, but I don't know why I get the error.

```
INSERT INTO PERSON (ID, EMP_NAME)
VALUES  ('E001', 'AAA'),
        ('E002', 'BBB');
```

Does SQL Server not support this syntax?
If your DB is lower than `SQL Server 2008` ``` INSERT INTO PERSON (ID, EMP_NAME) VALUES ('E001', 'AAA'); INSERT INTO PERSON (ID, EMP_NAME) VALUES ('E002', 'BBB'); ```
Try to use `UNION ALL` - ``` INSERT INTO Person (id, EMP_NAME) SELECT id = 'E001', EMP_NAME = 'AAA' UNION ALL SELECT 'E002', 'BBB' ```
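The multi-row `VALUES` form itself is easy to verify on an engine that supports it (sketched with SQLite, which accepts it from 3.7.11 on; SQL Server needs 2008 or later):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE PERSON (ID TEXT, EMP_NAME TEXT)")
con.execute("INSERT INTO PERSON (ID, EMP_NAME) VALUES ('E001', 'AAA'), ('E002', 'BBB')")
rows = con.execute("SELECT ID, EMP_NAME FROM PERSON ORDER BY ID").fetchall()
print(rows)  # [('E001', 'AAA'), ('E002', 'BBB')]
```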
Insert SQL for multiple record
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I tried to execute the following SQL in MS Access. Basically `TAB3` is used as a translate table:

```
SELECT * 
FROM TAB1 T1 
INNER JOIN TAB2 T2 
ON T1.MemNo = T2.MemID 
AND (T1.SID = (SELECT x.Col1 FROM TAB3 x WHERE x.Col2 = T2.SVID))
```

But it gives me a syntax error. What could be the possible issue?

Updated with sample data:

```
TAB1
MemNo   SID
116537  S110
116537  D011
575788  D012
214438  S110
434675  D114
214438  D011
208368  D012
208368  S110

TAB2
MemID   SVID
116537  110
116537  11
214438  11
434675  114
214438  110
575788  12
208368  12
208368  110

TAB3
Col1    Col2
D011    11
S110    110
D114    114
D012    12
```

Thanks
Why not move the subquery to the WHERE statement of your query? ``` SELECT * FROM TAB1 T1 INNER JOIN TAB2 T2 ON T1.MemNo = T2.MemID WHERE EXISTS ( SELECT 1 FROM TAB3 x WHERE x.Col2 = T2.SVID AND x.Col1 = T1.SID ) ``` Try the following with LEFT JOIN: ``` SELECT * FROM TAB1 T1 LEFT JOIN TAB2 T2 ON T1.MemNo = T2.MemID WHERE EXISTS ( SELECT 1 FROM TAB3 x WHERE x.Col2 = COALESCE(T2.SVID, x.Col2) AND x.Col1 = T1.SID ) ```
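The `EXISTS` rewrite can be checked against a subset of the sample data from the question (sketched with SQLite; Access syntax differs slightly but the logic is the same):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE TAB1 (MemNo INT, SID  TEXT);
CREATE TABLE TAB2 (MemID INT, SVID INT);
CREATE TABLE TAB3 (Col1  TEXT, Col2 INT);
INSERT INTO TAB1 VALUES (116537, 'S110'), (116537, 'D011'), (575788, 'D012');
INSERT INTO TAB2 VALUES (116537, 110), (116537, 11), (575788, 12);
INSERT INTO TAB3 VALUES ('D011', 11), ('S110', 110), ('D012', 12);
""")
rows = con.execute("""
    SELECT T1.MemNo, T1.SID, T2.SVID
    FROM TAB1 T1 INNER JOIN TAB2 T2 ON T1.MemNo = T2.MemID
    WHERE EXISTS (SELECT 1 FROM TAB3 x
                  WHERE x.Col2 = T2.SVID AND x.Col1 = T1.SID)
    ORDER BY T1.MemNo, T1.SID
""").fetchall()
print(rows)  # [(116537, 'D011', 11), (116537, 'S110', 110), (575788, 'D012', 12)]
```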
Try with exists: ``` SELECT * FROM TAB1 T1 INNER JOIN TAB2 T2 ON T1.MemNo = T2.MemID WHERE EXISTS (SELECT * FROM TAB3 x WHERE x.Col2 = T2.SVID AND x.Col1 = T1.SID) ```
Is it possible to have sub query filter in JOIN
[ "", "sql", "ms-access-2007", "" ]
I tried to find the available methods but couldn't find one. There is no `contains`. Should I use `index`? I just want to know whether the item exists; I don't need its index.
You use `in`. ``` if element in thetuple: #whatever you want to do. ```
```
if "word" in str(tuple): # You can convert the tuple to str too
```

I had the same problem, and it only worked for me after converting the tuple with str().
How to check if a tuple contains an element in Python?
[ "", "python", "collections", "tuples", "" ]
How can I return a random key value from this list of tuples? I'm only concerned with returning 'r', 'p', or 's' from moves.

```
# Snippet    
moves = [('r', "rock"), ('p', "paper"), ('s', "scissors")]

def view_all(moves):
    print "Player moves:"
    for move in moves:
        print " => ".join((move[0], move[1]))
```
Using [`random.choice`](http://docs.python.org/3/library/random.html#random.choice). ``` >>> import random >>> moves = [('r', "rock"), ('p', "paper"), ('s', "scissors")] >>> random.choice(moves) ('s', 'scissors') ``` If only the first value of the tuple is wanted: ``` random.choice(moves)[0] ```
Use [`random.choice`](http://docs.python.org/2/library/random.html#random.choice). ``` >>> import random >>> moves = [('r', "rock"), ('p', "paper"), ('s', "scissors")] >>> print random.choice(moves)[0] 's' ```
Return random value from list tuple
[ "", "python", "" ]
I'm modifying [this script](https://stackoverflow.com/questions/257409/download-image-file-from-the-html-page-source-using-python/258511#258511) to scrape pages [like this](http://bookre.org/reader?file=1077091&pg=1) for the book page images. Using the script directly from Stack Overflow, it returns all the images correctly except the one image I want. That image is saved as an empty file with a name like this: img.php?dir=39d761947ad84e71e51e3c300f7af8ff&file=1.png. In my modified version below I'm only pulling the book page image. Here's my script:

```
from bs4 import BeautifulSoup as bs
import urlparse
from urllib2 import urlopen
from urllib import urlretrieve
import os
import sys

out_folder = '/Users/Craig/Desktop/img'

def main(url, out_folder):
    soup = bs(urlopen(url))
    parsed = list(urlparse.urlparse(url))

    for image in soup.findAll('img', id='page_image'):
        print "Image: %(src)s" % image
        filename = image["src"].split("/")[-1]
        parsed[2] = image["src"]
        outpath = os.path.join(out_folder, filename)
        if image["src"].lower().startswith("http"):
            urlretrieve(image["src"], outpath)
        else:
            urlretrieve(urlparse.urlunparse(parsed), outpath)

def _usage():
    print "usage: python dumpimages.py http://example.com [outpath]"

if __name__ == "__main__":
    url = sys.argv[-1]
    if not url.lower().startswith("http"):
        out_folder = sys.argv[-1]
        url = sys.argv[-2]
        if not url.lower().startswith("http"):
            _usage()
            sys.exit(-1)
    main(url, out_folder)
```

Any ideas?
The issue here is that the url you are using to retrieve the image is: `http://bookre.org/loader/img.php?dir=39d761947ad84e71e51e3c300f7af8ff&file=1.png?file=1077091&pg=1` When you actually want it to be: `http://bookre.org/loader/img.php?dir=39d761947ad84e71e51e3c300f7af8ff&file=1.png` Here's something I hacked together in 2 minutes to download the image you required from the website you listed: ``` import urllib import urllib2 import urlparse from bs4 import BeautifulSoup def main(url): html = urllib2.urlopen(url) soup = BeautifulSoup(html.read()) parsed = list(urlparse.urlparse(url)) for image in soup.find_all(id="page_image"): if image["src"].lower().startswith("http"): urllib.urlretrieve(image["src"], "image.png") else: new = (parsed[0], parsed[1], image["src"], "", "", "") urllib.urlretrieve(urlparse.urlunparse(new), "image.png") if __name__ == '__main__': main("http://bookre.org/reader?file=1077091&pg=1") ``` The script saves the image as `"image.png"` in the directory the script is located in. Hope this is what you were after; let us know if you run into any difficulties.
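An alternative to splicing `urlparse` components by hand (not what either answer does, just a suggestion) is to let `urljoin` resolve the relative `src` against the page URL:

```
try:
    from urllib.parse import urljoin   # Python 3
except ImportError:
    from urlparse import urljoin       # Python 2

page = 'http://bookre.org/reader?file=1077091&pg=1'
src = 'img.php?dir=39d761947ad84e71e51e3c300f7af8ff&file=1.png'
print(urljoin(page, src))
# http://bookre.org/img.php?dir=39d761947ad84e71e51e3c300f7af8ff&file=1.png
```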
In your: ``` else: urlretrieve(urlparse.urlunparse(parsed), outpath) ``` You need to replace some of the elements in parsed with those from image["src"]
Scraping a page for images but files are returned as empty
[ "", "python", "parsing", "scripting", "web-scraping", "" ]
I have a SQL query that brings back 2 columns of data, both of type TEXT. What I am trying to do is:

```
UPDATE [DBNAME] SET [3 15] = SUBSTR([3 15], -1) WHERE [3 15] LIKE '%;'
```

where [3 15] is the column name. I would like to pull the data from that column which ends in a ';' and then remove the trailing ';'. This would be easy if the column type were a string type, but it's not.

Running on: Microsoft SQL Server 2005 - Developer Edition
I would change that text field to varchar(max) as soon as possible. However here is a solution that should work: ``` declare @a table([3 15] text) insert @a values('aba;;;') insert @a values('123abc;;') insert @a values('abdc;') insert @a values('abkjfshc') insert @a values(';;;;') ;with a as ( SELECT [3 15], cast([3 15] as varchar(max)) v -- replace @a with your actual tablename FROM @a ) UPDATE a SET [3 15] = left(v, len(v)-patindex('%;[^;]%', reverse(v) + '+') + 1) WHERE [3 15] LIKE '%;;' -- notice the change select * from @a ``` Result: ``` 3 15 aba; 123abc; abdc; abkjfshc ; ``` EDIT: To replace all combinations of '%;;%' within the text: ``` declare @a table([3 15] text) insert @a values('aba;;;') insert @a values('123abc;;') insert @a values('abdc;') insert @a values('ab;;kjf;;;;;shc') insert @a values(';;;;') -- if any combinations of ';;' still exists repeat until they are gone while exists (select 1 from @a where [3 15] like '%;;%') with a as ( SELECT [3 15], cast([3 15] as varchar(max)) v -- replace @a with your actual tablename FROM @a WHERE [3 15] LIKE '%;;%' -- notice the change ) UPDATE a -- to speed things up i used several replace SET [3 15] = replace(replace(replace(replace(v, ';;;;', ';'), ';;;', ';'), ';;;', ';'), ';;', ';') ```
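For reference, the string-level transformation the UPDATE above performs (collapsing any run of trailing semicolons down to one) is compact in Python; a sketch over the same hypothetical sample values:

```
import re

def trim_trailing_semis(s):
    # Two or more ';' at the end of the string become a single ';'.
    return re.sub(r';{2,}$', ';', s)

samples = ['aba;;;', '123abc;;', 'abdc;', 'abkjfshc', ';;;;']
print([trim_trailing_semis(s) for s in samples])
# ['aba;', '123abc;', 'abdc;', 'abkjfshc', ';']
```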
Hope this SQL will help you:

```
SELECT LEFT('CHAMLY', LENGTH('CHAMLY') - 1);
```

For further reading, see <http://dev.mysql.com/doc/refman/5.0/en/string-functions.html>
SQL Substring - Remove Last Character(s)
[ "", "sql", "sql-server-2005", "text", "sql-update", "substr", "" ]
I have a file of the form: ``` car1 auto1 automobile1 machine4 motorcar1 bridge1 span5 road1 route2 ``` But I want to remove the integers so that my file looks like: ``` car auto automobile machine motorcar bridge span road route ``` I am trying to read the file character by character, and if a character is a digit, skip it. But I am printing them in a new file. How can I make changes in the input file itself?
Using [regular expressions](http://www.regular-expressions.info/tutorial.html): ``` import re import fileinput for line in fileinput.input("your_file.txt", inplace=True): print re.sub("\d+", "", line), ``` note: fileinput is a nice module for working with files. Edit: for better performance/less flexibility you can use: ``` import fileinput import string for line in fileinput.input("your_file.txt", inplace=True): print line.translate(None, string.digits), ``` For multiple edits/replaces: ``` import fileinput import re for line in fileinput.input("your_file.txt", inplace=True): #remove digits result = ''.join(i for i in line if not i.isdigit()) #remove dollar signs result = result.replace("$","") #some other regex, removes all y's result = re.sub("[Yy]+", "", result) print result, ```
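Note that `line.translate(None, string.digits)` is Python 2 only; in Python 3 the same stripping needs a translation table (a sketch on the question's sample words):

```
import string

table = str.maketrans('', '', string.digits)
words = ['car1', 'auto1', 'automobile1', 'machine4', 'motorcar1']
print([w.translate(table) for w in words])
# ['car', 'auto', 'automobile', 'machine', 'motorcar']
```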
``` with open('input.txt', 'r') as f1, open('output.txt', 'w') as f2: f2.write("".join([c for c in f1.read() if not c.isdigit()])) ```
Removing digits from a file
[ "", "python", "" ]
I tried to create a trigger with `OLD.num` and `NEW.num` as shown below: ``` CREATE TRIGGER my_trigger AFTER INSERT ON test FOR EACH ROW SET @old_num = OLD.num, @new_num = NEW.num; -- ↑ Here -- ↑ Here ``` But, I got the error below: > ERROR 1363 (HY000): There is no OLD row in on INSERT trigger So, how can I solve it?
The most likely explanation for getting an error ``` "There is no OLD row in on INSERT trigger" ``` is that you are executing a statement that's creating an `AFTER INSERT` trigger, rather than creating an `AFTER UPDATE` trigger. The reason that you can't reference OLD values from the row, as the row existed prior to the INSERT, is that the row did not exist prior to the INSERT.
In an [`UPDATE TRIGGER`](http://dev.mysql.com/doc/refman/5.0/en/trigger-syntax.html), you can use the `OLD` keyword to access the row data which is being replaced by the update. The `NEW` keyword allows accessing the incoming row data which will replace the old row, if successful. An example of an `UPDATE` trigger is: ``` CREATE TRIGGER upd_check AFTER UPDATE ON SomeTable FOR EACH ROW BEGIN IF (OLD.LastChangedBy <> NEW.LastChangedBy) THEN INSERT INTO AuditSomeTable(ID, LastChangedBy) VALUES (OLD.ID, OLD.LastChangedBy); END IF; END; ``` [SQLFiddle](http://sqlfiddle.com/#!2/122e9) here Depending on the type of trigger created, the `OLD` and `NEW` rows may not be available to you: *INSERT TRIGGER* * Access to the `NEW` pseudo rows only. *UPDATE TRIGGER* * Access to the `NEW` and `OLD` pseudo rows *DELETE TRIGGER* * Access only to the `OLD` pseudo rows i.e. there is no `OLD` row on an `INSERT` trigger, and no `NEW` row on a `DELETE` trigger. **OP's Question** OP hasn't provided the actual code, and the error message referred to in the comments: > There is no OLD row in on INSERT trigger indicates that the OP had inadvertently created an `INSERT TRIGGER` and not an `UPDATE TRIGGER` as was indicated in the question. An `INSERT` trigger has no `OLD` pseudo table.
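SQLite supports the same OLD/NEW pseudo rows, so the audit-trigger idea can be exercised end to end in a few lines (a sketch; syntax details differ from MySQL):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE SomeTable (ID INT, LastChangedBy TEXT);
CREATE TABLE AuditSomeTable (ID INT, LastChangedBy TEXT);
CREATE TRIGGER upd_check AFTER UPDATE ON SomeTable
FOR EACH ROW WHEN OLD.LastChangedBy <> NEW.LastChangedBy
BEGIN
    INSERT INTO AuditSomeTable VALUES (OLD.ID, OLD.LastChangedBy);
END;
INSERT INTO SomeTable VALUES (1, 'alice');
UPDATE SomeTable SET LastChangedBy = 'bob' WHERE ID = 1;
""")
print(con.execute("SELECT * FROM AuditSomeTable").fetchall())  # [(1, 'alice')]
```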
"There is no OLD row in on INSERT trigger" error in MySQL
[ "", "sql", "mysql", "database", "triggers", "mysql-error", "" ]
I know parts of this question have been asked before, but I have some related questions. I'm trying to execute

```
mysqldump -u uname -ppassword --add-drop-database --databases databaseName | gzip > fileName
```

I'm potentially dumping a very large (200GB?) db. Is that in itself a dumb thing to do? I then want to send the zipped file over the network for storage, delete the local dump, and purge a couple of tables.

Anyway, I was using subprocess like this, because there doesn't seem to be a way to execute the entire original call without subprocess considering `|` to be a table name:

```
from subprocess import Popen, PIPE
f = open(FILENAME, 'wb')
args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD', '--add-drop-database', '--databases', 'DB']
p1 = Popen(args, stdout=PIPE)
p2 = Popen('gzip', stdin=p1.stdout, stdout=f)
p2.communicate()
```

but then I read that communicate caches the data in memory, which wouldn't work for me. Is this true?

What I ended up doing for now is:

```
import gzip
subprocess.call(args, stdout=f)
f.close()

f = open(filename, 'rb')
zipFilename = filename + '.gz'
f2 = gzip.open(zipFilename, 'wb')
f2.writelines(f)
f2.close()
f.close()
```

Of course this takes a million years, and I hate it.

My Questions:

1. Can I use my first approach on a very large db?
2. Could I possibly pipe the output of mysqldump to a socket and fire it across the network and save it when it arrives, rather than sending a zipped file?

Thanks!
You don't need communicate(). Its only there as a convenience method if you want to read stdout/stderr to completion. But since you are chaining the commands, they are doing that for you. Just wait for them to complete.

```
from subprocess import Popen, PIPE

args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD', '--add-drop-database', '--databases', 'DB']

with open(FILENAME, 'wb', 0) as f:
    p1 = Popen(args, stdout=PIPE)
    p2 = Popen('gzip', stdin=p1.stdout, stdout=f)
    p1.stdout.close()  # force write error (/SIGPIPE) if p2 dies
    p2.wait()
    p1.wait()
```
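A variation on this (not what the answer above shows) is to do the compression on the Python side with the `gzip` module and stream the child's stdout to disk in fixed-size chunks, so the full dump is never held in memory at once. This is only a sketch — the `echo` command below is a stand-in for the real `mysqldump` invocation:

```python
import gzip
import shutil
import subprocess

# Stand-in for the real mysqldump command line; swap in the actual args.
args = ['echo', 'dump data']

proc = subprocess.Popen(args, stdout=subprocess.PIPE)
with gzip.open('dump.sql.gz', 'wb') as out:
    # copyfileobj reads and writes in chunks, so memory use stays flat
    # no matter how large the dump is.
    shutil.copyfileobj(proc.stdout, out)
proc.stdout.close()
proc.wait()
```

The same file object could just as easily be a socket wrapper, which is one way to approach the OP's second question.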
You are quite close to what you want:

```
from subprocess import Popen, PIPE

f = open(FILENAME, 'wb')
args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD', '--add-drop-database', '--databases', 'DB']

p1 = Popen(args, stdout=PIPE)
```

Till here it is right.

```
p2 = Popen('gzip', stdin=p1.stdout, stdout=PIPE)
```

This one takes `p1`'s output and processes it. Afterwards we can (and should) immediately `p1.stdout.close()`.

Now we have a `p2.stdout` which can be read from and, without using a temporary file, send it via the network:

```
s = socket.create_connection(('remote_pc', port))
while True:
    r = p2.stdout.read(65536)
    if not r:
        break
    s.send(r)
```
python subprocess and mysqldump
[ "", "python", "subprocess", "mysql", "" ]
I apologize for how simplistic this may be, but I am a little confused looking at one part of this code.

```
# Geek Translator
# Demonstrates using dictionaries

geek = {"404": "clueless. From the web error message 404, meaning page not found.",
        "Googling": "searching the Internet for background information on a person.",
        "Keyboard Plague": "the collection of debris found in computer keyboards.",
        "Link Rot" : "the process by which web page links become obsolete.",
        "Percussive Maintainance" : "the act of striking an electronic device to make it work.",
        "Uninstalled" : "being fired. Especially popular during the dot-bomb era."}

choice = None
while choice != "0":
    print(
    """
    Geek Translator

    0 - Quit
    1 - Look Up a Geek Term
    2 - Add a Geek Term
    3 - Redefine a Geek Term
    4 - Delete a Geek Term
    """
    )

    choice = input("Choice: ")
    print()

    # exit
    if choice == "0":
        print("Good-bye.")

    # get a definition
    elif choice == "1":
        term = input("What term do you want me to translate?: ")
        if term in geek:
            definition = geek[term]
            print("\n", term, "means", definition)
        else:
            print("\nSorry, I don't know", term)

    # add a term-definition pair
    elif choice == "2":
        term = input("What term do you want me to add?: ")
        if term not in geek:
            definition = input("\nWhat's the definition?: ")
            geek[term] = definition
            print("\n", term, "has been added.")
        else:
            print("\nThat term already exists! Try redefining it.")

    # redefining an existing term
    elif choice == "3":
        term = input("What term do you want me to redefine?: ")
        if term in geek:
            definition = input("What's the new definition?: ")
            geek[term] = definition
            print("\n", term, "has been redefined.")
        else:
            print("\nThat term doesn't exist! Try adding it.")

    # delete a term-definition pair
    elif choice == "4":
        input("What term do you want me to delete?")
        if term in geek:
            del geek[term]
            print("\nOkay, I deleted", term)
        else:
            print("\nI can't do that!", term, "doesn't exist in the dictionary.")

    # some unknown choice
    else:
        print("\nSorry, but", choice, "isn't a valid choice.")

input("\n\nPress the enter key to exit.")
```

I understand how all of this works with the exception of the print() function after `choice = input("Choice: ")`. Why is that there? If I remove it, nothing changes (as far as I can tell), so I was curious about its significance.
`print()` with no parameters prints a newline. The point is to show a blank line in the terminal output.
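A tiny demo (not from the original answer) that makes the effect visible — each bare `print()` call contributes one newline of its own:

```python
print("one")   # ends with its own newline
print()        # prints nothing but a newline -> blank line
print("two")

# terminal shows:
# one
#
# two
```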
It prints a new line (which is visible as an empty line in the console output).
Why is this print function here?
[ "", "python", "python-3.x", "" ]
I've been trying to add the `python2` path to the command line on Windows 7. I have tried to do it this way:

```
C:\>set python2 = C:\Python27\python.exe
```

But `cmd` told me that:

```
'python2' is not recognized as an internal or external command.
```

So how can I add the `python2` path to the command line on Windows 7? Thanks.
This answer copied from [my own answer](https://stackoverflow.com/a/17836439/2225682), and customized to this question.

Try the following command.

```
set path=%path%;c:\python27
```

`PATH` is set only for the cmd.exe in which you run the above command.

To permanently set PATH:

1. Right click **My computer** on the desktop.
2. Click **Advanced System Settings** on the left.
3. Click **Environment Variables**.
4. Add or update the `PATH` variable:
   * If it does not exist, create one, and set the value as `C:\python27`
   * If it exists, append `;C:\Python27` to the existing value.
5. Restart cmd.exe. (PATH will not change for already launched cmd.exe)

**UPDATE**

If you want to use **python2**, copy the executable.

```
CD C:\python27
COPY python.exe python2.exe
```
Had the same problem and fixed it...

I have `C:\Python27`, which I have added to my environment variables, which gave me access to "python" through the CMD. But for installing node-sass through npm, which was my problem, it continued to say that "python2" is not found.

A friend then told me I could just rename the executable, which I didn't believe, but it worked :-)

The rename was from: `C:\Python27\python.exe` > `C:\Python27\python2.exe`

It works for me, even though I find it weird just renaming a file... but then, I am not hardcore in the Windows CMD.
Add “python2” path to command line on Windows 7
[ "", "python", "windows", "windows-7", "path", "cmd", "" ]
I have this simple code which fetches page via `urllib`:

```
browser_list = ['Chrome','Mozilla','Safari','Internet Explorer','Opera']
user_string_url = "http://www.useragentstring.com/pages/"

for eachBrowser in browser_list:
    result = urllib2.urlopen(urljoin(user_string_url, eachBrowser))
```

Now I can read the result via `result.read()` but I was wondering if all this functionality can be done outside the `for` loop. Because other URLs to be fetched will wait until all the result has been processed. I want to process `result` outside the `for` loop. Can this be done?
One of the ways to do this may be to have `result` as a dictionary. What you can do is:

```
result = {}
for eachBrowser in browser_list:
    result[eachBrowser] = urllib2.urlopen(urljoin(user_string_url, eachBrowser))
```

and use `result[BrowserName]` outside the loop. Hope this helps.
If you simply want to access all results outside the loop, just append all results to an array or dictionary as in the answer above. Or if you are trying to speed up your task, try [multithreading](http://docs.python.org/library/threading.html).

```
import threading

class myThread(threading.Thread):
    def __init__(self, result):
        threading.Thread.__init__(self)
        self.result = result
    def run(self):
        pass  # process your result (as self.result) here

browser_list = ['Chrome','Mozilla','Safari','Internet Explorer','Opera']
user_string_url = "http://www.useragentstring.com/pages/"

for eachBrowser in browser_list:
    result = urllib2.urlopen(urljoin(user_string_url, eachBrowser))
    myThread(result).start()  # starts processing result on another thread and continues the loop without waiting
```

It's a simple way of multithreading. It may break depending on your result processing. Consider reading the [documentation](http://docs.python.org/library/threading.html) and some [examples](http://www.tutorialspoint.com/python/python_multithreading.htm) before you try.
Processing Result outside For Loop in Python
[ "", "python", "" ]
I am working on a project and stuck on a `sql` `query`. The query does not give any error nor return any result; tell me where the problem is.

```
SELECT barcode, Date, timein, timeout, totaltime, leave, remarks
FROM TimeSheet
WHERE barcode = @barcode
  AND Date LIKE '@year-07-%'
```

I am passing 2 values in variables at runtime, `@barcode` and `@year`, but if I run the query with values typed explicitly in the sql editor it works fine and returns values. If I run this

```
SELECT barcode, Date, timein, timeout, totaltime, leave, remarks
FROM TimeSheet
WHERE barcode = 123456
  AND Date LIKE '2013-07-%'
```

it returns values.
SQL Server does not expand the variable in `'@year-07-%'`. Assuming the `@year` parameter is a `varchar` and the `[date]` column is a `date`, you could try this instead:

```
where convert(varchar(10), [date], 120) like @year + '-07-%'
```

Or even better:

```
where datepart(year, [date]) = cast(@year as int)
  and datepart(month, [date]) = 7
```
The following will be efficient if there is an index on the DATE column:

```
SELECT barcode, Date, timein, timeout, totaltime, leave, remarks
FROM TimeSheet
WHERE barcode = @barcode
  AND Date >= dateadd(month, 6, dateadd(year, @year-1900, 0))
  AND Date <  dateadd(month, 7, dateadd(year, @year-1900, 0))
```
Wildcard sql date query
[ "", "asp.net", "sql", "sql-server", "" ]
I have a quick question regarding sorting rows in a csv file using Pandas. The csv file which I have has data that looks like:

```
quarter  week  Value
5        1     200
3        2     100
2        1     50
2        2     125
4        2     175
2        3     195
3        1     10
5        2     190
```

I need to sort in the following way: sort the quarter and the corresponding weeks. So the output should look like the following:

```
quarter  week  Value
2        1     50
2        2     125
2        3     195
3        1     10
3        2     100
4        2     175
5        1     200
5        2     190
```

My attempt:

```
df = df.sort('quarter', 'week')
```

But this does not produce the correct result. Any help/suggestions? Thanks!
> **Note**: `sort` has been deprecated in favour of [`sort_values`](http://pandas.pydata.org/pandas-docs/version/0.19/generated/pandas.DataFrame.sort_values.html#pandas.DataFrame.sort_values), which you should use in Pandas 0.17+.

Typing `help(df.sort)` gives:

```
sort(self, columns=None, column=None, axis=0, ascending=True, inplace=False)
    method of pandas.core.frame.DataFrame instance
    Sort DataFrame either by labels (along either axis) or by the values in column(s)

    Parameters
    ----------
    columns : object
        Column name(s) in frame. Accepts a column name or a list
        or tuple for a nested sort.

    [...]

    Examples
    --------
    >>> result = df.sort(['A', 'B'], ascending=[1, 0])

    [...]
```

and so you pass the columns you want to sort as a list:

```
>>> df
   quarter  week  Value
0        5     1    200
1        3     2    100
2        2     1     50
3        2     2    125
4        4     2    175
5        2     3    195
6        3     1     10
7        5     2    190
>>> df.sort(["quarter", "week"])
   quarter  week  Value
2        2     1     50
3        2     2    125
5        2     3    195
6        3     1     10
1        3     2    100
4        4     2    175
0        5     1    200
7        5     2    190
```
New answer, as of 14 March 2019:

```
df.sort_values(by=["COLUMN"], ascending=False)
```

This returns a new sorted data frame, doesn't update the original one.

Note: You can change the ascending parameter according to your needs, without passing it, it will default to `ascending=True`
Sorting rows in csv file using Python Pandas
[ "", "python", "csv", "pandas", "" ]
I have a list:

```
greeting = ['hello','my','name','is','bob','how','are','you']
```

I want to define a function that will find the first and last index of a sublist in this list. Thus:

```
find_sub_list(['my','name','is'], greeting)
```

should return:

```
1, 3
```

Suggestions?
If you want multiple matches, this works:

```
greeting = ['hello','my','name','is','bob','how','are','you','my','name','is']

def find_sub_list(sl, l):
    results = []
    sll = len(sl)
    for ind in (i for i, e in enumerate(l) if e == sl[0]):
        if l[ind:ind+sll] == sl:
            results.append((ind, ind+sll-1))
    return results

print find_sub_list(['my','name','is'], greeting)
# [(1, 3), (8, 10)]
```

Or if you just want the first match:

```
greeting = ['hello','my','name','is','bob','how','are','you','my','name','is']

def find_sub_list(sl, l):
    sll = len(sl)
    for ind in (i for i, e in enumerate(l) if e == sl[0]):
        if l[ind:ind+sll] == sl:
            return ind, ind+sll-1

print find_sub_list(['my','name','is'], greeting)
# (1, 3)
```
Slice the list:

```
>>> greeting[0:3]
['hello', 'my', 'name']
>>> greeting[1:4]
['my', 'name', 'is']
>>> greeting[1:4] == ['my','name','is']
True
```

This should get you started:

```
for n in range(len(greeting) - len(sub_list) + 1):
    ...
```
Find starting and ending indices of sublist in list
[ "", "python", "list", "search", "sublist", "" ]
I'm experimenting with [stdnet](http://lsbardel.github.io/python-stdnet/) and I'm having a challenge with what should be a relatively simple case. If I populate my model (see below the `<hr>`) without specifying the value for the primary key, I get:

> stdnet.utils.exceptions.FieldValueError: {"author_id": "Field 'author_id' is required for '__main__.book'."}

```
author1 = models[Author](name='Jeff Doyle')
```

However, adding a value for `id` makes the code work...

```
author1 = models[Author](name='Jeff Doyle', id=1)
```

`Author.id` is `odm.AutoIdField()`. Since this is a hierarchical data model, I might be able to understand the requirement to manually add `id = odm.AutoIdField()` to my model. However, the documentation says that the [odm.AutoIdField](http://lsbardel.github.io/python-stdnet/api/fields.html#stdnet.odm.AutoIdField) automatically generates the value for the primary keys.

My question: *Why do I need to specify a value for `id` manually when I populate my models?*

I'm running [stdnet](http://lsbardel.github.io/python-stdnet/) 0.8.2 w/ Cython, on Python 2.6.6 and Debian 6.0 (kernel 2.6.32-5-amd64).
---

**Working Example:**

```
from stdnet import odm

class Author(odm.StdModel):
    id = odm.AutoIdField(primary_key=True, unique=True)
    name = odm.SymbolField()

    def __unicode__(self):
        return self.name

class Book(odm.StdModel):
    id = odm.AutoIdField(primary_key=True, unique=True)
    title = odm.CharField()
    author = odm.ForeignKey(Author, related_name='books')

    def __unicode__(self):
        return "<Book '%s' by %s>" % (self.title, self.author)

if __name__ == '__main__':
    models = odm.Router('redis://localhost:6379?db=0')
    models.register(Author)
    models.register(Book)

    session = models.session()
    session.begin()

    author1 = models[Author](name='Jeff Doyle', id=1)
    session.add(author1)

    book1 = models[Book](title='Routing TCP/IP, Volume 1', id=2, author=author1)
    session.add(book1)

    session.commit()
```

---

**Traceback:**

```
Traceback (most recent call last):
  File "stdnet_example.py", line 31, in <module>
    session.commit()
  ...
stdnet.utils.exceptions.FieldValueError: {"author_id": "Field 'author_id' is required for '__main__.book'."}
```
I believe the problem is that you are trying to insert both the book and the author in the same session. So at the time of the commit, the author hasn't been saved to the database and thus doesn't have an id for the book to reference in the foreign key.

I think it should work if you committed the author before trying to add the book. Something like this:

```
session.begin()
author1 = models[Author](name='Jeff Doyle')
session.add(author1)
session.commit()

session.begin()
book1 = models[Book](title='Routing TCP/IP, Volume 1', author=author1)
session.add(book1)
session.commit()
```

I see there was an issue raised on GitHub by someone having a similar problem to you, and there is [a response in the comments](https://github.com/lsbardel/python-stdnet/issues/62#issuecomment-20230530) from the author of stdnet. I think he is basically saying the same thing as me, but you can read it yourself.
The best (most efficient) way is to code as follows:

```
with models.session().begin() as t1:
    for author in ['Leo Tolstoy', 'unknown ghostwriter']:
        t1.add(models.author(name=author))
    publisher = t1.add(models.publisher(name='Penguin Books'))
```

At this point, authors and publisher are saved; in addition, authors can be retrieved from the transaction `saved` attribute:

```
authors = t1.saved['author']
```

You can now create the books:

```
with models.session().begin() as t2:
    for title in ['Anna Karenina', 'War and Peace']:
        t2.add(models.book(name=title, publisher=publisher))
```

And now you can add the authors via the last transaction:

```
books = t2.saved['book']

with models.session().begin() as t3:
    for book in books:
        # add the book to the transaction
        t3.add(book)
        for author in authors:
            book.authors.push_back(author)
```

---

All the code above assumes you're using this Model definition:

```
import sys
from stdnet import odm

class Publisher(odm.StdModel):
    name = odm.SymbolField()

    def __unicode__(self):
        return self.name

class Author(odm.StdModel):
    name = odm.SymbolField()

    def __unicode__(self):
        return self.name

class Book(odm.StdModel):
    name = odm.CharField()
    authors = odm.ListField(model=Author)
    publisher = odm.ForeignKey(Publisher, related_name='publishers')

    def __unicode__(self):
        return "<Book '%s' by %s>" % (self.name, self.authors)
```
redis / stdnet data model traceback
[ "", "python", "redis", "nosql", "" ]
I've seen this idiom in [Dive Into Python 3](http://www.diveinto.org/python3/):

```
l = list(open('strcpy.c'))
```

The question is, how can I `close` the file? Is something happening behind the scenes? I couldn't find this information in the book.
The file will be closed when its object is garbage-collected. In CPython, this happens pretty much immediately after that line is executed, because the file is never assigned to a variable. In other Pythons, such as Jython or IronPython, this may not happen right away (or at all), though all open files are always closed when the process exits.

For this reason, a better approach is to close the file explicitly using 'with':

```
with open("strcpy.c") as infile:
    l = list(infile)
```

An advantage of this is that the file will be properly closed even if an exception occurs in reading it; you don't have to manually write code for this case using a `try/except` block.

A `with` statement can be written on one line if you want to stick with the concise one-liner. :-)

That said, I do sometimes use this idiom myself in short-running scripts where having the file open a wee bit longer than it strictly needs to be isn't a big deal. An advantage is that you don't clutter things up with a variable (`infile` in this case) pointing to a closed file.
From the [doc](http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files):

> It is good practice to use the with keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way.

You can use it like this:

```
with open('strcpy.c') as f:
    l = list(f)
```
Close a file when creating a list from open()
[ "", "python", "file", "list", "" ]
From time to time I would remove or replace a substring of one long string. Therefore, I would determine one start pattern and one end pattern which would determine the start and end point of the substring:

```
long_string = "lorem ipsum..white chevy..blah,blah...lot of text..beer bottle....and so to the end"
removed_substr_start = "white chevy"
removed_substr_end = "beer bott"

# this is pseudo method down
STRresult = long_string.replace( [from]removed_substr_start [to]removed_substr_end, "")
```
I guess you want something like that, without regex:

```
def replace_between(text, begin, end, alternative=''):
    middle = text.split(begin, 1)[1].split(end, 1)[0]
    return text.replace(middle, alternative)
```

Not tested, and you should protect the first line from exceptions (if begin or end is not found), but the idea is here :)
You can use `regex`:

```
>>> import re
>>> strs = "lorem ipsum..white chevy..blah,blah...lot of text..beer bottle....and so to the end"
>>> sub_start = "white chevy"
>>> sub_end = "beer bott"
>>> re.sub(r'{}.*?{}'.format(re.escape(sub_start), re.escape(sub_end)), '', strs)
'lorem ipsum..le....and so to the end'
```

If you only want to remove the sub-string between `"white chevy"` and `"beer bott"` but not these words:

```
>>> re.sub(r'({})(.*?)({})'.format(re.escape(sub_start), re.escape(sub_end)), r'\1\3', strs)
'lorem ipsum..white chevybeer bottle....and so to the end'
```
How to remove or replace substring in Python determined by start and end point?
[ "", "python", "regex", "string", "" ]
I've got a list which I want to print in random order. Here is the code I've written so far:

```
import random

words = ["python","java","constant","immutable"]

for i in words:
    print(i, end=" ")

input("")  # stops window closing
```

I've tried a variety of things, such as making a variable which selects only one of them randomly and then deleting the randomly selected one. I would then repeat this step until they are all deleted and within another variable. Then I would put the variables in a list then print them out. This kept on generating errors though. Is there another way this can be done?
Use [`random.shuffle()`](http://docs.python.org/3/library/random.html#random.shuffle) to shuffle a list, in-place:

```
import random

words = ["python", "java", "constant", "immutable"]
random.shuffle(words)

print(*words)
input('')
```

Demo:

```
>>> import random
>>> words = ["python", "java", "constant", "immutable"]
>>> random.shuffle(words)
>>> words
['python', 'java', 'constant', 'immutable']
```

If you wanted to preserve `words` (maintain the order), you can use `sorted()` with a random key to return a new randomized list:

```
words = ["python", "java", "constant", "immutable"]
print(*sorted(words, key=lambda k: random.random()))
```

This leaves `words` unaltered:

```
>>> words = ["python", "java", "constant", "immutable"]
>>> sorted(words, key=lambda k: random.random())
['immutable', 'java', 'constant', 'python']
>>> words
['python', 'java', 'constant', 'immutable']
```
Try this:

```
import random

words2 = words[::]
random.shuffle(words2)

for w in words2:
    print(w, end=" ")
```

Notice that I copied the original list first, in case you want to preserve it. If you don't mind shuffling it, this should do the trick:

```
import random

random.shuffle(words)

for w in words:
    print(w, end=" ")
```
How can a print elements in a list randomly in python?
[ "", "python", "python-3.x", "random", "" ]
How would I convert a string of namedtuples to a list? The problem is I have to store a list of namedtuples in a column in SQLite, which (obviously) doesn't support the format. I thought of just converting it into a string. However, since my tuple is a namedtuple, I don't know how to go from the string to a list again.

```
>>> Point = namedtuple("Point", "x y", verbose = False)
>>> p = Point(3, 5)
>>> points = []
>>> points.append(Point(4, 7))
>>> points.append(Point(8, 9))
>>> points.append(p)
>>> p.x
3
>>> print points
[Point(x=4, y=7), Point(x=8, y=9), Point(x=3, y=5)]
```

My list of named tuples is something like this^^^^, but it has 6 arguments instead of the 2 shown above. Edit - the arguments are booleans, ints, and strings.

I tried mapping, but I got the following error:

```
>>> string = str(points)
>>> l = string.strip("[]")
>>> p = map(Point._make, l.split(", "))

Traceback (most recent call last):
  File "<pyshell#31>", line 1, in <module>
    p = map(Point._make, l.split(", "))
  File "<string>", line 17, in _make
TypeError: Expected 2 arguments, got 9
```

I'm open to other simpler ways to do this.
Ultimately, how to do this may be a matter of taste.

# JSON

JSON can be a good choice because, unlike pickle, it is usable beyond python. Your object is serialized in a widely supported, easily repurposed format.

```
>>> import json  # simplejson is better but I didn't want to force an install
>>> from collections import namedtuple
>>> Point = namedtuple("Point", "x y", verbose = False)
>>> p = Point(3,4)
>>> json.dumps(p._asdict())
'{"x": 3, "y": 4}'
>>> s = json.dumps(p._asdict())
>>> json.loads(s)  # not there yet cause this is a dict
{u'y': 4, u'x': 3}
>>> # but it is a dict that can create a Point
>>> Point(**json.loads(s))
Point(x=3, y=4)
```

# Pickle

pickle will not work unless you define the pickled state (see `__getstate__` [in the docs](http://docs.python.org/2/library/pickle.html)). This is "nicer" in the load phase, following from above:

```
import pickle
# Point.__getstate__ = lambda self: self._asdict()  # not needed as per @simon's comment thx simon
>>> pickle.dumps(p)
"ccopy_reg\n_reconstructor\np0\n(c__main__\nPoint\np1\nc__builtin__\ntuple\np2\n(I3\nI4\ntp3\ntp4\nRp5\nccollections\nOrderedDict\np6\n((lp7\n(lp8\nS'x'\np9\naI3\naa(lp10\nS'y'\np11\naI4\naatp12\nRp13\nb."
>>> s = pickle.dumps(p)
>>> pickle.loads(s)
Point(x=3, y=4)
```

# eval

I would discourage any use of eval or exec. If you do go down that route, check out [`ast.literal_eval()`](http://docs.python.org/2/library/ast.html) and check out some of the related SO answers like [safety of python eval](https://stackoverflow.com/questions/1112665/safety-of-python-eval-for-list-deserialization)
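Extending the JSON route above to the question's actual shape — a whole *list* of namedtuples stored as one string — could look like this (a sketch; the `Point` definition is repeated here so the snippet stands alone):

```python
import json
from collections import namedtuple

Point = namedtuple("Point", "x y")
points = [Point(4, 7), Point(8, 9), Point(3, 5)]

# one string that could live in a single SQLite TEXT column
blob = json.dumps([p._asdict() for p in points])

# reconstruct the namedtuples when reading the column back
restored = [Point(**d) for d in json.loads(blob)]
assert restored == points
```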
Based on Phil Cooper's answer, you can store your objects in json format:

```
>>> import json
>>> points_str = json.dumps([x._asdict() for x in points])
[{"x": 4, "y": 7}, {"x": 8, "y": 9}, {"x": 1, "y": 2}]
>>> points2 = [Point(**x) for x in json.loads(points_str)]
[Point(x=4, y=7), Point(x=8, y=9), Point(x=1, y=2)]
```

Another strange way to do it is to use [exec](http://docs.python.org/2/reference/simple_stmts.html#exec):

```
>>> points_str = repr(points)
'[Point(x=4, y=7), Point(x=8, y=9), Point(x=1, y=2)]'
>>> exec "points2 = %s" % points
>>> points2
[Point(x=4, y=7), Point(x=8, y=9), Point(x=1, y=2)]
```
String of namedtuples to list
[ "", "python", "list", "sqlite", "type-conversion", "namedtuple", "" ]
I was trying to use subprocess calls to perform a copy operation (code below):

```
import subprocess
pr1 = subprocess.call(['cp','-r','./testdir1/*','./testdir2/'], shell = True)
```

and I got an error saying:

```
cp: missing file operand
Try `cp --help' for more information.
```

When I try with `shell=False`, I get

```
cp: cannot stat `./testdir1/*': No such file or directory
```

How do I get around this problem? I'm using RedHat Linux GNOME Desktop version 2.16.0, the bash shell, and Python 2.6.

P.S. I read the question posted in [Problems with issuing cp command with Popen in Python](https://stackoverflow.com/questions/12267774/problems-with-issuing-cp-command-with-popen-in-python), and it suggested using the `shell = True` option, which is not working for me as I mentioned :(
When using `shell=True`, pass a string, not a list, to `subprocess.call`:

```
subprocess.call('cp -r ./testdir1/* ./testdir2/', shell=True)
```

[The docs say](http://docs.python.org/2/library/subprocess.html#subprocess.Popen):

> On Unix with shell=True, the shell defaults to /bin/sh. If args is a string, the string specifies the command to execute through the shell. This means that the string must be formatted exactly as it would be when typed at the shell prompt. This includes, for example, quoting or backslash escaping filenames with spaces in them. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself.

So (on Unix), when a list is passed to `subprocess.Popen` (or `subprocess.call`), the first element of the list is interpreted as the command, all the other elements in the list are interpreted as arguments for the *shell*. Since in your case you do not need to pass arguments to the shell, you can just pass a string as the first argument.
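As an aside (not part of the answer above): if shelling out isn't required at all, the same copy can be done in pure Python with `glob` and `shutil`, which sidesteps the shell and its wildcard expansion entirely — a rough sketch:

```python
import glob
import os
import shutil

def copy_contents(src_dir, dst_dir):
    """Rough equivalent of `cp -r src_dir/* dst_dir/`."""
    # like the shell glob, '*' skips dotfiles
    for path in glob.glob(os.path.join(src_dir, '*')):
        if os.path.isdir(path):
            # copytree requires the destination subdirectory not to exist yet
            shutil.copytree(path, os.path.join(dst_dir, os.path.basename(path)))
        else:
            shutil.copy(path, dst_dir)
```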
This is an old thread now, but I was just having the same problem.

The problem you were having with this call:

```
subprocess.call(['cp','-r','./testdir1/*','./testdir2/'], shell = False)
```

was that each of the parameters after the first one is quoted. So the shell sees the command like this:

```
cp '-r' './testdir1/*' './testdir2/'
```

The problem with that is the wildcard character (`*`). The filesystem looks for a file with the literal name '`*`' in the `testdir1` directory, which of course, is not there.

The solution is to make the call like the selected answer: using the `shell = True` option and none of the parameters quoted.
Python Subprocess Error in using "cp"
[ "", "python", "subprocess", "cp", "" ]
I have a script that creates JSON from the result of an SQL query. The problem I'm having is that the (epoch millisecond) timestamp of the records being output is a long, which is getting the standard Python `long` representation with the appended L, and not a 'proper' JSON number:

`{'status': 'default', 'ID': '7717', 'recordTimestamp': 1372651201000L, 'Latitude': 50.836689, 'Longitude': -53.879143}`

I am using `json.dumps(record)` to generate this, but I cannot figure out how to make the `long` a JSON-formatted number. Does anyone have a quick solution to this? Thanks!

Edit: using Python 2.7.4 on Ubuntu
`json.dumps` does not append a trailing `L`.

```
>>> json.dumps({'status': 'default', 'ID': '7717', 'recordTimestamp': 1372651201000L, 'Latitude': 50.836689, 'Longitude': -53.879143})
'{"status": "default", "Latitude": 50.836689, "Longitude": -53.879143, "ID": "7717", "recordTimestamp": 1372651201000}'
>>> json.dumps(98765432109876543210L)
'98765432109876543210'
```
You can use the `str()` function:

```
str(longInt)
```

to remove the trailing `L`.
How to force Python JSON output to exclude 'L' from longs
[ "", "python", "json", "long-integer", "" ]
I have a list which looks like this:

```
[[3, 4.6575, 7.3725], [3, 3.91, 5.694], [2, 3.986666666666667, 6.6433333333333335], [1, 3.9542857142857137, 5.674285714285714], ....]
```

I would like to sum (in fact take the mean ... but it is a detail) all the values of the rows together where the first elements are equal. This would mean that in the example above the first two rows would be summed together.

```
[[3, 8.5675, 13.0665], [2, 3.986666666666667, 6.6433333333333335], [1, 3.9542857142857137, 5.674285714285714], ....]
```

This means the first values should be unique. I thought of doing this by finding all the "rows" where the first value is equal to, for example, 1 and summing them together. My question is now: how can I find all the rows where the first value is equal to a certain value?
This should work:

```
lst = [[3, 4.6575, 7.3725], [3, 3.91, 5.694],
       [2, 3.986666666666667, 6.6433333333333335],
       [1, 3.9542857142857137, 5.674285714285714]]

# group the values in a dictionary
import collections
d = collections.defaultdict(list)
for item in lst:
    d[item[0]].append(item)

# find sum of values
for key, value in d.items():
    print [key] + map(sum, zip(*value)[1:])
```

Or, a bit cleaner, using `itertools.groupby` (note that `groupby` only groups *consecutive* items, so sort the list by the first element first if equal keys may not be adjacent):

```
import itertools

groups = itertools.groupby(lst, lambda i: i[0])
for key, value in groups:
    print [key] + map(sum, zip(*value)[1:])
```

Output, in both cases:

```
[1, 3.9542857142857137, 5.674285714285714]
[2, 3.986666666666667, 6.6433333333333335]
[3, 8.567499999999999, 13.0665]
```

If you want to calculate the mean instead of the sum, just define your own `mean` function and pass that one instead of the `sum` function to `map`:

```
mean = lambda x: sum(x) / float(len(x))
map(mean, zip...)
```
There are many ways to do something like this in Python. If your list is called `a`, you can make a list comprehension to get the row indices where the first column is equal to `value`:

```
rows = [i for i in range(0, len(a)) if a[i][0] == value]
```

However, I'm sure there are whole libraries out there that parse arrays or lists in X dimensions to retrieve all kinds of statistical data. The high number of libraries available is one of the many things that make developing with Python such a fantastic experience.
Get all elements in a list where the value is equal to certain value
[ "", "python", "" ]
I'm using vb.net and sql. Below is an example of what the data in my db looks like. I want to multiply the value inside the column UPH by 24 and insert the result into a new column, UPD.

![enter image description here](https://i.stack.imgur.com/hicWx.jpg)
```
UPDATE yourtable
SET UPD = UPH * 24;
```

You could also create an insert trigger on the database (inside the trigger body, the incoming row's columns are referenced via `NEW`):

```
CREATE TRIGGER times24 BEFORE INSERT ON yourtable
FOR EACH ROW
SET NEW.UPD = NEW.UPH * 24;
```

See <http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html> and <http://dev.mysql.com/doc/refman/5.0/en/trigger-syntax.html> for more on that.
Issue an `update` statement like this:

```
UPDATE TableName
SET UPD = UPH * 24
```
how to multiply a value inside the column in sql
[ "", "sql", "vb.net", "" ]
Why is numpy giving this result:

```
x = numpy.array([1.48,1.41,0.0,0.1])
print x.argsort()
>[2 3 1 0]
```

when I'd expect it to do this:

> [3 2 0 1]

Clearly my understanding of the function is lacking.
According to [the documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy-argsort) > Returns the indices that would sort an array. * `2` is the index of `0.0`. * `3` is the index of `0.1`. * `1` is the index of `1.41`. * `0` is the index of `1.48`.
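The same index logic can be checked without NumPy — a minimal pure-Python sketch equivalent to `argsort` for this list, plus the inverse permutation that yields the ranking the question expected:

```python
x = [1.48, 1.41, 0.0, 0.1]

# "indices that would sort": slot i holds the index of the i-th smallest value
order = sorted(range(len(x)), key=x.__getitem__)
print(order)   # [2, 3, 1, 0] -- matches numpy's argsort

# the expected [3, 2, 0, 1] is the inverse permutation: the *rank* of each element
ranks = [0] * len(x)
for rank, idx in enumerate(order):
    ranks[idx] = rank
print(ranks)   # [3, 2, 0, 1]
```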
`[2, 3, 1, 0]` indicates that the smallest element is at index 2, the next smallest at index 3, then index 1, then index 0. There are [a number of ways](https://stackoverflow.com/q/5284646/190597) to get the result you are looking for: ``` import numpy as np import scipy.stats as stats def using_indexed_assignment(x): "https://stackoverflow.com/a/5284703/190597 (Sven Marnach)" result = np.empty(len(x), dtype=int) temp = x.argsort() result[temp] = np.arange(len(x)) return result def using_rankdata(x): return stats.rankdata(x)-1 def using_argsort_twice(x): "https://stackoverflow.com/a/6266510/190597 (k.rooijers)" return np.argsort(np.argsort(x)) def using_digitize(x): unique_vals, index = np.unique(x, return_inverse=True) return np.digitize(x, bins=unique_vals) - 1 ``` --- For example, ``` In [72]: x = np.array([1.48,1.41,0.0,0.1]) In [73]: using_indexed_assignment(x) Out[73]: array([3, 2, 0, 1]) ``` --- This checks that they all produce the same result: ``` x = np.random.random(10**5) expected = using_indexed_assignment(x) for func in (using_argsort_twice, using_digitize, using_rankdata): assert np.allclose(expected, func(x)) ``` These IPython `%timeit` benchmarks suggest that for large arrays `using_indexed_assignment` is the fastest: ``` In [50]: x = np.random.random(10**5) In [66]: %timeit using_indexed_assignment(x) 100 loops, best of 3: 9.32 ms per loop In [70]: %timeit using_rankdata(x) 100 loops, best of 3: 10.6 ms per loop In [56]: %timeit using_argsort_twice(x) 100 loops, best of 3: 16.2 ms per loop In [59]: %timeit using_digitize(x) 10 loops, best of 3: 27 ms per loop ``` For small arrays, `using_argsort_twice` may be faster: ``` In [78]: x = np.random.random(10**2) In [81]: %timeit using_argsort_twice(x) 100000 loops, best of 3: 3.45 µs per loop In [79]: %timeit using_indexed_assignment(x) 100000 loops, best of 3: 4.78 µs per loop In [80]: %timeit using_rankdata(x) 100000 loops, best of 3: 19 µs per loop In [82]: %timeit using_digitize(x) 10000 loops, best of 3: 26.2 µs per loop ``` --- Note also that [`stats.rankdata`](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.stats.rankdata.html) gives you more control over how to handle elements of equal value.
Numpy argsort - what is it doing?
[ "", "python", "numpy", "" ]
I have table with two FK `UserProfile_Id` and `Service_Id`. This table contains bit field which value I need to change. I have two temporary tables: First table **#temp2**: ``` EmailAddress, UserProfile_Id ``` Second table **#temp**: ``` EmailAddress, Service_Id ``` This statement does not work: ``` UPDATE MailSubscription SET BitField=1 where UserProfile_id IN ( SELECT UserProfile_Id from #temp2 ) and Service_id IN ( SELECT ServiceId from #temp) ``` I know why it does not work, but have no idea how to fix it to work fine. I need to change `bitField` for `MailSubscription` where **tuple**(UserProfile\_Id,Service\_Id) is in joined #temp and #temp2, but I can not write it like this in mssql.
``` UPDATE M SET M.BitField=1 from MailSubscription M inner join #temp2 t2 on M.UserProfile_id=t2.UserProfile_Id inner join #temp t on M.Service_id=t.ServiceId and t.EmailAddress=t2.EmailAddress ```
``` UPDATE MailSubscription SET BitField=1 FROM #temp2 JOIN #temp on #temp2.EmailAddress=#temp.EmailAddress WHERE MailSubscription.Service_id = #temp.ServiceId AND MailSubscription.UserProfile_id = #temp2.UserProfile_Id ```
Select where tuple in statement
[ "", "sql", "sql-server", "" ]
After executing this SQL in oracle 10g: ``` SELECT SYSDATE, CURRENT_TIMESTAMP FROM DUAL ``` I receive this strange output: ![Toad output for query](https://i.stack.imgur.com/FiPgB.png) What is cause of the difference in time? The server time is equal of SYSDATE value
[`CURRENT_DATE`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions044.htm) and [`CURRENT_TIMESTAMP`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions045.htm) return the current date and time in the session time zone. [`SYSDATE`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions191.htm) and [`SYSTIMESTAMP`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions192.htm) return the system date and time - that is, of the system on which the database resides. If your client session isn't in the same timezone as the server the database is on (or says it isn't anyway, via your NLS settings), mixing the `SYS*` and `CURRENT_*` functions will return different values. They are all correct, they just represent different things. It looks like your server is (or thinks it is) in a +4:00 timezone, while your client session is in a +4:30 timezone. You might also see small differences in the time if the clocks aren't synchronised, which doesn't seem to be an issue here.
`SYSDATE` and `SYSTIMESTAMP` return the database's date and timestamp, whereas `current_date` and `current_timestamp` return the date and timestamp of the location from where you work. For example, working from India, I access a database located in Paris at 4:00 PM IST: `select sysdate,systimestamp from dual;` This returns the date and time of Paris: > RESULT ``` 12-MAY-14 12-MAY-14 12.30.03.283502000 PM +02:00 ``` `select current_date,current_timestamp from dual;` This returns the date and time of India: > RESULT ``` 12-MAY-14 12-MAY-14 04.00.03.283520000 PM ASIA/CALCUTTA ``` *Please note the 3:30 time difference.*
Different CURRENT_TIMESTAMP and SYSDATE in oracle
[ "", "sql", "oracle", "plsql", "oracle10g", "" ]
I'd like to perform division in a SELECT clause. When I join some tables and use aggregate function I often have either null or zero values as the dividers. As for now I only come up with this method of avoiding the division by zero and null values. ``` (CASE(COALESCE(COUNT(column_name),1)) WHEN 0 THEN 1 ELSE (COALESCE(COUNT(column_name),1)) END) ``` I wonder if there is a better way of doing this?
Since **`count()` never returns `NULL`** (unlike other aggregate functions), you only have to catch the `0` case (which is the only problematic case anyway). So, your query simplified: ``` CASE count(column_name) WHEN 0 THEN 1 ELSE count(column_name) END ``` Or simpler, yet, with `NULLIF()`, [like Yuriy provided](https://stackoverflow.com/a/17681467/939860). [Quoting the manual about aggregate functions:](https://www.postgresql.org/docs/current/functions-aggregate.html) > It should be noted that except for `count`, these functions return a > null value when no rows are selected.
You can use the [NULLIF](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-NULLIF) function, e.g. ``` something/NULLIF(column_name,0) ``` If the value of `column_name` is 0, the result of the entire expression will be NULL
Avoid division by zero in PostgreSQL
[ "", "sql", "postgresql", "null", "aggregate-functions", "divide-by-zero", "" ]
Under python on my machine I can run `datetime.now()` to get the local time. If I inspect `time.daylight` flag it is set to `1` because we are currently in July (hence daylight saving). But if I run `datetime.now()` on Google App Engine (including the dev server) it doesn't account for daylight saving (british summer time) and returns the wrong time (13:47 instead of 14:47). If I inspect `time.daylight` in GAE it is set to `0`. How can I get the correct local time? Do I need to change the timezone on my app?
Google App Engine's "time zone" is always set to UTC. You can adjust to your local or desired time zone using a library like pytz. Check out the following project on Google Code for an App Engine optimized version of pytz. [Google Code - gae-pytz](https://code.google.com/p/gae-pytz/)
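For illustration only, the UTC-to-local arithmetic can be sketched with the standard library alone. The fixed `+1:00` offset below is an assumption (BST in summer); pytz, as recommended above, resolves the offset from the zone database so DST is handled for you:

```python
from datetime import datetime, timedelta, timezone

def utc_to_fixed_offset(utc_naive, offset_hours):
    # App Engine's datetime.now() is effectively UTC; attach that tzinfo,
    # then convert to the desired (here: hard-coded) offset
    aware = utc_naive.replace(tzinfo=timezone.utc)
    return aware.astimezone(timezone(timedelta(hours=offset_hours)))

server_time = datetime(2013, 7, 23, 13, 47)        # the "wrong" 13:47 from the question
local_time = utc_to_fixed_offset(server_time, 1)   # BST = UTC+1 (assumed)
print(local_time.strftime("%H:%M"))                # 14:47
```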
I've been able to get this working: ``` import pytz import datetime tz = pytz.timezone('Europe/London') print datetime.datetime.now(tz) ``` It appears Google App Engine already imports a few modules by default, including `pytz` and `datetime`, so perhaps there is no need to explicitly import them.
How to get current UK time in Google App Engine
[ "", "python", "google-app-engine", "datetime", "time", "" ]
I am trying to use `matplotlib.ArtistAnimation` to animate two subplots. I want the x-axis to increase in value as the animation progresses, such that the total length of the animation is 100 but at any time the subplot is only presenting me with the time values from 0-24 and then iterates up to 100. A great example is given [here](http://www.roboticslab.ca/matplotlib-animation/). The link uses `FuncAnimation` and updates the x-axis labels in a rolling fashion using `plot().axes.set_xlim()` and incrementing the x-values. The code is available via the link below the YouTube video in the link provided. I have appended code below that shows my attempts to replicate these results but the x-limits seem to take on their final values instead of incrementing with time. I have also tried incrementing the solution (as opposed to the axis) by only plotting the values in the window that will be seen in the subplot, but that does not increment the x-axis values. I also tried to implement autoscaling but the x-axis still does not update. I also found [this question](https://stackoverflow.com/questions/15438367/updating-axis-limits-while-working-with-artistanimation) which is virtually the same problem, but the question was never answered. 
Here is my code: ``` import matplotlib.pylab as plt import matplotlib.animation as anim import numpy as np #create image with format (time,x,y) image = np.random.rand(100,10,10) #setup figure fig = plt.figure() ax1=fig.add_subplot(1,2,1) ax2=fig.add_subplot(1,2,2) #set up viewing window (in this case the 25 most recent values) repeat_length = (np.shape(image)[0]+1)/4 ax2.set_xlim([0,repeat_length]) #ax2.autoscale_view() ax2.set_ylim([np.amin(image[:,5,5]),np.amax(image[:,5,5])]) #set up list of images for animation ims=[] for time in xrange(np.shape(image)[0]): im = ax1.imshow(image[time,:,:]) im2, = ax2.plot(image[0:time,5,5],color=(0,0,1)) if time>repeat_length: lim = ax2.set_xlim(time-repeat_length,time) ims.append([im, im2]) #run animation ani = anim.ArtistAnimation(fig,ims, interval=50,blit=False) plt.show() ``` I only want the second subplot (`ax2`) to update the x-axis values. Any help would be much appreciated.
If you *don't* need blitting ``` import matplotlib.pylab as plt import matplotlib.animation as animation import numpy as np #create image with format (time,x,y) image = np.random.rand(100,10,10) #setup figure fig = plt.figure() ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) #set up viewing window (in this case the 25 most recent values) repeat_length = (np.shape(image)[0]+1)/4 ax2.set_xlim([0,repeat_length]) #ax2.autoscale_view() ax2.set_ylim([np.amin(image[:,5,5]),np.amax(image[:,5,5])]) #set up list of images for animation im = ax1.imshow(image[0,:,:]) im2, = ax2.plot([], [], color=(0,0,1)) def func(n): im.set_data(image[n,:,:]) im2.set_xdata(np.arange(n)) im2.set_ydata(image[0:n, 5, 5]) if n>repeat_length: lim = ax2.set_xlim(n-repeat_length, n) else: # makes it look ok when the animation loops lim = ax2.set_xlim(0, repeat_length) return im, im2 ani = animation.FuncAnimation(fig, func, frames=image.shape[0], interval=30, blit=False) plt.show() ``` will work. If you need to run faster, you will need to play games with the bounding box used for blitting so that the axes labels are updated.
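The rolling x-limit logic inside `func` above is easy to factor out and check in isolation (the function name here is illustrative, not part of matplotlib):

```python
def window_limits(n, repeat_length):
    """x-axis limits for frame n of a rolling plot."""
    if n > repeat_length:
        # slide the repeat_length-wide window along with the current frame
        return (n - repeat_length, n)
    # until enough frames exist, keep the initial window
    return (0, repeat_length)

print(window_limits(10, 25))   # (0, 25)
print(window_limits(60, 25))   # (35, 60)
```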
If you are using blitting, you can call `pyplot.draw()` to redraw the entire figure, each time you change y/x axis. This updates whole figure, so is relatively slow, but it's acceptable if you don't call it many items.
Updating the x-axis values using matplotlib animation
[ "", "python", "animation", "matplotlib", "" ]
I am writing a code in Python to design a software application. I want to add following functionality in my code: When user starts using software, a `ping()` function runs and tries to connect to internet to fetch some data. If network is not available then it should continuously check network availability to fetch the data. While that `ping()` is checking network continuously, I want my software application to do usual things. My code is: ``` def ping(): global lst1 if network_connection() is True: links = lxml.html.parse("http://google.com").xpath("//a/@href") for url in links: lst1.append(url) else: ping() def network_connection(): network=False try: response = urllib2.urlopen("http://google.com", None, 2.5) network=True except urllib2.URLError, e: pass return network ``` When application starts I call `ping()` and inside `ping()` function I am doing a recursive call. It is obviously wrong because in that case my code will stuck in this recursive call until network connection is available and application will not be initiated. Is there anyway that I can do both things side by side: checking network availability and running rest of code at the same time?
First of all, there is no need for continuous pinging, even if that happens in a separate process/thread. You should use an [exponential backoff algorithm](http://en.wikipedia.org/wiki/Exponential_backoff), and have the pinging process update a shared variable in the event of network failure, so that your main process waits until the network is up or does offline work in the meantime. For this, your code has to check the variable periodically during execution. I would suggest the [`multiprocessing` module for this task.](http://docs.python.org/2/library/multiprocessing.html)
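A minimal sketch of the exponential backoff mentioned above — each retry waits twice as long as the previous one, up to a cap (the base and cap values are illustrative defaults, not from the question):

```python
import itertools

def backoff_delays(base=1.0, cap=60.0):
    """Yield 1, 2, 4, 8, ... second delays, capped at `cap`."""
    for attempt in itertools.count():
        yield min(cap, base * 2 ** attempt)

# the checker process would time.sleep() these between ping attempts
delays = backoff_delays()
print([next(delays) for _ in range(7)])   # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```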
You can easily use a `while` loop to constantly check the network availability inside the `ping()` function. I totally agree with Blue Ice's answer about CPU usage when the code continuously checks for an internet connection. A single `time.sleep()` call from the `time` module is enough to throttle down the loop. ``` import time def ping(): global lst1 status = True while(status): if network_connection() is True: links = lxml.html.parse("http://google.com").xpath("//a/@href") status = False for url in links: lst1.append(url) else: time.sleep(0.1) # use a 100 millisecond sleep to throttle down CPU usage ```
Checking network availability continuously while rest of code is running separately
[ "", "python", "" ]
I have to deal with an old Database from my company department. We use this DB for hardware managing assignment and tracking. I'm about to build up a new fronted in c#, as the current MS Access is getting way to slow for the task. (I'm translated the names of the tables and rows into English, for better understanding) * tbl\_hardware * tbl\_hardware\_assignment * tbl\_accounts * tbl\_typebradmodel (not important for now, and/or self explanatory) tbl\_hardware contains the columns HW\_ID, serialnumber, type,model,brand, etc (othe necessary information of the Hardware) tbl\_hardware\_assignment contains the columns ID, HW\_ID (matching with the tbl\_hardware.ID), nameID(matching with tbl\_accounts.PersID, and since (a int value formated Date [YYYYMMDD] when the entry was created (was not, my idea...)) tbl\_account contains the columns PersID, Login, etc (other internal information) This is my current SQL Statement ``` SELECT tbl_hardware.HW_ID, tbl_hardware.Aktiv, tbl_hardware.typebradmodelID, tbl_type.tabel AS Type, tbl_brand.tabel AS Brand, tbl_model.tabel AS Model, tbl_accounts.Login, tbl_hardware_assignment.since FROM tbl_hardware LEFT OUTER JOIN tbl_typebradmodel ON tbl_hardware.typebradmodelID = tbl_typebradmodel.typebradmodelID LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = tbl_model.ModelID LEFT OUTER JOIN tbl_hardware_assignment ON tbl_hardware.HW_ID = tbl_hardware_assignment.HW_ID LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID WHERE tbl_hardware.Aktiv = 1 AND tbl_hardware.typebradmodelID in (SELECT tbl_typebradmodel.typebradmodelID FROM tbl_typebradmodel LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = 
tbl_model.ModelID WHERE tbl_typebradmodel.MarkeID = (SELECT tbl_brand.MarkeID FROM tbl_brand WHERE tbl_brand.tabel LIKE 'Samsung') ) AND tbl_hardware.HW_ID in (SELECT tbl_hardware_assignment.HW_ID FROM tbl_hardware_assignment, (SELECT MAX(tbl_hardware_assignment.since) AS lastchange, tbl_hardware_assignment.HW_ID FROM tbl_hardware_assignment GROUP BY tbl_hardware_assignment.HW_ID) lastentry WHERE tbl_hardware_assignment.namenID = (SELECT tbl_accounts.PersID FROM tbl_accounts WHERE tbl_accounts.Login = 'MY_USERNAME') AND tbl_hardware_assignment.HW_ID = lastentry.HW_ID AND tbl_hardware_assignment.since = lastentry.lastchange ) ``` RESULT: ``` 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER1 20100218 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER2 20100218 10497 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER3 20100810 10498 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER3 20100810 10498 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER4 20100819 10497 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER4 20100819 10497 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM MY_USERNAME 20120601 10498 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM MY_USERNAME 20120601 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER3 20130502 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER5 20130507 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM USER3 20130619 9778 1 2868 Monitor 24" TFT Samsung SyncMaster 2494HM MY_USERNAME 20130725 ``` But I get too many results, AND wrong/multiple user mappings with one hardware. Any idea where my mistake is? 
BTW.: This statement alone returns the correct values ``` SELECT tbl_hardware_assignment.HW_ID FROM tbl_hardware_assignment, (SELECT MAX(tbl_hardware_assignment.since) AS lastchange, tbl_hardware_assignment.HW_ID FROM tbl_hardware_assignment GROUP BY tbl_hardware_assignment.HW_ID) lastentry WHERE tbl_hardware_assignment.namenID = (SELECT tbl_accounts.PersID FROM tbl_accounts WHERE tbl_accounts.Login = 'MY_USERNAME') AND tbl_hardware_assignment.HW_ID = lastentry.HW_ID AND tbl_hardware_assignment.since = lastentry.lastchange ``` RESULT: ``` 10497 20120601 10498 20120601 11554 20120601 12353 20120601 13665 20120918 13196 20121129 14616 20130701 15073 20130705 9778 20130725 ``` *(As I should not port company stuff outside the office, I hope that I didn't mess up any result or SQL statements.)* **Here are are more example outputs** [PASTEBIN](http://pastebin.com/BC06w8Ks)
You don't need to join subqueries back onto the tables that they are sourced from, and you can JOIN directly onto them. Rather than JOINING a whole bunch of tables directly, you could look at forming subqueries that get the correct constituent parts. Something like the following may be what you are after: ``` SELECT tbl_hardware.HW_ID, tbl_hardware.Aktiv, tbl_hardware.typebradmodelID, typebradmodel.Type, typebradmodel.Brand, typebradmodel.Model, lastentry.Login, lastentry.since FROM (SELECT tbl_typebradmodel.typebradmodelID, tbl_type.tabel AS Type, tbl_brand.tabel AS Brand, tbl_model.tabel AS Model FROM tbl_typebradmodel LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = tbl_model.ModelID ) typebradmodel LEFT JOIN tbl_hardware ON tbl_hardware.typebradmodelID = typebradmodel.typebradmodelID LEFT JOIN (SELECT MAX(tbl_hardware_assignment.since) AS lastchange, tbl_hardware_assignment.HW_ID, tbl_accounts.Login FROM tbl_hardware_assignment LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login ) lastentry ON tbl_hardware.HW_ID = lastentry.HW_ID WHERE tbl_hardware.Aktiv = 1 AND typebradmodel.Brand LIKE 'Samsung' AND lastentry.Login = 'MY_USERNAME' ``` **Update** The critical part here is getting the lastchange subquery correct, i.e. using all the columns that describe the relation between tbl_hardware_assignment and tbl_accounts ``` SELECT MAX(tbl_hardware_assignment.since) AS lastchange, tbl_hardware_assignment.HW_ID, tbl_accounts.Login FROM tbl_hardware_assignment LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID AND MAX(tbl_hardware_assignment.since) = tbl_accounts.lastchange GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login ``` Does this get the right IDs? 
And if it doesn't, are you able to find out what the relation between these two tables should involve?
I would have put this in a comment but I've not got the required rep points. Have you tried changing your joins around? Looking at the output I'd suggest maybe trying an INNER JOIN.
SQL Joining 4 Tables
[ "", "sql", "database", "join", "" ]
I need to read in a file which has multiple data line as follows: ``` 1 D 65.33383 BAZ 308.1043 Year 2001 Month 01 Day 01 Lat 6.90 Long 126.58 Mag 6.4 Origin Time 06:57:04.2 ``` I need to split the file into lines, which I have done, then split each line into variables at each space. So far I am using a nested loop that looks like: ``` for line in open("filename", 'r').readlines(): variable = string.split(line) values = [variable] for value in values value = string.split(' ') year, month = value[0], value [1] ``` My problem is that I don't know what the parts in the second for loop need to be? i.e for ... in ... I am quite new to programming in python.
I am not fully sure what exactly you are trying to accomplish; one thing that is especially unclear is your expression: "then split each line into variables at each space". But assuming you need output consisting of a list of dictionaries, each containing the parsed data from one line, the following should be useful for you: ``` data = [] with open("file.txt") as f: for line in f: lineData = {} lineSplit = line.split() for i in range(1,len(lineSplit)-1,2): lineData[lineSplit[i]] = lineSplit[i+1] data.append(lineData) print data ``` This will get you output which will look like this: ``` [{'Origin': 'Time', 'D': '65.33383', 'BAZ': '308.1043', 'Long': '126.58', 'Month': '01', 'Mag': '6.4', 'Year': '2001', 'Lat': '6.90', 'Day': '01'}] ``` The dictionary is unsorted, so keys and values appear in random order. Notice that "Origin" and "Time" became a key and a value because you wanted to split the line on spaces, and there is a space between "Origin" and "Time". Cheers!
``` with open('data.txt', 'r') as data: for _input in data: line = _input.split(' ') data = {'Index':line[0], 'Origin Time':line[-3:][-1].strip() } data.update(dict(zip(line[1:-3][0::2], line[1:-3][1::2]))) print data ```
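The slice-and-zip trick in the last update deserves unpacking: it pairs the even-indexed tokens (labels) with the odd-indexed ones (values). A sketch on a token list taken from the question's data format:

```python
tokens = ['1', 'D', '65.33383', 'BAZ', '308.1043', 'Year', '2001',
          'Month', '01', 'Day', '01']

# drop the leading record index, then zip the even- and odd-indexed
# slices so each label is paired with the value that follows it
rest = tokens[1:]
pairs = dict(zip(rest[0::2], rest[1::2]))
print(pairs['Year'], pairs['Month'], pairs['Day'])   # 2001 01 01
```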
Python nested for loop to split a file into lines then each line into variables
[ "", "python", "loops", "for-loop", "split", "nested", "" ]
I need an activerecord query to Match ALL items in a params array. Lets say user has\_many roles. and each role has a name. when i pass ['actor', 'producer', 'singer']. I expect the query to return me the users with all the those three roles or more. But my method implementation below would return users with having atleast one role name matching to those in the array passed My current method gives results based on finding any of the tags, not "MATCH ALL" ``` class User < ActiveRecord::Base has_many :roles def self.filter_by_roles(roles) User.joins(:roles).includes(:roles).where(:roles => {:name => roles}) end end ``` I don't want to do any array operations after the query checking if the returned result objects contain all the roles or not. This is because I need the Active Record Relation object to be returned from this. Thanks in advance.
Try this: ``` User.joins(:roles).includes(:roles).where(:roles => {:name => roles}).group('usermail').having("COUNT(DISTINCT role_id) = 3") ``` assuming that the field `usermail` is used to identify users.
You could try this: ``` def self.filter_by_roles(roles) scope = User.joins(:roles).includes(:roles) roles.each do |role| scope = scope.where(roles: {name: role}) end scope end ``` It's untested, so I'm not sure whether it works.
match all in active record relations in a query
[ "", "sql", "ruby-on-rails", "ruby-on-rails-3", "activerecord", "associations", "" ]
I have been trying to use a filter on a query, but for some reason the filtering does not seem to be working. For example, if I run the command: ``` Curriculum_Version.query.filter(Course.course_code == 'PIP-001').all() ``` I get the same results as if I run: ``` Curriculum_Version.query.filter(Course.course_code == 'FEWD-001').all() ``` (Both return): ``` [#1 Version Number: 1, Date Implemented: 2013-07-23 00:00:00, #2 Version Number: 2, Date Implemented: 2013-07-24 00:00:00] ``` If I run: ``` Curriculum_Version.query.get(1).course ``` I get: ``` from main import app, db from flask import Flask, request, g, redirect, url_for from flaskext.auth import Auth, AuthUser, login_required, get_current_user_data from flaskext.auth.models.sa import get_user_class import datetime from flask.ext.sqlalchemy import SQLAlchemy import pdb class User(db.Model, AuthUser): __tablename__ = 'users' id = db.Column(db.Integer, primary_key=True) tf_login = db.Column(db.String(255), unique=True, nullable=False) # can assume is an email password = db.Column(db.String(120), nullable=False) salt = db.Column(db.String(80)) role = db.Column(db.String(80)) # for later when have different permission types zoho_contactid = db.Column(db.String(20), unique=True, nullable=False) created_asof = db.Column(db.DateTime, default=datetime.datetime.utcnow) firstname = db.Column(db.String(80)) lastname = db.Column(db.String(80)) def __init__(self, zoho_contactid, firstname, lastname, tf_login, password, role, *args, **kwargs): super(User, self).__init__(tf_login=tf_login, password=password, *args, **kwargs) if (password is not None) and (not self.id): self.created_asof = datetime.datetime.utcnow() # Initialize and encrypt password before first save. 
self.set_and_encrypt_password(password) self.zoho_contactid = zoho_contactid # TODO self.firstname = firstname self.lastname = lastname self.tf_login = tf_login # TODO -- change to tf_login self.role = role def __repr__(self): return '#%d tf_login: %s, First Name: %s Last Name: %s created_asof %s' % (self.id, self.tf_login, self.firstname, self.lastname, self.created_asof) def __getstate__(self): return { 'id': self.id, 'tf_login': self.tf_login, 'firstname': self.firstname, 'lastname': self.lastname, 'role': self.role, 'created_asof': self.created_asof, } def __eq__(self, o): return o.id == self.id @classmethod def load_current_user(cls, apply_timeout=True): data = get_current_user_data(apply_timeout) if not data: return None return cls.query.filter(cls.email == data['email']).one() class Enrollment(db.Model, AuthUser): __tablename__ = 'enrollments' id = db.Column(db.Integer, primary_key=True) user_id = db.Column(db.Integer, db.ForeignKey('users.id')) user = db.relationship('User', backref='enrollments') curriculum_version_id = db.Column(db.Integer, db.ForeignKey('curriculum_versions.id')) curriculumversion = db.relationship('Curriculum_Version', backref='enrollments') cohort_id = db.Column(db.Integer, db.ForeignKey('cohorts.id')) cohort = db.relationship('Cohort', backref='enrollments') def __repr__(self): return '#%d User ID: %s Version ID: %s, Cohort ID: %s' % (self.id, self.user_id, self.curriculum_version_id, self.cohort_id) class Cohort(db.Model, AuthUser): __tablename__ = 'cohorts' id = db.Column(db.Integer, primary_key=True) start_date = db.Column(db.DateTime) course_id = db.Column(db.Integer, db.ForeignKey('courses.id')) course = db.relationship('Course', backref='cohorts') def __repr__(self): return '#%d Start Date: %s, Course: %s' % (self.id, self.start_date, self.course.course_code) class Curriculum_Version(db.Model, AuthUser): __tablename__ = 'curriculum_versions' id = db.Column(db.Integer, primary_key=True) version_number = db.Column(db.String(6)) 
date_implemented = db.Column(db.DateTime) course_id = db.Column(db.Integer, db.ForeignKey('courses.id')) course = db.relationship('Course', backref='curriculum_versions') def __repr__(self): return '#%d Version Number: %s, Date Implemented: %s' % (self.id, self.version_number, self.date_implemented) class Course(db.Model, AuthUser): __tablename__ = 'courses' id = db.Column(db.Integer, primary_key=True) course_code = db.Column(db.String(20)) course_name = db.Column(db.String(50)) def __repr__(self): return '#%d Course Code: %s, Course Name: %s' % (self.id, self.course_code, self.course_name) def __eq__(self, o): return o.id == self.id ``` How I'm Creating the Entry in the DB for Curriculum\_Versions: ``` def update_courses(): course_code = request.form['course_code'] start_date = request.form['start_date'] course_date = datetime.strptime(start_date, '%m/%d/%Y') curr_version = Curriculum_Version.query.filter(Course.course_code == course_code) \ .order_by(desc('version_number')).first() if curr_version is None: next_version = 1 else: next_version = int(curr_version.version_number)+1 existing = Curriculum_Version.query.filter(Course.course_code == course_code) \ .filter(Curriculum_Version.date_implemented == course_date) if len(existing.all()) > 0: return "You tried to make version %d of the curriculum, but version \ %s of the curriculum already exists for %s for class %s." \ %(next_version, existing.first().version_number, start_date, course_code) course_object = Course.query.filter(Course.course_code == course_code).first() if course_object is None: return "The course %s does not yet exist!" % (course_code) new_version = Curriculum_Version(version_number=next_version, date_implemented=course_date, course=course_object) db.session.add(new_version) db.session.commit() return 'Created version %d for course %s starting on %s.' \ %(next_version, course_code, start_date) ```
I think you need to join before you filter and use just one query: ``` # Get all the versions of a single course. versions = Curriculum_Version.query.join(Curriculum_Version.course).filter( Course.course_code == "PIP-001").all() ``` Otherwise sqlalchemy will not know to use the relationship before filtering. If you just specify a filter then sqlalchemy does not know to perform a join and you end up with sql similar to this: ``` SELECT curriculum_versions.* FROM curriculum_versions, courses WHERE courses.course_code = "PIP-001" ``` Which does not make a whole lot of sense but is valid SQL. When you use a join it leverages the filter against the correct table like this: ``` SELECT curriculum_versions.* FROM curriculum_versions JOIN courses ON curriculum_versions.course_id = courses.id WHERE courses.course_code = "PIP-001" ``` Note that sqlalchemy knows to use the condition `curriculum_versions.course_id = courses.id` because you pass in `Curriculum_Version.course` to `query.join()` and you specified that relationship on your `Curriculum_Version` class as the course property and it automatically knows to use the only foreign key available between the `curriculum_versions` and `courses` tables (which you also had to specify on the `curriculum_versions.course_id` column). You can read more about joins here: <http://docs.sqlalchemy.org/en/rel_0_7/orm/tutorial.html#querying-with-joins>
You can query as follows (note that keyword arguments require `filter_by`; `filter` expects criterion expressions): ``` course_id = Course.query.filter_by(course_code="PIP-001").first().id curriculum = Curriculum_Version.query.filter_by(course_id=course_id).all() ```
SQLAlchemy Filtering Not Working
[ "", "python", "sqlalchemy", "flask", "flask-sqlalchemy", "" ]
***my working table, Table name: sales*** **Here Is MY TABLE, [sl\_no is primary key] table structure:** ``` CREATE TABLE SALES ( SL_NO NUMBER PRIMARY KEY, REGION VARCHAR2(10) NOT NULL, MONTH VARCHAR2(20) NOT NULL, YEAR NUMBER NOT NULL, SALES_AMOUNT NUMBER NOT NULL ) ``` **and here is table data:** ``` SQL> select * from sales; SL_NO REGION MONTH YEAR SALES_AMOUNT ---------- ---------- -------------------- ---------- ------------ 1 east december 2011 750000 2 east august 2011 800000 3 west january 2012 640000 5 east march 2012 1200000 6 west february 2011 580000 4 west april 2011 555000 6 rows selected. ``` I have tried this query to view total sales amount of those[2011,2012] year; ``` SELECT year, SUM(sales_amount) FROM sales GROUP BY year; YEAR SUM(SALES_AMOUNT) ---------- ----------------- 2011 2685000 2012 1840000 ``` **MY GOAL:> I want to find out the year of maximum sales amount.** I tried this,and work perfectly...but when ***i want to display that year also***, it gives an Error. ``` SQL> select max(sum(sales_amount)) from sales group by year; MAX(SUM(SALES_AMOUNT)) ---------------------- 2685000 SQL> select year, max(sum(sales_amount)) from sales group by year; select year, max(sum(sales_amount)) from sales group by year * ERROR at line 1: ORA-00937: not a single-group group function ``` **Extra addition: if multiple rows have same value means....when sales amount of both year[2011,2012] remain same, Then....** plZ help me to Solve this problem.
Here is my query, which can select multiple rows (in case of a tie): ``` SELECT year,MAX(total_sale) as max_total FROM (SELECT year,SUM(sales_amount) AS total_sale FROM sales GROUP BY year) GROUP BY year HAVING MAX(total_sale) = (SELECT MAX(total_sale) FROM (SELECT SUM(sales_amount) AS total_sale FROM sales GROUP BY year)); ```
I think the simplest way is to order the results and take the first row: ``` select year, sales_amount from (SELECT year, SUM(sales_amount) as sales_amount FROM sales GROUP BY year order by sum(sales_amount) desc ) t where rownum = 1; ``` EDIT: If you need to display all the matching rows (which isn't mentioned in the question), I would suggest using the `dense_rank()` analytic function: ``` select year, sales_amount from (SELECT year, SUM(sales_amount) as sales_amount, dense_rank(over order by SUM(sales_amount) desc) as seqnum FROM sales GROUP BY year order by sum(sales_amount) desc ) t where seqnum = 1; ``` Or, you might like the `max()` version instead: ``` select year, sales_amount from (SELECT year, SUM(sales_amount) as sales_amount, max(sum(sales_amount)) over () as maxsa FROM sales GROUP BY year order by sum(sales_amount) desc ) t where sales_amount = maxsa; ```
Oracle SQL Query:Find out which year total sales amount is maximum
[ "", "sql", "oracle", "sum", "max", "" ]
I'm trying to split the lines from a .txt file stored locally. What I do is make a for loop, but I want to start the loop at a specific index in the array. For example: ``` file_content = open('files/filefinal1_test.txt') counter = 0 total_lines = 0 global l index = 0 if l == "" else file_content.index(l) for line in file_content[index:]: l = line array_line =line.split('", "') array_line[0] = array_line[0].replace('"', '') array_line[1] = array_line[1].replace('"', '') array_line[2] = array_line[2].replace('"', '') array_line[3] = array_line[3].replace('"', '') if (send_req(papi, array_line) == 1): counter = counter + 1 total_lines = total_lines + 1 ``` This gives me errors at: file_content[index:] Is there any way to start the loop at a specific line of file_content? The fact is that the code below works and loops over the array: ``` for line in file_content: l = line array_line =line.split('", "') array_line[0] = array_line[0].replace('"', '') array_line[1] = array_line[1].replace('"', '') array_line[2] = array_line[2].replace('"', '') array_line[3] = array_line[3].replace('"', '') if (send_req(papi, array_line) == 1): counter = counter + 1 total_lines = total_lines + 1 ``` Could anyone help me, please?
I've found the answer! I hadn't called the `.readlines()` method. ``` file_content = file_content.readlines() for lines in file_content[0:]: #stuff ```
You can use `lines = open(filepath, 'rb').readlines()` to get a list of strings, where each string is a line in your file. You can then slice the list at any index you want to only get the lines you are interested in like this: `wanted_lines = lines[index:]` This will get you all the lines from index to the end of the file.
Python -- Reading lines from file and split it
[ "", "python", "arrays", "file", "split", "" ]
I want part of a script to be something like this. ``` if list[1] is in list.pop n times: return True ```
Simply use: ``` list.count(element) ``` Example: ``` >>> [1,2,3,4,2,1].count(1) 2 ```
Number of items in a list: ``` len(myList) ``` Number of times the `i`th element occurs in a list: ``` myList.count(mylist[i]) ```
How to find number of instances of an item in a python list
[ "", "python", "list", "" ]
For example, I have a list: ``` L = [1, 2, 2, 3, 1, 1, 6, 10, 1, 3] ``` And I want to remove all 1's from the list, so that I would get: ``` L = [2, 2, 3, 6, 10, 3] ``` I tried iterating over the list and then deleting the element if it equals the element I want to delete (1 in this case), but it turns out you can't iterate over and delete items from a list at the same time, since it messes up the counting. The best thing I've come up with is to just construct a new list L2 that doesn't contain any of the 1's and then put that into L, but is there a solution that only involves mutating L?
> but is there a solution that only involves mutating L? You can instead iterate over a copy of your list - `L[:]` - and remove elements from `L`. That won't mess up the counting. If you really don't want to create a new list, you would have to iterate in reverse using `range(len(L) - 1, -1, -1)`, but that won't be 'Pythonic' anymore. ``` >>> for x in L[:]: ... if x == 1: ... L.remove(x) ... >>> L [2, 2, 3, 6, 10, 3] ``` --- However, you can also use *List Comprehension*: ``` >>> L = [1, 2, 2, 3, 1, 1, 6, 10, 1, 3] >>> L[:] = [x for x in L if x != 1] >>> L [2, 2, 3, 6, 10, 3] ```
Using the filter built-in: ``` >>> L = [1, 2, 2, 3, 1, 1, 6, 10, 1, 3] >>> filter(lambda x: x is not 1, L) [2, 2, 3, 6, 10, 3] ``` Or you can assign it back to `L`: ``` >>> L = [1, 2, 2, 3, 1, 1, 6, 10, 1, 3] >>> L = filter(lambda x: x is not 1, L) >>> L [2, 2, 3, 6, 10, 3] ``` --- You can also wrap this concept into methods, to be able to specify a list of items to include/exclude: ``` def exclude(collection, exclude_list): return filter(lambda x: x not in exclude_list, collection) def include(collection, include_list): return filter(lambda x: x in include_list, collection) ``` --- ``` >>> L = [1, 2, 2, 3, 1, 1, 6, 10, 1, 3] >>> L = exclude(L, [1]) >>> L [2, 2, 3, 6, 10, 3] ```
Python - How to remove similar elements from a list?
[ "", "python", "list", "element", "" ]
I'm working on a function that, given a sequence, tries to find said sequence within a list and should then return the list item immediately after that sequence terminates. Currently this code does return the list item immediately after the end of the sequence; however, I'm not too happy with having this many nested if-statements and would love to rewrite it, but I can't figure out how to go about it, as it is quite unlike anything I've ever written in the past and I feel a bit out of practice. ``` def sequence_in_list(seq, lst): m, n = len(lst), len(seq) for i in xrange(m): for j in xrange(n): if lst[i] == seq[j]: if lst[i+1] == seq[j+1]: if lst[i+2] == seq[j+2]: return lst[i+3] ``` (My intention is to then extend this function so that, if that sequence occurs more than once throughout the list, it returns the item that has most often followed the sequence.)
Since you are comparing consecutive indexes, *and assuming `lst` and `seq` are of the same type*, you can use slicing: ``` def sequence_in_list(seq, lst): m, n = len(lst), len(seq) for i in xrange(m): for j in xrange(n): if lst[i:i+3] == seq[j:j+3]: return lst[i+3] ``` If the sequences are of different kinds, you should convert them to a common type before doing the comparison (e.g. `lst[i:i+3] == list(seq[j:j+3])` would work if `seq` is a string and `lst` is a list). Alternatively, if the sequences do not support slicing, you can use the built-in `all` to check for more conditions: ``` def sequence_in_list(seq, lst): m, n = len(lst), len(seq) for i in xrange(m): for j in xrange(n): if all(lst[i+k] == seq[j+k] for k in range(3)): return lst[i+3] ``` If you want to extend the check over 10 indices instead of 3, simply change `range(3)` to `range(10)`. Side note: your original code would raise an `IndexError` at some point, since you access `list[i+1]` where `i` may be `len(list) - 1`. The above code doesn't produce any errors, since slicing may produce a slice shorter than the difference of the indices, meaning that `seq[j:j+3]` can have fewer than 3 elements. If this is a problem, you should adjust the indexes over which you are iterating. Last remark: don't use the name `list` since it shadows a built-in name.
I would do this with a generator and slicing: ``` sequence = [1, 2, 3, 5, 1, 2, 3, 6, 1, 2, 3] pattern = [1, 2, 3] def find_item_after_pattern(sequence, pattern): n = len(pattern) for index in range(0, len(sequence) - n): if pattern == sequence[index:index + n]: yield sequence[index + n] for item in find_item_after_pattern(sequence, pattern): print(item) ``` And you'll get: ``` 5 6 ``` The function isn't too efficient and won't work for infinite sequences, but it's short and generic.
Rewriting nested if-statements in a more Pythonic fashion
[ "", "python", "list", "if-statement", "nested", "pattern-matching", "" ]
I'm having trouble understanding nested dictionary comprehensions in Python 3. The result I'm getting from the example below outputs the correct structure without error, but only includes one of the inner key: value pairs. I haven't found an example of a nested dictionary comprehension like this; Googling "nested dictionary comprehension python" shows legacy examples, non-nested comprehensions, or answers solved using a different approach. I may be using the wrong syntax. **Example:** ``` data = {outer_k: {inner_k: myfunc(inner_v)} for outer_k, outer_v in outer_dict.items() for inner_k, inner_v in outer_v.items()} ``` This example should return the original dictionary, but with the inner value modified by `myfunc`. Structure of the outer\_dict dictionary, as well as the result: ``` {outer_k: {inner_k: inner_v, ...}, ...} ```
`{inner_k: myfunc(inner_v)}` isn't a dictionary comprehension. It's just a dictionary. You're probably looking for something like this instead: ``` data = {outer_k: {inner_k: myfunc(inner_v) for inner_k, inner_v in outer_v.items()} for outer_k, outer_v in outer_dict.items()} ``` For the sake of readability, don't nest dictionary comprehensions and list comprehensions too much.
Adding some line-breaks and indentation: ``` data = { outer_k: {inner_k: myfunc(inner_v)} for outer_k, outer_v in outer_dict.items() for inner_k, inner_v in outer_v.items() } ``` ... makes it obvious that you actually have a single, "2-dimensional" dict comprehension. What you actually want is probably: ``` data = { outer_k: { inner_k: myfunc(inner_v) for inner_k, inner_v in outer_v.items() } for outer_k, outer_v in outer_dict.items() } ``` (which is exactly what Blender suggested in his answer, with added whitespace).
Nested dictionary comprehension python
[ "", "python", "syntax", "nested", "list-comprehension", "dictionary-comprehension", "" ]
I have a database table in which each row has a `first_name` and `last_name` column, like so: ``` id first_name last_name |----|------------|-----------| | 1 | ted | jones | | 2 | mike | johnson | | 3 | ted | jones | | 4 | jan | smith | | 5 | anna | white | | 6 | jan | smith | |-----------------------------| ``` I want to find all records that are duplicates, i.e., the first and last names are identical. Given the data above, I want a result set like: ``` id first_name last_name |----|------------|-----------| | 1 | ted | jones | | 3 | ted | jones | | 4 | jan | smith | | 6 | jan | smith | |----|------------|-----------| ``` (More specifically, I'd like to get a count of such duplicate records, e.g., `2` [or `4`, either would suffice] in this case.) Is there a way to do this via SQL?
A common way of finding duplicates is: ``` select first_name, last_name, count(*) as DupeCount from table group by first_name, last_name having count(*) > 1 ``` This will get you all the names. There are multiple ways of getting the associated Ids, but some of the better ones are server specific.
You join the table with itself and mismatch on id (PK) Assuming ur table name is **Names** ``` SELECT COUNT(1) CNT, N1.first_name,N1.last_name FROM NAMES N1 INNER JOIN NAMES N2 ON N1.first_name = N2.first_name AND N1.last_name = N2.last_name AND N1.ID <> N2.ID GROUP BY N1.first_name,N1.last_name ```
How can I return a result set of records with duplicate columns in SQL
[ "", "sql", "" ]
I create a table in MySQL using the following script: ``` CREATE TABLE IF NOT EXISTS users_x_activities( id int NOT NULL auto_increment, id_user int unsigned NOT NULL, id_attivita int unsigned NOT NULL, PRIMARY KEY (id), FOREIGN KEY (id_user) REFERENCES utente(id), FOREIGN KEY (id_attivita) REFERENCES attivita(id) ) ENGINE = INNODB; ``` When I export the created table from phpMyAdmin, I obtain the following script: ``` CREATE TABLE IF NOT EXISTS `users_x_activities` ( `id` int(11) NOT NULL AUTO_INCREMENT, `id_user` int(10) unsigned NOT NULL, `id_attivita` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `id_user` (`id_user`), KEY `id_attivita` (`id_attivita`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; ``` So the questions are: where are my foreign key constraints? Does `KEY` refer to a FK? It seems that the two tables `utente` and `attivita` are no longer referenced in the newly generated script. Where am I going wrong? **EDIT** In phpMyAdmin, while configuring the export of the table, I found the option "Display Foreign Key Relationship". If I check this option, I also obtain this code in the script: ``` -- -- RELATIONS FOR TABLE `users_x_activity`: -- `id_user` -- `utente` -> `id` -- `id_attivita` -- `attivita` -> `id` -- -- -- Constraints for dumped tables -- -- -- Constraints for table `users_x_activity` -- ALTER TABLE `users_x_activity` ADD CONSTRAINT `users_x_activities_ibfk_1` FOREIGN KEY (`id_user`) REFERENCES `utente` (`id`), ADD CONSTRAINT `users_x_activities_ibfk_2` FOREIGN KEY (`id_attivita`) REFERENCES `attivita` (`id`); ``` Does this mean that if I add the option "Display Foreign Key Relationship" I also obtain the FK constraints, and otherwise not?
**So the questions are: where are my foreign key constraints?** They are defined in the database. The output from `SHOW CREATE TABLE users_x_activities` will include the foreign key constraint definitions. The definitions of the foreign key constraints likely appear in separate ALTER TABLE statements at the end of the generated script. **Does KEY refer to FK?** No. `KEY id_user (id_user)` here refers to an index. **It seems that the two tables utente and attivita are no longer referenced in the newly generated script.** Yes, you are correct. The foreign key constraints are not included in the CREATE TABLE statement. **Where am I going wrong?** A MySQL `SHOW CREATE TABLE users_x_activities` will include the foreign key constraints. The foreign key constraints are likely included in the script generated by phpMyAdmin, but at the end of the script, in separate `ALTER TABLE` statements.
There are two type of constraints when you managing your tables with phpmyadmin: * internal: when you set constraints with phpmyadmin designer for example the constraints stored as internal,that will not be included in export. * innoDB: these constraints included in export check out linked video about it [Setting up a foreign key constraint](https://www.youtube.com/watch?v=A42FG4LzdbY)
Foreign Key constraints missing after phpmyadmin export
[ "", "mysql", "sql", "phpmyadmin", "foreign-keys", "key", "" ]
I use Python 2.7 mostly, but I wanted to use Python 3.3 for a specific task. I referred to the existing question [Python 3x and python 2.x](https://stackoverflow.com/questions/341184/can-i-install-python-3-x-and-2-x-on-the-same-computer). The solution suggested there did not work for me. I have a couple of questions based on the issues I have been facing. 1. Adding Python 3.3 to the path variable. Some posts ([add python27\_path](https://stackoverflow.com/questions/16925038/add-python27-to-windows-path-while-using-python-3-3)) suggested copying the file and renaming it. I tried renaming C:\Python3.3\python.exe to C:\Python3.3\python3.exe, but this did not work for me. 2. Adding libraries to PYTHONPATH: I had added C:\Python33\Lib\site-packages to the PYTHONPATH. When I ran the code with Python 3.3, it was using the libraries of Python 2.7. Is it possible for the libraries to coexist on the same machine, so that if I call Python 2.7 it looks only for its own modules?
Those lovely people over at Python have come up with the perfect solution for you as part of Python 3.3: a launcher for Windows that works all this out for you. Take a look about halfway down [this](http://docs.python.org/3/using/windows.html) page. The other option is to have a switcher script which changes your PATH and PYTHONPATH variables for you.
Well, you can explicitly specify which version of python to use by making sure that you add the appropriate python location to the beginning of the path before you invoke the python command. Something like this. Let's assume that you PATH variable in Windows is : **c:\windows\system32;c:\python27\;...** Execute your python scripts using 3.3 this way : ``` SET PATH = "c:\python33\";%PATH% python yourscript.py ``` Execute your python scripts using 2.7 this way : ``` SET PATH = "c:\python27\";%PATH% python yourscript.py ``` This is a good way to execute scripts without having to install too many third party software products. A simple BAT file can then solve your requirement.
installing python 3 and python2.7 on windows
[ "", "python", "installation", "python-3.3", "" ]
I'm trying to build a class based on the built-in list type: ``` class MyList(list): def __getslice__(self, i, j): return MyList( super(MyList, self).__getslice__(i, j) ) def __add__(self,other): return MyList( super(MyList, self).__add__(other) ) def __mul__(self,other): return MyList( super(MyList, self).__mul__(other) ) def __getitem__(self, item): result = super(MyList, self).__getitem__(item) try: return MyList(result) except TypeError: return result ``` and I'm wondering if there's a way to make the MyList class work with built-in functions like filter or map. By "work with" I mean making filter and map return a MyList class object instead of a list type object. ``` >>> a = MyList([1, 2, 3, 4]) >>> type(a) <class '__main__.MyList'> >>> b = filter(lambda this: this > 2, a) >>> type(b) <type 'list'> ``` I'd like type(b) to return the same as type(a) returns. Any suggestions?
No, you'll have to cast the output of `filter()` or `map()` back to `MyList`. These functions are documented to produce a `list`, (almost) always. They do so if you pass *other* types of sequences as well. Quoting from the [`map()` documentation](http://docs.python.org/2/library/functions.html#map): > The *iterable* arguments may be a sequence or any iterable object; the result is always a list. In other words, `filter()` and `map()` don't care about the exact type of the sequences you pass in, that is not limited to your `MyList` type. The exception is `filter()`, which special-cases for `tuple()` and `str()`; quoting the [`filter()` documentation](http://docs.python.org/2/library/functions.html#filter): > If iterable is a string or a tuple, the result also has that type; otherwise it is always a list. This special handling is hard-coded and cannot be extended. In Python 3, this exception no longer applies; both `map()` and `filter()` return a *generator* there instead.
You "could" (not saying you should) do the following ``` _filter = filter def filter(a,b): return MyList(filter(a,b)) ```
Python: inherit from the built-in list type VS filter, map built-in functions
[ "", "python", "subclass", "" ]
I have the following test.py file in Django. Can you please explain this code? ``` from contacts.models import Contact ... class ContactTests(TestCase): """Contact model tests.""" def test_str(self): contact = Contact(first_name='John', last_name='Smith') self.assertEquals( str(contact), 'John Smith', ) ```
``` from contacts.models import Contact # import model Contact ... class ContactTests(TestCase): # start a test case """Contact model tests.""" def test_str(self): # start one test contact = Contact(first_name='John', last_name='Smith') # create a Contact object with 2 params like that self.assertEquals( # check if str(contact) == 'John Smith' str(contact), 'John Smith', ) ``` Basically it will check whether str(contact) == 'John Smith'; if not, the assertion fails, the test fails, and it will notify you of the error at that line. In other words, assertEquals is, conceptually, a function that checks whether two variables are equal, for purposes of automated testing: ``` def assertEquals(var1, var2): if var1 == var2: return True else: return False ``` Hope it helps.
`assertEquals` is a (deprecated) alias for `TestCase.assertEqual`, which is [a method on the `unittest.TestCase` class](http://docs.python.org/2/library/unittest.html#unittest.TestCase.assertEqual). It forms a test assertion; where `str(contact)` must be equal to `'John Smith'` for the test to pass. The form with `s` has been marked as deprecated [since 2010](https://bugs.python.org/issue9424), but they've not actually been removed, and there is no concrete commitment to remove them at this point. If you run your tests with deprecation warnings enabled (as [recommended in PEP 565](https://www.python.org/dev/peps/pep-0565/#recommended-filter-settings-for-test-runners)) you'd see a warning: ``` test.py:42: DeprecationWarning: Please use assertEqual instead. self.assertEquals( ```
What is actually assertEquals in Python?
[ "", "python", "django", "django-tests", "" ]
Suppose I have a dict: ``` x = { "a": ["walk", "the", "dog"], "b": ["dog", "spot"], "c":["the", "spot"] } ``` and want to have the new dict: ``` y = { "walk": ["a"], "the": ["a", "c"], "dog":["a", "b"], "spot":["b","c"] } ``` What is the most efficient way to do this? If a solution is a few lines and is somehow made simple by a Pythonic construct, what is it (even if it's not the most efficient)? Note that this is different from other questions where the value is a single element and not a list.
You can use `defaultdict`: ``` from collections import defaultdict y = defaultdict(list) for key, values in x.items(): # .iteritems() in Python 2 for value in values: y[value].append(key) ```
``` y = {} for (k, v) in x.iteritems(): for e in v: y.setdefault(e, []).append(k) ``` I presented this as an alternative to @Blender's answer, since it's what I'm accustomed to using, but I think Blender's is superior since it avoids constructing a temporary `[]` on every pass of the inner loop.
efficiently swap a python dict's keys and values where the values contain one or more elements
[ "", "python", "algorithm", "dictionary", "" ]
I frequently do a static analysis of SQL databases, during which I have the luxury of nobody being able to change the data except me. However, I have not found a way to 'tell' this to SQL in order to prevent running the same query multiple times. Here is what I would like to do. First, I start with a complicated query that has a very small output: ``` SELECT * FROM MYTABLE WHERE MYPROPERTY = 1234 ``` Then I run a simple query from the same window (mostly using SQL Server studio, if that is relevant): ``` SELECT 1 ``` Now I suddenly realize that I forgot to save the results from my first complicated (slow) query. As I know the underlying data did not change (or even if it did), I would like to look one step back and simply get the result. However, at the moment I don't know any trick to do this and I have to run the entire query again. So the question summary is: how can I (automatically store and) get the results from recently executed queries? I am particularly interested in simple select queries, and would be happy to allocate, say, 100 MB of memory for automated result storage. I would prefer a solution that works in SQL Server studio with T-SQL, but other SQL solutions are also welcome. --- **EDIT:** I am not looking for a way to manually prevent this from happening. In the cases where I can anticipate the problem, it will not happen.
This can't be done in Microsoft SQL Server. SQL Server does not cache results; instead, it caches the data pages that were accessed by your query. This should make your query go a lot faster the second time around, so it won't be as painful to re-run it. Other databases, such as Oracle and MySQL, do have a query caching mechanism that will allow you to retrieve the results directly the second time around.
I run into this frequently, I often just throw the results of longer-running queries into a temp table: ``` SELECT * INTO #results1 FROM MYTABLE WHERE MYPROPERTY = 1234 SELECT * FROM #results1 ``` If the query is very long-running I might use a 'real' table. It's a good way to save on re-run time. Downside is that it adds to your query. You can also send query results to a file in SSMS, info on formatting the output is here: [SSMS Results to File](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/2b4301a8-d7fd-46b3-9c60-e1f32c1d21d1/ssms-results-to-file)
Get last few query results in SQL
[ "", "sql", "caching", "" ]
I am trying to read in a csv file with `numpy.genfromtxt` but some of the fields are strings which contain commas. The strings are in quotes, but numpy is not recognizing the quotes as defining a single string. For example, with the data in 't.csv': ``` 2012, "Louisville KY", 3.5 2011, "Lexington, KY", 4.0 ``` the code ``` np.genfromtxt('t.csv', delimiter=',') ``` produces the error: > ValueError: Some errors were detected ! > Line #2 (got 4 columns instead of 3) The data structure I am looking for is: ``` array([['2012', 'Louisville KY', '3.5'], ['2011', 'Lexington, KY', '4.0']], dtype='|S13') ``` Looking over the documentation, I don't see any options to deal with this. Is there a way do to it with numpy, or do I just need to read in the data with the `csv` module and then convert it to a numpy array?
You can use [pandas](https://github.com/pydata/pandas/pull/4384) (which is becoming the default library for working with dataframes (heterogeneous data) in scientific Python) for this. Its [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html) can handle this. From the docs: > quotechar : string > > ``` > The character used to denote the start and end of a quoted item. Quoted items > can include the delimiter and it will be ignored. > ``` The default value is `"`. An example: ``` In [1]: import pandas as pd In [2]: from StringIO import StringIO In [3]: s="""year, city, value ...: 2012, "Louisville KY", 3.5 ...: 2011, "Lexington, KY", 4.0""" In [4]: pd.read_csv(StringIO(s), quotechar='"', skipinitialspace=True) Out[4]: year city value 0 2012 Louisville KY 3.5 1 2011 Lexington, KY 4.0 ``` The trick here is that you also have to use `skipinitialspace=True` to deal with the spaces after the comma-delimiter. Apart from being a powerful csv reader, I can also strongly advise using pandas with the heterogeneous data you have (the example output in numpy you give is all strings, although you could use structured arrays).
The problem with the additional comma, `np.genfromtxt` does not deal with that. One simple solution is to read the file with `csv.reader()` from python's [csv](http://docs.python.org/2/library/csv.html) module into a list and then dump it into a numpy array if you like. If you really want to use `np.genfromtxt`, note that it can take iterators instead of files, e.g. `np.genfromtxt(my_iterator, ...)`. So, you can wrap a `csv.reader` in an iterator and give it to `np.genfromtxt`. That would go something like this: ``` import csv import numpy as np np.genfromtxt(("\t".join(i) for i in csv.reader(open('myfile.csv'))), delimiter="\t") ``` This essentially replaces on-the-fly only the appropriate commas with tabs.
Using numpy.genfromtxt to read a csv file with strings containing commas
[ "", "python", "pandas", "numpy", "csv", "genfromtxt", "" ]
I am attempting to find objects on the screen, see if they exist, and if so, select them. I am using the Sikuli library to run this little automation. ``` while True: if exist("image/one.png", "image/two.png", "image/three.png"): click ("image/one.png", or "image/two.png", or "image/three.png") break ``` I get `SyntaxError: mismatched input 'or' expecting RPAREN`. I've done a quick search, but I found nothing relevant to my particular issue. I've even tried ``` while True: if exist("image/one.png", or "image/two.png", or "image/three.png"): click ("image/one.png", or "image/two.png", or "image/three.png") break ``` and that results in the same error. @Stephan: New code snippet with error. ``` class gameImages(): imageFiles = ["one.png", "two.png", "three,png"] for imageFile in imageFiles: if exists(imageFile): click(imageFile) ``` The error now: ``` NameError: name 'imageFiles' is not defined ```
``` for imageFile in imageFiles: if exists(imageFile): click(imageFile) ``` Your `while` loop isn't doing anything, and neither is your `break` statement. This might do what you want, assuming I understand what you want to do.
Even easier, this is a perfect use of filter(ifexist,imageFiles). You then know that all >=0 elements in the return of filter can be used :). And it's more concise and clearly conveys your intent - much nicer to read then a chain of for's and if's ``` a = range(10) # [1,2,3,4,5,6,7,8,9] print filter(lambda x: x > 5, a) # [6,7,8,9] ``` Also the or is a logical operator: e.g. ``` a = 5 b = 6 c = 5 if( (a==c) or (b==c) ): print 'c is repeated' # c is repeated ``` your use of the or here makes no sense as it doesn't have operands to operate on - these can even be two objects, e.g. 1 or 2 since [anything](http://docs.python.org/release/2.5.2/lib/truth.html) can be cast to a boolean a concise way to do what you want is: //imagepaths = your list of imagepaths ``` map(lambda x: click(x.getTarget()), filter(exists, imagepaths)) ```
Selecting from multiple variables
[ "", "python", "sikuli", "" ]
I've written a script to print out lottery combinations. What I'm aiming for is this: there are 6 numbers in the lottery between 1-49, meaning there are 13,983,816 combinations. I want to print out all combinations IN ORDER whilst making sure there are NO duplicates. Here is my code so far: ``` import random numbers = [] for i in range(2): for j in range(6): numbers.append(random.randint(1,49)) for k in range(j): while numbers[j]==numbers[k]: numbers[j]=random.randint(1,49) print sorted(numbers) numbers = [] f = open('combinations.txt', 'w') f.write(str(sorted(numbers))) ``` The problems are: the output in the terminal is: ``` [18, 20, 27, 32, 44, 48] [5, 7, 10, 13, 33, 45] ``` I want to start with `[1,2,3,4,5,6]` and end up at `[44,45,46,47,48,49]`, so I need to order the results. Also, I've tried converting the list to a string so I can put the results in a big text file, but I'm just printing `[]` to the text file at the moment.
You were clearing your list **then** writing to the file. ``` from itertools import combinations f = open('combinations.txt', 'w') for comb in combinations(range(1,50), 6): f.write(str(comb)) f.write('\n') f.close() ``` ***But** make sure you have at least 350 Megabytes of disk space free!* and some time to spare. *(348168480 bytes I checked with:* ``` >>> s = 0 >>> for comb in combinations(range(1,50), 6): ... s += len(repr(comb))+2 ... >>> s 348168480 ``` *).*
Use [`itertools.combinations`](http://docs.python.org/2/library/itertools.html#itertools.permutations): ``` >>> from itertools import combinations >>> for comb in combinations(range(1,50), 6): ... print comb #Hit Enter at your own risk ``` For printing the combinations to the text file: ``` with open('combinations.txt', 'w') as f: for comb in combination: f.write(str(comb) + '\n') ```
Python - all lottery combinations - no duplicates, in order
[ "", "python", "" ]
Let's say, for example, I have 3 friends (John, Carter, and Bill) and I want to track the places they have visited. The places are in a dictionary: if they have visited the place, it will have a value of 1, and 0 if they have not. ``` places = {"LA": 0, "San Jose": 0, "NY": 0} ``` Now setting this won't work because it's a shallow copy: ``` friends = {"John": places, "Carter": places, "Bill": places} ``` The simple solution I have right now is this: ``` friends = {"John": {"LA": 0, "San Jose": 0, "NY": 0}, "Carter": {"LA": 0, "San Jose": 0, "NY": 0}, "Bill": {"LA": 0, "San Jose": 0, "NY": 0}} ``` which doesn't seem very efficient. Is there a better way to do what I am trying to achieve in Python? Either for the syntax of populating *friends* or just a new method entirely?
If you have a static set of cities, then you could store, in a map, the city name as the key and the index of that city as the value. In addition, for each person you can store a list of visits, where the i-th element of that list denotes the number of visits to the i-th city: ``` places = {"LA": 0, "San Jose": 1, "NY": 2} friends = {"John": [0, 0, 0], "Carter": [0, 0, 0], "Bill": [0, 0, 0]} ``` Then if John visits NY you do: ``` friends["John"][places["NY"]] += 1 ``` It's more memory efficient than your method, because it stores the names of the cities in one place.
I believe you're looking for `copy.deepcopy()`. ``` friends = { "John": copy.deepcopy(places), "Carter": copy.deepcopy(places), "Bill": copy.deepcopy(places), } ```
Most efficient way to populate dictionary keys with dictionary
[ "", "python", "dictionary", "" ]
I am very new to SQL, was looking around for this answer on the internet but I just don't know the specific keywords to use. I tried using sum function and distinct function but I just couldn't figure this out. So I have a table like this ``` product_id Sales 11 32 11 28 12 20 12 22 12 10 ``` How do I group this table so that the same product\_id has a total sales (adding all of the sales). I am trying to achieve the result like this: ``` product_id Sales 11 60 12 52 ``` Many thanks!
``` select product_id, sum(Sales) from table group by product_id order by product_id ```
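The query above can be tried end-to-end with the stdlib `sqlite3` module — the table is named `sales` here because `table` is a reserved word, and exact syntax may vary slightly on other engines:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product_id INTEGER, Sales INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(11, 32), (11, 28), (12, 20), (12, 22), (12, 10)],
)

# group rows by product_id and add up the sales in each group
rows = conn.execute(
    "SELECT product_id, SUM(Sales) FROM sales "
    "GROUP BY product_id ORDER BY product_id"
).fetchall()
```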
Pretty sure this should work... ``` SELECT product_id, SUM(Sales) FROM table_name GROUP BY product_id ```
How to add values that has the same id
[ "", "sql", "" ]
I am trying to write a program with python. I want to substitute whitespaces in a txt document with new lines. I have tried writing it myself but my output file gets filled with weird characters. Can you help? :)
Here you go: ``` lResults = list() with open("text.txt", 'r') as oFile: for line in oFile: sNewLine = line.replace(" ", "\n") lResults.append(sNewLine) with open("results.txt", "w") as oFile: for line in lResults: oFile.write(line) ``` Here is an "optimized" version after the suggestions in the comments: ``` with open("text.txt", 'r') as oFile: lResults = [line.replace(" ", "\n") for line in oFile] with open("results.txt", "w") as oFile: oFile.writelines(lResults) ``` EDIT: Response to comment: > hey sebastian - I just tried your code, it keeps giving me the weird > characters in the output file! am i doing something wrong with it? – > Freddy 1 min ago What do you mean by "weird" characters? Do you have a non-ASCII file? Sorry, but for me it works perfectly fine, I just tested it. ![enter image description here](https://i.stack.imgur.com/6RW0j.png) ![enter image description here](https://i.stack.imgur.com/tUOB1.png)
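For comparison, here is a one-pass sketch using `str.split`, which collapses runs of *any* whitespace (tabs and newlines included) — that may or may not be what your input file needs:

```
def whitespace_to_newlines(text):
    # split() with no argument splits on runs of any whitespace,
    # so multiple consecutive spaces don't produce empty lines
    return "\n".join(text.split())

result = whitespace_to_newlines("the  text to\tbe processed")
```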
Try this: ``` import re s = 'the text to be processed' re.sub(r'\s+', '\n', s) => 'the\ntext\nto\nbe\nprocessed' ``` Now, the "text to be processed" above will come from the input text file, which you previously read into a string - see this [answer](https://stackoverflow.com/questions/8369219/how-do-i-read-a-text-file-into-a-string-variable-in-python) for details on how to do this.
super simple python program - substituting whitespaces with newlines?
[ "", "python", "" ]
I am using Flask for my python wsgi server, and sqlalchemy for all my database access. I *think* I would like to use the Flask-Sqlalchemy extension in my application, but I do not want to use the declarative base class (db.Model), instead, I want to use the base from sqlalchemy.ext.declarative. Does this defeat the entire purpose of using the extension? --- My use case: I would like the extension to help me manage sessions/engines a little better, but I would like to handle all models separately. I actually wouldn't mind using the extension, but I want to write *strict* models. I am porting code from a non-flask application, and I will be pushing changes back to that project as I go. If flask-sqlalchemy allows me to cheat on Table **metadata** for instance, that is going to cause problems when the code is pushed back out. There are also portions of my code that do lots of type checking (polymorphic identities), and I also remember reading that type checking on Table is not recommended when using the extension.
SQLAlchemy themselves actually recommend you use the Flask wrapper (db.Model) for Flask projects. That being said I have used the declarative\_base model in several of my Flask projects where it made more sense. It does defeat the whole purpose of the SQLAlchemy class from flask-sqlalchemy. Here's some sample code: ``` from sqlalchemy import * from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, sessionmaker import datetime #set up sqlalchemy engine = create_engine('postgresql://<username>:<password>@localhost/flask_database') Base = declarative_base() metadata = Base.metadata metadata.bind = engine Session = sessionmaker(bind=engine, autoflush=True) session = Session() class User(Base): __tablename__ = 'user' id = Column(Integer, primary_key=True) api_owner_id = Column(Integer, ForeignKey('api.id')) email = Column(String(120), unique=True) username = Column(String(120), unique=True) first_name = Column(String(120)) last_name = Column(String(120)) business_name = Column(String(120)) account_type = Column(String(60)) mobile_phone = Column(String(120)) street = Column(String(120)) street2 = Column(String(120)) city = Column(String(120)) state = Column(String(120)) zip_code = Column(String(120)) country = Column(String(120)) creation_date = Column(DateTime, default=datetime.datetime.now()) password = Column(String(120)) #github stuffs github_link = Column(Boolean, default=False) github_usn = Column(String(120)) github_oauth_token = Column(String(160)) #balanced stuffs balanced_account_uri = Column(String(120)) ach_verified = Column(Boolean, default=False) active = Column(Boolean, default=True) profile_updated = Column(Boolean, default=False) account_balance = Column(Numeric(precision=10, scale=2), default=0.00) admin = Column(Boolean, default=False) devapp = relationship('DevApp', backref="user", lazy="dynamic") projects = relationship('Project', backref="user", lazy="dynamic") proposals = relationship('Proposal', 
backref="user", lazy="dynamic")
    transactions = relationship('Monies', backref="user", lazy="dynamic")

    def __repr__(self):
        return self.email
```
You can have Flask-SQLAlchemy expose your own base Model instead of its built-in one. Just subclass `SQLAlchemy` and override `make_declarative_base`. ``` from flask.ext.sqlalchemy import SQLAlchemy class CustomAlchemy(SQLAlchemy): def make_declarative_base(self): base = declarative_base(...) ... return base db = CustomAlchemy() ```
using flask-sqlalchemy without the subclassed declarative base
[ "", "python", "sqlalchemy", "flask", "flask-sqlalchemy", "" ]
For example if you declare the following function: ``` >>> def fun(a, b): ... a.pop() ... a.pop() ... b.add(a.pop()) ... return b ... ``` This function would receive any type of data, but let's suppose I'd like to receive only arguments of type 'set', because for some reason inside this function I would need to use the methods add() and pop() which are found in any 'set'. How could I say that the function only accepts arguments of type 'set'?
You can use [`isinstance()`](http://docs.python.org/2/library/functions.html#isinstance) to check for type: ``` >>> def fun(a, b): ... if not (isinstance(a, set) and isinstance(b, set)): ... raise TypeError("Please only call this function with sets as the arguments") ... a.pop() ... a.pop() ... b.add(a.pop()) ... return b ```
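If the same guard is needed in several functions, it can be factored out — a hypothetical helper, not from the original answer:

```
def require_sets(*args):
    # raise early, before any argument has been mutated
    for i, a in enumerate(args):
        if not isinstance(a, set):
            raise TypeError(
                "argument %d must be a set, got %s" % (i, type(a).__name__)
            )

def fun(a, b):
    require_sets(a, b)
    a.pop()
    a.pop()
    b.add(a.pop())
    return b

out = fun({1, 2, 3}, {9})

try:
    fun([1, 2, 3], {9})  # a list is rejected up front
    rejected = False
except TypeError:
    rejected = True
```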
I think if you want to enforce a type, you can do it in the function: ``` if not isinstance(item, set): raise TypeError() ``` If not, your code will raise an exception when an illegal operation happens, which is reasonable. Or you can wrap your code in a `try...except` if you don't want an exception: ``` try: ... except: ... ```
How to specify that a function parameter should be of a specific data type?
[ "", "python", "types", "python-3.x", "parameters", "arguments", "" ]
I have some code: ``` report['ipconfig'] = [line.decode('cp866') for line in report['ipconfig']] ``` Can I make this code more simple?
Something like this? ``` for i, line in enumerate(report['ipconfig']): report['ipconfig'][i] = line.decode('cp866') ```
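A quick way to sanity-check the in-place loop — ASCII bytes decode identically under cp866, so the sample data stays encoding-safe:

```
report = {"ipconfig": [b"line one", b"line two"]}

# same in-place decode loop as above
for i, line in enumerate(report["ipconfig"]):
    report["ipconfig"][i] = line.decode("cp866")
```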
I don't know if this is more simple (what does that even mean?), but it's a different way of doing it: ``` report['ipconfig'] = map(lambda x : x.decode('cp866'), report['ipconfig']) ```
Is there more simple way to change list elements
[ "", "python", "list", "decode", "" ]
I am using **psycopg2** module in python to read from postgres database, I need to some operation on all rows in a column, that has more than 1 million rows. I would like to know would `cur.fetchall()` fail or cause my server to go down? (since my RAM might not be that big to hold all that data) ``` q="SELECT names from myTable;" cur.execute(q) rows=cur.fetchall() for row in rows: doSomething(row) ``` what is the smarter way to do this?
`fetchall()` fetches all remaining rows of the result set at once, so with a million rows it can exhaust your RAM. To keep memory in check, fetch rows in manageable batches with `fetchmany()` (whose default batch size is the cursor's [`arraysize`](http://www.python.org/dev/peps/pep-0249/#arraysize)), or simply step through the cursor till it's exhausted: ``` row = cur.fetchone() while row: # do something with row row = cur.fetchone() ``` (Note that psycopg2's default client-side cursor still transfers the whole result set on `execute()`; to truly bound memory, use a named server-side cursor.)
The solution Burhan pointed out reduces the memory usage for large datasets by only fetching single rows: > row = cursor.fetchone() However, I noticed a significant slowdown in fetching rows one-by-one. I access an external database over an internet connection, which might be a reason for it. Having a server-side cursor and fetching rows in batches proved to be the most performant solution. You can change the SQL statements (as in alecxe's answer), but there is also a pure Python approach using the feature provided by psycopg2: ``` cursor = conn.cursor('name_of_the_new_server_side_cursor') cursor.execute(""" SELECT * FROM table LIMIT 1000000 """) while True: rows = cursor.fetchmany(5000) if not rows: break for row in rows: # do something with row pass ``` You can find more about server-side cursors in the [psycopg2 wiki](http://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor)
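The `fetchmany` loop is plain DB-API, so the batching pattern can be sketched with stdlib `sqlite3` standing in for psycopg2 (the server-side-cursor part is psycopg2-specific and not reproduced here):

```
import sqlite3

def batches(cursor, size):
    # yield rows in fixed-size chunks until the cursor is exhausted
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            break
        yield rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("row%d" % i,) for i in range(10)])

cur = conn.execute("SELECT name FROM t ORDER BY name")
chunks = list(batches(cur, 3))
```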
python postgres can I fetchall() 1 million rows?
[ "", "python", "postgresql", "psycopg2", "fetchall", "" ]
In SQL Server, what's the simplest way to join a table of IPv4 addresses... ``` IPAddress ------------ 10.70.80.34 10.70.81.60 10.70.81.205 ``` To a table of IP ranges in CIDR notation... ``` IPRange Description --------------- ----------- 10.70.80.0/24 Sydney 10.70.81.0/25 Melbourne 10.70.81.128/25 Perth ``` There will be fewer than 100 rows in the IP range table.
I wrote the following function which is OK for a small number of rows. For larger tables the IPs should be stored [in binary form](https://stackoverflow.com/questions/1385552/datatype-for-storing-ip-address-in-sql-server). ``` CREATE FUNCTION IPAddressInRange ( @IPAddress NVARCHAR(MAX), @IPRange NVARCHAR(MAX) ) RETURNS BIT AS BEGIN DECLARE @SlashPos INT = CHARINDEX('/', @IPRange); DECLARE @Network NVARCHAR(MAX) = SUBSTRING(@IPRange, 1, @SlashPos - 1); DECLARE @PrefixBits INT = CAST(SUBSTRING(@IPRange, @SlashPos + 1, 2) AS INT); DECLARE @IPAddressInt BIGINT = PARSENAME(@IPAddress, 4) * POWER(CAST(2 AS BIGINT), 24) + PARSENAME(@IPAddress, 3) * POWER(CAST(2 AS BIGINT), 16) + PARSENAME(@IPAddress, 2) * POWER(CAST(2 AS BIGINT), 8) + PARSENAME(@IPAddress, 1); DECLARE @NetworkInt BIGINT = PARSENAME(@Network, 4) * POWER(CAST(2 AS BIGINT), 24) + PARSENAME(@Network, 3) * POWER(CAST(2 AS BIGINT), 16) + PARSENAME(@Network, 2) * POWER(CAST(2 AS BIGINT), 8) + PARSENAME(@Network, 1); DECLARE @Mask BIGINT = POWER(CAST(2 AS BIGINT), 32) - POWER(CAST(2 AS BIGINT), 32 - @PrefixBits); RETURN CASE WHEN @IPAddressInt & @Mask = @NetworkInt THEN 1 ELSE 0 END; END ``` Example usage: ``` SELECT * FROM IPAddressTable a JOIN IPRangeTable r ON dbo.IPAddressInRange(a.IPAddress, r.IPRange) = 1 ```
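On the Python side, the stdlib `ipaddress` module (Python 3.3+) performs the same membership test, which is handy for cross-checking the SQL function against the sample data:

```
import ipaddress

def in_range(ip, cidr):
    # ip_network validates that the host bits of the CIDR are zero
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

ranges = {
    "10.70.80.0/24": "Sydney",
    "10.70.81.0/25": "Melbourne",
    "10.70.81.128/25": "Perth",
}

def site_for(ip):
    # first matching range wins; the ranges here don't overlap
    for cidr, name in ranges.items():
        if in_range(ip, cidr):
            return name
    return None

match = site_for("10.70.81.205")
```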
The [`ParseName()`](http://msdn.microsoft.com/en-us/library/ms188006.aspx) function may be useful for you in this scenario ``` DECLARE @ips table ( ip_address varchar(15) ); INSERT INTO @ips (ip_address) VALUES ('10.70.80.34') , ('10.70.81.60') , ('10.70.81.205'); SELECT ip_address , ParseName(ip_address, 4) As first_octet , ParseName(ip_address, 3) As second_octet , ParseName(ip_address, 2) As third_octet , ParseName(ip_address, 1) As fourth_octet FROM @ips ``` ## Results ``` ip_address first_octet second_octet third_octet fourth_octet --------------- ------------- -------------- ------------ ------------- 10.70.80.34 10 70 80 34 10.70.81.60 10 70 81 60 10.70.81.205 10 70 81 205 ```
SQL Server join IP address to IP range
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I got two strings retrieved from a cookie ``` name = 'rack.session' val = 'CookieVal' ``` Using them I would like to build a dictionary ``` cookies = dict(rack.session=val) ``` but `SyntaxError: keyword can't be an expression` So I tried to escape the (.) dot ``` re.escape(name) ``` ... but it raises the same error How is this possible? According to Python `type()` name is a string: ``` type(name) <class 'str'> ``` Why is Python mixing up strings and expressions?
The problem with `rack.session` is that python thinks that you're trying to use the value of the expression `rack.session` and pass it to `dict()`, which is incorrect because `dict()` expects you to pass variable names when you're using keyword arguments; these variable names are then converted to strings when the dict is created. Simple example: ``` >>> dict('val' = 'a') File "<ipython-input-21-1cdf9688c191>", line 1 SyntaxError: keyword can't be an expression ``` So, you can't use an object on the left side of `=`, you can only use a valid identifier. Byte code makes it even more clear what happens with `rack.session`: ``` >>> import dis >>> dis.dis(lambda : dict(rack.session , val)) 1 0 LOAD_GLOBAL 0 (dict) 3 LOAD_GLOBAL 1 (rack) # load the object `rack` 6 LOAD_ATTR 2 (session)# use the value of its attribute # `session` 9 LOAD_GLOBAL 3 (val) 12 CALL_FUNCTION 2 15 RETURN_VALUE ``` So, with `rack.session = val`, python will think that you're trying to use the value returned from `rack.session` and pass it to `dict`, which is incorrect. Secondly, `rack.session` isn't a valid identifier, as dots (`.`) are not allowed in python identifiers. This applies to any function in python, not just `dict`: a keyword argument must be a valid identifier. From the [docs](http://docs.python.org/2/reference/expressions.html#calls): ``` keyword_item ::= identifier "=" expression ``` Valid examples: ``` >>> dict(foo = 1, bar = '2') {'foo': 1, 'bar': '2'} ``` For your example you can simply do: ``` >>> val = 'CookieVal' >>> name = 'rack.session' >>> dict(((name,val),)) {'rack.session': 'CookieVal'} #or >>> {name:val} {'rack.session': 'CookieVal'} ```
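The takeaway as a runnable sketch — dynamic keys go through a dict display or the constructor with key/value pairs, never keyword arguments:

```
name = "rack.session"
val = "CookieVal"

cookies_literal = {name: val}        # dict display evaluates the key expression
cookies_pairs = dict([(name, val)])  # constructor with (key, value) pairs
```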
``` >>> name = 'rack.session' >>> val = 'CookieVal' >>> cookies = {name: val} >>> cookies {'rack.session': 'CookieVal'} ```
SyntaxError: keyword can't be an expression while creating a dictionary
[ "", "python", "dictionary", "" ]
I have an apache instance where I have the following ``` WSGIPythonPath /production/somelocation/django12/lib/python2.4/site-packages/ <VirtualHost 192.168.1.1:443> WSGIScriptAlias / /opt/project.wsgi ..... ``` My Django 1.5 app apache config looks like, ``` WSGIPythonPath /production/somelocation/django15/lib/python2.7/site-packages/ <VirtualHost 192.168.1.2:443> .... WSGIScriptAlias / /opt/project2.wsgi ``` My /opt/project.wsgi looks like ``` import os import sys # django1.2 virtualenv import site site.addsitedir("/production/somelocation/django12/lib/python2.4/site-packages") ..... ``` However when I go to the site I still get my default django (1.5) instance. What am I missing ?
The other answers mention setting the python path, however using WSGIPythonPath or WSGIPythonHome is not correct. The WSGIPythonPath / WSGIPythonHome directives can only be set **server-wide**, so no different paths per virtualhost. You would want to use the [**WSGIDaemonProcess**](http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess) **python-path** and **home** arguments to set the python path and your app's home directory per virtualhost. Also, within your code there is no need to adjust python paths; just make sure your virtualhost config is correct.
This is how I do it with Pyramid: ``` <VirtualHost *:80> ServerName hackintosh DocumentRoot "/Library/WebServer/Documents" </VirtualHost> <VirtualHost *:80> ServerName modwebsocket.local ErrorLog "/PythonProjects/MOD_WEBSOCKET/logs/error_log" CustomLog "/PythonProjects/MOD_WEBSOCKET/logs/access_log" common WSGIDaemonProcess pyramid-modwebsocket user=apero group=staff threads=4 python-path=/PythonProjects/MOD_WEBSOCKET/lib/python2.7/site-packages WSGIProcessGroup pyramid-modwebsocket WSGIScriptAlias / /PythonProjects/MOD_WEBSOCKET/wsgi/pyramid.wsgi <Directory "/PythonProjects/MOD_WEBSOCKET/wsgi"> WSGIProcessGroup pyramid-modwebsocket Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:80> ServerName ai.local ErrorLog "/PythonProjects/AI/logs/error_log" CustomLog "/PythonProjects/AI/logs/access_log" common WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On WSGIDaemonProcess pyramid-ai user=apero group=staff threads=4 python-path=/PythonProjects/AI/lib/python2.7/site-packages WSGIProcessGroup pyramid-ai WSGIScriptAlias / /PythonProjects/AI/wsgi/pyramid.wsgi <Directory "/PythonProjects/AI/wsgi"> WSGIProcessGroup pyramid-ai Order allow,deny Allow from all </Directory> </VirtualHost> ```
Howto configure Apache WSGI for multiple separate Django instances?
[ "", "python", "django", "apache2", "mod-wsgi", "" ]
There are two tables, > table1 contains matchid,mdate(matchdate),stadium,team1,team2 > > table2 contains matchid,teamid,playername, goal time(time at which goal was > scored) Here, team1,team2,teamid are the same (abbrevations for country that's playing like 'GER', 'POL', RUS' etc) Now I need to list every match with the goals scored by each team in the form of a table3 > table3 will contain matchdate,team1,score1,team2,score2 where score1 is number of goals scored by team1 and score2 is number of goals scored by team2 respectively I tried using this query but it gives error. Any help will be appreciated > ``` > SELECT mdate,team1,score1,team2,score2 > > CASE WHEN teamid=team1 THEN count(gtime)=score1 ELSE 0 > > CASE WHEN teamid=team2 THEN count(gtime) ELSE 0 > END > ``` ``` FROM game JOIN goal ON table1.matchid = table2.matchid group by matchid ``` EDIT: The qestion is query 13from sqlzoo.net/wiki/The\_JOIN\_operation Also, I realized the mistake in my case when statement. Thanks all for the help
How about this: ``` SELECT game.mdate, game.team1, SUM(CASE WHEN goal.teamid=team1 THEN 1 ELSE 0 end) AS score1, game.team2, SUM(CASE WHEN goal.teamid=team2 THEN 1 ELSE 0 end) AS score2 FROM game JOIN goal ON goal.matchid = game.id GROUP BY game.mdate, game.team1, game.team2 ``` Makes you think what happens in case of an own-goal though.
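Conditional aggregation like this runs unchanged on SQLite, so the scoring logic is easy to verify with the stdlib — the schema and the single sample match below are invented for illustration:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE game (id INTEGER, mdate TEXT, team1 TEXT, team2 TEXT);
CREATE TABLE goal (matchid INTEGER, teamid TEXT, gtime INTEGER);
INSERT INTO game VALUES (1, '2012-06-08', 'POL', 'GRE');
INSERT INTO goal VALUES (1, 'POL', 17), (1, 'GRE', 51), (1, 'POL', 80);
""")

# count each goal toward team1 or team2 depending on who scored it
rows = conn.execute("""
SELECT game.mdate, game.team1,
       SUM(CASE WHEN goal.teamid = team1 THEN 1 ELSE 0 END) AS score1,
       game.team2,
       SUM(CASE WHEN goal.teamid = team2 THEN 1 ELSE 0 END) AS score2
FROM game JOIN goal ON goal.matchid = game.id
GROUP BY game.mdate, game.team1, game.team2
""").fetchall()
```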
If you want the result in 2 columns, you are missing a comma; otherwise your syntax is wrong. You don't need the second "case" word: ``` SELECT CASE WHEN 1=1 THEN 'equal' when 1=3 THEN 'not equal' END ```
Use of CASE WHEN in SQL
[ "", "mysql", "sql", "" ]
I was solving the following question where I have to "List all the people who have worked with 'Art Garfunkel'." The question is no. 16 here : <http://sqlzoo.net/wiki/More_JOIN_operations> Edit: I wrote the following code to get the whole names list, but in vain ``` select name from actor left join casting on id=actorid where movieid IN ( select movieid from casting left join actor on id=actorid where id IN ( select id from actor where name ='Art Garfunkel')) ``` Kindly tell me where am I going wrong?
Try this: ``` SELECT a.name FROM actor a INNER JOIN casting b ON a.id = b.actorid INNER JOIN (SELECT b.movieid FROM casting b INNER JOIN actor c ON b.actorid=c.id AND c.name = 'Art Garfunkel' ) c ON b.movieid = c.movieid where a.name <> 'Art Garfunkel'; ```
I don't think you need any subqueries for this, you can do it with JOINS only.. ``` select distinct c1.name from movie a inner join casting b on a.id=b.movieid inner join actor c on b.actorid=c.id and c.name = 'Art Garfunkel' inner join actor c1 on b.actorid=c.id and c.name <> 'Art Garfunkel' ```
Subqueries and Join in SQL
[ "", "sql", "select", "join", "inner-join", "" ]
I'm trying to add a "Open file" file tab on my UI. Works ok, but a `---------` line is showing up at the top of the tab and I want to remove it. I don't know why that line is showing up and I can't find the line on the code. ![enter image description here](https://i.stack.imgur.com/oozVy.png) This is my code: ``` # -*- coding: utf-8 -*- from Tkinter import * import Image import ImageTk import tkFileDialog class Planificador(Frame): def __init__(self,master): Frame.__init__(self, master) self.master = master self.initUI() def initUI(self): self.master.title("test") menubar = Menu(self.master, tearoff=0) self.master.config(menu=menubar) fileMenu = Menu(menubar) fileMenu.add_command(label="Open config file", command=self.onOpen) menubar.add_cascade(label="File", menu=fileMenu) fileMenu.add_separator() fileMenu.add_command(label="Exit", command=root.quit) self.txt = Text(self) self.txt.pack(fill=BOTH, expand=1) def onOpen(self): ftypes = [('Python files', '*.py'), ('All files', '*')] dlg = tkFileDialog.Open(self, filetypes = ftypes) fl = dlg.show() if fl != '': text = self.readFile(fl) self.txt.insert(END, text) def readFile(self, filename): f = open(filename, "r") text = f.read() return text # Main if __name__ == "__main__": # create interfacE root = Tk() aplicacion = Planificador(root) root.mainloop() ``` I would like to know where I can remove that `-------` from the code. Thanks in advance
Set `tearoff` option of `fileMenu` to `False` (or `0`) ``` fileMenu = Menu(menubar, tearoff=False) ```
The best way to remove the *dashed line* is to use the `option_add` method to set the `*tearOff` option of `root` to `False` before you start creating any menus. Here you can do this in your `initUI` method like this: ``` def initUI(self): self.master.title("test") self.master.option_add('*tearOff', False) ``` This will remove the *dashed line* for each menus you will create so won't need to set the `tearoff=False` anytime you create a menus.
How to remove dashed line from my menu UI?
[ "", "python", "tkinter", "" ]
I have Python 2.7.1 on a Simplified-Chinese version of Windows XP, and I have a program like this(windows\_prn\_utf8.py): ``` #!/usr/bin/env python # -*- coding: utf8 -*- print unicode('\xE7\x94\xB5', 'utf8') ``` If I run it on Windows CMD console, it output the right Chinese character '电' ; however, if I try to redirect the command output to a file. I got error. ``` D:\Temp>windows_prn_utf8.py > 1.txt Traceback (most recent call last): File "D:\Temp\windows_prn_utf8.py", line 4, in <module> print unicode('\xE7\x94\xB5', 'utf8') UnicodeEncodeError: 'ascii' codec can't encode character u'\u7535' in position 0: ordinal not in range(128) ``` I realize there is a missing link here. There should be a way to determine, in case 1.txt is generated, whether the unicode character in 1.txt should be encoded in utf-8 or codepage-936 or other encodings. Then how to fix it? My preference is to have utf-8 encoding in 1.txt . Thank you. ![enter image description here](https://i.stack.imgur.com/5hgdj.png)
Set `PYTHONIOENCODING` environmental variable. ``` SET PYTHONIOENCODING=cp936 windows_prn_utf8.py > 1.txt ```
Seems like this was solved, but I think a bit more detail will help explain this actual problem. The 'utf8' in `unicode('\xE7\x94\xB5', 'utf8')` is telling the interpreter how to *decode* the 3 bytes you're providing in the other argument in order to represent the character internally as a unicode object: ``` In [6]: uobj = unicode('\xe7\x94\xb5','utf8') In [7]: uobj Out[7]: u'\u7535' ``` Another example would be creating the same character from its utf-16 representation (which is what python displays by default and shown in the `Out[7]` line above): ``` In [8]: uobj = unicode('\x35\x75','utf16') In [9]: uobj Out[9]: u'\u7535' ``` In your example after the object has been created it becomes an argument to `print` which tries to write it to standard out (console window, redirected to a file, etc). The complication is that `print` must re-encode that object into a byte stream before writing it. It looks like in your case the encoding it used by default was ACSII which cannot represent that character. (If a console will try to display the characters, they will be re-decoded and replaced in the window with the corresponding font glyphs--this is why your output and the console both need to be 'speaking' the same encoding.) From what I've seen cmd.exe in windows is pretty confusing when it comes to character encodings, but what I do on other OSes is explicitly encode the bytes before printing/writing them with the unicode object's `encode` function. This returns an encoded byte sequence stored in a `str` object: ``` In [10]: sobj = uobj.encode('utf8') In [11]: type(sobj) Out[11]: str In [12]: sobj Out[12]: '\xe7\x94\xb5' In [13]: print sobj 电 ``` Now that `print` is given a `str` instead of a `unicode`, it doesn't need to encode anything. In my case my terminal was decoding utf8, and its font contained that particular character, so it was displayed properly on my screen (and hopefully right now in your browser).
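The encode-before-write step can be verified without any console at all — a small sketch writing to an in-memory buffer; the bytes match the UTF-8 sequence from the question:

```
import io

uobj = u"\u7535"              # 电, as a unicode object
encoded = uobj.encode("utf8")  # explicit encode before writing

buf = io.BytesIO()             # stands in for a file opened in binary mode
buf.write(encoded)
```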
Print unicode string to console OK but fails when redirect to a file. How to fix?
[ "", "python", "python-2.7", "python-unicode", "" ]
I have a SQL query as below. What I want with the where part is for the result to not return rows where CategorySite returns more than two rows with the matched categoryid in ProductCategory regardless of SiteId. The problem is that I know that ProductCategory has more than one matching categories for some of the results I am receiving, so there is something wrong with my query and I can't figure out what. ``` select top 10 pp.* from ProductProperty pp inner join ProductCategory pc on pp.fkProductId = pc.fkProductId and pp.fkLocaleId = 1 inner join CategorySite cs on pc.fkCategoryId = cs.fkCategoryId and cs.fkSiteId = 2 inner join CategoryProperty cp on cs.fkCategoryId = cp.fkCategoryId and cp.fkLocaleId=1 where (select count(*) from CategorySite css where pc.fkCategoryId = css.fkCategoryId) = 1 ```
So I was able to do this using my initial solution; it turns out the problem was elsewhere. Thanks anyway! :)
Why are you joining `CategoryProperty` if you don't use it in the predicate? Try this; if you need CategoryProperty, try adding it at the end: ``` ;WITH InsteadOfWhere AS ( SELECT fkCategoryId FROM CategorySite GROUP BY fkCategoryId HAVING COUNT(fkCategoryId) = 1 ) SELECT TOP 10 * FROM ProductProperty pp inner join ProductCategory pc on pp.fkProductId = pc.fkProductId and pp.fkLocaleId = 1 INNER JOIN InsteadOfWhere ON InsteadOfWhere.fkCategoryID = pc.fkCategoryId inner join CategorySite cs on InsteadOfWhere.fkCategoryId = cs.fkCategoryId and cs.fkSiteId = 2 ```
SQL where subquery not filtering like expected
[ "", "sql", "where-clause", "" ]
What is a fast/readable way to SELECT a relation from "nothing" that contains a list of numbers. I want to define which numbers by setting a start and end value. I am using Postgres SQL and SQLite, and would be interested in **generic** solutions that will work on both/many platforms. Desired output relation: ``` # x 0 1 2 3 4 ``` I know that I can SELECT a single row from "nothing": `SELECT 0,1,2,3,4` But this selects the values as columns instead of rows and requires to specify all values in the query instead of only using my start and end values: `0` and `4`. In Postgres you have a special `generate_series` function for this case: ``` SELECT * FROM generate_series(0,4) x; ``` This works nicely but is non-standard. I can also imagine some complicated solutions using temporary tables, but I would like to have something generic AND simple like: ``` SELECT * FROM [0..4] ``` Maybe using the `SEQUENCE` statement or some magic combination of `SELECT 0` and `SELECT 4`?
Thanks for all answers! Following the discussion I realized that using a [numbers table](http://searchoracle.techtarget.com/answer/The-integers-table) is not too complicated and works well and fast on both/many platforms: ``` CREATE TABLE integers (i integer); INSERT INTO integers (i) VALUES (0); INSERT INTO integers (i) VALUES (1); ... INSERT INTO integers (i) VALUES (9); SELECT (hundreds.i * 100) + (tens.i * 10) + units.i AS x FROM integers AS units CROSS JOIN integers AS tens CROSS JOIN integers AS hundreds ``` You just create this table once and can use it whenever you need a range of numbers.
Well in SQL server (and PostgreSQL) I would use recursive common table expression: [SQL Server](http://msdn.microsoft.com/en-us/library/ms175972.aspx), [PostgreSQL](http://www.postgresql.org/docs/9.2/static/queries-with.html) ``` with recursive Numbers as ( select 0 as Number union all select Number + 1 from Numbers where Number < 4 ) select Number from Numbers ``` [**SQL FIDDLE EXAMPLE**](http://sqlfiddle.com/#!12/d41d8/1091) But, as far as I know, there's no WITH in SQLite. So, the possible solutions could be * create a user defined function ([this](http://www.sqlite.org/capi3.html) could be helpful) * create a table with numbers from 0 to max number you'll ever need, and then just select from it like this: ``` select Number from Numbers where Number >= 0 and Number <= 4 ```
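For what it's worth, later SQLite releases (3.8.3 and newer) did add common table expressions, so the recursive form can now be checked directly from Python's stdlib:

```
import sqlite3

conn = sqlite3.connect(":memory:")

# recursive CTE generating the integers 0..4
rows = conn.execute("""
WITH RECURSIVE numbers(n) AS (
    SELECT 0
    UNION ALL
    SELECT n + 1 FROM numbers WHERE n < 4
)
SELECT n FROM numbers
""").fetchall()
```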
SQL: Select a list of numbers from "nothing"
[ "", "sql", "sqlite", "postgresql", "sequence", "" ]
I need to generate a Project wise surface area at each activity from below table PLE ``` Activity Posting Date Surface Area Project --------------------------------------------------------------------- Shearing 01-04-2013 2.34 A Bending 01-04-2013 2.34 A Assembly 02-04-2013 2.34 B PC 02-04-2013 5.34 B Infill 05-04-2013 5.34 C ``` I'm trying to do this. ``` SELECT DISTINCT Project,sum(Project.[Surface Area]) AS TotalShearing FROM PLE WHERE ([Posting Date] BETWEEN @StartDate AND @EndDate) AND (Activity = Shearing) GROUP BY Project ``` Now I want to display `TotalBending, TotalAssembly` and so on in columns right to `TotalShearing`. But no idea how to get them as WHERE condition is already used for Activity 'Shearing'. This may be simple task, but I'm new to SQL and hence need HELP!
Try something like this: ``` SELECT Project ,sum(CASE WHEN Activity = 'Shearing' THEN [Surface Area] ELSE 0 END) AS TotalShearing ,sum(CASE WHEN Activity = 'Bending' THEN [Surface Area] ELSE 0 END) AS TotalBending ,sum(CASE WHEN Activity = 'Assembly' THEN [Surface Area] ELSE 0 END) AS TotalAssembly ,sum(CASE WHEN Activity = 'PC' THEN [Surface Area] ELSE 0 END) AS TotalPC ,sum(CASE WHEN Activity = 'Infill' THEN [Surface Area] ELSE 0 END) AS TotalInfill FROM PLE WHERE ([Posting Date] BETWEEN @StartDate AND @EndDate) GROUP BY Project ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/c6d65/2)**
Use the `PIVOT` table operator: ``` SELECT * FROM ( SELECT Activity, [Surface Area], project FROM PLE ) AS t PIVOT ( sum([Surface Area]) FOR Activity IN ([Shearing], [Bending], [Assembly], [PC], [Infill]) ) AS p; ``` * [SQL Fiddle Demo](http://www.sqlfiddle.com/#!3/1a28b/8) --- If you want to do this dynamically for each number of `Activity` you have to use dynamic SQL like this: ``` DECLARE @cols AS NVARCHAR(MAX); DECLARE @query AS NVARCHAR(MAX); select @cols = STUFF((SELECT distinct ',' + QUOTENAME(Activity) FROM PLE FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') , 1, 1, ''); SELECT @query = 'SELECT * FROM ( SELECT Activity, [Surface Area], project FROM PLE ) AS t PIVOT ( sum([Surface Area]) FOR Activity IN (' + @cols + ') ) AS p'; execute(@query); ``` See this: * [Updated SQL Fiddle Demo](http://www.sqlfiddle.com/#!3/1a28b/10)
Select Single field multiple times with multiple criteria
[ "", "sql", "sql-server", "pivot", "" ]
I'm trying to unzip a file from an FTP site. I've tried it using 7z in a subprocess as well as using 7z in the older os.system format. I get closest however when I'm using the zipfile module in python so I've decided to stick with that. No matter how I edit this I seem to get one of two errors so here are both of them so y'all can see where I'm banging my head against the wall: ``` z = zipfile.ZipFile(r"\\svr-dc\ftp site\%s\daily\data1.zip" % item) z.extractall() ``` NotImplementedError: compression type 6 (implode) (I think this one is totally wrong, but figured I'd include.) I seem to get the closest with the following: ``` z = zipfile.ZipFile(r"\\svr-dc\ftp site\%s\daily\data1.zip" % item) z.extractall(r"\\svr-dc\ftp site\%s\daily\data1.zip" % item) IOError: [Errno 2] No such file or directory: '\\\\svr-dc...' ``` The catch with this is that it is actually giving me the first file name in the zip. I can see the file AJ07242013.PRN at the end of the error so I feel closer because it's at least getting to the point of reading the contents of the zip file. Pretty much any iteration of this that I try gets me one of those two errors, or a syntax error but that's easily addressed and not my primary concern. Sorry for being so long winded. I'd love to get this working, so let me know what you think I need to do. EDIT: So 7z has finally been added to the path and is running through without any errors with both the subprocess as well as os.system. However, I still can't seem to get anything to unpack. It looks to me, from all I've read in the python documentation that I should be using the subprocess.communicate() module to extract this file but it just won't unpack. When I use os.system it keeps telling me that it cannot find the archive. 
```
import subprocess

cmd = ['7z', 'e']
sp = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
sp.communicate('r"\C:\Users\boster\Desktop\Data1.zip"')
```

I don't think that sp.communicate is right, but if I add anything else to it I have too many arguments.
Managed to get this to work without using the PIPE functionality, as subprocess.communicate wouldn't unpack the files. Here was the solution using subprocess.call. Hope this can help someone in the future.

```
def extract_data_one():
    for item in sites:
        os.chdir(r"\\svr-dc\ftp site\%s\Daily" % item)
        subprocess.call(['7z', 'e', 'data1.zip', '*.*'])
```
python's zipfile doesn't support compression type 6 (imploded), so it's simply not going to work. In the first case, that's obvious from the error.

In the second case, things are worse. The parameter for extractall is an alternate unzip directory. Since you gave it the name of your zip file, a directory of the same name can't be found and zipfile gives up before getting to the not-supported problem.

Make sure you can do this with 7z on the command line, try implementing subprocess again and ask for help on that technique if you need it. Here's a script that will look for 7z in the usual places:

```
import os
import sys
import subprocess
from glob import glob

print 'python version:', sys.version
subprocess.call('ver', shell=True)
print

if os.path.exists(r'C:\Program Files\7-Zip'):
    print 'have standard 7z install'
    if '7-zip' in os.environ['PATH'].lower():
        print '...and its in the path'
    else:
        print '...but its not in the path'
print

print 'find in path...'
found = 0
for p in os.environ['PATH'].split(os.path.pathsep):
    candidate = os.path.join(p, '7z.*')
    for fn in glob(candidate):
        print '  found', fn
        found += 1
print

if found:
    print '7z located, attempt run'
    subprocess.call(['7z'])
else:
    print '7z not found'
```
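To see up front whether an archive uses an unsupported method like type 6, you can inspect each member's `compress_type` before attempting extraction. A small sketch in Python 3, building a throwaway zip in memory (the filename and contents are made up):

```python
import io
import zipfile

# Build a small zip in memory, then inspect each member's compression method.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as z:
    z.writestr("a.txt", "hello")

with zipfile.ZipFile(buf) as z:
    methods = {info.filename: info.compress_type for info in z.infolist()}

# ZIP_DEFLATED is method 8; the question's archive reports method 6 (implode),
# which zipfile cannot decompress -- hence falling back to 7z there.
print(methods)
```

Running this against the real `data1.zip` instead of the in-memory buffer would confirm the method 6 diagnosis before reaching for an external tool.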
Trouble extracting zip in python over ftp
[ "", "python", "subprocess", "7zip", "zip", "os.system", "" ]
I have a 123MB sql file which I need to execute on my local PC. But I am getting

```
Cannot execute script: Insufficient memory to continue the execution of the program
```

![enter image description here](https://i.stack.imgur.com/hV4C6.png)

How to solve this issue?
> use the command-line tool SQLCMD which is much leaner on memory. It is as simple as:
>
> ```
> SQLCMD -d <database-name> -i filename.sql
> ```
>
> You need valid credentials to access your SQL Server instance or even to access a database

Taken from [here](https://stackoverflow.com/questions/11307435/getting-error-while-running-50-mb-script-on-sql-server-2008-r2/11307809#11307809).
It might help you! Please see the steps below.

> sqlcmd -S server-name -d database-name -i script.sql

* Open cmd.exe as Administrator.
* Create a Documents directory.
* Put your SQL script file (script.sql) in the Documents folder.
* Type the query with sqlcmd, server-name, database-name and script-file-name as in the query highlighted above or the command line screen below.

[![enter image description here](https://i.stack.imgur.com/8Fbr2.png)](https://i.stack.imgur.com/8Fbr2.png)
Cannot execute script: Insufficient memory to continue the execution of the program
[ "", "sql", "t-sql", "sql-server-2012", "" ]
Suppose I have a table of customers and a table of sales orders with the following schemas:

1. Customer = {id, name}
2. Sales_order = {id, customer_id, sales_representer}

With the following definitions:

1. id is a primary key in both tables.
2. customer_id is a foreign key referencing customer.

I want to implement the following query:

```
For any customer whose sales_representer is 100, find the customer id, customer name and the number of his overall orders.
```

I built the following query:

```
select C.id, C.name, count(C.id)
from customer C, sales_order S
where C.id = S.customer_id and S.sales_representer = '100'
group by C.id, C.name;
```

But as a result of count(C.id) I get only the number of orders whose sales_representer is 100. I know I can add another instance of sales_order (i.e. S2) and count from it, but that seems not efficient at all. Does anyone have a solution?

Thank you
You could use a correlated subquery to calculate the sales number. (In SQLite, subqueries are often as efficient as a join.)

```
SELECT id,
       name,
       (SELECT COUNT(*)
        FROM sales_order
        WHERE customer_id = customer.id) AS orders
FROM customer
WHERE id IN (SELECT customer_id
             FROM sales_order
             WHERE sales_representer = '100')
```

If you care about efficiency, you should check the queries with [EXPLAIN QUERY PLAN](http://www.sqlite.org/eqp.html), or even better, just measure them.
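The correlated-subquery approach can be checked end to end with Python's built-in sqlite3 module. A sketch with made-up sample data (two customers; only Ann has an order by representative 100, and her count should still cover all of her orders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE sales_order (
                   id INTEGER PRIMARY KEY,
                   customer_id INTEGER REFERENCES customer(id),
                   sales_representer TEXT)""")
cur.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
cur.executemany("INSERT INTO sales_order VALUES (?, ?, ?)",
                [(10, 1, "100"), (11, 1, "200"), (12, 2, "300")])

rows = cur.execute("""
    SELECT id, name,
           (SELECT COUNT(*) FROM sales_order
            WHERE customer_id = customer.id) AS orders
    FROM customer
    WHERE id IN (SELECT customer_id FROM sales_order
                 WHERE sales_representer = '100')
""").fetchall()

# Ann qualifies via rep 100, and her order count is the overall 2, not 1.
print(rows)
```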
You could use a `having` clause to demand that at least one sale was by representative 100:

```
select C.id,
       C.name,
       count(*) as TotalSaleCount
from customer C
join sales_order S on C.id = S.customer_id
group by C.id, C.name
having count(case when S.sales_representer = '100' then 1 end) > 0
```
SQL/SQL-LITE - Counting records after filtering
[ "", "sql", "sqlite", "" ]
I have a SQL query I am trying to write but haven't been able to come up with the solution.

2 entries in my table are related through the first 3 characters of the ID. When a new item is added that needs to be related to a certain entry, the first 3 characters are reused, and then the last 2 are incremented by one to create the new ID. When a completely separate entry is needed, a unique 3 digit string is used and the last 2 chars start with "00". There may be some that have large gaps in the last 2 characters because of deleted data (I don't want these in my query).

What I would like to do is get only the rows where there is another row with the same first 3 characters and one less in the last 2.

Table Example:

```
ID     | Date
====== | ========
11100  | 07/12
22211  | 07/13
12300  | 07/14
11101  | 07/14
11400  | 07/16
22212  | 07/16
```

The query should only return these elements, because there exists another entry with the same first 3 chars and one less in the last 2 chars:

```
ID     | Date
====== | ========
11101  | 07/14
22212  | 07/16
```
Looks like a simple JOIN will do it;

```
SELECT a.*
FROM Table1 a
JOIN Table1 b
  ON a.id/100 = b.id/100
 AND a.id = b.id + 1
```

[An SQLfiddle to test with](http://sqlfiddle.com/#!3/9d68e/12).

You can also write it as an `EXISTS` query;

```
SELECT a.*
FROM Table1 a
WHERE EXISTS (
  SELECT 1 FROM Table1 b
  WHERE b.id = a.id - 1
    AND a.id/100 = b.id/100
)
```

[Another SQLfiddle](http://sqlfiddle.com/#!3/9d68e/11).
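The self-join idea can be sanity-checked with the question's sample data. This sketch uses Python's sqlite3 instead of SQL Server 2000 purely for the demo — integer division by 100 strips the last two digits the same way for this data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (id INTEGER, d TEXT)")
cur.executemany("INSERT INTO Table1 VALUES (?, ?)",
                [(11100, '07/12'), (22211, '07/13'), (12300, '07/14'),
                 (11101, '07/14'), (11400, '07/16'), (22212, '07/16')])

# Equal quotients of id/100 mean the same 3-character prefix;
# a.id = b.id + 1 keeps only rows directly following another row.
rows = cur.execute("""
    SELECT a.id, a.d
    FROM Table1 a
    JOIN Table1 b ON a.id/100 = b.id/100 AND a.id = b.id + 1
    ORDER BY a.id
""").fetchall()
print(rows)
```

As expected, only 11101 and 22212 survive — 11400 has no 11399 sibling, so the gap rows drop out.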
```
Declare @a table(ID Varchar(10), [Date] varchar(10))

Insert into @a
Select '11100','07/12' UNION
Select '22211','07/13' UNION
Select '12300','07/14' UNION
Select '11101','07/14' UNION
Select '11400','07/16' UNION
Select '22212','07/16'

Select a.*
from @a a
JOIN (
    Select SubString(ID,1,3) + RIGHT('0'+Cast(MAX(Cast(SubString(ID,4,2) as int)) as Varchar(10)),2) as ID
    from @a
    group by SubString(ID,1,3)
    Having Count(*)>1
) x on x.ID=a.ID
```
Sql Query, query same table twice (possibly)
[ "", "sql", "sql-server", "sql-server-2000", "" ]
I am using [GeopositionField](https://github.com/philippbosch/django-geoposition/blob/master/docs/index.rst) in Django to store the coordinates of my users. Now I want to find a list of the 20 users who are closest to the current user.

Can that functionality be achieved with GeopositionField? I know that GeoDjango makes it easy to search distances, but since I am using Heroku and PostgreSQL, I want to keep the costs down, and with PostgreSQL, installing PostGIS seems to be the only alternative.

Any suggestions?
For the distance between two points you can use Geopy. From the [documentation](https://code.google.com/p/geopy/wiki/GettingStarted#Calculating_distances):

Here's an example usage of distance.distance:

```
>>> from geopy import distance
>>> _, ne = g.geocode('Newport, RI')
>>> _, cl = g.geocode('Cleveland, OH')
>>> distance.distance(ne, cl).miles
538.37173614757057
```

To implement this in a Django project, create a normal model in models.py:

```
class User(models.Model):
    name = models.CharField()
    lat = models.FloatField()
    lng = models.FloatField()
```

To optimize a bit you can filter user objects to get a rough estimate of nearby users first. This way you don't have to loop over all the users in the db. This rough estimate is optional. To meet all your project requirements you may have to write some extra logic:

```
# The location of your user.
lat, lng = 41.512107999999998, -81.607044999999999

min_lat = lat - 1  # You have to calculate these offsets based on the user location,
max_lat = lat + 1  # because the distance of one degree varies over the planet.
min_lng = lng - 1
max_lng = lng + 1

users = User.objects.filter(lat__gt=min_lat, lat__lt=max_lat,
                            lng__gt=min_lng, lng__lt=max_lng)

# If not 20, fall back to all users.
if users.count() <= 20:
    users = User.objects.all()
```

**Calculate the distance between your user and each user in users, sort them by distance and get the first 20.**

```
results = []
for user in users:
    d = distance.distance((lat, lng), (user.lat, user.lng))
    results.append({'distance': d, 'user': user})

results = sorted(results, key=lambda k: k['distance'])
results = results[:20]
```
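If you'd rather not pull in the geopy dependency for the per-user distance step, the great-circle distance can be computed directly with the haversine formula. A sketch, assuming a spherical Earth with mean radius 6371 km (geopy's default ellipsoid model is slightly more accurate):

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    # Great-circle distance in kilometres between two (lat, lng) points.
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator is roughly 111 km.
d = haversine_km(0.0, 0.0, 0.0, 1.0)
print(round(d, 1))
```

This drops into the sorting loop unchanged: `key=lambda k: k['distance']` works the same whether the distance is a geopy object or a plain float.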
I think you have 2 options here:

1. There is no efficient way to do it without a spatial index (used by PostGIS and GeoDjango with PointField) while using GeopositionField. The only way I found to deal with this issue is:
   * You have to find all distances from the source user to all users (this is really heavy).
   * Then sort all the distances and take the top 20 you are looking for.

   GeopositionField stores the coordinates as text, but they can be retrieved using `.latitude` and `.longitude` on the field.

2. There seems to be support for the K-Nearest-Neighbors problem in PostgreSQL 9.1+ (<http://wiki.postgresql.org/images/4/46/Knn.pdf>). But I think you will have to either add another column to your table to store Points (<http://www.postgresql.org/docs/9.2/static/datatype-geometric.html>) or implement a distance function for GeopositionField.

If you are using the basic setup of Heroku just for development and plan to change to a higher plan, I would suggest using the first approach, since other Heroku plans support PostGIS and you can easily implement this approach and later change it to a simple PostGIS function call. Although, if this is the only case in which you will deal with spatial data, I would recommend using a Point field and KNN support.
Using GeopositionField to find closest database entries
[ "", "python", "django", "geolocation", "geocoding", "django-geoposition", "" ]
I am looking for an idiomatic way to combine an n-dimensional vector (given as a list) with a list of offsets that shall be applied to every dimension. I.e.:

Given I have the following values and offsets:

```
v = [5, 6]
o = [-1, 2, 3]
```

I want to obtain the following list:

```
n = [[4, 5], [7, 5], [8, 5], [4, 8], [7, 8], [8, 8], [4, 9], [7, 9], [8, 9]]
```

originating from:

```
n = [[5-1, 6-1], [5+2, 6-1], [5+3, 6-1],
     [5-1, 6+2], [5+2, 6+2], [5+3, 6+2],
     [5-1, 6+3], [5+2, 6+3], [5+3, 6+3]]
```

Performance is not an issue here and the order of the resulting list also does not matter. Any suggestions on how this can be produced without ugly nested for loops? I guess itertools provides the tools for a solution, but I did not figure it out yet.
```
from itertools import product

[map(sum, zip(*[v, y])) for y in product(o, repeat=2)]
```

or, as falsetru and Dobi suggested in comments:

```
[map(sum, zip(v, y)) for y in product(o, repeat=len(v))]
```
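A Python 3 self-check of the generalized variant — there `map` returns an iterator, so a plain list comprehension over `zip` is the easier spelling:

```python
from itertools import product

v = [5, 6]
o = [-1, 2, 3]

# One offset per dimension from the Cartesian product; add component-wise to v.
n = [[a + b for a, b in zip(v, y)] for y in product(o, repeat=len(v))]
print(sorted(n))
```

This produces the nine vectors from the question (in a different order, which the question says is fine).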
[`itertools.product()`](http://docs.python.org/2/library/itertools.html#itertools.product) gives you the desired combinations of `o`. Use that with a list comprehension to create `n`:

```
from itertools import product

n = [[v[0] + x, v[1] + y] for x, y in product(o, repeat=2)]
```

Demo:

```
>>> [[v[0] + x, v[1] + y] for x, y in product(o, repeat=2)]
[[4, 5], [4, 8], [4, 9], [7, 5], [7, 8], [7, 9], [8, 5], [8, 8], [8, 9]]
```
combinations of the values of two lists
[ "", "python", "combinations", "idioms", "" ]
I want to know how many items are in a Python Counter, including the duplicates. I tried `len` and it tells me the number of unique items:

```
>>> c = Counter(x=3,y=7)
>>> len(c)
2
```

The best I have is `sum(c.itervalues())`, which I suppose isn't terrible, but I was hoping the Counter object caches the value so I could access it in O(1).
The [Counter docs](http://docs.python.org/2/library/collections.html#collections.Counter) give your `sum(c.itervalues())` answer as the standard pattern for this in the "Common patterns for working with Counter objects" section, so I doubt there's anything better. As with the other `iter*` methods on dictionaries, in Python 3 `itervalues` is replaced by `values`.
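A quick Python 3 check of that pattern (`itervalues` became `values`; Python 3.10+ also adds a built-in `Counter.total()` that does the same thing):

```python
from collections import Counter

c = Counter(x=3, y=7)
total = sum(c.values())  # Python 3 spelling of sum(c.itervalues())
print(len(c), total)
```

`len(c)` still gives the 2 unique keys, while the sum gives the 10 counted items.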
You can look through [the source code](http://hg.python.org/cpython/file/38d47ac7fc2d/Lib/collections.py#l384); there is no cached value recording the number of items in the Counter. So the best you can do is `sum(c.itervalues())`.

```
In [108]: import collections

In [109]: c = collections.Counter(x=3, y=7)

In [110]: sum(c.itervalues())
Out[110]: 10
```
Size of Python Counter
[ "", "python", "counter", "" ]
I have a table that gets a new record every 30 seconds. I have a need to know the most current record for each user and the timestamp value. I wrote the following query:

```
select created_user, MAX(created_date)
from gisadmin.FORESTRY_LOCATIONLOG_vw
Group by created_user
```

It gets me exactly what I want. The problem is that the application I need to input this into predefines part of the select statement for me (trying to be helpful). So it is MANDATORY that the query starts

```
select * from gisadmin.FORESTRY_LOCATIONLOG_vw WHERE
```

Does anyone have ideas on how to make this work?
You could use `EXISTS`:

```
SELECT *
FROM gisadmin.FORESTRY_LOCATIONLOG_vw
WHERE EXISTS
      (SELECT *
       FROM (SELECT created_user,
                    MAX(created_date) 'created_date'
             FROM gisadmin.FORESTRY_LOCATIONLOG_vw
             GROUP BY created_user) b
       WHERE gisadmin.FORESTRY_LOCATIONLOG_vw.created_user = b.created_user
         AND gisadmin.FORESTRY_LOCATIONLOG_vw.created_date = b.created_date)
```
Here is a quick solution: use it as a subquery.

```
select created_user, MAX(created_date)
from (
    select * from gisadmin.FORESTRY_LOCATIONLOG_vw WHERE 1=1 -- append this
)
Group by created_user
```

Notice that I added `1=1`, which will always be evaluated to true, to handle your `Where` clause.
How to get unique values when restricted to select * using SQL
[ "", "sql", "unique", "" ]
When I load my view at localhost:8000/Scan, it throws:

```
TypeError on views.py in Scan, line 27:
form = Scan() # Otherwise, set the form to unbound
```

Any idea what I'm doing wrong here? I tried researching, but couldn't find the answer. (Django newbie here.) Thank you all!

**views.py**

```
from django.http import HttpResponse
from Scanner.forms import SubmitDomain

def Scan(request):
    if request.method == 'POST':  # If the form has been submitted...
        form = SubmitDomain(request.POST)  # A form bound to the POST data
        if form.is_valid():  # If form input passes initial validation...
            form.cleaned_data['domainNm']  ## clean data in dictionary
            try:  ## check if Tld Table has submitted domain already
                from Scanner.models import Tld
                Tld.objects.get(domainNm=form.cleaned_data['domainNm'])
            except Tld.DoesNotExist:
                print "Would you like to create an account?"  ## redirect to account creation
            else:
                print "Do you have an account? Please login."  ## redirect to account login
    else:
        form = Scan()  # Otherwise, set the form to unbound
```

**forms.py**

```
from django.forms import ModelForm
from Scanner.models import Tld

class SubmitDomain(ModelForm):
    class Meta:
        model = Tld  # Create form based off Model for Tld
        fields = ['domainNm',]

    def clean_domainName(self):
        val = self.clean_domainName('domainNm')
        return val

## This creates the form.
form = SubmitDomain()
```
In your model form:

```
from django.forms import ModelForm
from Scanner.models import Tld

class SubmitDomainForm(ModelForm):
    class Meta:
        model = Tld
        fields = ['domainNm']

    def clean_domainName(self):
        val = self.cleaned_data.get('domainNm')
        if Tld.objects.filter(domainNm=val).count() > 0:
            raise forms.ValidationError(u'Sorry that domain already exists, etc, etc')
        return val
```

In your view, do:

```
from django.shortcuts import render
from Scanner.forms import SubmitDomainForm

def scan(request):  # functions should start with a lowercase letter
    # Bind the post data to the form, if it exists.
    # No need for a separate if statement here.
    form = SubmitDomainForm(request.POST or None)
    if request.method == 'POST':
        if form.is_valid():
            pass  # save your model form, or do something else
    return render(request, 'your-template.html', {'form': form})
```

Hope that helps you out. Your view is currently instantiating the wrong type of object for the form, hence the TypeError. Your current clean method on your model form will never validate anything; it just sets the value equal to the clean function. Instead of cluttering your view with form validation logic, put that into the clean method of the form for that field, and you can raise exceptions for different conditions.
It fails when request.method != "POST", in which case form is not defined.
Django ModelForm Submit To Database
[ "", "python", "django", "view", "django-forms", "" ]
Is it possible to execute a SQL statement stored in a table, with T-SQL?

```
DECLARE @Query text
SET @Query = (Select Query FROM SCM.dbo.CustomQuery)
```

The statements that are stored in the table are ad-hoc statements which could range from **SELECT TOP 100 \* FROM ATable** to more complex statements:

```
Select J.JobName As Job,
       JD.JobDetailJobStart AS StartDate,
       JD.JobDetailJobEnd AS EndDate,
       (
         SELECT (DATEDIFF(dd, JD.JobDetailJobStart, JD.JobDetailJobEnd) + 1)
              - (DATEDIFF(wk, JD.JobDetailJobStart, JD.JobDetailJobEnd) * 2)
              - (CASE WHEN DATENAME(dw, JD.JobDetailJobStart) = 'Sunday' THEN -1 ELSE 0 END)
              - (CASE WHEN DATENAME(dw, JD.JobDetailJobEnd) = 'Saturday' THEN -1 ELSE 0 END)
       ) AS NumberOfWorkingDays,
       JD.JobDetailDailyTarget AS DailyTarget,
       JD.JobDetailWeeklyTarget AS WeeklyTarget,
       JD.JobDetailRequiredQTY AS RequiredQuantity,
       (
         Select SUM(sJL.JobLabourQuantityEmployees)
         From JobLabour sJL
       ) AS NumberOfEmployees,
       (
         Select SUM((sEM.EmployeeDesignationDefaultRate * sJL.JobLabourQuantityEmployees) * 8)
              * (SELECT (DATEDIFF(dd, JD.JobDetailJobStart, JD.JobDetailJobEnd) + 1)
                      - (DATEDIFF(wk, JD.JobDetailJobStart, JD.JobDetailJobEnd) * 2)
                      - (CASE WHEN DATENAME(dw, JD.JobDetailJobStart) = 'Sunday' THEN -1 ELSE 0 END)
                      - (CASE WHEN DATENAME(dw, JD.JobDetailJobEnd) = 'Saturday' THEN -1 ELSE 0 END))
         from EmployeeDesignation sEM
         Inner join JobLabour sJL on sJL.EmployeeDesignationID = sEM.EmployeeDesignationID
       ) AS FullEmployeeRate
from Job J
Inner Join JobDetail JD on JD.JobID = J.JobID
Inner Join JobLabour JL on JL.JobID = J.JobID
WHERE J.JobActive = 0
```

I want to execute the **@Query** variable that I declared from T-SQL. Is this possible? (I am running an MSSQL 2005 environment.)
You can use

```
EXECUTE sp_executesql @Query
```

to run your T-SQL.

Here's a link to the MS documentation for SQL Server 2005: <http://msdn.microsoft.com/en-us/library/ms188001%28v=sql.90%29.aspx>
The previous answer allows you to run one statement, and is valid. The question was about how to run SQL statements stored in a table, which I took as more than one statement being executed. For this extra step, there is a while loop involved to iterate through each statement that needs to be run.

```
-- Author: Chad Slagle
DECLARE @Table table (RID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                      SQLText NVARCHAR(MAX))

DECLARE @StatementMax INT
       ,@StatementMin INT
       ,@isTest TINYINT = 1
       ,@SQLStatement NVARCHAR(MAX)

-- Insert SQL into temp table
INSERT INTO @Table (SQLText) VALUES ('SELECT @@Version');
INSERT INTO @Table (SQLText) VALUES ('SELECT SERVERPROPERTY(''ProductVersion'')')

-- Get your iterator values
SELECT @StatementMax = MAX(RID), @StatementMin = MIN(RID) FROM @Table

IF @isTest = 1
BEGIN
    SELECT *, @StatementMax AS MaxVal, @StatementMin AS MinVal FROM @Table
END

-- Start the loop
WHILE @StatementMax >= @StatementMin
BEGIN
    -- Get the SQL from the table
    SELECT @SQLStatement = SQLText FROM @Table WHERE RID = @StatementMin

    IF @isTest = 1
    BEGIN
        SELECT 'I am executing: ' + @SQLStatement AS theSqlBeingRun, GETDATE(), @StatementMin, @StatementMax
    END
    ELSE
    BEGIN
        -- Execute the SQL
        EXECUTE sp_ExecuteSQL @SQLStatement
    END

    -- Delete the statement just run from the table
    DELETE FROM @Table WHERE RID = @StatementMin

    -- Update to the next RID
    SELECT @StatementMin = MIN(RID) FROM @Table

    IF @isTest = 1
    BEGIN
        SELECT * FROM @Table
    END
END
```

In summary, I created a temp table and put some SQL in it, using an IDENTITY (RID) field to provide an iterator for the while loop, then ran the while loop. In the example, you should return two views of your SQL version. I built this on 2k8, and I hope it helps someone out of a jam one day.
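The same fetch-then-execute pattern, translated to Python's sqlite3 purely as an illustration (the question itself is SQL Server, where `sp_executesql` is the right tool; table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE custom_query (id INTEGER PRIMARY KEY, query TEXT)")
cur.execute("INSERT INTO custom_query (query) VALUES ('SELECT 1 + 1')")

# Step 1: pull the statement text out of the table.
stored_sql = cur.execute("SELECT query FROM custom_query WHERE id = 1").fetchone()[0]
# Step 2: execute that text as a statement of its own.
result = cur.execute(stored_sql).fetchone()[0]
print(result)
```

Note the usual caveat: executing SQL text read from a table is as risky as any dynamic SQL, so make sure only trusted code can write those rows.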
How to execute SQL statements saved in a table with T-SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I need to compare my 2nd command line argument to see if it doesn't equal `"tcp"` or `"udp"`. I know you can use the or statement with `==`, but I am having trouble adapting it to `!=` in a statement.

```
protocol = sys.argv[2]

if protocol != "tcp" or protocol != "udp":
    print" error"
    sys.exit()
```
See, the problem is in your logic: the answer will ALWAYS be either NOT one or the other. Use this code:

```
if protocol not in ['tcp', 'udp']:
    print "error"
    sys.exit()
```
How about using `in`?

```
if protocol not in ("tcp", "udp"):
    print" error"
    sys.exit()
```

FYI, instead of using `sys.argv` and validating script arguments manually, use the `argparse` module from the stdlib. Take a look at the `choices` argument of the `add_argument` method, [docs](http://docs.python.org/dev/library/argparse.html#the-add-argument-method):

> choices - A container of the allowable values for the argument.
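The `choices` suggestion sketched out — argparse rejects anything but the listed values before your own code runs (the positional argument name here is made up for the example):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("protocol", choices=["tcp", "udp"])

args = parser.parse_args(["tcp"])
print(args.protocol)

# An invalid value makes argparse print a usage error and exit:
try:
    parser.parse_args(["icmp"])
except SystemExit:
    print("rejected")
```

This replaces both the manual `not in` check and the `sys.exit()` call.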
compare a command line argument with or statement to different strings python
[ "", "python", "command-line-arguments", "" ]
I am unable to do a groupby on a pandas Series object. DataFrames are fine, but I cannot seem to do groupby with a Series. Has anyone been able to get this to work?

```
>>> import pandas as pd
>>> a = pd.Series([1,2,3,4], index=[4,3,2,1])
>>> a
4    1
3    2
2    3
1    4
dtype: int64
>>> a.groupby()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/generic.py", line 153, in groupby
    sort=sort, group_keys=group_keys)
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 537, in groupby
    return klass(obj, by, **kwds)
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 195, in __init__
    level=level, sort=sort)
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 1326, in _get_grouper
    ping = Grouping(group_axis, gpr, name=name, level=level, sort=sort)
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 1203, in __init__
    self.grouper = self.index.map(self.grouper)
  File "/share/apps/install/anaconda/lib/python2.7/site-packages/pandas/core/index.py", line 878, in map
    return self._arrmap(self.values, mapper)
  File "generated.pyx", line 2200, in pandas.algos.arrmap_int64 (pandas/algos.c:61221)
TypeError: 'NoneType' object is not callable
```
You need to pass a mapping of some kind (could be a dict/function/index):

```
In [6]: a
Out[6]:
4    1
3    2
2    3
1    4
dtype: int64

In [7]: a.groupby(a.index).sum()
Out[7]:
1    4
2    3
3    2
4    1
dtype: int64

In [3]: a.groupby(lambda x: x % 2 == 0).sum()
Out[3]:
False    6
True     4
dtype: int64
```
If you need to group by the series' values:

```
grouped = a.groupby(a)
```

or

```
grouped = a.groupby(lambda x: a[x])
```
groupby for pandas Series not working
[ "", "python", "pandas", "" ]
My code currently looks something like this:

```
if option1:
    ...
elif option2:
    ...
elif option3:
    ...
```

and so on, and so forth. And while I'm not displeased with it, I was wondering if there is a better alternative in Python. My script is a console-based script where I'm using argparse to fetch what the user needs.
If 'option' can contain 'one', 'two', or 'three', you could do:

```
def handle_one():
    do_stuff

def handle_two():
    do_stuff

def handle_three():
    do_stuff

{'one': handle_one, 'two': handle_two, 'three': handle_three}[option]()
```
I'm guessing you're starting Python scripting with a background somewhere else, where a `switch` statement would solve your question. As that's not an option in Python, you're looking for another way to do things. Without context, though, you can't really answer this question very well (there are far too many options). I'll throw in one (somewhat Pythonic) alternative.

Let's start with an example of where I think you're coming from.

```
def add_to_x(x):
    if x == 3:
        x += 2
    elif x == 4:
        x += 4
    elif x == 5:
        x += 5
    return x
```

Here's my alternative:

```
def add_to_x(x):
    vals = {3: 5, 4: 8, 5: 10}
    return vals[x]
```

You can also [look into lambdas](http://www.secnetix.de/olli/Python/lambda_functions.hawk) which you can put into [the dictionary structure I used](http://docs.python.org/2/tutorial/datastructures.html#dictionaries). But again, as said, without context this may not be what you're looking for.
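The two answers combine naturally: a dict of callables plus `dict.get` for a default case, so unexpected options don't raise a KeyError. A sketch with made-up handler names:

```python
def handle_one():
    return "one handled"

def handle_two():
    return "two handled"

def handle_unknown():
    return "no such option"

handlers = {"one": handle_one, "two": handle_two}

def dispatch(option):
    # .get falls back to the default handler when the option isn't registered
    return handlers.get(option, handle_unknown)()

print(dispatch("one"), dispatch("three"))
```

With argparse's `choices` restricting the input, the default branch becomes a belt-and-braces safeguard rather than a user-facing path.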
What's an alternative to if/elif statements in Python?
[ "", "python", "" ]