Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I want to write code to get data from Excel and write it to a text file. Here is the code I have:
```
import xlrd
import os.path

wb = xlrd.open_workbook(os.path.join('D:\TRB 2014 Data', 'SPS1 demo data.xlsx'))
wb.sheet_names()
sh = wb.sheet_by_index(0)
i = 1
while sh.cell(i, 11).value != 0:
    Load = sh.cell(i, 11).value
    D1 = sh.cell(i, 13).value
    D2 = sh.cell(i, 14).value
    D3 = sh.cell(i, 15).value
    D4 = sh.cell(i, 16).value
    D5 = sh.cell(i, 17).value
    D6 = sh.cell(i, 18).value
    D7 = sh.cell(i, 19).value
    DB1 = str(Load) + " " + str(D1) + " " + str(D2) + " " + str(D3) + " " + str(D4) + " " + str(D5) + " " + str(D6) + " " + str(D7)
    file = open("Output.txt", "w")
    file.write(DB1 + '\n')
    file.close
    i = i + 1
```
The problem with this code is that the data written to the text file always appears on the first row. So although I have 20 rows of data in Excel, the text file only shows the last value from the Excel file, on the very first row. I have `'\n'` in `file.write`, but it doesn't seem to work. | You should [open](http://docs.python.org/2/library/functions.html#open) the `output.txt` file in `append mode`:
```
file = open("Output.txt", "a")
```
Also, you should do this before entering the loop, and it should be closed after that loop.
---
**Update**:
In cases like this one, you could use `with` instead of closing the file handle at the end.
Also including the good suggestion made by [@Josh in his own answer](https://stackoverflow.com/a/17977823/677022), the code could be this:
```
import xlrd
import os.path

wb = xlrd.open_workbook(os.path.join('D:\TRB 2014 Data', 'SPS1 demo data.xlsx'))
wb.sheet_names()
sh = wb.sheet_by_index(0)
i = 1
with open("Output.txt", "a") as my_file:
    while sh.cell(i, 11).value != 0:
        Load = sh.cell(i, 11).value
        # read cells 13..19 of row i in one call
        all_d = sh.row_values(i, 13, 20)
        DB1 = str(Load) + " " + " ".join(str(d) for d in all_d)
        my_file.write(DB1 + '\n')
        i += 1
``` | ```
import xlrd
workbook=xlrd.open_workbook("xxx.xls")
sh=workbook.sheet_by_name("test1")
print sh.nrows
print sh.ncols
n=0
i=0
file=open("xxx.txt","w")
for n in range(sh.nrows):
    for i in range(sh.ncols):
        data = sh.cell_value(n, i)
        print data,
        file.write(str(data) + " ")
    print
    file.write("\n")
# this code is working properly for writing into the text file
``` | How to read data from Excel and write it to text file line by line? | [
"",
"python",
"xlrd",
""
] |
I have a list: `[6,7,8,9,10]`. Now I want to shift it 2 positions to the right and fill the vacated positions with `0`. The output will be something like: `[0,0,6,7,8]`
Is it possible? | You could do it directly:
```
>>> a = [6, 7, 8, 9, 10]
>>> [0, 0] + a[:3]
[0, 0, 6, 7, 8]
```
To shift it by a variable number of positions `n`:
```
>>> [0]*n + a[:len(a)-n]
```
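A quick self-contained check of that general form, using the list from the question:

```python
a = [6, 7, 8, 9, 10]
n = 2
# right-shift by n: n zeros in front, drop the last n elements
shifted = [0] * n + a[:len(a) - n]
print(shifted)  # [0, 0, 6, 7, 8]
```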
Appropriate bounds checking is up to you (try `n=6` with a 5-element list for an unexpected result). | ```
>>> s = [6,7,8,9,10]
>>> def lpadzero(s, n):
...     return [0] * n + s[:-n]
>>> lpadzero(s, 2)
[0, 0, 6, 7, 8]
``` | How to shift a list nth time in Left or Right and fill it up with 0 in python | [
"",
"python",
""
] |
I've created a general tree in Python, using a Node object. Each node can have 0, 1, or 2 subtrees.
I'm trying to create a method to print a list of all the nodes in a tree. The list need not be in order. Here's my simplistic attempt:
```
def allChildren(self, l = list()):
    l.append(self)
    for child in self.children:
        l = child.allChildren(l)
    return l
```
The first time I run this method, it works correctly. However, for some reason it is storing the previous runs. The second time I run the method, it prints all the nodes twice. Even if I create 2 separate trees, it still remembers the previous runs. E.g.: I create 2 trees, a and b. If I run a.allChildren() I receive the correct result. Then I run b.allChildren() and receive all of a's nodes and all of b's nodes. | try this:
```
def allChildren(self, l = None):
    if(l==None):
        l = list()
    l.append(self)
    for child in self.children:
        l = child.allChildren(l)
    return l
```
And check out [this answer](https://stackoverflow.com/questions/1651154/why-are-default-arguments-evaluated-at-definition-time-in-python) for explanation. | You have a mutable value as the default value of your function parameter `l`. In Python, this means that when you call `l.append(self)`, you are permanently modifying the default parameter.
In order to avoid this problem, set `l` to a new list every time the function is called, if no list is passed in:
```
def allChildren(self, l = None):
    if l is None:
        l = list()
    l.append(self)
    for child in self.children:
        l = child.allChildren(l)
    return l
```
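The pitfall is easy to reproduce in isolation (a minimal sketch; the function is made up for illustration):

```python
def append_to(x, l=[]):   # the default list is created once, at definition time
    l.append(x)
    return l

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same default list persists across calls
```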
This phenomenon is explained much more thoroughly in [this question](https://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument). | Python: printing all nodes of tree unintentionally stores data | [
"",
"python",
"class",
"python-3.x",
"tree",
""
] |
I'm a newbie to algorithms and optimization.
I'm trying to implement **capacitated k-means**, but so far I'm getting unresolved and poor results.
This is used as part of a CVRP simulation (capacitated vehicle routing problem).
I'm curious whether I have interpreted the referenced algorithm wrongly.
Ref: ["Improved K-Means Algorithm for Capacitated Clustering Problem"
(Geetha, Poonthalir, Vanathi)](http://www.dcc.ufla.br/infocomp/artigos/v8.4/art07.pdf)
The simulated CVRP has 15 customers, with 1 depot.
Each customer has Euclidean coordinate (x,y) and demand.
There are 3 vehicles, each has capacity of 90.
So, the capacitated k-means is trying to cluster 15 customers into 3 vehicles, with the total demands in each cluster must not exceed vehicle capacity.
**UPDATE:**
In the referenced algorithm, I couldn't find any information about what the code must do when it runs out of "next nearest centroids".
That is, when all of the "nearest centroids" have been examined, in **step 14.b** below, while `customers[1]` is still unassigned.
This results in the customer with index 1 being unassigned.
Note: `customer[1]` is the customer with the largest demand (30).
**Q: When this condition is met, what should the code do then?**
---
Here is my interpretation of the referenced algorithm, please correct my code, thank you.
1. Given `n` requesters (customers), `n` = `customerCount`, and a depot
2. n demands,
3. n coordinates (x,y)
4. calculate number of clusters, `k` = (sum of all demands) / `vehicleCapacity`
5. select initial centroids,
5.a. sort customers based on `demand`, in descending order = `d_customers`,
5.b. select `k` first customers from `d_customers` as initial centroids = `centroids[0 .. k-1]`,
6. Create binary matrix `bin_matrix`, dimension = `(customerCount) x (k)`,
6.a. Fill `bin_matrix` with all zeros
7. start WHILE loop, condition = WHILE `not converged`.
7.a. `converged = False`
8. start FOR loop, condition = FOR `each customers`,
8.a. index of customer = i
9. calculate Euclidean distances from `customers[i]` to all `centroids` => `edist`
9.a. sort `edist` in ascending order,
9.b. select first `centroid` with closest distance = `closest_centroid`
10. start WHILE loop, condition = `while customers[i]` is not assigned to any cluster.
11. group all the other unassigned customers = `G`,
11.a. consider `closest_centroid` as centroid for `G`.
12. calculate priorities `Pi` for each `customers` of `G`,
12.a. Priority `Pi = (distance from customers[i] to closest_cent) / demand[i]`
12.b. select a customer with highest priority `Pi`.
12.c. customer with highest priority has index = `hpc`
**12.d. Q: IF highest priority customer cannot be found, what must we do ?**
13. assign `customers[hpc]` to `centroids[closest_centroid]` if possible.
13.a. demand of `customers[hpc]` = `d1`,
13.b. sum of all demands of centroids' members = `dtot`,
13.c. `IF (d1 + dtot) <= vehicleCapacity, THEN`..
13.d. assign `customers[hpc]` to `centroids[closest_centroid]`
13.e. update `bin_matrix`, row index = `hpc`, column index = `closest_centroid`, set to `1`.
14. IF `customers[i]` is (still) `not assigned` to any cluster, THEN..
14.a. choose the `next nearest centroid`, with the next nearest distance from `edist`.
**14.b. Q: IF there is no next nearest centroid, THEN what must we do ?**
15. calculate converged by comparing previous matrix and updated matrix bin\_matrix.
15.a. IF no changes in the `bin_matrix`, then set `converged = True`.
16. otherwise, calculate `new centroids` from updated clusters.
16.a. calculate new `centroids' coordinates` based on members of each cluster.
16.b. `sum_x` = sum of all `x-coordinate` of a cluster `members`,
16.c. `num_c` = number of all `customers (members)` in the cluster,
16.d. new centroid's `x-coordinate` of the cluster = `sum_x / num_c`.
16.e. with the same formula, calculate new centroid's `y-coordinate` of the cluster = `sum_y / num_c`.
17. iterate the main WHILE loop.
My code always ends with an unassigned customer at **step 14.b**.
That is, there is a `customers[i]` still not assigned to any centroid, and it has run out of "next nearest centroids".
And the resulting clusters are poor. Output graph:

In the picture, the star is a centroid and the square is the depot.
The customer labeled "1", with demand = 30, always ends up with no assigned cluster.
Output of the program,
```
k_cluster 3
idx [ 1 -1 1 0 2 0 1 1 2 2 2 0 0 2 0]
centroids [(22.6, 29.2), (34.25, 60.25), (39.4, 33.4)]
members [[3, 14, 12, 5, 11], [0, 2, 6, 7], [9, 8, 4, 13, 10]]
demands [86, 65, 77]
```
The first and third clusters are poorly calculated.
`idx` at index '`1`' is not assigned (`-1`).
**Q: What's wrong with my interpretation and my implementation?**
Any correction, suggestion, or help will be very much appreciated; thank you in advance.
Here is my full code:
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
# pastebin.com/UwqUrHhh
# output graph: i.imgur.com/u3v2OFt.png

import math
import random
from operator import itemgetter
from copy import deepcopy

import numpy
import pylab

# depot and customers, [index, x, y, demand]
depot = [0, 30.0, 40.0, 0]
customers = [[1, 37.0, 52.0, 7], \
             [2, 49.0, 49.0, 30], [3, 52.0, 64.0, 16], \
             [4, 20.0, 26.0, 9], [5, 40.0, 30.0, 21], \
             [6, 21.0, 47.0, 15], [7, 17.0, 63.0, 19], \
             [8, 31.0, 62.0, 23], [9, 52.0, 33.0, 11], \
             [10, 51.0, 21.0, 5], [11, 42.0, 41.0, 19], \
             [12, 31.0, 32.0, 29], [13, 5.0, 25.0, 23], \
             [14, 12.0, 42.0, 21], [15, 36.0, 16.0, 10]]
customerCount = 15
vehicleCount = 3
vehicleCapacity = 90
assigned = [-1] * customerCount

# number of clusters
k_cluster = 0
# binary matrix
bin_matrix = []
# coordinate of centroids
centroids = []
# total demand for each cluster, must be <= capacity
tot_demand = []
# members of each cluster
members = []
# coordinate of members of each cluster
xy_members = []

def distance(p1, p2):
    return math.sqrt((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2)

# capacitated k-means clustering
# http://www.dcc.ufla.br/infocomp/artigos/v8.4/art07.pdf
def cap_k_means():
    global k_cluster, bin_matrix, centroids, tot_demand
    global members, xy_members, prev_members

    # calculate number of clusters
    tot_demand = sum([c[3] for c in customers])
    k_cluster = int(math.ceil(float(tot_demand) / vehicleCapacity))
    print 'k_cluster', k_cluster

    # initial centroids = first sorted-customers based on demand
    d_customers = sorted(customers, key=itemgetter(3), reverse=True)
    centroids, tot_demand, members, xy_members = [], [], [], []
    for i in range(k_cluster):
        centroids.append(d_customers[i][1:3])    # [x,y]
        # initial total demand and members for each cluster
        tot_demand.append(0)
        members.append([])
        xy_members.append([])

    # binary matrix, dimension = customerCount-1 x k_cluster
    bin_matrix = [[0] * k_cluster for i in range(len(customers))]

    converged = False
    while not converged:    # until no changes in formed-clusters
        prev_matrix = deepcopy(bin_matrix)

        for i in range(len(customers)):
            edist = []    # list of distance to clusters

            if assigned[i] == -1:    # if not assigned yet
                # Calculate the Euclidean distance to each of k-clusters
                for k in range(k_cluster):
                    p1 = (customers[i][1], customers[i][2])    # x,y
                    p2 = (centroids[k][0], centroids[k][1])
                    edist.append((distance(p1, p2), k))

                # sort, based on closest distance
                edist = sorted(edist, key=itemgetter(0))
                closest_centroid = 0    # first index of edist

                # loop while customer[i] is not assigned
                while assigned[i] == -1:
                    # calculate all unsigned customers (G)'s priority
                    max_prior = (0, -1)    # value, index
                    for n in range(len(customers)):
                        pc = customers[n]
                        if assigned[n] == -1:    # if unassigned
                            # get index of current centroid
                            c = edist[closest_centroid][1]
                            cen = centroids[c]    # x,y
                            # distance_cost / demand
                            p = distance((pc[1], pc[2]), cen) / pc[3]
                            # find highest priority
                            if p > max_prior[0]:
                                max_prior = (p, n)    # priority, customer-index

                    # if highest-priority is not found, what should we do ???
                    if max_prior[1] == -1:
                        break

                    # try to assign current cluster to highest-priority customer
                    hpc = max_prior[1]    # index of highest-priority customer
                    c = edist[closest_centroid][1]    # index of current cluster

                    # constraint, total demand in a cluster <= capacity
                    if tot_demand[c] + customers[hpc][3] <= vehicleCapacity:
                        # assign new member of cluster
                        members[c].append(hpc)    # add index of customer
                        xy = (customers[hpc][1], customers[hpc][2])    # x,y
                        xy_members[c].append(xy)
                        tot_demand[c] += customers[hpc][3]
                        assigned[hpc] = c    # update cluster to assigned-customer
                        # update binary matrix
                        bin_matrix[hpc][c] = 1

                    # if customer is not assigned then,
                    if assigned[i] == -1:
                        if closest_centroid < len(edist) - 1:
                            # choose the next nearest centroid
                            closest_centroid += 1
                        # if run out of closest centroid, what must we do ???
                        else:
                            break    # exit without centroid ???
                # end while
        # end for

        # Calculate the new centroid from the formed clusters
        for j in range(k_cluster):
            xj = sum([cn[0] for cn in xy_members[j]])
            yj = sum([cn[1] for cn in xy_members[j]])
            xj = float(xj) / len(xy_members[j])
            yj = float(yj) / len(xy_members[j])
            centroids[j] = (xj, yj)

        # calculate converged
        converged = numpy.array_equal(numpy.array(prev_matrix), numpy.array(bin_matrix))
    # end while

def clustering():
    cap_k_means()

    # debug plot
    idx = numpy.array([c for c in assigned])
    xy = numpy.array([(c[1], c[2]) for c in customers])
    COLORS = ["Blue", "DarkSeaGreen", "DarkTurquoise",
              "IndianRed", "MediumVioletRed", "Orange", "Purple"]
    for i in range(min(idx), max(idx) + 1):
        clr = random.choice(COLORS)
        pylab.plot(xy[idx == i, 0], xy[idx == i, 1], color=clr, \
                   linestyle='dashed', \
                   marker='o', markerfacecolor=clr, markersize=8)
    pylab.plot(centroids[:][0], centroids[:][1], '*k', markersize=12)
    pylab.plot(depot[1], depot[2], 'sk', markersize=12)
    for i in range(len(idx)):
        pylab.annotate(str(i), xy[i])
    pylab.savefig('clust1.png')
    pylab.show()
    return idx

def main():
    idx = clustering()
    print 'idx', idx
    print 'centroids', centroids
    print 'members', members
    print 'demands', tot_demand

if __name__ == '__main__':
    main()
``` | When the total demand is close to the total capacity, this problem begins to take on aspects of [bin packing](https://en.wikipedia.org/wiki/Bin_packing_problem). As you've discovered, this particular algorithm's greedy approach is not always successful. I don't know whether the authors admitted that, but if they didn't, the reviewers should have caught it.
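A toy illustration of that failure mode (numbers invented; plain first-fit greedy, not the paper's algorithm):

```python
def first_fit(demands, capacity, n_bins):
    # Greedy: put each demand into the first bin that still has room
    loads = [0] * n_bins
    unassigned = []
    for d in demands:
        for b in range(n_bins):
            if loads[b] + d <= capacity:
                loads[b] += d
                break
        else:
            unassigned.append(d)
    return loads, unassigned

# A perfect split exists ({5, 3, 2} and {4, 4, 2}), yet greedy gets stuck:
loads, left = first_fit([5, 4, 4, 3, 2, 2], capacity=10, n_bins=2)
print(left)  # [2] -- one demand could not be placed
```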
If you want to continue with something like this algorithm, I would try using [integer programming](http://en.wikipedia.org/wiki/Integer_programming) to assign requesters to centroids. | Without going through all the details, the paper you cite says
```
if ri is not assigned then
choose the next nearest centroid
end if
```
in the algorithm at the end of section 5.
There must be a next nearest centroid - if two are equidistant I presume it doesn't matter which you choose. | Capacitated k-means clustering? | [
"",
"python",
"algorithm",
"optimization",
"cluster-analysis",
"k-means",
""
] |
I want to implement a basic auth dialog style authentication in Django
Example -
When you visit some website, a dialog pops up. It's a native dialog, with a simple message asking for a username and password, and two textboxes below it to enter them.
I have searched a bit but did not find anything. Basically, it should look like this -

(This pic is from a Rails tutorial. Rails has it in built)
How can I do this? | There are plenty of ways to do that.
If you want standard HTTP Basic Auth with static username/password that can be configured via `settings.py`, you can create a middleware. Check <http://djangosnippets.org/snippets/2468/>. It's also quite simple with [Apache](http://wiki.apache.org/httpd/PasswordBasicAuth) or [nginx](http://wiki.nginx.org/HttpAuthBasicModule) so you don't even need Django.
If you want to use your user database to log user in, you can use some decorators. Check: <http://djangosnippets.org/snippets/243/>
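Whichever route you take, the underlying mechanism is just the `Authorization: Basic <base64>` header the browser sends after showing that native dialog. A framework-agnostic sketch of the header handling (helper names are mine):

```python
import base64

def make_basic_auth(username, password):
    # Build the header value a browser sends after the dialog
    token = base64.b64encode(('%s:%s' % (username, password)).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')

def parse_basic_auth(header):
    # Return (username, password), or None if the header is malformed
    if not header.startswith('Basic '):
        return None
    try:
        decoded = base64.b64decode(header.split(' ', 1)[1]).decode('utf-8')
    except Exception:
        return None
    username, sep, password = decoded.partition(':')
    return (username, password) if sep else None
```

When the header is missing or invalid, the server replies with status 401 and a `WWW-Authenticate: Basic realm="..."` header; that reply is what makes the browser pop up the dialog.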
Hope it helps. | You can use <https://github.com/nopped/django-http-auth> which gives you the ability to add HTTP Basic Authentication using middleware or decorator on specific views. | How to implement basic auth in Django | [
"",
"python",
"ruby-on-rails",
"django",
"authentication",
""
] |
I have a db in PostgreSQL 9.0, which has a table with a string field to store client codes.
These codes are alphanumeric and can start with a letter or number, for example `1, 2, A0001-4, A0001-2, 10`
I want to order numeric values first and then order by string, like
```
1, 2, 10, A0001-2, A0001-4
```
I do this with `to_number(fields, '99999999')`, for example:
```
SELECT * FROM empleados ORDER BY to_number(legajo, '99999999'), legajo
```
But when the code is like `'ve'`, with no number, the query fails.
What can I do? | ```
WITH empleados(legajo) AS (
   VALUES
     ('A0001-4'::text)
    ,('123.345-56')
    ,('ve')
    ,('123')
    ,('123 ve')
   )
SELECT *
FROM empleados
ORDER BY CASE WHEN legajo ~ '\D' THEN 1000000000::int
              ELSE to_number(legajo, '999999999')::int END
        ,legajo;
```
[`~` is the regular expression operator.](http://www.postgresql.org/docs/current/interactive/functions-matching.html#POSIX-CLASS-SHORTHAND-ESCAPES-TABLE)
[`\D` is the class shorthand for non-digits.](http://www.postgresql.org/docs/current/interactive/functions-matching.html#POSIX-CLASS-SHORTHAND-ESCAPES-TABLE)
Rows with non-digit characters in legajo (`legajo ~ '\D'`) come later.
[-> SQLfiddle demo](http://sqlfiddle.com/#!12/d41d8/1210)
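The same ordering rule, checked quickly outside the database (Python used purely to illustrate the sort key):

```python
import re

legajos = ['A0001-4', '123.345-56', 've', '123', '123 ve']

def sort_key(s):
    # Pure-digit strings sort by numeric value; anything else sorts after
    return (10**9, s) if re.search(r'\D', s) else (int(s), s)

ordered = sorted(legajos, key=sort_key)
print(ordered)  # ['123', '123 ve', '123.345-56', 'A0001-4', 've']
```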
[*Never* use `SIMILAR TO`](https://stackoverflow.com/questions/12452395/difference-between-like-and-in-postgres/12459689#12459689), it's an utterly pointless operator. | You can use a case statement to find the numbers:
```
select *
from empleados
order by (case when legajo not similar to '%[^0-9]%' then 1 else 0 end) desc,
         (case when legajo not similar to '%[^0-9]%' then to_number(legajo, '999999999') end),
         legjo;
```
The `similar to` expression is saying that all characters are digits.
EDIT:
Fixed the syntax error. You can test this:
```
with empleados as (
      select 'abc' as legajo union all
      select '123'
)
select *
from empleados
order by (case when legajo not similar to '%[^0-9]%' then 1 else 0 end) desc,
         (case when legajo not similar to '%[^0-9]%' then to_number(legajo, '999999999') end),
         legajo;
```
The SQLFiddle is [here](http://www.sqlfiddle.com/#!1/d41d8/1144). | How to order query by number first and then by string in the same field | [
"",
"sql",
"postgresql",
"pattern-matching",
"sql-order-by",
"natural-sort",
""
] |
I have the following table in SQL
```
AL1 | AL2 | AL3 | ACB | LL1 | LL2 | LL3 | LCB
------------------------------------------------
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
```
I would like to convert it as
```
| L1 | L2 | L3 | CB
-----------------------------
A | 1 | 2 | 3 | 4
L | 5 | 6 | 7 | 8
```
Any help would be appreciated. | You can select each half with a separate query and combine them using `UNION`:
```
SELECT 'A' AS COL, AL1, AL2, AL3, ACB
FROM TBL
UNION
SELECT 'L' AS COL, LL1, LL2, LL3, LCB
FROM TBL;
```
Output:
```
| COL | AL1 | AL2 | AL3 | ACB |
-------------------------------
| A | 1 | 2 | 3 | 4 |
| L | 5 | 6 | 7 | 8 |
```
### See [this SQLFiddle](http://sqlfiddle.com/#!3/5d7d3/1) | Another way (the same plan as Devart's but more concise syntax)
```
SELECT V.*
FROM TBL
CROSS APPLY (VALUES ('A', AL1,AL2,AL3,ACB),
                    ('L', LL1,LL2,LL3,LCB)) V( COL, LL1, LL2, LL3, LCB)
```
[SQL Fiddle](http://sqlfiddle.com/#!3/5d7d3/2) | SQL multiple columns grouping in one row | [
"",
"sql",
"sql-server",
"grouping",
""
] |
I have a function which performs an expensive operation and is called often, but the operation only needs to be performed once - its result could be cached.
I tried making an infinite generator but I didn't get the results I expected:
```
>>> def g():
...     result = "foo"
...     while True:
...         yield result
...
>>> g()
<generator object g at 0x1093db230> # why didn't it give me "foo"?
```
Why isn't `g` a generator?
```
>>> g
<function g at 0x1093de488>
```
*Edit*: it's fine if this approach doesn't work, but I **need** something which performs exactly like a regular function, like so:
```
>>> [g() for x in range(3)]
["foo", "foo", "foo"]
``` | Here's a dead-simple caching decorator. It doesn't take into account any variations in parameters, it just returns the same result after the first call. There are fancier ones out there that cache the result for each combination of inputs ("memoization").
```
import functools

def callonce(func):
    result = []
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not result:
            result.append(func(*args, **kwargs))
        return result[0]
    return wrapper
```
Usage:
```
@callonce
def long_running_function(x, y, z):
    # do something expensive with x, y, and z, producing result
    return result
```
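For what it's worth, on Python 3.2+ the standard library already ships a memoizing decorator, `functools.lru_cache`, which caches one result per distinct argument tuple (sketch; the function here is a stand-in):

```python
import functools

@functools.lru_cache(maxsize=None)
def expensive(x, y):
    # stand-in for the long-running computation
    return x * y

expensive(6, 7)  # computed on the first call
expensive(6, 7)  # served from the cache thereafter
```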
If you would prefer to write your function as a generator for some reason (perhaps the result is slightly different on each call, but there's still a time-consuming initial setup, or else you just want C-style static variables that allow your function to remember some bit of state from one call to the next), you can use this decorator:
```
import functools

def gen2func(generator):
    gen = []
    @functools.wraps(generator)
    def wrapper(*args, **kwargs):
        if not gen:
            gen.append(generator(*args, **kwargs))
        return next(gen[0])
    return wrapper
```
Usage:
```
@gen2func
def long_running_function_in_generator_form(x, y, z):
    # do something expensive with x, y, and z, producing result
    while True:
        yield result
        result += 1  # for example
```
A Python 2.5 or later version that uses `.send()` to allow parameters to be passed to each iteration of the generator is as follows (note that `**kwargs` are not supported):
```
import functools

def gen2func(generator):
    gen = []
    @functools.wraps(generator)
    def wrapper(*args):
        if not gen:
            gen.append(generator(*args))
            return next(gen[0])
        return gen[0].send(args)
    return wrapper

@gen2func
def function_with_static_vars(a, b, c):
    # time-consuming initial setup goes here
    # also initialize any "static" vars here
    while True:
        # do something with a, b, c
        a, b, c = yield  # get next a, b, c
``` | `g()` is a generator *function*. Calling it returns the generator. You then need to use that generator to get your values. By looping, for example, or by calling [`next()`](http://docs.python.org/2/library/functions.html#next) on it:
```
gen = g()
value = next(gen)
```
Note that calling `g()` again will calculate the same value again and produce a *new* generator.
You may just want to use a global to cache the value. Storing it as an attribute **on** the function could work:
```
def g():
    if not hasattr(g, '_cache'):
        g._cache = 'foo'
    return g._cache
``` | Function which computes once, caches the result, and returns from cache infinitely (Python) | [
"",
"python",
"caching",
"python-2.7",
"generator",
""
] |
I have a project setup looks like this:
**Base project**
```
/some_disk/some_folder/
|-- project/
| |-- package/
| | |-- src/
| | | |-- file_a.py
| | | |-- file_b.py
```
**Extension project**
```
/some_other_disk/some_folder/
|-- project/
| |-- package/
| | |-- src/
| | | |-- file_c.py
| | | |-- file_d.py
```
Then I have a third project, in which I would like to be able to use both modules file\_a and file\_c.
In that third project, I have setup my Python path like this
```
PYTHONPATH=$PYTHONPATH:/some_disk/some_folder:/some_other_disk/some_folder
```
Then, to import the files, I have this in my main module:
```
import project.package.src.file_a
import project.package.src.file_c
```
This, however, only lets me import one of the modules; I get a *module not found* error on the other one.
Can I make this work with this project structure? Or will Python always look into only one of the "main" packages and consider the sub-module not found if it's not in there?
EDIT: The project makes use of Python 2.6 | Create a package file `__init__.py` in each package directory (`project`, `package`, and `src`) in both locations. Each should contain the following two lines. See [this documentation](http://docs.python.org/2/library/pkgutil.html#module-pkgutil) for details. This solution works on Python 2.6 and is the canonical solution.
```
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
``` | This will make python search in your current directory/standard directories first, and for the second one, python will search in your pathtofile\_c first before standard directories.
```
import sys

import project.package.src.file_a   # <--- here it searches some_disk first
sys.path.insert(0, 'pathtofile_c')   # <--- changes your PYTHONPATH - inserts some_other_disk before standard directories
import project.package.src.file_c   # <--- here it searches some_other_disk first
```
This should clear python's confusion. | Python: How to import sub-modules, from packages with the same name? | [
"",
"python",
"import",
"module",
""
] |
Recently I was playing with SQL Server data types and a large amount of data in a table, trying to figure out the performance of varchar versus numeric data. But I got an error which I don't think should occur, yet it does. My problem is below:
I have a table :
```
create table sa(f1 varchar(100))
```
I have a stored procedure that inserts 100,000 rows into the table:
```
create proc sad
as
begin
    declare @i int = 0
    while @i < 100000
    begin
        insert into sa values(@i)
        set @i = @i + 1
    end
end

exec sad
```
And I have tested the following:
```
select CONVERT(int, f1) from sa                  -- Works fine; I tested this after hitting the problem
select sum(convert(int, f1)) from sa             -- Didn't work, so I tested the queries above and below
select sum(convert(decimal(18,2), f1)) from sa   -- And it again works fine
```
But when I SUM converting `f1` to int, it shows me an error.
When I only SELECT converting to int, it's fine.
And when I SUM converting `f1` to decimal, it works fine.
What is the SUM function's return data type?
On the above data, why does it work with decimal but not int?
I'm getting the following error:
**Arithmetic overflow error converting expression to data type int.** | According to MS documentation (see <https://learn.microsoft.com/en-us/sql/t-sql/functions/sum-transact-sql>), the `SUM()` function returns values of a different type, according to the datatype of the column you are adding: if the column is of type `int`, `tinyint`, or `smallint`, then SUM returns values of type `int`.
Converting to `bigint` or `decimal` makes `SUM()` return a larger datatype, which explains why in that case you have no overflow. | You're summing as `INT`, which has a range that cannot hold that sum.
The `DECIMAL` can.
The sum of all values from 1 up to 99999 is 4999950000, the maximum INT value is 2147483647, less than half of what the sum ends up as.
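Those two numbers are easy to verify (a quick sanity check, not part of the answer's SQL):

```python
total = sum(range(100000))  # 0 + 1 + ... + 99999, what SUM() must hold
INT_MAX = 2 ** 31 - 1       # SQL Server's int is a signed 32-bit integer

print(total)                # 4999950000
print(total > 2 * INT_MAX)  # True: more than double the int range
```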
When you sum INT's, you're getting a new INT. When you're summing DECIMAL's, you're getting a new DECIMAL, so the input type defines the output type.
You can switch to using `bigint` instead, and it should "be fine".
Also, on a second note, **please don't store numbers as text**! | Sql Server SUM function with Int and Decimal Conversion | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Please help me find the cause of this problem with my Django installation on Windows.
```
C:\Djangoprojects>django-admin.py startproject mydjangoblog
Traceback (most recent call last):
File "C:\Python27\Scripts\django-admin.py", line 4, in <module>
import pkg_resources
File "build\bdist.win-amd64\egg\pkg_resources.py", line 3007, in <module>
File "build\bdist.win-amd64\egg\pkg_resources.py", line 728, in require
File "build\bdist.win-amd64\egg\pkg_resources.py", line 626, in resolve
pkg_resources.DistributionNotFound: django==1.5.1
```
I just installed Django 1.5.1 and I remember I had the 1.4.3 before removing it. When I try to create a project, the above error is shown.
After a few adjustments - which included adding the whole directory of Python27\Lib\site-packages\django\bin to the path variable, this is the error I now get when I try to create a project:
```
C:\Djangoprojects>django-admin.py startproject djangoblog
Traceback (most recent call last):
File "C:\Python27\Lib\site-packages\django\bin\django-admin.py", line 2, in <module>
from django.core import management
ImportError: No module named django.core
```
The question I am now asking myself is this: where should I put the django folder? In its own place or within Python27?
I can import Django through the Python Interactive shell without a problem.
I have also added django-admin.py to the system path variable just in case.
Thanks in advance. | I finally was able to fix this problem. Here is what I did after spending some time on Stackoverflow:
* Leave everything the way it was installed by easy\_install when I installed it.
* Make sure that there is one django installation.
* Make sure the `django-admin.py` is inside Python27\Scripts
* To create a django project do: on the command line:
* `python C:\Python27\Scripts\django-admin.py startproject demosite` instead of just: `django-admin.py startproject demosite`.
* The easiest way to make creating projects easier, you can create a **batch** file called `startproject.bat` and save it inside `Python27\Scripts\` folder. Inside that file, add the following: `python C:\Python27\Scripts\django-admin.py startproject %1`
* Now, on your command line,you will be able to simply say : `startproject.bat demosite`
This worked out for me at last and I am happy to say this problem has been solved! I hope this was helpful to others as well.
I want to thank everyone who took the time to answer this question. I would have upvoted your answers but I don't have enough points yet; until then, I appreciate it. | I suggest you use [virtualenv](http://www.virtualenv.org/en/latest/) to maintain a clean Python environment.
1. install virtualenv, and add it to your env-path
2. run cmd `mkvirtualenv django` to create the env for your django project
3. run `pip.exe install` to install needed libraries. | I cannot create a project because of Django installation on Windows | [
"",
"python",
"django",
""
] |
I am adding some validation to a couple of stored procedures and need to check if some of the variables are not null (they are populated earlier in the stored procedure).
I have been trying to add a "throw" inside an if statement like below:
```
IF (@val is null)
BEGIN
    THROW 50001, 'Custom text', 1
END
```
This causes a syntax error on the "throw" as it is looking for other code inside the if statement prior to the throw but I only need it to perform the throw inside the if statement.
I need to keep the stored procedure as light as possible to keep it as fast as possible to execute.
Does anyone have any ideas? | The syntax error is showing up because the previous statement hasn't been terminated. The other answers will work, but in order to do it this way you can either throw a semicolon right before the THROW, or get in the habit of terminating all statements with semicolons.
```
IF (@val is null)
BEGIN
    ;THROW 50001, 'Custom text', 1
END
```
or
```
IF (@val is null)
BEGIN;
    THROW 50001, 'Custom text', 1;
END;
```
You may have noticed that:
```
IF (@val is null)
    THROW 50001, 'Custom text', 1
```
... will also work, and this is because SQL Server knows that the next thing to come after an IF statement is always a new T-SQL statement.
It is perhaps worth noting that Microsoft has stated that the T-SQL language in the future will require semicolons after each statement, so my recommendation would be to start building the habit now. | If this is for SQL Server, the intellisense syntax highlighter doesn't like it, but the code should compile and run fine. Of course, with it being a single statement, you don't need the `BEGIN`...`END` block at all:
```
IF (@val is null) THROW 50001, 'Custom text', 1
``` | SQL only a throw inside if statement | [
"",
"sql",
"sql-server",
"if-statement",
"throw",
""
] |
What is the fastest way to check if a SQLite table has more than 100,000 rows?
---
The test table has 26 columns and 200,000,000 rows.
```
SELECT COUNT(*) FROM ( SELECT * FROM table LIMIT 100001 )
```
took 0.27 seconds.
The following three needed 12 and a half minutes
```
SELECT COUNT(*) FROM table
SELECT COUNT(*) FROM table LIMIT 100001
SELECT CASE WHEN COUNT(Id) >= 100000 THEN 1 ELSE 0 END FROM table
``` | ```
select count(*) from (
select top 100001 ID from T
) x
```
We need to scan an index to answer the query. This at least restricts the index scan to the first 100001 rows. If the table has 1m rows, this saves 90% of the work.
(Using SQL Server syntax here - please translate yourself because I can't do it).
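As one possible translation: the same limited-subquery idea can be tried in SQLite itself through Python's built-in `sqlite3` module (the table name, column, and the small threshold below are invented for the demo):

```python
import sqlite3

THRESHOLD = 100  # stand-in for the 100,000 in the question

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(i,) for i in range(250)])

# Count at most THRESHOLD + 1 rows instead of scanning the whole table.
(n,) = conn.execute(
    "SELECT COUNT(*) FROM (SELECT 1 FROM t LIMIT ?)", (THRESHOLD + 1,)
).fetchone()
print(n > THRESHOLD)  # True: the table has 250 rows, more than the threshold
```

Because the inner query stops after `THRESHOLD + 1` rows, the cost is bounded no matter how large the table grows.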
Instead of ID you can choose some indexed column. I don't know if the optimizer can do that itself.
Note, that clever tricks like looking at identity or ID column values do not work in general.
To support the above query, create a dummy column of type `bit` and index it. The index will be very compact and the fastest to scan. | ```
SELECT COUNT(indexed_column) FROM TableName
```
Now the indexed column part is the important thing, you don't want to do something like counting \*, you want to be as specific as possible forcing SQL to use an index that is existing for each row on that table. | SQLite: fastest way to check if a table has more than x rows | [
"",
"sql",
"sqlite",
""
] |
I have a large sorted file with one number per line. I want to output the number of numbers in each range of some size. So for example taking the range to be 10 and the input as
```
1
4
34
37
42
48
53
```
The output should be `[2, 1, 3, 2, 2, 2, 1]`. These are the lengths of `[1,4], [4], [34,37,42], [37,42], [42,48], [48,53], [53]`. I think this can be solved using `deque`, but mostly as a programming exercise (and also to use less memory) I am trying to solve it by opening the file twice and having two different pointers into the file: one to read in the next left-hand end of a list and one to read in the new right-hand ends. So my code has
```
fin1 = open(file, 'r')
fin2 = open(file, 'r')
```
I then loop over fin1, and when the range gets too big I read in more of fin2 until the range is small enough, continuing to step down fin1 and fin2.
However I can't get this to work. It seems to not like me to have two file handles open for the same file. How can one do this? | Here is a solution that uses `itertools.tee()` to simulate reading from handles, but only actually opening one:
```
from itertools import tee
def sliding_range(file, size):
    fin1, fin2 = tee(int(ln.strip()) for ln in open(file) if ln.strip())
    n = 1
    next(fin2)
    val2 = next(fin2)
    for val1 in fin1:
        while val2 is not None and val2 <= val1 + size:
            n += 1
            try:
                val2 = next(fin2)
            except StopIteration:
                val2 = None
                break
        yield n
        n -= 1
```
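For cross-checking the generator on small, in-memory data, the same counts can be computed with a plain two-pointer pass (my own sketch, no file handles involved):

```python
def sliding_counts(nums, size):
    """For each value v in the sorted list, count how many values lie in [v, v + size]."""
    out = []
    j = 0
    for i, v in enumerate(nums):
        if j < i:
            j = i
        while j < len(nums) and nums[j] <= v + size:
            j += 1
        out.append(j - i)
    return out

print(sliding_counts([1, 4, 34, 37, 42, 48, 53], 10))  # [2, 1, 3, 2, 2, 2, 1]
```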
Example (with your example data copied to 'test.txt'):
```
>>> list(sliding_range('test.txt', 10))
[2, 1, 3, 2, 2, 2, 1]
``` | Here's an implementation, there might be a better way to do it but this should work. I'm assuming the same input you posted in your question.
```
def ranges(n):
    f = open("tmp.txt")
    while True:
        i = f.tell()
        try:
            curr = int(f.readline().rstrip())
        except ValueError:
            break # EOF
        j = f.tell()
        while True:
            k = f.tell() # End of range location
            try:
                next = int(f.readline().rstrip())
            except ValueError:
                break # EOF
            if next < n or (next - curr) < n:
                continue
            else:
                break
        f.seek(i) # Go to beginning of range
        r = []
        while f.tell() < k:
            r.append(int(f.readline().strip()))
        print(r)
        f.seek(j) # Go to line after beginning of range
>>> ranges(10)
[1, 4]
[4]
[34, 37, 42]
[42, 48]
[48, 53]
[53]
``` | Counting the number of points in a sliding range | [
"",
"python",
""
] |
I am creating a `setup.py` file for a project which depends on private GitHub repositories. The relevant parts of the file look like this:
```
from setuptools import setup
setup(name='my_project',
      ...,
      install_requires=[
          'public_package',
          'other_public_package',
          'private_repo_1',
          'private_repo_2',
      ],
      dependency_links=[
          'https://github.com/my_account/private_repo_1/master/tarball/',
          'https://github.com/my_account/private_repo_2/master/tarball/',
      ],
      ...,
      )
```
I am using `setuptools` instead of `distutils` because the latter does not support the `install_requires` and `dependency_links` arguments per [this](https://stackoverflow.com/questions/9810603/adding-install-requires-to-setup-py-when-making-a-python-package) answer.
The above setup file fails to access the private repos with a 404 error - which is to be expected since GitHub returns a 404 to unauthorized requests for a private repository. However, I can't figure out how to make `setuptools` authenticate.
Here are some things I've tried:
1. Use `git+ssh://` instead of `https://` in `dependency_links` as I would if installing the repo with `pip`. This fails because setuptools doesn't recognize this protocol ("unknown url type: git+ssh"), though the [distribute documentation](http://pythonhosted.org/distribute/setuptools.html#dependencies-that-aren-t-in-pypi) says it should. Ditto `git+https` and `git+http`.
2. `https://<username>:<password>@github.com/...` - still get a 404. (This method doesn't work with `curl` or `wget` from the command line either - though `curl -u <username> <repo_url> -O <output_file_name>` does work.)
3. Upgrading setuptools (0.9.7) and virtualenv (1.10) to the latest versions. Also tried installing distribute though [this overview](https://stackoverflow.com/questions/6344076/differences-between-distribute-distutils-setuptools-and-distutils2) says it was merged back into setuptools. Either way, no dice.
Currently I just have `setup.py` print out a warning that the private repos must be downloaded separately. This is obviously less than ideal. I feel like there's something obvious that I'm missing, but can't think what it might be. :)
Duplicate-ish question with no answers [here](https://stackoverflow.com/questions/12956168/python-how-to-connect-to-a-protected-svn-repository-with-setuptools). | I was trying to get this to work for installing with pip, but the above was not working for me. From [1] I understood the `PEP508` standard should be used, from [2] I retrieved an example which actually does work (at least for my case).
Please note: this is with `pip 20.0.2` on `Python 3.7.4`.
```
setup(
    name='<package>',
    ...
    install_requires=[
        '<normal_dependency>',
        # Private repository
        '<dependency_name> @ git+ssh://git@github.com/<user>/<repo_name>@<branch>',
        # Public repository
        '<dependency_name> @ git+https://github.com/<user>/<repo_name>@<branch>',
    ],
)
```
After specifying my package this way installation works fine (also with `-e` settings and without the need to specify `--process-dependency-links`).
**References**
[1] <https://github.com/pypa/pip/issues/4187>
[2] <https://github.com/pypa/pip/issues/5566> | Here's what worked for me:
```
install_requires=[
    'private_package_name==1.1',
],
dependency_links=[
    'git+ssh://git@github.com/username/private_repo.git#egg=private_package_name-1.1',
]
```
Note that you have to have the version number in the egg name, otherwise it will say it can't find the package. | Python setuptools: How can I list a private repository under install_requires? | [
"",
"python",
"github",
"setuptools",
""
] |
```
allt = []
with open('towers1.txt','r') as f:
    towers = [line.strip('\n') for line in f]
for i in towers:
    allt.append(i.split('\t'))
print allt[0]
```
Now I need help. I'm inputting this text:
```
mw91 42.927 -72.84 2.8
yu9x 42.615 -72.58 2.3
HB90 42.382 -72.679 2.4
```
and when I output it I'm getting
```
['mw91 42.927 -72.84 2.8']
```
Where in my code, and with what functions, can I access the 1st, 2nd, 3rd and 4th values in this list and in all the rows below it in the output? I'm trying
```
allt[0][2] or
allt[i][2]
```
but that doesn't give me -72.84; it's an error, and other times it says list has no attribute split.
Update: maybe I need to use enumerate? I also need to make sure the middle two values I'm inputting can be used as numbers and not strings, because I'm subtracting them with math. | Are you sure those are tabs? You can specify no argument for split and it automatically splits on whitespace (which means you won't have to strip newlines beforehand either). I copied your sample into a file and got it to work like this:
```
allt = []
with open('towers1.txt','r') as f:
    for line in f:
        allt.append(line.split())

>>> print allt[0]
['mw91', '42.927', '-72.84', '2.8']
>>> print allt[0][1]
'42.927'
```
Footnote: if you get rid of your first list comprehension, you're only iterating the file once, which is less wasteful.
---
Just saw that you want help converting the float values as well. Assuming that `line.split()` splits up the data correctly, something like the following should probably work:
```
allt = []
with open('towers1.txt','r') as f:
    for line in f:
        first, *_else = line.split() #Python3
        data = [first]
        float_nums = [float(x) for x in _else]
        data.extend(float_nums)
        allt.append(data)

>>> print allt[0]
['mw91', 42.927, -72.84, 2.8]
```
For Python2, substitute the `first, *_else = line.split()` with the following:
```
first, _else = line.split()[0], line.split()[1:]
```
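A self-contained way to sanity-check the parsing without a real file is to substitute `io.StringIO` for `towers1.txt` (Python 3; the sample data is the one from the question):

```python
import io

sample = """mw91 42.927 -72.84 2.8
yu9x 42.615 -72.58 2.3
HB90 42.382 -72.679 2.4
"""

allt = []
for line in io.StringIO(sample):
    first, *rest = line.split()
    allt.append([first] + [float(x) for x in rest])

print(allt[0])     # ['mw91', 42.927, -72.84, 2.8]
print(allt[1][2])  # -72.58
```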
---
Finally (in response to comments below), if you want a list of a certain set of values, you're going to have to iterate again and this is where list comprehensions can be useful. If you want the `[2]` index value for each element in `allt`, you'll have to do something like this:
```
>>> some_items = [item[2] for item in allt]
>>> some_items
[-72.84, -72.58, -72.679]
``` | [] implies a list.
'' implies a string.
allt = ['mw91 42.927 -72.84 2.8']
allt is a list that contains a string:
allt[0] --> 'mw91 42.927 -72.84 2.8'
allt[0][2] --> '9'
allt[0].split() --> ['mw91', '42.927', '-72.84', '2.8']
allt[0].split()[2] --> '-72.84' #This is still a string.
float(allt[0].split()[2]) --> -72.84 #This is now a float. | Trying to input data from a txt file in a list, then make a list, then assign values to the lines | [
"",
"python",
"string",
"list",
"append",
""
] |
I'm just learning to program on Codeacademy. I have an assignment, but can't figure out what I'm doing wrong.
First I need to define a function that returns the cube of a value. Then I should define a second function that checks if a number is divisible by 3. If it is I need to return it, otherwise I need to return `False`.
heres the code:
```
def cube(c):
    return c**3

def by_three(b):
    if b % 3 == 0:
        cube(b)
        return b
    else:
        return False
``` | You are not catching the return value of the function `cube`. Do `b = cube(b)`. Or better yet, do `return cube(b)`.
```
def cube(c):
    return c**3

def by_three(b):
    if b % 3 == 0:
        b = cube(b)
        return b  # Or simply return cube(b) and remove `b = cube(b)`
    else:
        return False
```
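A quick sanity check of the corrected functions (the assertions below are mine, using the `return cube(b)` shortcut):

```python
def cube(c):
    return c ** 3

def by_three(b):
    if b % 3 == 0:
        return cube(b)
    return False

assert by_three(9) == 729    # 9 is divisible by 3, so we get 9**3
assert by_three(4) is False  # 4 is not divisible by 3
```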
When you call the `cube` function with the argument `b`, it returns the cube of the passed argument, you need to store it in a variable and return that to the user, in your current code, you are neglecting the returned value. | I think this answer might also work:
```
def cube(b,c):
    b = c ** 3
    if b % 3 == 0:
        return b
    else:
        return False
    return b
```
I know that might be a little redundant but I think that might be another way of doing what you're trying to do. What Sukrit did I think is simpler. | (Beginner)Python functions Codeacademy | [
"",
"python",
""
] |
I have a class Population that contains several methods.
Depending on an input, I want to run the methods on an instance of the class `Population` in a given order.
To be a bit more precise, what I am trying to achieve is much the same as doing something like this:
```
stuff = input(" enter stuff ")
dico = {'stuff1':functionA, 'stuff2':functionC, 'stuff3':functionB, 'stuff4':functionD}
dico[stuff]()
```
Except that the functionA, functionB etc... are methods and not functions:
```
order_type = 'a'

class Population (object):
    def __init__(self,a):
        self.a = a
    def method1 (self):
        self.a = self.a*2
        return self
    def method2 (self):
        self.a += 2
        return self
    def method3 (self,b):
        self.a = self.a + b
        return self

if order_type=='a':
    order = {1:method1, 2:method2, 3:method3}
elif order_type=='b':
    order = {1:method2, 2:method1, 3:method3}
else :
    order = {1:method3, 2:method2, 3:method1}

my_pop = Population(3)

while iteration < 100:
    iteration +=1
    for i in range(len(order)):
        method_to_use = order[i]
        my_pop.method_to_use() # But obviously it doesn't work!
```
Hope I've made my question clear enough!
Note that one of my methods needs two arguments | Pass the instance explicitly as the first argument:
```
method_to_use = order[i]
method_to_use(my_pop)
```
Full working code:
```
order_type = 'a'

class Population (object):
    def __init__(self,a):
        self.a = a
    def method1 (self):
        self.a = self.a*2
        return self
    def method2 (self):
        self.a += 2
        return self
    def method3 (self):
        self.a = 0
        return self

if order_type=='a':
    order = [Population.method1, Population.method2, Population.method3]
elif order_type=='b':
    order = [Population.method2, Population.method1, Population.method3]
else :
    order = [Population.method3, Population.method2, Population.method1]

my_pop = Population(3)
iteration = 0  # initialise the counter before the loop
while iteration < 100:
    iteration += 1
    for method_to_use in order:
        method_to_use(my_pop)
```
If you want to pass more than one argument, simply use the `*args` syntax:
```
if order_type=='a':
    order = [Population.method1, Population.method2, Population.method3]
    arguments = [(), (), (the_argument,)]
elif order_type=='b':
    order = [Population.method2, Population.method1, Population.method3]
    arguments = [(), (), (the_argument,)]
else :
    order = [Population.method3, Population.method2, Population.method1]
    arguments = [(the_argument, ), (), ()]

my_pop = Population(3)
iteration = 0
while iteration < 100:
    iteration += 1
    for method_to_use, args in zip(order, arguments):
        method_to_use(my_pop, *args)
```
The `()` is an empty tuple, hence `*args` will expand to no additional arguments, while `(the_argument,)` is a 1-element tuple that will pass the argument to the method. | Use `getattr`:
```
order = {1:'method1', 2:'method2', 3:'method3'} #values are strings
...
method_to_use = order[i]
getattr(mypop, method_to_use)()
``` | Using a dictionary to control the flow of methods | [
"",
"python",
"methods",
"dictionary",
""
] |
I have csv files with unwanted first characters in the header row except the first column.
The while loop strips the first character from the headers and writes the new header row to a new file (exit by counter). The else statement then writes the rest of the rows to the new file. The problem is the else statement begins with the header row and writes it a second time. Is there a way to have else begin at the next row without breaking the for iterator? The actual files are 21 columns by 400,000+ rows. The unwanted character is a single space, but I used \* in the example below to make it easier to see. Thanks for any help!
file.csv =
a,\*b,\*c,\*d
1,2,3,4
```
import csv

reader = csv.reader(open('file.csv', 'rb'))
writer = csv.writer(open('file2.csv','wb'))
count = 0
for row in reader:
    while (count <= 0):
        row[1]=row[1][1:]
        row[2]=row[2][1:]
        row[3]=row[3][1:]
        writer.writerow([row[0], row[1], row[2], row[3]])
        count = count + 1
    else:
        writer.writerow([row[0], row[1], row[2], row[3]])
``` | If you only want to change the header and copy the remaining lines without change:
```
with open('file.csv', 'r') as src, open('file2.csv', 'w') as dst:
    dst.write(next(src).replace(" ", "")) # delete whitespaces from header
    dst.writelines(line for line in src)
```
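The read-the-header-once pattern also works with the `csv` module itself; here is an in-memory check using `io.StringIO` in place of the files (column names invented):

```python
import csv
import io

src = io.StringIO("a, b, c, d\r\n1,2,3,4\r\n")
dst = io.StringIO()

reader = csv.reader(src)
writer = csv.writer(dst)

header = next(reader)                          # consume the header exactly once
writer.writerow([h.strip() for h in header])   # drop the stray leading spaces
writer.writerows(reader)                       # copy the remaining rows unchanged

print(dst.getvalue().replace("\r\n", "\n"))
# a,b,c,d
# 1,2,3,4
```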
If you want to do additional transformations you can do something like [this](https://stackoverflow.com/questions/16650701/write-first-row-from-txt-file-as-a-column-in-new-txt-file/16651331#16651331) or [this](https://stackoverflow.com/a/16574900/1330293) question. | If all you want to do is remove spaces, you can use:
```
string.replace(" ", "")
``` | Python loops through CSV, but writes header row twice | [
"",
"python",
"csv",
"python-2.7",
""
] |
I'm trying to upload an image to my uploads folder on my remote server. The folder structure is always `uploads/year/month/` and I can't get paramiko to check if the folders exist and if not make them.
SSH connection is working, uploading a file is working too, but creating the subfolders in the uploads directory isn't working.
I came across what looked like the solution [here](https://stackoverflow.com/questions/14819681/upload-files-using-sftp-in-python-but-create-directories-if-path-doesnt-exist). It's the same question I have, *but* I'm on iOS and use Pythonista. Option A: my code is plain wrong or Option B it's an iOS/Pythonista specific issue.
So, the code from the other thread (linked above) defines a function and runs a try/except loop that tests whether the folders passed through it already exist and, if not, creates them. In my script below it's `# Set Definition for "mkdir -p"`.
Calling it with the `remoteFilePath`…
1. *Unnecessary because:* ideally it should only test if `datePath` exists, since the `remotePath` definitely exists
2. *Likely problematic because:* `fileName` is no path and will be put there by the next command.
I tried adjusting the script, but somehow I can't make it work.
### I get errors no matter what I try:
* with version 1: `TypeError: mkdir_p() takes exactly 2 arguments (1 given)`
* with version 2: `AttributeError: 'tuple' object has no attribute 'rfind'`
* with version 3: `Exception: unknown type for (/home/userZ/Dropbox/uploads/year/month', 'test.png') type <type 'tuple'>`
Here's a snippet of the relevant parts of the script (or a [**gist**](https://gist.github.com/pattulus/bf20b63b457a259752ed) if you prefer the look of it):
```
# Set Variables
fileName = "temp.png"
remotePath = "/home/userZ/Dropbox/uploads/"
datePath = "year/month/"
remoteFilePath = remotePath + datePath + fileName #

# Set Definition for "mkdir -p"
def mkdir_p(sftp,remote_directory):
    remote_dirname, basename = os.path.split(remote_directory)
    mkdir_p(os.path.dirname(remote_directory))
    try:
        sftp.chdir(name)
    except IOError:
        sftp.mkdir(name)
        sftp.chdir(name)

try:
    transport.connect(username = username, password = password)
    sftp = paramiko.SFTPClient.from_transport(transport) # Start SFTP client

    # Try to make remote path - 3 Versions and all fail
    mkdir_p(sftp,remoteFilePath) # Version 1
    #mkdir_p(sftp, os.path.split(remoteFilePath)) # Version 2
    #sftp.mkdir(os.path.split(remoteFilePath)) # Version 3

    # Put file to remote
    sftp.put('temp.png', remoteFilePath)

# Close connection
finally:
    transport.close()
    sftp.close()
```
Any help is appreciated. (Careful: OP = Python noob). I rely on Paramiko because my shared host only supports SFTP. Otherwise I'd have gone with FTPlib. | [CClauss](https://github.com/cclauss) got the answer and put it in the comment section of the gist linked above. I take no credit.
This was his answer - in case anyone else tries to emulate `mkdir -p` with paramiko, here you go:
> My sense is that you need to: Try to mkdir on :/home/ Try to mkdir on
> :/home/userZ/ Try to mkdir on :/home/userZ/Dropbox/ Try to mkdir on
> :/home/userZ/Dropbox/uploads/ Try to mkdir on
> :/home/userZ/Dropbox/uploads/year/ Try to mkdir on
> :/home/userZ/Dropbox/uploads/year/month/ Then cd to
> /home/userZ/Dropbox/uploads/year/month/ Then copy your file
>
> Try this...
```
# Slash '/' is hardcoded because ftp always uses slash
def mk_each_dir(sftp, inRemoteDir):
    currentDir = '/'
    for dirElement in inRemoteDir.split('/'):
        if dirElement:
            currentDir += dirElement + '/'
            print('Try to mkdir on :' + currentDir)
            try:
                sftp.mkdir(currentDir)
            except:
                pass # fail silently if remote directory already exists

# Set Variables
fileName = "temp.png"
remotePath = "/home/userZ/Dropbox/uploads/"
datePath = "year/month/"
remoteDirPath = remotePath + datePath

mk_each_dir(sftp, remoteDirPath)
sftp.chdir(remoteDirPath)
remoteFilePath = remoteDirPath + fileName
``` | You run into the same problem with FTP, you can't mkdir /some/long/path, you have to
```
cd /
mkdir some # ignore errors if it exists
cd some
mkdir long # ignore errors if it exists
cd long
mkdir path # ignore errors if it exists
cd path
```
I'm guessing that the historical reason for this model is to provide a simple boolean SUCCEED/FAIL for any command issued by the client.
Having some fuzzy error like 'Couldn't make directory */some/long/path* because directory */some/long* doesn't exist' would be a pain in the ass to handle on the client side.
In looking at the SFTP protocol spec re: filenames ( <https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filexfer-13#page-15> ), it's unclear to me if clients are expected to understand '/' as a path separator in the context of making a directory - since you could have an SFTP server on a windows box as well, paramiko may just omit support for this entirely?
tl;dr: Modify your code to split the path up into its individual components and try to chdir to each of them; if that fails, try to make the directory, and if that also fails you're dead; otherwise chdir into the next subdirectory and continue until you have no more subdirectories to create.
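The splitting-and-walking part of that advice can be tested on its own, with no server involved (the helper name is mine):

```python
def each_parent_dir(remote_dir):
    """Yield every intermediate directory of remote_dir, shallowest first."""
    current = '/'
    for part in remote_dir.split('/'):
        if part:
            current += part + '/'
            yield current

dirs = list(each_parent_dir("/home/userZ/Dropbox/uploads/year/month/"))
print(dirs[0])   # /home/
print(dirs[-1])  # /home/userZ/Dropbox/uploads/year/month/
```

Each yielded path is what you would hand to `sftp.mkdir()` / `sftp.chdir()` in turn, ignoring the error when the directory already exists.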
**EDIT**
The specific problem you're having with the line
```
mkdir_p(os.path.split(remoteFilePath))
```
can be fixed by changing it to
```
mkdir_p(sftp, os.path.split(remoteFilePath))
``` | SFTP Upload via Python and Pythonista with Paramiko. Can't create directory/subdirectory | [
"",
"python",
"sftp",
"paramiko",
"mkdir",
"pythonista",
""
] |
I'm current working through *Learning Python the Hard Way*, and it is perhaps going a little fast for me. I've input the following code, with along with the corresponding file. In the py file, I wrote:
```
#!/usr/bin/python
from sys import argv
script, filename = argv
txt = open(filename)
print "Here's your file %r:" % filename
print txt.read()
print "Type the filename again:"
file_again = raw_input("> ")
txt_again = open(file_again)
print txt_again.read()
```
And to run it, I wrote: `python script.py readme.txt` which ran the code.
However, I don't quite understand the process here:
* How come `#!/usr/bin/python` must be at the top of the file
* What is `sys import argv`
* What is `script, filename = argv`
* Is `.read()` a built in function? | 1. `#!/usr/bin/python` is the so-called Shebang, a hint to the OS kernel that it is a Python script which should be executed with the given Python binary. Thus, if you `chmod +x` the script, you can even call it with `./script.py readme.txt`.
2. `from sys import argv` is a command to import `argv` from the `sys` module directly into our namespace. So we can use `argv` to access it instead of `sys.argv`. If we only use it once, it may be better to just `import sys` and access everything inside via e.g. `sys.argv`. You'll find more about that in the Python docs and/or tutorial.
3. `script, filename = argv` is a shorthand for
```
script = argv[0]
filename = argv[1]
```
as long as `argv` contains exactly 2 elements. You'll find more about that in the Python docs and/or tutorial.
4. `file.read()` is indeed built-in, but as a file object method, not as a function as such. | Answers:
1) #!/usr/bin/python is there for UNIX users, it shows python where to find certain files. ([Why do people write #!/usr/bin/env python on the first line of a Python script?](https://stackoverflow.com/questions/2429511/why-do-people-write-usr-bin-env-python-on-the-first-line-of-a-python-script))
2) sys import argv is the file in the argument [readme.txt] (<http://www.tutorialspoint.com/python/python_command_line_arguments.htm>)
3) script, filename = argv Script and Filename [new variables] take on the value from argv.
4) Yes, .read() is a built in function. (<http://www.diveintopython.net/file_handling/file_objects.html>)
Google is your friend on this one... | Opening files with Python | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
I have this code to get followers of a twitter user:
```
followers = []
for user in tweepy.Cursor(api.followers, id=uNameInput).items():
    followers.append(user.screen_name)
```
However, if this is used on a user with many followers, the script hits the rate limit and stops. I would usually put this in a `while True:` loop with `try`/`except`/`else` and `break`, but I am unsure where it would go in this instance. | If you want to avoid the rate limit, you can/should wait before the next follower page request:
```
for user in tweepy.Cursor(api.followers, id=uNameInput).items():
    followers.append(user.screen_name)
    time.sleep(60)
```
Doesn't look beautiful, but should help.
UPD: According to the official [twitter limits](https://dev.twitter.com/docs/rate-limiting/1.1/limits), you can make only 30 requests per 15-minute interval to get `followers`.
So, you can either catch rate limit exception and wait for 15 minutes interval to end, or define a counter and make sure you don't make more than 30 requests per 15-minute gap.
Here's an example, how you can catch the tweepy exception and wait for 15 minutes before moving to the next portion of followers:
```
import time
import tweepy

auth = tweepy.OAuthHandler(..., ...)
auth.set_access_token(..., ...)

api = tweepy.API(auth)

items = tweepy.Cursor(api.followers, screen_name="gvanrossum").items()

while True:
    try:
        item = next(items)
    except tweepy.TweepError:
        time.sleep(60 * 15)
        item = next(items)
    print item
```
Not sure this is the best approach though.
UPD2: There is also another option: you can check for [rate\_limit\_status](https://dev.twitter.com/docs/api/1.1/get/application/rate_limit_status), see how much requests remain for `followers` and decide whether to wait or continue.
Hope that helps. | There's a more precise way to do this with the new rate\_limit\_status' reset attribute. Whereas @alecxe's answer forces you to wait 15 minutes each time, even if the window is much smaller, you can instead wait just the right amount of time and no longer by doing:
```
import time
import tweepy
import calendar
import datetime

auth = tweepy.OAuthHandler(..., ...)
auth.set_access_token(..., ...)

api = tweepy.API(auth)

items = tweepy.Cursor(api.followers, screen_name="gvanrossum").items()

while True:
    try:
        item = next(items)
    except tweepy.TweepError:
        # Rate limited. Checking when to try again
        rate_info = api.rate_limit_status()['resources']
        reset_time = rate_info['followers']['/followers/ids']['reset']
        cur_time = calendar.timegm(datetime.datetime.utcnow().timetuple())
        # wait the minimum time necessary plus a few seconds to be safe
        try_again_time = reset_time - cur_time + 5
        # Will try again in try_again_time seconds...
        time.sleep(try_again_time)
``` | Tweepy Hipchat API - Except rate limit? | [
"",
"python",
"twitter",
"tweepy",
""
] |
I have read a lot of strange SyntaxError questions and have not seen mine among them yet, and I am really at a loss. I am doing some homework for which the deadline is coming closer, and I can't get rid of this error:
```
def create_voting_dict():
    strlist = [voting_data[i].split() for i in range(len(voting_data))]
    return voting_dict = {strlist[h][0]:[int(strlist[h][g]) for g in range(3, len(strlist[h]))] for h in range(len(strlist))}
```
Which gets me the error:
```
return voting_dict = {strlist[h][0]:[int(strlist[h][g]) for g in range(3, len(strlist[h]))] for h in range(len(strlist))}
^
SyntaxError: invalid syntax
```
This error did not occur when I defined voting_dict inside the procedure, but I need to define it globally, so I put it after return and then I got the error. I have been counting parentheses all over, but that doesn't seem to be the problem.
I am sure that when I see the problem it will be very easy, but I just don't see it. Thanks for any help.
\*voting_data is a list of strings, and I made the procedure to split the strings and create a dictionary | You cannot put an assignment in a `return` statement (assignments do not return values). Just do
```
return {strlist[h][0]:[int(strlist[h][g]) for g in range(3, len(strlist[h]))] for h in range(len(strlist))}
```
Or define a `voting_dict` in a new statement and then `return voting_dict`.
See the example -
```
>>> def test():
        return num = 2
SyntaxError: invalid syntax
>>> def test():
        return 2
``` | Its problem with your return statement in which you cannot carry out assignments. Just do it a step before. | Strange SyntaxError: invalid syntax after defining statement in return | [
"",
"python",
"return",
"syntax-error",
""
] |
I have a table like this:
```
id description status login
XWPggD bbbbbbbb 1 js
0JIERf test1 1 js
0gd2x0 nothing NULL js
bSIUIu dev NULL bob
0BNh27 hello 1 js
2TYXjd down NULL inge
axE1m5 bobby NULL bob
1iSlQM qwe 0 js
9dPjoP descr NULL inge
```
I'm trying to sort the output by status(null values at the bottom) and then by login.
But I also want the login values, where the status is NULL to be ordered correctly, so that it looks like:
```
id description status login
XWPggD bbbbbbbb 1 js
0JIERf test1 1 js
0BNh27 hello 1 js
1iSlQM qwe 0 js
axE1m5 bobby NULL bob
bSIUIu dev NULL bob
9dPjoP descr NULL inge
2TYXjd down NULL inge
0gd2x0 nothing NULL js
```
But when I try something like:
```
SELECT id, description, status, login FROM dev
ORDER BY isnull(status) ASC, isnull(login) ASC;
```
I always get a weird login order where status values are NULL. | Why so complicated answers ...
```
SELECT id, description, status, login FROM dev
ORDER BY status IS NULL, status DESC, login IS NULL, login DESC
``` | ```
SELECT id, description, status, login FROM dev
ORDER BY (Case when status is null then -1 Else status End ) Desc,
(Case when login is null then -1 Else 1 End ) Desc
``` | Order by NULL values with mysql | [
"",
"mysql",
"sql",
"sql-order-by",
""
] |
I thought I'd learned enough Python to make a Caesar cipher, so I started making it and I've hit a brick wall.
Here is my code:
```
phrase = raw_input("Enter text to Cipher: ")
shift = int(raw_input("Please enter shift: "))
result = ("Encrypted text is: ")
for character in phrase:
    x = ord(character)
    x = x + shift
    print chr(x)
```
At the moment, if the phrase is 'hi' and shift is 1, the for loop just loops around the letter i, not the letter h, so my result is: j
I want to loop around the whole word and shift each letter by whatever the shift int variable is.
How can I loop around the phrase variable? | Your code is printing `ord()` value of `'j'` because at the end of loop character is equal to `'i'`. You should store the new characters to a list, and after the end of the loop you should join them and then print.
```
new_strs = []
for character in phrase:
    x = ord(character)
    x = x + shift
    new_strs.append(chr(x)) # store the new shifted character to the list
    # use this if you want z to shift to 'a'
    #new_strs.append(chr(x if 97 <= x <= 122 else 96 + x % 122))
print "".join(new_strs) # print the new string
```
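As an aside, the commented-out line above hints at wrap-around; here is a Python 3 sketch of that idea using modular arithmetic for lowercase letters (my own variant, not the code above):

```python
def caesar(phrase, shift):
    out = []
    for ch in phrase:
        if 'a' <= ch <= 'z':
            # rotate within the 26-letter lowercase alphabet
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)  # leave non-lowercase characters untouched
    return ''.join(out)

print(caesar('hi', 1))     # ij
print(caesar('zebra', 2))  # bgdtc
```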
**Demo:**
```
$ python so.py
Enter text to Cipher: hi
Please enter shift: 1
ij
``` | Append each encrypted character to the `result` string.
```
phrase = raw_input("Enter text to Cipher: ")
shift = int(raw_input("Please enter shift: "))
result = ""
for character in phrase:
    x = ord(character)
    result += chr(x + shift)
print result
``` | Python Caesar Cipher Using Ord and Chr | [
"",
"python",
"loops",
""
] |
I have a pandas dataframe with the following columns:
```
data = {'Date': ['01-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '04-06-2013'],
        'Time': ['23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00']}
df = pd.DataFrame(data)
Date Time
0 01-06-2013 23:00:00
1 02-06-2013 01:00:00
2 02-06-2013 21:00:00
3 02-06-2013 22:00:00
4 02-06-2013 23:00:00
5 03-06-2013 01:00:00
6 03-06-2013 21:00:00
7 03-06-2013 22:00:00
8 03-06-2013 23:00:00
9 04-06-2013 01:00:00
```
How do I combine data['Date'] & data['Time'] to get the following? Is there a way of doing it using `pd.to_datetime`?
```
Date
01-06-2013 23:00:00
02-06-2013 01:00:00
02-06-2013 21:00:00
02-06-2013 22:00:00
02-06-2013 23:00:00
03-06-2013 01:00:00
03-06-2013 21:00:00
03-06-2013 22:00:00
03-06-2013 23:00:00
04-06-2013 01:00:00
``` | It's worth mentioning that you may have been able to read this in **directly** e.g. if you were using [`read_csv`](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html) using `parse_dates=[['Date', 'Time']]`.
Assuming these are just strings, you could simply add them together (with a space), allowing you to use [`to_datetime`](https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html), which works without specifying the `format=` parameter:
```
In [11]: df['Date'] + ' ' + df['Time']
Out[11]:
0 01-06-2013 23:00:00
1 02-06-2013 01:00:00
2 02-06-2013 21:00:00
3 02-06-2013 22:00:00
4 02-06-2013 23:00:00
5 03-06-2013 01:00:00
6 03-06-2013 21:00:00
7 03-06-2013 22:00:00
8 03-06-2013 23:00:00
9 04-06-2013 01:00:00
dtype: object
In [12]: pd.to_datetime(df['Date'] + ' ' + df['Time'])
Out[12]:
0 2013-01-06 23:00:00
1 2013-02-06 01:00:00
2 2013-02-06 21:00:00
3 2013-02-06 22:00:00
4 2013-02-06 23:00:00
5 2013-03-06 01:00:00
6 2013-03-06 21:00:00
7 2013-03-06 22:00:00
8 2013-03-06 23:00:00
9 2013-04-06 01:00:00
dtype: datetime64[ns]
```
Alternatively, you can omit the `+ ' '`, but then the `format=` parameter must be used. Pandas is generally good at inferring the format when converting to a `datetime`; however, specifying the exact format is faster.
```
pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S')
```
*Note: surprisingly (for me), this works fine with NaNs being converted to NaT, but it is worth checking how the conversion handles malformed values (perhaps via the `errors='raise'` argument).*
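For reference, the same combine-then-parse step in plain Python with the stdlib `datetime` module (a sketch independent of pandas; I'm assuming day-first dates here, and the helper name `combine` is mine):

```
from datetime import datetime

def combine(date_str, time_str, fmt='%d-%m-%Y %H:%M:%S'):
    # Mirrors pd.to_datetime(df['Date'] + ' ' + df['Time'], format=...)
    return datetime.strptime(date_str + ' ' + time_str, fmt)

print(combine('01-06-2013', '23:00:00'))  # -> 2013-06-01 23:00:00
```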
## `%%timeit`
```
# sample dataframe with 10000000 rows using df from the OP
df = pd.concat([df for _ in range(1000000)]).reset_index(drop=True)
%%timeit
pd.to_datetime(df['Date'] + ' ' + df['Time'])
[result]:
1.73 s ± 10.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S')
[result]:
1.33 s ± 9.88 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
``` | The accepted answer works for columns of datatype `string`. For completeness: I came across this question when searching for how to do this when the columns have the datatypes date and time.
```
df.apply(lambda r : pd.datetime.combine(r['date_column_name'],r['time_column_name']),1)
``` | Combine Date and Time columns using pandas | [
"",
"python",
"pandas",
"datetime",
"time-series",
""
] |
I am at a complete loss here. I have two databases. One on my localhost site that I use for development and one on my remote site that I use for my live (production) site. I manage both of them through phpMyadmin. As I have been doing for months now, when I need to update the live site, I dump the related database and import the database from my localhost site.
Now, no matter what I try, I keep getting this error:
Error
SQL query:
```
--
-- Dumping data for table `oc_address_type`
--
INSERT INTO `oc_address_type` ( `address_type_id` , `address_type_name` )
VALUES ( 1, 'Billing' ) , ( 2, 'Shipping' ) ;
```
MySQL said: Documentation
> #1062 - Duplicate entry '1' for key 'PRIMARY'
I tried creating a new blank database on my localhost and importing into that, but got the same results.
Any suggestions please as I am completely down until this gets resolved.
By the way, I am completely dropping all tables and importing structure and data. This has always worked until today. | You need to dump with the drop statements. The table already exists and has data, and you're trying to insert identical rows. I'm not 100% sure about phpMyAdmin, but the dumps will have an option for "add drop table" statements. | Dump your database on localhost with "mysqldump --insert-ignore ...", then try to import with phpMyAdmin on your live machine.
Or try to connect to your live database with command line tools (configure your database to be able to connect from other hosts than "localhost" first!)
Then you can try following:
$ mysql -f -p < yourdump.sql
With `-f` ("force") you can ignore errors during importing. It's the same as adding the `--force` parameter to `mysqlimport`. | #1062 - Duplicate entry '1' for key 'PRIMARY' | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I need all entries where `curdate` is today...
`curdate` is of type `timestamp`.
Of course I searched Google for this, but I think I used the wrong keywords.
```
select * from ftt.element fee, ftt.plan p
where p.id=ftt.p_id and curdate ???
order by p_id, curdate desc;
```
Could you help me out? | `SYSDATE` returns the current date **and** time in Oracle. A `DATE` column also contains a time part in Oracle - a `TIMESTAMP` column as well.
In order to check for "today" you must remove the time part in both values:
```
select *
from ftt.element fee
join ftt.plan p on p.id = fee.p_id
where trunc(curdate) = trunc(sysdate)
order by p_id, curdate desc;
```
You should also get used to using explicit `JOIN`s instead of the implicit joins in the `WHERE` clause.
If you need to take care of different timezones (e.g. because `curdate` is a `timestamp with time zone`) then you might want to use `CURRENT_TIMESTAMP` (which includes a time zone) instead of `SYSDATE` | This answer is very similar to a_horse_with_no_name's answer. The difference is that it performs faster: making a calculation on a column before comparing will slow down performance.
```
select *
from ftt.element fee
join ftt.plan p on p.id = fee.p_id
where curdate >= trunc(sysdate)
and curdate < trunc(sysdate + 1)
order by p_id, curdate desc;
``` | SQL "Where" syntax for is today | [
"",
"sql",
"oracle",
"timestamp",
""
] |
Say I have the following as a join in a larger query aliased as 'overall'
```
INNER JOIN (SELECT
tts.typ,tts.amount,
tts.term_sale_id,
tts.term_sale_id,
tts.from_price
FROM tb_term_sale_entries tts
WHERE tts.start_date <= '#dateformat(now(),'mm/dd/yyyy')# 23:59:00'
AND tts.end_date >= '#dateformat(now(),'mm/dd/yyyy')# 23:59:00'
AND tts.style_id = overall.style_id) term_sale ON (tts.style_id = overall.style_id)
```
When SQL is handling this query, does it create the term\_sale table one time and then join it as needed, or does it create term\_sale for each row of the main query?
In the above, I have the join condition twice, once in the subquery and once outside in the join statement. My question is which is generally more efficient? | Viewing the query execution plan ([How do I obtain a Query Execution Plan?](https://stackoverflow.com/questions/7359702/how-do-i-obtain-a-query-execution-plan)) should help you determine which of the two options will be more efficient.
In this case though, I'm 99% sure that you are going to want to keep your condition inside the subquery, thereby limiting that result set, which should make the join and query more efficient. Basically, it's better to join to a smaller table / result set rather than a larger one. | Since it is treated as a subquery, the SQL engine executes term_sale once and then operates on the data set that query produces. Only the comparison part, i.e. the ON
clause, is evaluated row by row.
regards
Ashutosh Arya | Is a JOIN subquery, created once as a table or is it reevaluated each time? | [
"",
"sql",
"join",
"subquery",
""
] |
Is it possible to merge only some columns? I have a DataFrame df1 with columns x, y, z, and df2 with columns x, a, b, c, d, e, f, etc.
I want to merge the two DataFrames on x, but I only want to merge columns df2.a, df2.b - not the entire DataFrame.
The result would be a DataFrame with x, y, z, a, b.
I could merge then delete the unwanted columns, but it seems like there is a better method. | You could merge the sub-DataFrame (with just those columns):
```
df2[list('xab')] # df2 but only with columns x, a, and b
df1.merge(df2[list('xab')])
``` | You want to use TWO brackets, so if you are doing a VLOOKUP sort of action:
```
df = pd.merge(df,df2[['Key_Column','Target_Column']],on='Key_Column', how='left')
```
This will give you everything in the original df + add that one corresponding column in df2 that you want to join. | Python Pandas merge only certain columns | [
"",
"python",
"merge",
"pandas",
""
] |
In Python 2, we could reassign `True` and `False` (but not `None`), but all three (`True`, `False`, and `None`) were considered builtin variables. However, in Py3k all three were changed into keywords as per [the docs](http://docs.python.org/3.0/whatsnew/3.0.html).
From my own speculation, I could only guess that it was to prevent shenanigans like [this](https://stackoverflow.com/questions/2055029/why-cant-python-handle-true-false-values-as-i-expect) which derive from the old `True, False = False, True` prank. However, in Python 2.7.5, and perhaps before, statements such as `None = 3` which reassigned `None` raised `SyntaxError: cannot assign to None`.
Semantically, I don't believe `True`, `False`, and `None` are keywords, since they are at least semantically literals, which is how Java treats them. I checked PEP 0 (the index) and I couldn't find a PEP explaining why they were changed.
Are there performance benefits or other reasons for making them keywords as opposed to literals or special-casing them like `None` in python2? | Possibly because Python 2.6 not only allowed `True = False` but also allowed you to say funny things like:
```
__builtin__.True = False
```
which would reset `True` to `False` for the entire process. It can lead to really funny things happening:
```
>>> import __builtin__
>>> __builtin__.True = False
>>> True
False
>>> False
False
>>> __builtin__.False = True
>>> True
False
>>> False
False
```
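In Python 3 this door is closed at compile time — any assignment to these names is rejected before the code even runs (a quick sketch):

```
# In Python 3, assigning to True/False/None is a SyntaxError at compile time.
for stmt in ('True = False', 'None = 3'):
    try:
        compile(stmt, '<demo>', 'exec')
    except SyntaxError as exc:
        print(stmt, '->', type(exc).__name__)
```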
*EDIT*: As pointed out by [Mike](https://stackoverflow.com/users/77939/mike), the [Python wiki](http://wiki.python.org/moin/Python3.0) also states the following under *Core Language Changes*:
* Make True and False keywords.
+ Reason: make assignment to them impossible. | For two reasons, mainly:
1. So people don't do a `__builtin__.True = False` prank hidden in a random module (as explained by devnull).
2. Because keywords are faster than global builtins. In Python 2.x, the interpreter would have to resolve those variables' values before using them, which is a bit slower than keywords. (see [Why is if True slower than if 1?](https://stackoverflow.com/questions/18123965/why-if-true-is-slower-than-if-1)) | Why were True and False changed to keywords in Python 3 | [
"",
"python",
"keyword",
""
] |
I want all **CREATE** statements of MySQL tables in one query result.
For example, **INFORMATION_SCHEMA** contains all *table names, comments*, etc., but where are the **CREATE** statements stored in MySQL? Can they be retrieved in one query for all tables?
Currently I am retrieving the **TABLE** DDL as below for one table. I have hundreds of tables, so repeating the same for each one is a time-consuming process:
```
show create table row_format;
``` | ```
mysqldump -h localhost -u root -p --no-data --compact some_db
mysqldump -d --compact --compatible=mysql323 ${dbname}|egrep -v "(^SET|^/\*\!)"
```
[How do I use mysqldump to export only the CREATE TABLE commands](https://stackoverflow.com/questions/1842076/how-do-i-use-mysqldump-to-export-only-the-create-table-commands) | I needed this today, and [the `mysqldump` answer](https://stackoverflow.com/a/18050973/2223027) generated a .sql file that I could not import: it declared full `CREATE TABLE` statements with foreign keys pointing to tables it had not declared yet.
Instead, I used [Liquibase](https://docs.liquibase.com/):
1. Download MySQL Connector/J (pick Platform Independent): <https://dev.mysql.com/downloads/connector/j/>
2. From the directory where you downloaded the connector jar, create a `liquibase.properties` file and edit it like so:
```
outputDefaultSchema=true
includeSchema=true
driver=com.mysql.cj.jdbc.Driver
url=jdbc:mysql://localhost:3306/replace_with_your_db
username=replace with your MySQL user
password=replace with your MySQL password
classpath=full path of the MySQL connector, for instance C:\mysql-connector-java-8.0.21.jar
```
3. [Install Liquibase](https://www.liquibase.org/download) then run:
```
liquibase --changeLogFile=schema.mysql.sql generateChangeLog
```
This will generate SQL commands for your schema, all in the right order.
For instance:
```
CREATE TABLE address (id BIGINT AUTO_INCREMENT NOT NULL, line VARCHAR(128) NULL, owner_id BIGINT NOT NULL);
CREATE TABLE person (id BIGINT AUTO_INCREMENT NOT NULL, name VARCHAR(64) NOT NULL);
ALTER TABLE address ADD CONSTRAINT fk_address_person FOREIGN KEY (owner_id) REFERENCES person (id) ON UPDATE RESTRICT ON DELETE RESTRICT;
``` | Generate CREATE statements for all MySql Tables | [
"",
"mysql",
"sql",
"information-schema",
""
] |
I want to replicate boolean `NA` values as they behave in R:
> NA is a valid logical object. Where a component of x or y is NA, the result will be NA if the outcome is ambiguous. In other words NA & TRUE evaluates to NA, but NA & FALSE evaluates to FALSE.
> <http://stat.ethz.ch/R-manual/R-devel/library/base/html/Logic.html>
I have seen `None` being recommended for missing values, but Python converts `None` to `False` when evaluating boolean expressions, and computes `None or False` to `False`. The result should of course have been `None`, as no conclusions can be made given the missing value.
How do I achieve this in Python?
**EDIT** The accepted answer computes correctly with bitwise boolean operators, but achieving the same behavior with the logical operators `not`, `or` and `and` seems to require a change to the Python language itself. | As others have said, you can define your own class.
```
class NA_(object):
instance = None # Singleton (so `val is NA` will work)
def __new__(self):
if NA_.instance is None:
NA_.instance = super(NA_, self).__new__(self)
return NA_.instance
def __str__(self): return "NA"
def __repr__(self): return "NA_()"
def __and__(self, other):
if self is other or other:
return self
else:
return other
__rand__ = __and__
def __or__(self, other):
if self is other or other:
return other
else:
return self
__ror__ = __or__
def __xor__(self, other):
return self
__rxor__ = __xor__
def __eq__(self, other):
return self is other
__req__ = __eq__
def __nonzero__(self):
raise TypeError("bool(NA) is undefined.")
NA = NA_()
```
Use:
```
>>> print NA & NA
NA
>>> print NA & True
NA
>>> print NA & False
False
>>> print NA | True
True
>>> print NA | False
NA
>>> print NA | NA
NA
>>> print NA ^ True
NA
>>> print NA ^ NA
NA
>>> if NA: print 3
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 28, in __nonzero__
TypeError: bool(NA) is undefined.
>>> if NA & False: print 3
...
>>>
>>> if NA | True: print 3
...
3
>>>
``` | You can do this by creating a class and overriding the boolean operation methods.
```
>>> class NA_type(object):
def __and__(self,other):
if other == True:
return self
else:
return False
def __str__(self):
return 'NA'
>>>
>>> NA = NA_type()
>>> print NA & True
NA
>>> print NA & False
False
``` | How to do boolean algebra on missing values? | [
"",
"python",
"boolean-expression",
""
] |
I create a LabelFrame widget. It has a nice size in the beginning:
```
import Tkinter
form = Tkinter.Tk()
errorArea = Tkinter.LabelFrame(form, text=" Errors ", width=250, height=80)
errorArea.grid(row=2, column=0, columnspan=2, sticky="E", \
padx=5, pady=0, ipadx=0, ipady=0)
```

But when I insert an empty string in it, the `errorArea` widget's size adjusts according to the inserted string:
```
errorMessage = Tkinter.Label(errorArea, text="")
errorMessage.grid(row=0, column=0, padx=5, pady=2, sticky='W')
```

How do I give the `errorArea` widget a fixed size, so that its size won't change according to the Label inserted in it? | That problem has always been interesting to me. One way I found to fix it is by using the `place` method instead of `grid`:
```
import Tkinter
form = Tkinter.Tk()
errorArea = Tkinter.LabelFrame(form, text=" Errors ", width=250, height=80)
errorArea.grid(row=2, column=0, columnspan=2, sticky="E", \
padx=5, pady=0, ipadx=0, ipady=0)
errorMessage = Tkinter.Label(errorArea, text="")
# 1) 'x' and 'y' are the x and y coordinates inside 'errorArea'
# 2) 'place' uses 'anchor' instead of 'sticky'
# 3) There is no need for 'padx' and 'pady' with 'place'
# since you can specify the exact coordinates
errorMessage.place(x=10, y=10, anchor="w")
form.mainloop()
```
With this, the label is placed in the window without shrinking the labelframe. | If you use a sticky value that sticks the widget to all four sides of its cell rather than just one side, it won't shrink when you put a small label widget in it.
Another option is to call `errorArea.grid_propagate(False)`, which tells the grid area not to shrink or expand to fit its contents. This will often result in undesirable resize behavior, or at least require you to do a little extra work to get the right resize behavior. | How to prevent Tkinter labelframe size changes when an empty label is inserted | [
"",
"python",
"widget",
"tkinter",
"size",
""
] |
I don't know why I can't find it, but I want to replace the special character `'\'` in Python.
I have a string that contains `'\'` characters, but I couldn't find the solution to replace them with `'-'`.
This is what happens when I try the replacement:
```
>>> x = 'hello\world'
>>> x
'hello\\world'
>>> x.replace('\', '-')
File "<stdin>", line 1
x.replace('\', '-')
SyntaxError: EOL while scanning string literal
```
**EDIT:**
Do try this in the Eclipse IDE:
```
x = 'hello\world'
print x
x.replace('\\', '-')
print x
```
Output:
```
hello\world
hello\world
``` | You need to escape it with another backslash:
```
x.replace('\\', '-')
```
Backslashes are special, in that they are used to introduce non-printing characters like newlines into a string.
It's also how you add a `'` character to a `'`-quoted string, which is what Python thinks you were trying to do. It sees `\'` and interprets it as a literal quote within the string, rather than letting the `'` end the string. Then it gets to the end of the line and finds `EOL` ("end of line") before the end of the string.
To introduce a *real* backslash, you need to double it. You can see that Python itself did this when printing the representation of your initial string here:
```
>>> x
'hello\\world'
```
Note the double backslash.
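As a quick sanity check (a minimal sketch; `print()` works in both Python 2 and 3) — and note that `str.replace` returns a *new* string rather than modifying `x` in place, which is why the output in the question's edit was unchanged:

```
x = 'hello\\world'        # one literal backslash in the string
y = x.replace('\\', '-')  # replace() returns a NEW string; x is untouched
print(y)  # -> hello-world
print(x)  # -> hello\world
```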
You ought to use a double backslash when specifying your string in the first place. The reason your original string doesn't need it is that `\w` is not a special escape sequence, so it gets interpreted as a literal backslash and a `w`. Had you said `'Hello\now'` you would have a string with a newline in it.
You could have also marked the string as a "raw" string by prepending it with `r` as in `r'hello\world'`. This marks the string as not being eligible for any substitutions of special characters. | According to [docs](http://docs.python.org/2/reference/lexical_analysis.html):
> The backslash (\) character is used to escape characters that
> otherwise have a special meaning, such as newline, backslash itself,
> or the quote character.
You need to escape the backslash with another backslash:
```
x.replace('\\', '-')
``` | Replace '\' character in python | [
"",
"python",
"string",
""
] |
I am using Python's subprocess `call()` to execute a shell command.
It works for a single command.
But what if my shell command calls a command and pipes it to another command?
I.e., how can I execute this in a Python script?
```
grep -r PASSED *.log | sort -u | wc -l
```
I am trying to use the `Popen` approach, but I always get 0 as output:
```
p1 = subprocess.Popen(("xxd -p " + filename).split(), stdout=subprocess.PIPE)
p2 = subprocess.Popen("tr -d \'\\n\'".split(), stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(("grep -c \'"+search_str + "\'").split(), stdin=p2.stdout, stdout=subprocess.PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
p2.stdout.close() # Allow p2 to receive a SIGPIPE if p3 exits.
output = p3.communicate()[0]
```
When I try the command in the shell, it returns 1:
```
xxd -p file_0_4.bin | tr -d '\n' | grep -c 'f5dfddd239'
```
I always get 0, even though I get 1 when I type the same command in the shell. | Call with the `shell=True` argument. For example,
```
import subprocess
subprocess.call('grep -r PASSED *.log | sort -u | wc -l', shell=True)
```
---
## Hard way
```
import glob
import subprocess
grep = subprocess.Popen(['grep', '-r', 'PASSED'] + glob.glob('*.log'), stdout=subprocess.PIPE)
sort = subprocess.Popen(['sort', '-u'], stdin=grep.stdout, stdout=subprocess.PIPE)
exit_status = subprocess.call(['wc', '-l'], stdin=sort.stdout)
```
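As an aside, the whole pipeline can also be reproduced in pure Python with no subprocesses at all (a sketch; the function name `count_unique_passed` is mine, and it assumes the log files fit line-by-line processing):

```
import glob

def count_unique_passed(pattern='*.log'):
    """Equivalent of: grep PASSED *.log | sort -u | wc -l"""
    matches = set()
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                if 'PASSED' in line:
                    matches.add(line)  # set keeps only unique lines
    return len(matches)
```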
See [Replacing shell pipeline](http://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline). | The other answers would work. But here's a more elegant approach, IMO, which is to use [`plumbum`](http://plumbum.readthedocs.org/).
```
from plumbum.cmd import grep, sort, wc
cmd = grep['-r']['PASSED']['*.log'] | sort['-u'] | wc['-l'] # construct the command
print cmd() # run the command
``` | How can I execute shell command with a | pipe in it | [
"",
"python",
"shell",
""
] |
I have a Python list as follows,
```
demo= {'age': 90, 'id': '12#2'}
{'age': 12, 'id': '12#3'}
{'age': 67, 'id': '12#1'}
{'age': 56, 'id': '12#2'}
{'age': 34, 'id': '12#2'}
```
How can I sort this list by the `id` key?
I have tried
```
sorted(demo, key=lambda x: x.id) # sort by id
```
but it failed.
Expected output as follows:
```
{'age': 90, 'id': '12#2'}
{'age': 56, 'id': '12#2'}
{'age': 34, 'id': '12#2'}
{'age': 12, 'id': '12#3'}
{'age': 67, 'id': '12#1'}
``` | Your code fails with an `AttributeError` because you are trying an attribute lookup of `id` on a `dict` object, which doesn't support it. You need to access the desired dictionary key:
```
sorted(demo, key=lambda x: x['id'])
```
However, that will fail with a `KeyError` if at least one entry in the list does not have the `id` key. In that case, you can use:
```
sorted(demo, key=lambda x: x.get("id"))
```
Optionally you can use a default value in the `get`, if you wish to put all the entries with no `id` above or below the rest. In this case, the following would send entries with no `id` to the bottom:
```
sorted(demo, key=lambda x: x.get("id", "99"))
```
---
It may also happen that you have an `id` like `12#10` and you don't want it to be between `12#1` and `12#2`. To solve that problem, you need to split the `id` and have a more complex sorting function.
```
def get_values(item):
return [int(x) for x in item['id'].split('#')]
def compare(a, b):
a = get_values(a)
b = get_values(b)
if not a[0] == b[0]:
return a[0] - b[0]
return a[1] - b[1]
```
Then you call `sorted` using that comparison function:
```
sorted(demo, cmp=compare)
```
Or in Python 3, where `cmp` has been eliminated:
```
from functools import cmp_to_key
sorted(demo, key=cmp_to_key(compare))
``` | If `demo` is the list (note the brackets and commas)
```
demo= [{'age': 90, 'id': '12#2'},
{'age': 12, 'id': '12#3'},
{'age': 67, 'id': '12#1'},
{'age': 56, 'id': '12#2'},
{'age': 34, 'id': '12#2'}]
```
Then you could sort it by `id` with:
```
sorted(demo, key=lambda x: x['id'])
```
For example:
```
In [5]: sorted(demo, key=lambda x: x['id'])
Out[5]:
[{'age': 67, 'id': '12#1'},
{'age': 90, 'id': '12#2'},
{'age': 56, 'id': '12#2'},
{'age': 34, 'id': '12#2'},
{'age': 12, 'id': '12#3'}]
``` | How to custom sort in a Python list? | [
"",
"python",
""
] |
Before you mark this as a duplicate, please take a look at this [SQLFiddle](http://sqlfiddle.com/#!2/336d6/1).
I have this schema:
```
CREATE TABLE book(book_id int,
book_name varchar(100),
author_id int,
editor_id varchar(100),
isbn varchar(100));
INSERT INTO book
VALUES
(1 , 'Book1 Title' , 12 , 'Editor1' , '8000-9000' ),
(2 , 'Book2 Title' , 98 , 'Editor1' , '8000-9001' ),
(1 , 'Book1 Title' , 12 , 'Editor1' , '8000-9002' ),
(3 , 'Book3 Title' , 3 , 'Editor1' , '8000-9003' );
CREATE TABLE author(author_id int,
fn varchar(100),
ln varchar(100));
INSERT INTO author
VALUES
(12, 'name1','lname1'),
(98,'name2','lname2'),
(3,'name3','lname3');
```
The sub-query:
```
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
```
has a result:
```
| AUTHOR_ID | BOOK_COUNT |
--------------------------
| 3 | 1 |
| 12 | 2 |
| 98 | 1 |
```
Now, the tricky part here is the result of this query:
```
SELECT MAX(book_count),a.* FROM
author a,(
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
) b
where a.author_id = b.author_id
```
is this:
```
| MAX(BOOK_COUNT) | AUTHOR_ID | FN | LN |
------------------------------------------------
| 2 | 3 | name3 | lname3 |
```
which should be like this:
```
| MAX(BOOK_COUNT) | AUTHOR_ID | FN | LN |
------------------------------------------------
| 2 | 12 | name1 | lname1 |
```
What do you think is wrong with the query? | Instead of `MAX()` you can simply use `LIMIT` to achieve the same result. Also, use an explicit `JOIN` instead.
```
SELECT book_count,a.author_id,a.fn, a.ln
FROM author a
JOIN
(
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
) b
ON a.author_id = b.author_id
ORDER BY book_count DESC LIMIT 1
```
Output:
```
| BOOK_COUNT | AUTHOR_ID | FN | LN |
-------------------------------------------
| 2 | 12 | name1 | lname1 |
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/336d6/16)
---
**Edit:**
If you want to use `MAX()` for that, you have to use sub-query like this:
```
SELECT book_count,a.author_id,a.fn, a.ln
FROM author a
JOIN
(
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
) b
ON a.author_id = b.author_id
WHERE book_count =
(SELECT MAX(book_count)
FROM
(
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
) b )
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/336d6/40)
---
**Edit2:**
Instead of using `LIMIT` in outer query you can simply use it in inner query too:
```
SELECT book_count,a.author_id,a.fn, a.ln
FROM author a
JOIN
(
SELECT c.author_id,COUNT(*) book_count FROM book c
GROUP BY c.author_id
ORDER BY COUNT(*) DESC LIMIT 1
) b
ON a.author_id = b.author_id
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/336d6/51) | In fact, MySQL lacks support for this part of the SQL standard: it allows the use of aggregate functions without a GROUP BY clause and returns indeterminate data in the result. You should avoid using aggregates that way.
EDIT:
I mean, for example in MySQL you can execute query like this:
```
SELECT
MAX(a), b, c
FROM
table
GROUP BY
b;
```
This returns indeterminate data in the `c` column, and that's terribly wrong. | MySQL: Select MAX() from sub-query with COUNT() | [
"",
"mysql",
"sql",
"aggregate-functions",
""
] |
My table will contain the following values. **h1,h2,h3** are varchar fields with size 1
```
**Register Date Year Batch h1 h2 h3**
1138M0321 02-08-2013 3 1 A A NULL
1138M0323 02-08-2013 3 1 P P NULL
1138M0324 02-08-2013 3 1 P P NULL
1138M0325 02-08-2013 3 1 P P NULL
```
I need to **update** one of these fields **(h1/h2/h3)** with NULL. But I can only add **""** and not actually **NULL**.
How can I update the table with NULL? | The `NULL` value must be SQL, not a .NET value, so instead of trying things like:
```
"... SET h1 = " & NULL & " ... "
```
Simply use this in the query:
```
"... SET h1 = NULL ... "
```
Note that:
> Null values cannot be used for information that is required to distinguish one row in a table from another row in a table, such as primary keys.
See [MSDN Documentation](http://msdn.microsoft.com/en-us/library/ms191504%28v=sql.105%29.aspx). | Why use VB Nulls at all ...
```
UPDATE student_attendance_table SET h1 = NULL WHERE...
``` | Replacing a value with NULL in a table in SQL Server using vb.net | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
I am having some trouble figuring out how to do relative imports in Python. I am currently working on my first major project so I want to do it right using unit tests. However, I am having trouble with my file structure and relative imports.
Here is my current structure:
```
App/
__init__.py
src/
__init__.py
person.py
tests/
__init__.py
person_tests.py
```
What I want to do is be able to import person.py into person\_tests.py for the unit tests. I have attempted the following:
```
from . import person
from .. import person
from .App.src import person
from ..App.src import person
from ..src.person import *
from ..src import person
from .src import person
```
Every one of the above throws either a syntax error or
```
ValueError: Attempted relative import in non-package
```
Can someone please clarify this for me?
**Edit**: Python version is 2.7.
**Edit**: I would like to be able to use this with, say, unittest or nose. | You could append it to `sys.path`:
```
import sys
sys.path.append("../src")
import person
``` | My guess (and I'll delete this if I'm wrong) is that you're trying to use `person_tests.py` as a top-level script, rather than as a module inside a package, by doing something like this:
```
$ cd App/tests
$ python person_tests.py
```
In that case, `person_tests` does not end up as `App.tests.person_tests`, but as just `__main__` (or, with minor variations, as the top-level `person_tests`, which has the same basic issues). So, `..` does not refer to `App`, and therefore there is no way to get to `person` as a relative import.
More generally, nothing on `PYTHONPATH`, including `.`, should ever be in the middle of any package directory, or things will get broken.
The right answer is to not do that. Do something like this:
```
$ python -m App.tests.person_tests
```
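To see this working end to end, here is a runnable sketch that recreates the question's layout in a temporary directory and runs the test module with `-m` (the module contents are stand-ins of my own; shown with Python 3 syntax, though the `-m` mechanism is the same in 2.7):

```
import os
import subprocess
import sys
import tempfile

# Recreate App/src + App/tests from the question, then run via -m.
root = tempfile.mkdtemp()
for pkg in ('App', 'App/src', 'App/tests'):
    os.makedirs(os.path.join(root, pkg), exist_ok=True)
    open(os.path.join(root, pkg, '__init__.py'), 'w').close()
with open(os.path.join(root, 'App', 'src', 'person.py'), 'w') as f:
    f.write("NAME = 'person'\n")
with open(os.path.join(root, 'App', 'tests', 'person_tests.py'), 'w') as f:
    f.write("from ..src import person\nprint(person.NAME)\n")

# Run from the directory ABOVE the package so App is importable.
result = subprocess.run([sys.executable, '-m', 'App.tests.person_tests'],
                        cwd=root, capture_output=True, text=True)
print(result.stdout.strip())  # -> person
```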
Or write a top-level (outside the package) script that imports the module, and run that top-level script. | Relative Import Confusion in Python | [
"",
"python",
"import",
"importerror",
""
] |
I'm looking for a list of strings *and their variations* within a very large string.
What I want to do is find even the *implicit* matches between two strings.
For example, if my start string is `foo-bar`, I want the matching to find `Foo-bAr` `foo Bar`, or *even* ***`foo(bar...`.*** Of course, `foo-bar` should also return a match.
---
**EDIT:** More specifically, I need the following matches.
1. The string itself, case insensitive.
2. The string with spaces separating any of the characters
3. The string with parentheses separating any of the characters.
---
***How do I write an expression to meet these conditions?***
I realize this might require some tricky regex. The thing is, I have a large list of strings I need to search for, and I feel regex is just the tool for making this as robust as I need.
Perhaps regex isn't the best solution?
Thanks for your help guys. I'm still learning to think in regex. | ```
>>> def findString(inputStr, targetStr):
... if convertToStringSoup(targetStr).find(convertToStringSoup(inputStr)) != -1:
... return True
... return False
...
>>> def convertToStringSoup(testStr):
... testStr = testStr.lower()
... testStr = testStr.replace(" ", "")
... testStr = testStr.replace("(", "")
... testStr = testStr.replace(")", "")
... return testStr
...
>>>
>>> findString("hello", "hello")
True
>>> findString("hello", "hello1")
True
>>> findString("hello", "hell!o1")
False
>>> findString("hello", "hell( o)1")
True
```
should work according to your specification. Obviously, could be optimized. You're asking about regex, which I'm thinking hard about, and will hopefully edit this question soon with something good. If this isn't too slow, though, regexps can be miserable, and readable is often better!
I noticed that you're repeatedly looking in the same big haystack. Obviously, you only have to convert that to "string soup" once!
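If a regex is still wanted, one hedged sketch is to join the needle's characters with a character class that permits runs of spaces and parentheses between them (the helper name `fuzzy_pattern` is my own):

```
import re

def fuzzy_pattern(needle):
    # Allow any run of spaces/parens between consecutive characters,
    # and ignore case -- covering the three rules in the question.
    sep = r'[ ()]*'
    return re.compile(sep.join(re.escape(c) for c in needle), re.IGNORECASE)

print(bool(fuzzy_pattern('hello').search('hell( o)1')))  # -> True
print(bool(fuzzy_pattern('hello').search('hell!o1')))    # -> False
```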
Edit: I've been thinking about regex, and any regex you do would either need to have many clauses or the text would have to be modified pre-regex like I did in this answer. I haven't benchmarked string.find() vs re.find(), but I imagine the former would be faster in this case. | I'm going to assume that your rules are right, and your examples are wrong, mainly since you added the rules later, as a clarification, after a bunch of questions. So:
> EDIT: More specifically, I need the following matches.
>
> 1. The string itself, case insensitive.
> 2. The string with spaces separating any of the characters
> 3. The string with parentheses separating any of the characters.
The simplest way to do this is to just remove spaces and parens, then do a case-insensitive search on the result. You don't even need regex for that. For example:
```
haystack.replace(' ', '').replace('(', '').upper().find(needle.upper())
``` | A more powerful method than Python's find? A regex issue? | [
"",
"python",
"regex",
"string",
"find",
""
] |
I have a table with two string columns: `Name` and `Code`. `Code` is unique, but `Name` is not. Sample data:
```
Name Code
-------- ----
Jacket 15
Jeans 003
Jeans 26
```
I want to select unique rows with the smallest `Code` value, but not in terms of numeric value; rather, the length of the string. Of course this does not work:
```
SELECT Name, Min(Code) as Code
FROM Clothes
GROUP BY Name, Code
```
The above code will return one row for Jeans like such:
```
Jeans | 003
```
That is correct, because as a number, `003` is less than `26`. But not in my application, which cares about the length of the value, not the actual value. A value with a length of three characters is greater than a value with two characters. I actually need it to return this:
```
Jeans | 26
```
Because the *length* of `26` is shorter than the *length* of `003`.
So how do I write SQL code that will select row that has the code with the minimum length, not the actual minimum value? I tried doing this:
```
SELECT Name, Min(Len(Code)) as Code
FROM Clothes
GROUP BY Name, Code
```
The above returns me only a single character so I end up with this:
```
Jeans | 2
``` | ```
;WITH cte AS
(
SELECT Name, Code, rn = ROW_NUMBER()
OVER (PARTITION BY Name ORDER BY LEN(Code))
FROM dbo.Clothes
)
SELECT Name, Code
FROM cte
WHERE rn = 1;
```
[SQLfiddle demo](http://sqlfiddle.com/#!3/d6b45/1)
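If you want to sanity-check the CTE approach locally without SQL Server, the same pattern runs under SQLite (3.25+ for window functions) via Python's `sqlite3` — note that SQLite spells `LEN` as `LENGTH`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Clothes (Name TEXT, Code TEXT)")
conn.executemany("INSERT INTO Clothes VALUES (?, ?)",
                 [("Jacket", "15"), ("Jeans", "003"), ("Jeans", "26")])

rows = conn.execute("""
    WITH cte AS (
        SELECT Name, Code,
               ROW_NUMBER() OVER (PARTITION BY Name ORDER BY LENGTH(Code)) AS rn
        FROM Clothes
    )
    SELECT Name, Code FROM cte WHERE rn = 1 ORDER BY Name
""").fetchall()
print(rows)  # [('Jacket', '15'), ('Jeans', '26')]
```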
If you have multiple values of code that share the same length, the choice will be arbitrary, so you can break the tie by adding an additional order by clause, e.g.
```
OVER (PARTITION BY Name ORDER BY LEN(Code), CONVERT(INT, Code) DESC)
```
[SQLfiddle demo](http://sqlfiddle.com/#!3/038c0/1) | Try this
```
select clothes.name, MIN(code)
from clothes
inner join
(
SELECT
Name, Min(Len(Code)) as CodeLen
FROM
clothes
GROUP BY
Name
) results
on clothes.name = results.name
and LEN(clothes.code) = results.CodeLen
group by clothes.name
``` | Removing duplicate rows by selecting only those with minimum length value | [
"",
"sql",
"sql-server",
""
] |
I have a 2D list, for example `mylist =[[1,2,3],[4,5,6],[7,8,9]]`.
Is there any way I can use `len()` function such that I can calculate the lengths of array indices? For example:
```
len(mylist[0:3])
len(mylist[1:3])
len(mylist[0:1])
```
Should give:
```
9
6
3
``` | `length = sum([len(arr) for arr in mylist])`
```
sum([len(arr) for arr in mylist[0:3]]) = 9
sum([len(arr) for arr in mylist[1:3]]) = 6
sum([len(arr) for arr in mylist[2:3]]) = 3
```
Sum the length of each list in `mylist` to get the length of all elements.
This will only work correctly if the list is 2D. If some elements of `mylist` are not lists, who knows what will happen...
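If the caveat above matters — i.e., some elements might not be lists — one defensive variant (just a sketch) counts any non-list element as 1:

```python
def total_len(items):
    # len() for list elements, 1 for anything else.
    return sum(len(x) if isinstance(x, list) else 1 for x in items)

mylist = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(total_len(mylist[0:3]))  # 9
print(total_len(mylist[1:3]))  # 6
```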
Additionally, you could bind this to a function:
```
len2 = lambda l: sum([len(x) for x in l])
len2(mylist[0:3]) = 9
len2(mylist[1:3]) = 6
len2(mylist[2:3]) = 3
``` | You can flatten the list, then call `len` on it:
```
>>> mylist=[[1,2,3],[4,5,6],[7,8,9]]
>>> import collections
>>> def flatten(l):
... for el in l:
... if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
... for sub in flatten(el):
... yield sub
... else:
... yield el
...
>>> len(list(flatten(mylist)))
9
>>> len(list(flatten(mylist[1:3])))
6
>>> len(list(flatten(mylist[0:1])))
3
``` | Length of 2d list in python | [
"",
"python",
"python-2.7",
""
] |
I am reading the book "*Python Programming for the Absolute Beginner (3rd edition)*". I am in the chapter introducing custom modules and I believe this may be an error in the coding in the book, because I have checked it 5 or 6 times and matched it exactly.
First we have a custom module *games.py*
```
class Player(object):
""" A player for a game. """
def __init__(self, name, score = 0):
self.name = name
self.score = score
def __str__(self):
rep = self.name + ":\t" + str(self.score)
return rep
def ask_yes_no(question):
""" Ask a yes or no question. """
response = None
while response not in ("y", "n"):
response = input(question).lower()
return response
def ask_number(question, low, high):
""" Ask for a number within a range """
response = None
while response not in range (low, high):
response = int(input(question))
return response
if __name__ == "__main__":
print("You ran this module directly (and did not 'import' it).")
input("\n\nPress the enter key to exit.")
```
And now the *SimpleGame.py*
```
import games, random
print("Welcome to the world's simplest game!\n")
again = None
while again != "n":
players = []
num = games.ask_number(question = "How many players? (2 - 5): ", low = 2, high = 5)
for i in range(num):
name = input("Player name: ")
score = random.randrange(100) + 1
player = games.Player(name, score)
players.append(player)
print("\nHere are the game results:")
for player in players:
print(player)
again = games.ask_yes_no("\nDo you want to play again? (y/n): ")
input("\n\nPress the enter key to exit.")
```
So this is exactly how the code appears in the book. When I run the program I get the error `IndentationError` at `for i in range(num):`. I expected this would happen so I changed it and removed 1 tab or 4 spaces in front of each line from `for i in range(num)` to `again = games.ask_yes_no("\nDo you want to play again? (y/n): ")`.
After this the output is `"Welcome to the world's simplest game!"` and that's it.
**I was wondering if someone could let me know why this is happening?**
Also, the `import games` module, is recognized in Eclipse after I added the path to PYTHONPATH. | I actually have this book myself. And yes, it is a typo. Here is how to fix it:
```
# SimpleGame.py
import games, random
print("Welcome to the world's simplest game!\n")
again = None
while again != "n":
players = []
num = games.ask_number(question = "How many players? (2 - 5): ", low = 2, high = 5)
for i in range(num):
name = input("Player name: ")
score = random.randrange(100) + 1
player = games.Player(name, score)
players.append(player)
print("\nHere are the game results:")
for player in players:
print(player)
again = games.ask_yes_no("\nDo you want to play again? (y/n): ")
input("\n\nPress the enter key to exit.")
```
All I did was indent `num` 4 spaces and lined it up with the first for-loop. | You have an infinite loop here:
```
again = None
while again != "n":
players = []
``` | Python custom modules - error with example code | [
"",
"python",
"python-3.x",
"module",
""
] |
First, all relevant code
**main.py**
```
import string
import app
group1=[ "spc", "bspc",",","."]#letters, space, backspace(spans mult layers)
# add in letters one at a time
for s in string.ascii_lowercase:
group1.append(s)
group2=[0,1,2,3,4,5,6,7,8,9, "tab ","ent","lAR" ,"rAR" , "uAR", "dAR"]
group3= []
for s in string.punctuation:
group3.append(s)#punc(spans mult layers)
group4=["copy","cut","paste","save","print","cmdW","quit","alf","sWDW"] #kb shortcut
masterGroup=[group1,group2,group3,group4]
myApp =app({"testFKey":[3,2,2]})
```
**app.py**
```
import tkinter as tk
import static_keys
import dynamic_keys
import key_labels
class app(tk.Frame):
def __init__(inputDict,self, master=None,):
tk.Frame.__init__(self, master)
self.grid(sticky=tk.N+tk.S+tk.E+tk.W)
self.createWidgets(self, inputDict)
def createWidgets(self,inDict):
top=self.winfo_toplevel()
top.rowconfigure(0, weight=1)
top.columnconfigure(0, weight=1)
self.rowconfigure(0, weight=1)
self.columnconfigure(0, weight=1)
tempDict = {}
for k,v in inDict.items():
if 1<=v[0]<=3:
tempDict[k] = static_keys(*v[1:])
elif v[0] ==4:
tempDict[k] = dynamic_keys(k,*v[1:])
elif v[0]==5:
tempDict[k] = key_labels(*v[1:])
for o in tempDict:
tempDict[o].grid()
return tempDict
```
**static\_keys.py**
```
import tkinter
class static_keys(tkinter.Label):
"""class for all keys that just are initiated then do nothing
there are 3 options
1= modifier (shift etc)
2 = layer
3 = fkey, eject/esc"""
def __init__(t,selector,r,c,parent,self ):
if selector == 1:
tkinter.Label.__init__(master=parent, row=r, column=c, text= t, bg ='#676731')
if selector == 2:
tkinter.Label.__init__(master=parent, row=r, column=c, text= t, bg ='#1A6837')
if selector == 3:
tkinter.Label.__init__(master=parent, row=r, column=c, text= t, bg ='#6B6966')
```
Now for a description of the problem. When I run `main.py` in python3, I get the error
```
File "Desktop/kblMaker/main.py", line 13, in <module>
myApp =app({"testFKey":[3,2,2]})
TypeError: 'module' object is not callable
``` | You have a module named `app` that contains a class named `app`. If you just do `import app` in main.py then `app` will refer to the module, and `app.app` will refer to the class. Here are a couple of options:
* Leave your import statement alone, and use `myApp = app.app({"testFKey":[3,2,2]})` inside of main.py
* Replace `import app` with `from app import app`, now `app` will refer to the class and `myApp = app({"testFKey":[3,2,2]})` will work fine | In `main.py` change second line to:
```
from app import app
```
The issue is that you have an `app` module and an `app` class within it. But you are importing the module, not the class from it:
```
myApp = app({"testFKey": [3, 2, 2]})
```
(you can also instead replace "`app`" inside line above into "`app.app`") | Why am I getting 'module' object is not callable in python 3? | [
"",
"python",
"python-3.x",
"tkinter",
"runtime-error",
""
] |
```
SELECT
Stockmain.VRNOA,
item.description as item_description,
party.name as party_name,
stockmain.vrdate,
stockdetail.qty,
stockdetail.rate,
stockdetail.amount,
ROW_NUMBER() OVER (ORDER BY VRDATE) AS RowNum
FROM StockMain
INNER JOIN StockDetail
ON StockMain.stid = StockDetail.stid
INNER JOIN party
ON party.party_id = stockmain.party_id
INNER JOIN item
ON item.item_id = stockdetail.item_id
WHERE stockmain.etype='purchase' AND RowNum BETWEEN 1 and (1 + 100)
ORDER BY VRDATE DESC
```
I am trying to execute this query on SQL Server and it keeps giving this error
> Invalid column name 'RowNum'.
Can anyone please have a look and tell me what I'm doing wrong here?
**Update**
The query I was originally using was
```
SELECT
*
FROM (
SELECT
Stockmain.VRNOA,
item.description as item_description,
party.name as party_name,
stockmain.vrdate,
stockdetail.qty,
stockdetail.rate,
stockdetail.amount,
ROW_NUMBER() OVER (ORDER BY VRDATE DESC) AS RowNum --< ORDER BY
FROM StockMain
INNER JOIN StockDetail
ON StockMain.stid = StockDetail.stid
INNER JOIN party
ON party.party_id = stockmain.party_id
INNER JOIN item
ON item.item_id = stockdetail.item_id
WHERE stockmain.etype='purchase'
) AS MyDerivedTable
WHERE
MyDerivedTable.RowNum BETWEEN 1 and 5
```
but the data in the table is immense and using this query slows down the retrieval process, so I modified the query as above, and now I'm getting this error of an undefined column. | It looks like you're trying to perform a paging operation, or at least a query that is greatly simplified using the new paging operators in SQL Server 2012, OFFSET and FETCH:
```
SELECT
Stockmain.VRNOA,
item.description as item_description,
party.name as party_name,
stockmain.vrdate,
stockdetail.qty,
stockdetail.rate,
stockdetail.amount
FROM StockMain
INNER JOIN StockDetail
ON StockMain.stid = StockDetail.stid
INNER JOIN party
ON party.party_id = stockmain.party_id
INNER JOIN item
ON item.item_id = stockdetail.item_id
WHERE stockmain.etype='purchase'
ORDER BY VRDATE DESC
OFFSET 0 ROWS
FETCH NEXT 100 ROWS ONLY
```
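OFFSET/FETCH is SQL Server 2012+ syntax. As a rough local illustration of the same paging idea (SQLite uses `LIMIT`/`OFFSET` instead; the toy table below is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, vrdate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"2013-01-{i:02d}") for i in range(1, 11)])

# First "page" of 3 rows, newest first -- the LIMIT/OFFSET equivalent
# of ORDER BY ... OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY.
page = conn.execute(
    "SELECT id FROM t ORDER BY vrdate DESC LIMIT 3 OFFSET 0").fetchall()
print(page)  # [(10,), (9,), (8,)]
```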
For more information, please see the following: <http://www.dbadiaries.com/new-t-sql-features-in-sql-server-2012-offset-and-fetch> | Put it in a subquery, the `ROW_NUMBER()` can't be used in the `WHERE` clause, not to mention you can't use aliases created in the `SELECT` list in the `WHERE` clause:
```
SELECT *
FROM (SELECT
Stockmain.VRNOA,
item.description as item_description,
party.name as party_name,
stockmain.vrdate,
stockdetail.qty,
stockdetail.rate,
stockdetail.amount,
ROW_NUMBER() OVER (ORDER BY VRDATE) AS RowNum
FROM StockMain
INNER JOIN StockDetail
ON StockMain.stid = StockDetail.stid
INNER JOIN party
ON party.party_id = stockmain.party_id
INNER JOIN item
ON item.item_id = stockdetail.item_id
WHERE etype='purchase'
)sub
WHERE RowNum BETWEEN 1 and (1 + 100)
ORDER BY VRDATE DESC
```
Update: After seeing your update, it's clear you've got a working query but are trying to optimize it. Even if you could move the `ROW_NUMBER()` inside the main query, it wouldn't improve performance; it still has to perform the intensive `ORDER` on the full data set. Indexing `VRDATE` will help. | SQL Error of Undefined Column | [
"",
"sql",
"sql-server",
"database",
""
] |
I have an application which uses Hibernate to support Oracle and MySQL databases. After an update I have to manually delete some columns with indexes/constraints on it. These indexes have Hibernate generated random names.
In Oracle I can do this:
```
ALTER TABLE table_name DROP (column_name) CASCADE CONSTRAINTS;
```
Unfortunately this isn't possible for MySQL. Is there a possibility to do something like this
```
DROP INDEX (SELECT Key_name FROM (SHOW INDEX FROM table_name WHERE Column_name = 'column_name')) ON table_name;
```
before I drop the column?
EDIT: This should work without user interaction in a SQL script. | You can select the indexes for a table from information\_schema:
```
SELECT DISTINCT INDEX_NAME, TABLE_NAME, TABLE_SCHEMA FROM information_schema.STATISTICS;
``` | To get all indexes for a particular database (replace **<Database\_Name>** with your database name), use:
```
SELECT DISTINCT INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA LIKE '<Database_Name>';
```
To get all indexes for a table (replace **<Table\_Name>** with your table name) of a particular database, use:
```
SELECT DISTINCT INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA LIKE '<Database_Name>' AND
TABLE_NAME LIKE '<Table_Name>';
```
To get all indexes of a specific column (replace **<Column\_Name>** with your column name) of a table in a particular database, use:
```
SELECT DISTINCT INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA LIKE '<Database_Name>' AND
TABLE_NAME LIKE '<Table_Name>' AND
COLUMN_NAME LIKE '<Column_Name>';
```
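Since the original question needs this to run unattended, one route (a sketch; the driver call that actually fetches and executes is omitted, and the index name below is made up) is to build the `DROP INDEX` statements in application code from the rows returned by the query above:

```python
def build_drop_statements(table, index_rows):
    """Turn (INDEX_NAME,) rows fetched from information_schema.STATISTICS
    into DROP INDEX statements, skipping the primary key."""
    seen = set()
    statements = []
    for (name,) in index_rows:
        if name != "PRIMARY" and name not in seen:
            seen.add(name)
            statements.append(f"DROP INDEX `{name}` ON `{table}`;")
    return statements

# Hypothetical Hibernate-generated index name (multi-column index
# appears once per column in STATISTICS, hence the de-duplication):
rows = [("FK4C7B8DC12AF5C",), ("FK4C7B8DC12AF5C",), ("PRIMARY",)]
print(build_drop_statements("table_name", rows))
# ['DROP INDEX `FK4C7B8DC12AF5C` ON `table_name`;']
```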
In addition to that, you may also use any Wildcard character in the LIKE operator to get specific records, like:
```
SELECT DISTINCT INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA LIKE '<Database_Name>' AND
TABLE_NAME LIKE 'tbl_prefix_%';
``` | Delete MySQL column index without knowing its name | [
"",
"mysql",
"sql",
"database",
"hibernate",
"indexing",
""
] |
I am trying to debug Python code using pdb. I have a variable called `c`, and when I type `c` to print it, pdb interprets it as the `continue` command and runs on to the next breakpoint. How can I avoid this, given that it would be very difficult to rename the variable? | Your confusion is about what the various commands in PDB do. I think of it a bit like a [MUD](https://en.wikipedia.org/wiki/MUD) and that works fairly often:
Use **p** to print out the contents of a variable (or **pp** to pretty-print (or handle your character's basic needs)):
```
(Pdb) p df
Empty DataFrame
Columns: [Dist, type, Count]
Index: []
```
Type **where** or **w** to see where you are on the stack:
```
(Pdb) w
-> return df[df['type']=='dev'][['Dist','Count']].as_matrix()
/home/user/core/ops.py(603)wrapper()
-> res = na_op(values, other)
> /home/user/core/ops.py(567)na_op()
-> raise TypeError("invalid type comparison")
```
See that little `>` arrow? That's where we are in the stack.
Use **list** or **l** to look around:
```
(Pdb) list
564 try:
565 result = getattr(x, name)(y)
566 if result is NotImplemented:
567 >> raise TypeError("invalid type comparison")
568 except (AttributeError):
569 -> result = op(x, y)
570
571 return result
572
573 def wrapper(self, other):
574 if isinstance(other, pd.Series):
```
To move around in the stack continue MUDing and use **up** (**u**) or **down** (**d**).
Use **args** (**a**) to examine what arguments the current function was called with:
```
(Pdb) args
dat = array([], shape=(0, 3), dtype=float64)
dev_classes = {81, 82, 21, 22, 23, 24, 31}
```
Use **interact** to enter the code at the current point in the stack. **Ctrl+D** brings you back into PDB. | You can tell pdb to execute `c` as a Python statement instead of a debugger command by using the `!` prefix:
```
(Pdb) !c
<value of c>
``` | Debugging Python using pdb | [
"",
"python",
"python-2.7",
"pdb",
""
] |
I'm relatively new to Django and trying to piece together standard practice for dealing with M2M relationships in a form. I already have the model and DB squared away.
For this example, I've written an app in my project for Articles, and I'm attempting to add Categories. To keep it simple an Article has a title, body, timestamp (not included in form), and Categories. I prefer checkboxes to represent 1 or more categories that an Article can belong to.
So far I have:
# models.py
```
class Category(models.Model):
category = models.CharField(max_length=100)
def __unicode__(self):
return self.category
class Article(models.Model):
title = models.CharField(max_length=200)
body = models.TextField()
pub_date = models.DateTimeField(auto_now_add=True)
category = models.ManyToManyField(Category)
def __unicode__(self):
return self.title
```
# views.py
```
def article_index(request):
return render_to_response('article_index.html', {'articles': Article.objects.all()})
def article_detail(request, article_id=1):
return render_to_response('article_detail.html', {'article': Article.objects.get(id=article_id)} )
def article_create(request):
if request.method == 'POST': # If the form has been submitted...
form = ArticleForm(request.POST) # A form bound to the POST data
if form.is_valid(): # All validation rules pass
article = Article.objects.create(
title=form.cleaned_data['title'],
body=form.cleaned_data['body'],
category=form.cleaned_data['category']
)
return redirect('article_index') # Redirect after POST
else:
form = ArticleForm() # An unbound form
return render(request, 'article_form.html', { 'form': form })
```
# forms.py
```
class ArticleForm(forms.Form):
title = forms.CharField(required=True)
body = forms.CharField(required=True, widget=forms.Textarea)
category = forms.MultipleChoiceField(Category.objects.all(), widget=forms.CheckboxSelectMultiple)
```
The two items I'm currently stuck on are:
1) in the view 'article\_create', I'm not sure how to create the category(ies) as part of the Article object. In the shell, I had to create the Article with a call to save(), then add each category after that. Do I need to do something similar here, e.g. create the article then iterate through each category? Example code is appreciated.
2) Haven't coded 'article\_edit' yet, assuming it will be highly similar to create, but I'm not sure if or how I need to handle the logic for comparing previously selected categories to the current submission. Or, should I just delete all category entries for the article being edited, and re-enter them based on the current submission? That's probably the easiest. Again, sample code for this would help.
Thanks! | Comments per file...
# models.py
```
class Category(models.Model):
category = models.CharField(max_length=100)
```
The category's name should be named `name`. A field named `category` I'd expect to be something like `models.ForeignKey("Category")`.
```
class Article(models.Model):
title = models.CharField(max_length=200)
body = models.TextField()
pub_date = models.DateTimeField(auto_now_add=True)
category = models.ManyToManyField(Category)
```
As Adam pointed out, this should be named `categories`. Further, its reverse (the field in `Category` that links back to `Article`) should be named `articles`. So we get:
```
categories = models.ManyToManyField(Category, related_name="articles")
```
So now you can get a queryset with all the articles in a category with something like:
```
get_object_or_404(Category, id=int(cat_id, 10)).articles.all()
```
# views.py
```
def article_detail(request, article_id=1):
```
Don't use a default here. There's nothing special about the ID 1, and if someone forgets the ID, it should be an error.
```
def article_create(request):
if request.method == 'POST': # If the form has been submitted...
form = ArticleForm(request.POST) # A form bound to the POST data
if form.is_valid(): # All validation rules pass
article = Article.objects.create(
title=form.cleaned_data['title'],
body=form.cleaned_data['body'],
category=form.cleaned_data['category']
)
```
By using a `ModelForm`, this is simplified to:
```
def article_create(request):
if request.method == 'POST': # If the form has been submitted...
form = ArticleForm(request.POST) # A form bound to the POST data
if form.is_valid(): # All validation rules pass
form.save()
return redirect('article_index') # Redirect after POST
else:
form = ArticleForm() # An unbound form
return render(request, 'article_form.html', {'form': form})
```
# forms.py
```
class ArticleForm(forms.Form):
```
You really should be using `ModelForm` instead ([docs here](https://docs.djangoproject.com/en/dev/topics/forms/modelforms/)):
```
class ArticleForm(forms.ModelForm):
class Meta:
model = Article
fields = ["title", "body", "category"]
widgets = {
'body': forms.Textarea(),
'category': forms.CheckboxSelectMultiple()
}
```
On to your questions:
> 1) in the view 'article\_create', I'm not sure how to create the category(ies) as part of the Article object. In the shell, I had to create the Article with a call to save(), then add each category after that. Do I need to do something similar here, e.g. create the article then iterate through each category? Example code is appreciated.
IIRC, `ModelForm.save()` will take care of this for you.
> 2) Haven't coded 'article\_edit' yet, assuming it will be highly similar to create, but I'm not sure if or how I need to handle the logic for comparing previously selected categories to the current submission. Or, should I just delete all category entries for the article being edited, and re-enter them based on the current submission? That's probably the easiest. Again, sample code for this would help.
Editing is almost exactly like creating. All you have to do is associate the original object with the form. (Typically, you figure out what the original object is from the URL.) So something like:
```
def article_edit(request, article_id):
article = get_object_or_404(Article, id=int(article_id, 10))
if request.method == 'POST': # If the form has been submitted...
form = ArticleForm(request.POST, instance=article)
if form.is_valid(): # All validation rules pass
form.save()
return redirect('article_index') # Redirect after POST
else:
form = ArticleForm(instance=article)
return render(request, 'article_form.html', {'form': form})
```
**EDIT:** As jheld comments below, you can combine `article_create` and `article_edit` into one view method:
```
def article_modify(request, article_id=None):
if article_id is not None:
article = get_object_or_404(Article, id=int(article_id, 10))
else:
article = None
if request.method == 'POST': # If the form has been submitted...
form = ArticleForm(request.POST, instance=article)
if form.is_valid(): # All validation rules pass
form.save()
return redirect('article_index') # Redirect after POST
else:
form = ArticleForm(instance=article)
return render(request, 'article_form.html', {'form': form})
```
Then the URLs are easy:
```
url(r"^/article/edit/(?P<article_id>[0-9]+)$", "app.views.article_modify", name="edit"),
url(r"^/article/new$", "app.views.article_modify", name="new"),
``` | I'd start by renaming `category` in the model to `categories`, and updating the related code accordingly - the singular naming is just going to be a continuous headache.
At that point, you're pretty close. In your success branch when submitting an article, assign the categories as a separate statement.
```
article = Article.objects.create(
title=form.cleaned_data['title'],
body=form.cleaned_data['body']
)
# note changed plural name on the m2m attr & form field
article.categories.add(*form.cleaned_data['categories'])
# alternately
# for cat in form.cleaned_data['categories']:
# article.categories.add(cat)
return redirect('article_index') # Redirect after POST
```
Oh, and, kudos on avoiding `ModelForm`. It's muuch easier to hook up the form-instance plumbing yourself, this question would be much more complicated with `ModelForm` involved.
For the edit view, yes, clear & re-adding is easiest. There are more efficient ways, but nothing that's worth the complexity until it's actually a problem. The method call to clear will be `article.categories.clear()`, re-adding is same as above. | Django Forms - Many to Many relationships | [
"",
"python",
"django",
"forms",
""
] |
I have written a mysql query to retrieve data from two tables with subquery and join.
The query works fine but I want to avoid subquery because of some performance issues.
The results should be ordered by datetime before grouping.
```
SELECT a.value1, a.value2, b.value1 FROM (SELECT * FROM A ORDER BY datetime DESC, id DESC) AS a
INNER JOIN B AS b ON b.a_id=a.id
WHERE a.value4="value"
GROUP BY b.value2, b.value3;
```
I have tried several ways to rewrite this without a subquery but still couldn't find a solution.
Is it possible to avoid a sub-query in this case? | There appears to be no purpose to your subquery in the first place. All you are doing is performing an `ORDER BY`, which will be lost the instant you join to the other table. You should be able to do this:
```
SELECT a.value1, a.value2, b.value1 FROM
A as a
INNER JOIN
B AS b
ON b.a_id=a.id
WHERE a.value4="value"
GROUP BY b.value2, b.value3;
``` | You haven't got any filter on A table, so you can directly join the table:
```
SELECT b.value2, b.value3 FROM A AS a
INNER JOIN B AS b ON b.a_id=a.id
WHERE a.value4='value'
GROUP BY b.value2, b.value3;
```
Note you have to select columns that were grouped by. | Avoid sub query | [
"",
"mysql",
"sql",
""
] |
I have two tables. A and B. I'm trying to figure out a way to add a column in my select query that returns true or false as to whether or not there exists a record in B.
```
Table A
ID Title
1 A
2 B
3 C
4 D
5 E
Table B
ID Detail
3 foo
4 foo
4 bar
4 barfood
```
I want to basically "SELECT ID, Title, (Exists?) FROM A" to return
```
ID Title Exists
1 A False
2 B False
3 C True
4 D True
5 E False
```
Table A's ID column will always be unique. Table B's ID column can have zero, one, or many relating back to table A's ID. I don't care about the detail in table B, I just want to know if there is at least one record in table B that relates to table A's ID.
I'm new to SQL and I've been searching for ways to use 'if exists' or any other way to parse this out but I'm not really finding what I'm looking for. | If you're adding a column named 'Exists' temporarily, then try this
```
select distinct a.id, a.title, case when a.id = b.id then 'True' else 'False' end as [Exists]
from A a left outer join B b
on a.id = b.id
```
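If you'd like to verify the join/CASE idea locally, here's a quick sketch using SQLite through Python (the alias is renamed to `ItExists` since `EXISTS` is a reserved word, and `DISTINCT` collapses the duplicate rows that ID 4 would otherwise produce):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (ID INTEGER, Title TEXT);
    CREATE TABLE B (ID INTEGER, Detail TEXT);
    INSERT INTO A VALUES (1,'A'),(2,'B'),(3,'C'),(4,'D'),(5,'E');
    INSERT INTO B VALUES (3,'foo'),(4,'foo'),(4,'bar'),(4,'barfood');
""")
rows = conn.execute("""
    SELECT DISTINCT a.ID, a.Title,
           CASE WHEN b.ID IS NOT NULL THEN 'True' ELSE 'False' END AS ItExists
    FROM A a LEFT OUTER JOIN B b ON a.ID = b.ID
    ORDER BY a.ID
""").fetchall()
print(rows)
# [(1, 'A', 'False'), (2, 'B', 'False'), (3, 'C', 'True'), (4, 'D', 'True'), (5, 'E', 'False')]
```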
If you've already added an Exists column to the table, then
```
select distinct a.id, a.title, [Exists] = (case when a.id = b.id then 'True' else 'False' end)
from A a left outer join B b
on a.id = b.id
``` | There are probably more efficient ways to accomplish it, but a combination of count and a case statement will do the trick:
```
select ID, Title,
case when
(select count(1) from B where ID = A.ID) = 0 then 'False'
else 'True'
end as 'Exists'
from A
```
[SQLFiddle link](http://sqlfiddle.com/#!3/81d94/1/0) | SQL alter a column if exists | [
"",
"sql",
"if-statement",
"exists",
""
] |
I'm using CXFreeze with PySide (QT). I get an error:
cx\_Freeze: Python error in main script.
myscript.py line 33, in
File ExtensionLoader\_Pyside\_QtGUI.py, line 11, in
Import Error: DLL load failed: The specified module could not be found
When running a fresh install of Windows server 2008.
I'm running the frozen EXE package (with the folder). It seems to work on my own system and other systems. What might be the issue?
After reading online, I tried to replace the Qt4Gui file, but this didn't solve the issue.
Python version is 2.7 | **I used Py2exe instead of CXFreeze and it worked perfectly.**
Also, apparently Python requires the MS Visual C++ Dependency Files:
<http://www.microsoft.com/en-us/download/details.aspx?id=29>
So any bundling needs that as well, if it's a fresh install. (Although I think they are now bundled with newer Windows versions.)
**Other Notes:**
In my experience, sometimes you should try CXFreeze, Py2EXE and PyInstaller quickly and see if one works best. As ideal as CXFreeze is re: cross platform, it just isn't going to happen perfectly.
Also, while I don't know if this was a factor, I set up a Windows 2000 Pro virtual machine and ran Py2exe on that. That was to ensure compatibility for all older Windows versions, and seemed to work well. (NOTE: Many things won't even run on Win2000 anymore so be careful that your other tools and libraries will run on it.)
Finally, be extra careful to match the bit level (32 vs 64) of all your libraries, and your Python install itself. If you have 32-bit python, ensure that your PySide, CXFreeze and any other libraries you use are 32-bit. (Or 64-bit if you're using 64-bit python.) | Based on your `Import Error: DLL load failed` it is most likely an installation issue causing the missing DLL. To figure our exactly which DLL you are missing, use <http://www.dependencywalker.com/> Run the .exe and open the .pyd file for File ExtensionLoader\_Pyside\_QtGUI.py and it will show you exactly which DLL's are missing and more importantly the locations where they should be. You can probably then track down the missing DLL online. | pySide: ExtensionLoader_Pyside_QtGUI.py specified module could not be found | [
"",
"python",
"qt",
"pyside",
"cx-freeze",
""
] |
If I have a frame like this
```
frame = pd.DataFrame({
"a": ["the cat is blue", "the sky is green", "the dog is black"]
})
```
and I want to check if any of those rows contain a certain word I just have to do this.
```
frame["b"] = (
frame.a.str.contains("dog") |
frame.a.str.contains("cat") |
frame.a.str.contains("fish")
)
```
`frame["b"]` outputs:
```
0 True
1 False
2 True
Name: b, dtype: bool
```
If I decide to make a list:
```
mylist = ["dog", "cat", "fish"]
```
How would I check that the rows contain a certain word in the list? | ```
frame = pd.DataFrame({'a' : ['the cat is blue', 'the sky is green', 'the dog is black']})
frame
a
0 the cat is blue
1 the sky is green
2 the dog is black
```
The [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html) method accepts a regular expression pattern:
```
mylist = ['dog', 'cat', 'fish']
pattern = '|'.join(mylist)
pattern
'dog|cat|fish'
frame.a.str.contains(pattern)
0 True
1 False
2 True
Name: a, dtype: bool
```
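As an aside, if all you need is case-insensitivity, `str.contains` also takes a `case` argument, which sidesteps regex flags entirely; a small sketch:

```python
import pandas as pd

frame = pd.DataFrame({'a': ['Cat Mr. Nibbles is blue',
                            'the sky is green',
                            'the dog is black']})
mylist = ['dog', 'cat', 'fish']
pattern = '|'.join(mylist)

# case=False makes the whole pattern case-insensitive.
matches = frame.a.str.contains(pattern, case=False)
print(matches.tolist())  # [True, False, True]
```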
Because regex patterns are supported, you can also embed flags:
```
frame = pd.DataFrame({'a' : ['Cat Mr. Nibbles is blue', 'the sky is green', 'the dog is black']})
frame
a
0 Cat Mr. Nibbles is blue
1 the sky is green
2 the dog is black
pattern = '(?i)' + '|'.join(mylist)  # one inline flag at the start; repeating (?i) mid-pattern is an error on Python 3.11+
pattern
'(?i)dog|cat|fish'
frame.a.str.contains(pattern)
0 True # Because of the (?i) flag, 'Cat' is also matched to 'cat'
1 False
2 True
``` | For list should work
```
print(frame[frame["a"].isin(mylist)])
```
See [`pandas.DataFrame.isin()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isin.html). | Check if a string in a Pandas DataFrame column is in a list of strings | [
"",
"python",
"python-2.7",
"pandas",
""
] |
How can I get the SQL Server server and instance name of the current connection, using a T-SQL script? | Just found the answer, in [this SO question](https://stackoverflow.com/questions/129861/how-can-i-query-the-name-of-the-current-sql-server-database-instance) (literally, inside the question, not any answer):
```
SELECT @@servername
```
returns servername\instance, as long as this is not the default instance
```
SELECT @@servicename
```
returns instance name, even if this is the default (MSSQLSERVER) | How about this:
```
EXECUTE xp_regread @rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQl',
@value_name='MSSQLSERVER'
```
This will get the instance name as well. `null` means default instance:
```
SELECT SERVERPROPERTY ('InstanceName')
```
<http://technet.microsoft.com/en-us/library/ms174396.aspx> | How to get current instance name from T-SQL | [
"",
"sql",
"sql-server",
"database",
"t-sql",
"sql-server-2008r2-express",
""
] |
The R `qchisq` function converts a p-value and number of degrees of freedom to the corresponding chi-squared value. Is there a Python library that has an equivalent?
I've looked around in SciPy without finding anything. | It's `scipy.stats.chi2.ppf` - Percent point function (inverse of cdf).
E.g., in R:
```
> qchisq(0.05,5)
[1] 1.145476
```
in Python:
```
In [8]: scipy.stats.chi2.ppf(0.05, 5)
Out[8]: 1.1454762260617695
``` | As @VadimKhotilovich points out in his answer, you can use `scipy.stats.chi2.ppf`. You can also use the function `chdtri` from `scipy.special`, but use 1-p as the argument.
R:
```
> qchisq(0.01, 7)
[1] 1.239042
> qchisq(0.05, 7)
[1] 2.16735
```
scipy:
```
In [16]: from scipy.special import chdtri
In [17]: chdtri(7, 1 - 0.01)
Out[17]: 1.2390423055679316
In [18]: chdtri(7, 1 - 0.05)
Out[18]: 2.1673499092980579
```
The only advantage of using `chdtri` over `scipy.stats.chi2.ppf` is that it is *much* faster:
```
In [30]: from scipy.stats import chi2
In [31]: %timeit chi2.ppf(0.05, 7)
10000 loops, best of 3: 135 us per loop
In [32]: %timeit chdtri(7, 1 - 0.05)
100000 loops, best of 3: 3.67 us per loop
``` | Is there a python equivalent of R's qchisq function? | [
"",
"python",
"r",
"statistics",
"scipy",
"chi-squared",
""
] |
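As a SciPy-free sanity check for the `ppf` answer above: for 2 degrees of freedom the chi-squared CDF has the closed form `1 - exp(-x/2)`, so its inverse is `-2*ln(1-p)`. This is a hedged special case, not a general replacement for `chi2.ppf`:

```python
import math

def qchisq_df2(p):
    # Inverse of the df=2 chi-squared CDF, where CDF(x) = 1 - exp(-x/2).
    return -2.0 * math.log(1.0 - p)

x = qchisq_df2(0.05)
print(x)                            # ~0.1026, matching R's qchisq(0.05, 2)
print(1.0 - math.exp(-x / 2.0))     # round-trips back to ~0.05
```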
I've been trying to fit the amplitude, frequency and phase of a sine curve given some generated two dimensional toy data. (Code at the end)
To get estimates for the three parameters, I first perform an FFT. I use the values from the FFT as initial guesses for the actual frequency and phase and then fit for them (row by row). I wrote my code such that I input which bin of the FFT I want the frequency to be in, so I can check if the fitting is working well. But there's some pretty strange behaviour. If my input bin is say 3.1 (a non integral bin, so the FFT won't give me the right frequency) then the fit works wonderfully. But if the input bin is 3 (so the FFT outputs the exact frequency) then my fit fails, and I'm trying to understand why.
Here's the output when I give the input bins (in the X and Y direction) as 3.0 and 2.1 respectively:
(The plot on the right is data - fit)

Here's the output when I give the input bins as 3.0 and 2.0:

**Question:** Why does the non linear fit fail when I input the exact frequency of the curve?
---
Code:
```
#! /usr/bin/python
# For the purposes of this code, it's easier to think of the X-Y axes as transposed,
# so the X axis is vertical and the Y axis is horizontal
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as optimize
import itertools
import sys
PI = np.pi
# Function which accepts paramters to define a sin curve
# Used for the non linear fit
def sineFit(t, a, f, p):
return a * np.sin(2.0 * PI * f*t + p)
xSize = 18
ySize = 60
npt = xSize * ySize
# Get frequency bin from user input
xFreq = float(sys.argv[1])
yFreq = float(sys.argv[2])
xPeriod = xSize/xFreq
yPeriod = ySize/yFreq
# arrays should be defined here
# Generate the 2D sine curve
for jj in range (0, xSize):
for ii in range(0, ySize):
sineGen[jj, ii] = np.cos(2.0*PI*(ii/xPeriod + jj/yPeriod))
# Compute 2dim FFT as well as freq bins along each axis
fftData = np.fft.fft2(sineGen)
fftMean = np.mean(fftData)
fftRMS = np.std(fftData)
xFreqArr = np.fft.fftfreq(fftData.shape[1]) # Frequency bins along x
yFreqArr = np.fft.fftfreq(fftData.shape[0]) # Frequency bins along y
# Find peak of FFT, and position of peak
maxVal = np.amax(np.abs(fftData))
maxPos = np.where(np.abs(fftData) == maxVal)
# Iterate through peaks in the FFT
# For this example, number of loops will always be only one
prevPhase = -1000
for col, row in itertools.izip(maxPos[0], maxPos[1]):
# Initial guesses for fit parameters from FFT
init_phase = np.angle(fftData[col,row])
init_amp = 2.0 * maxVal/npt
init_freqY = yFreqArr[col]
init_freqX = xFreqArr[row]
cntr = 0
if prevPhase == -1000:
prevPhase = init_phase
guess = [init_amp, init_freqX, prevPhase]
# Fit each row of the 2D sine curve independently
for rr in sineGen:
(amp, freq, phs), pcov = optimize.curve_fit(sineFit, xDat, rr, guess)
# xDat is an linspace array, containing a list of numbers from 0 to xSize-1
# Subtract fit from original data and plot
fitData = sineFit(xDat, amp, freq, phs)
sub1 = rr - fitData
# Plot
fig1 = plt.figure()
ax1 = fig1.add_subplot(121)
p1, = ax1.plot(rr, 'g')
p2, = ax1.plot(fitData, 'b')
plt.legend([p1,p2], ["data", "fit"])
ax2 = fig1.add_subplot(122)
p3, = ax2.plot(sub1)
plt.legend([p3], ['residual1'])
fig1.tight_layout()
plt.show()
cntr += 1
prevPhase = phs # Update guess for phase of sine curve
``` | I've tried to distill the important parts of your question into this answer.
1. First of all, try fitting a *single* block of data, not an array. Once you are confident that your model is sufficient you can move on.
2. Your fit is only going to be as good as your model, if you move on to something not "sine"-like you'll need to adjust accordingly.
3. Fitting is an "art", in that the initial conditions can **greatly** change the convergence of the error function. In addition there may be more than one minima in your fits, so you often have to worry about the *uniqueness* of your proposed solution.
While you were on the right track with your FFT idea, I think your implementation wasn't quite correct. The code below should be a great toy system. It generates random data of the type `f(x) = a0*sin(a1*x+a2)`. Sometimes a random initial guess will work, sometimes it will fail spectacularly. However, using the FFT guess for the frequency the convergence should *always* work for this system. An example output:

```
import numpy as np
import pylab as plt
import scipy.optimize as optimize
# This is your target function
def sineFit(t, (a, f, p)):
return a * np.sin(2.0*np.pi*f*t + p)
# This is our "error" function
def err_func(p0, X, Y, target_function):
err = ((Y - target_function(X, p0))**2).sum()
return err
# Try out different parameters, sometimes the random guess works
# sometimes it fails. The FFT solution should always work for this problem
inital_args = np.random.random(3)
X = np.linspace(0, 10, 1000)
Y = sineFit(X, inital_args)
# Use a random inital guess
inital_guess = np.random.random(3)
# Fit
sol = optimize.fmin(err_func, inital_guess, args=(X,Y,sineFit))
# Plot the fit
Y2 = sineFit(X, sol)
plt.figure(figsize=(15,10))
plt.subplot(211)
plt.title("Random Inital Guess: Final Parameters: %s"%sol)
plt.plot(X,Y)
plt.plot(X,Y2,'r',alpha=.5,lw=10)
# Use an improved "fft" guess for the frequency
# this will be the max in k-space
timestep = X[1]-X[0]
guess_k = np.argmax( np.fft.rfft(Y) )
guess_f = np.fft.fftfreq(X.size, timestep)[guess_k]
inital_guess[1] = guess_f
# Guess the amplitiude by taking the max of the absolute values
inital_guess[0] = np.abs(Y).max()
sol = optimize.fmin(err_func, inital_guess, args=(X,Y,sineFit))
Y2 = sineFit(X, sol)
plt.subplot(212)
plt.title("FFT Guess : Final Parameters: %s"%sol)
plt.plot(X,Y)
plt.plot(X,Y2,'r',alpha=.5,lw=10)
plt.show()
``` | The problem is due to a bad initial guess of the phase, not the frequency. While cycling through the rows of sineGen (inner loop) you use the fit result of the previous line as the initial guess for the next row, which does not always work. If you determine the phase from an FFT of the current row and use that as the initial guess, the fit will succeed.
You could change the inner loop as follows:
```
for n,rr in enumerate(sineGen):
fftx = np.fft.fft(rr)
fftx = fftx[:len(fftx)/2]
idx = np.argmax(np.abs(fftx))
init_phase = np.angle(fftx[idx])
print fftx[idx], init_phase
...
```
Also you need to change
```
def sineFit(t, a, f, p):
return a * np.sin(2.0 * np.pi * f*t + p)
```
to
```
def sineFit(t, a, f, p):
return a * np.cos(2.0 * np.pi * f*t + p)
```
since phase=0 means that the imaginary part of the fft is zero and thus the function is cosine like.
Btw. your sample above is still lacking definitions of sineGen and xDat. | Failure of non linear fit to sine curve | [
"",
"python",
"scipy",
"curve-fitting",
""
] |
Suppose I have two lists of any (but equal) length, for example:
```
['a','b','c','d']
['r','t','y','h']
```
For these two lists, I would want output to be:
```
'ar', 'bt', 'cy', 'dh'
```
Basically, join the first element of the first list to the first element of the second list, and so on. How would I do that? Note that the lists can be of any length, not just what the example shows, but the length of the first list is always equal to the length of the second list.
```
>>> list1 = ['a','b','c','d']
>>> list2 = ['r','t','y','h']
>>> [''.join(pair) for pair in zip(list1, list2)]
['ar', 'bt', 'cy', 'dh']
``` | You can use `map` and `zip` to do the job:
```
>>> l1=['a','b','c','d']
>>> l2=['r','t','y','h']
>>> map(lambda(x,y): x+y, zip(l1,l2))
['ar', 'bt', 'cy', 'dh']
```
What `zip` does is it creates a list of tuples, where the i-th tuple contains the i-th element from each list. Then you can transform each tuple into a string by concatenation (using `lambda(x,y): x+y`). | Combining two lists in python3 | [
"",
"python",
"list",
"python-3.x",
""
] |
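The `zip`/`join` idiom from the accepted answer above extends to any number of equal-length lists (the third list here is made up for illustration):

```python
list1 = ['a', 'b', 'c', 'd']
list2 = ['r', 't', 'y', 'h']
list3 = ['1', '2', '3', '4']

# zip(*...) pairs up the i-th elements; ''.join glues each group together.
combined = [''.join(group) for group in zip(list1, list2, list3)]
print(combined)   # ['ar1', 'bt2', 'cy3', 'dh4']
```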
I have the following select:
```
SELECT TOP 1000 [ObjectiveId]
,[Name]
,[Text]
FROM [dbo].[Objective]
```
It gives me
```
Name Text
0100 Header1
0101 Detail1
0102 Detail2
0200 Header2
0201 Detail1a
0202 Detail1b
```
Is there a way I could make a string like this with a ||| divider from the data.
```
Header1 ||| Detail1
Header1 ||| Detail2
Header2 ||| Detail1a
Header2 ||| Detail1b etc.
```
The key here is that when the last two digits of name are "00" then it's a header row for following detail rows. | ```
; WITH headers AS (
SELECT Name
, Text
FROM dbo.Objective
WHERE Right(Name, 2) = '00'
)
, details AS (
SELECT Name
, Text
FROM dbo.Objective
WHERE Right(Name, 2) <> '00'
)
SELECT headers.Text + ' ||| ' + details.Text
FROM headers
LEFT
JOIN details
ON Left(details.name, 2) = Left(headers.name, 2)
``` | Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!6/01adf/6)**
```
SELECT t1.Text + ' ||| ' + t2.Text AS clm
FROM Objective t1
LEFT JOIN Objective t2
ON SUBSTRING(t2.Name, 1, 2) = SUBSTRING(t1.Name, 1, 2)
AND t2.Name not like '%00'
WHERE t1.Name like '%00'
```
Result:
```
| CLM |
------------------------
| Header1 ||| Detail1 |
| Header1 ||| Detail2 |
| Header2 ||| Detail1a |
| Header2 ||| Detail1b |
``` | Can I flatten out header and detail records with a SQL Statement in Sql Server 2012 | [
"",
"sql",
"sql-server",
""
] |
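A hedged way to experiment with the header/detail CTE logic above without a SQL Server instance is the stdlib `sqlite3` module. The syntax is adapted: SQLite has no `Right()`, so `substr(Name, -2)` stands in for it:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Objective (Name TEXT, Text TEXT);
    INSERT INTO Objective VALUES
        ('0100','Header1'), ('0101','Detail1'),  ('0102','Detail2'),
        ('0200','Header2'), ('0201','Detail1a'), ('0202','Detail1b');
""")
rows = conn.execute("""
    WITH headers AS (SELECT Name, Text FROM Objective WHERE substr(Name, -2) =  '00'),
         details AS (SELECT Name, Text FROM Objective WHERE substr(Name, -2) <> '00')
    SELECT headers.Text || ' ||| ' || details.Text
    FROM headers
    LEFT JOIN details ON substr(details.Name, 1, 2) = substr(headers.Name, 1, 2)
    ORDER BY 1
""").fetchall()
for (line,) in rows:
    print(line)
```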
I added a facebook button to my page by copying/pasting the code they supply on their website.
It looks like this:
```
"http://www.facebook.com/dialog/feed?app_id={{fbapp_id}}&link={{link_url}}&message={{share_message|urlencode}}&display=popup&redirect_uri={{link_url}}
```
As you can see, it's got the `{}` in there that `Jinja` looks for. However, being that I *don't* want any of the above code replaced with anything, is there something I can add into my template which tells `Jinja` to ignore everything between the delimiters?
Python Handler:
```
class MainHandler(webapp2.RequestHandler):
def get(self):
template = JINJA_ENV.get_template('index.html')
self.response.write(template.render(None))
``` | You can usually find that information [in the documentation, under "Escaping"](http://jinja.pocoo.org/docs/templates/#escaping) or similar. In this case, you can either output the delimiter with a variable expression:
```
{{ '{{' }}
```
Or you can use the `raw` block, for longer stretches of code:
```
{% raw %}
<ul>
{% for item in seq %}
<li>{{ item }}</li>
{% endfor %}
</ul>
{% endraw %}
``` | This question is old, but I recently had the same issue. If you set up the Jinja2 `Environment` to use `undefined=jinja2.DebugUndefined`, it will ignore missing parameters and leave them as-is in the template. Useful for, say, multi-stage parsing; you can run logging as well to know when variables have not been defined:
```
import logging
import jinja2
from jinja2 import DebugUndefined, FileSystemLoader

logging.basicConfig()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
undefined_logging = jinja2.make_logging_undefined(logger=logger, base=DebugUndefined)
jinja_env = jinja2.Environment(loader=FileSystemLoader('.'), undefined=undefined_logging)
print(jinja_env.from_string("Hello {{ worldarg }}").render())
```
This will result in a logger message such as
```
[date time] WARNING [<module>:lineno] Template variable warning: worldarg is undefined
Hello {{ worldarg }}
```
The template will be rendered for any parameters that are passed, while undefined parameters are left unaltered. NOTE: this will likely not resolve missing templates or macros defined by the routine, but standard `{{ x }}` placeholders are logged and left unaltered. Logging behaviour is also subject to how logging is configured!
Options also exist for `StrictUndefined` (raises an exception, stopping template processing) or plain `Undefined` (undefined parameters are removed and the fields left blank where expected, without errors being returned to the calling function). | How to disable Jinja2 for sections of template with {}? | [
"",
"python",
"jinja2",
""
] |
I am trying to display a list in vertically sorted columns, with the number of columns decided by the user. I want to use `zip()` but I can't seem to figure out how to tell it to go through `n` lists.
```
#!/usr/bin/env python
import random
lst = random.sample(range(100), 30)
lst = sorted(lst)
col_size = int(raw_input('How many columns do you want?: '))
sorting = 'Vertical'
if sorting == 'Vertical':
# vertically sorted
columns = []
n = len(lst)//col_size
for i in range(0, len(lst), n):
columns.append(lst[i:i+n])
print '\nVertically Sorted:'
print columns
print zip(*columns)
```
This gives this result:
```
How many columns do you want?: 4
Vertically Sorted:
[[0, 2, 4, 11, 12, 16, 23], [24, 31, 32, 36, 41, 48, 50], [52, 54, 61, 62, 63, 64, 67], [76, 80, 81, 89, 91, 92, 94], [96, 97]]
[(0, 24, 52, 76, 96), (2, 31, 54, 80, 97)]
```
If I knew the number of columns (e.g. 4), I could've coded:
```
for c1, c2, c3, c4 in zip(columns[0], columns[1], columns[2], columns[3]):
print str(c1), str(c2).rjust(8), str(c3).rjust(8), str(c4).rjust(8)
```
But since I don't, how do I use `zip`? As you can see I tried `zip(*columns)` but that failed due to unequal no. of items in the last list. | Use [the grouper recipe](http://docs.python.org/2/library/itertools.html#itertools.izip) `IT.izip_longest(*[iterable]*n)` to collect the items in `lst` into groups of size `n`. (See [this page](https://stackoverflow.com/a/17516752/190597) for a more detailed explanation of how the grouper recipe works.)
```
import random
import itertools as IT
# lst = random.sample(range(100), 30)
lst = range(30)
lst = sorted(lst)
col_size = int(raw_input('How many columns do you want?: '))
sorting = 'Vertical'
if sorting == 'Vertical':
# vertically sorted
n = len(lst)//col_size
lst = iter(lst)
columns = IT.izip_longest(*[lst]*n, fillvalue='')
print '\nVertically Sorted:'
print('\n'.join(
[''.join(map('{:4}'.format, row))
for row in IT.izip(*columns)]))
```
yields
```
0 7 14 21 28
1 8 15 22 29
2 9 16 23
3 10 17 24
4 11 18 25
5 12 19 26
6 13 20 27
``` | Zip doesn't do what you're after because the rows are different sizes. Map will transpose when rows are uneven.
See the following with code help from [Create nice column output in python](https://stackoverflow.com/questions/9989334/create-nice-column-output-in-python).
## PROGRAM
```
import random

lst = random.sample(range(100), 30)
lst = sorted(lst)
col_size = int(raw_input('How many columns do you want?: '))
sorting = 'Vertical'
if sorting == 'Vertical':
    # vertically sorted
    columns = []
    n = len(lst)//col_size
    for i in range(0, len(lst), n):
        columns.append(lst[i:i+n])
    print '\nColumns:'
    columns = map(None,*columns)
    print columns
    print '\nVertically Sorted:'
    col_width = max(len(str(word)) for row in columns for word in row) + 2 # padding
    for row in columns:
        print "".join(str(word).ljust(col_width) for word in row if word is not None)
```
## OUTPUT
```
How many columns do you want?: 4
Columns:
[(0, 19, 45, 62, 92), (1, 24, 47, 64, 93), (5, 29, 48, 72, None), (6, 31, 50, 80, None), (9, 34, 56, 85, None), (14, 36, 58, 87, None), (15, 37, 61, 90, None)]
Vertically Sorted:
0 19 45 62 92
1 24 47 64 93
5 29 48 72
6 31 50 80
9 34 56 85
14 36 58 87
15 37 61 90
``` | Zip together n number of lists | [
"",
"python",
"list",
""
] |
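On Python 3 the same grouper idea works with `itertools.zip_longest`; a hedged port of the column layout above, using a fixed `range(30)` instead of user input so it runs unattended:

```python
import itertools

lst = sorted(range(30))
col_size = 4
n = len(lst) // col_size                  # items per column
it = iter(lst)
# Passing the same iterator n times groups the list into chunks of size n.
columns = itertools.zip_longest(*[it] * n, fillvalue='')
table = '\n'.join(''.join('{:4}'.format(x) for x in row)
                  for row in zip(*columns))
print(table)
```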
```
>>> list=[None]
>>> def list[0](x,y):
File "<stdin>", line 1
def list[0](x,y):
^
SyntaxError: invalid syntax
```
How can I define a function as an element of a list? | Python's `def` isn't flexible enough to handle generic [lvalues](http://dictionary.reference.com/browse/lvalue) such as `list[0]`. The language only allows you to use an *identifier* as function name. Here are the relevant parts of the [grammar rule for the def-statement](http://docs.python.org/2.7/reference/compound_stmts.html#function-definitions):
```
funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite
funcname ::= identifier
```
Instead, you can use a series of assignment and definition statements:
```
s = [None]
def f(x, y):
return x + y
s[0] = f
```
As an alternative, you could also store a [`lambda` expression](http://docs.python.org/2/glossary.html#term-lambda) directly in a list:
```
s = [lambda x,y : x+y]
``` | ```
def f(whatever):
do_stuff()
l[0] = f
```
The function definition syntax doesn't allow you to define a function directly into a data structure, but you can just create the function and then assign it wherever it needs to go. | Define function as list element | [
"",
"python",
""
] |
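Once functions sit in a list (either via assignment or `lambda`, as in the answers above), they are called by indexing like any other element; a small usage sketch:

```python
ops = [lambda x, y: x + y,        # addition
       lambda x, y: x * y]        # multiplication

def sub(x, y):
    return x - y

ops.append(sub)                   # named functions are ordinary values too

results = [f(2, 3) for f in ops]
print(results)                    # [5, 6, -1]
```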
I have a computer on my LAN that I would like to run an IPython Notebook server on. The computer is headless, so I can only access this computer using SSH. I tried to start IPython Notebook through screen, then detach, but the kernel restarts with X server errors.
Specifically, I did the following:
* SSH into remote box: `ssh -X 1.1.1.1`
* Start or re-attach to last screen: `screen -RD`
* Start Notebook `ipython notebook`
* Detach and logout: `ctrl-a-d`, `exit`
The remote Notebook server works fine, until I log out, and then try and create a matplotlib plot. At which time I get
```
Kernel Restarting
The kernel appears to have died. It will restart automatically.
```
from the client's web-browser, and
```
-c: cannot connect to X server localhost:10.0
2013-08-01 10:28:48.072 [NotebookApp] KernelRestarter: restarting kernel (1/5)
WARNING:root:kernel 6e0f5395-6ba7-44c8-912f-1e736dd66517 restarted
```
on the server.
It appears as though the Notebook can't plot as soon as I log out due to lack of X-resources. Does anyone have a solution to this? | It appears as though these kernel restarts only occur when I import traitsui modules in a notebook. Specifically, the following imports cause the error.
```
from traitsui.api import *
from traitsui.menu import *
```
The solution is to change the backend for traitsui, *before* importing any traitsui modules,
```
from traits.etsconfig.api import ETSConfig
ETSConfig.toolkit = 'null'
```
EDIT: the traitsui functionality was not being used across ssh, it was just part of a larger module. | Ipython is creating your plots (or any other X-graphics such as `traitsui` UI elements) in the X session you created with `ssh -X`. When you log out, the ssh tunnel closes along with that X session, thus the errors. Without the ssh tunnel, the plots have no way to get to you.
Assuming you don't have a physical terminal on the server with which to see the plots etc., you can see matplotlib plots by telling IPython notebook to show the plots inline. Note that with the current version of ipython, the plots are static png's. I'm not sure if `traitsui` has a way to do something like that?
There's a few ways to do this:
* import the pylab module with`%pylab inline`
* import matplotlib with `%matplotlib inline` (not sure if this is available in older versions)
* run ipython with `ipython notebook --pylab=inline`
* [configure ipython](http://ipython.org/ipython-doc/dev/config/ipython.html) to use pylab inline always: run `ipython profile create`, then look for "inline" in `ipython_notebook_config.py`, which is in your ipython profile directory - `~/.config/ipython/profile_default/` on Ubuntu. | Start IPython remotely using SSH+screen, then detach and log out | [
"",
"python",
"ssh",
"ipython",
"enthought",
"jupyter-notebook",
""
] |
Let us say I have a python method to get last synced changelist in p4. (The question itself is not related to p4 but only a basic python question)
```
def get_last_synced_changelist(p4port, client_name, p4 = None):
with P4Connection(p4port) as p4:
last_change_list = p4.run_changes("-m1", "@%s" % client_name)
if last_change_list:
return last_change_list[0]["change"]
else:
return None
```
The caller of this method can either supply `p4port`, which is a string, or supply the `p4` object itself. My requirement is that if a `p4` object is supplied I want to run the method body *without* the `with` context, i.e., I don't want the `__enter__` or `__exit__` method called on the `p4`. This is because the responsibility of entering/closing the `p4` object now lies with the caller. If the `p4` object is not supplied, the `with` statement needs to be constructed in this function using the `p4port` string.
Can someone tell me the best way of structuring this method? I want to change the body as little as possible. Basically something like
```
p4 or with P4Connection(p4port) as p4:
p4.run_changes(...)
```
But I am not sure what the best syntax is. | You can create a dummy context manager:
```
import contextlib
@contextlib.contextmanager
def dummyctx(x):
yield x
def get_last_synced_changelist(p4port, client_name, p4=None):
if p4 is None:
ctx = P4Connection(p4port)
else:
ctx = dummyctx(p4)
with ctx as p4:
last_change_list = p4.run_changes("-m1", "@%s" % client_name)
if last_change_list:
return last_change_list[0]["change"]
else:
return None
``` | It's not directly possible: `with` is a compound statement and can't be embedded in expressions like this. But you can make use of the fact that your function also supports "borrowing" a resource:
```
def get_last_synced_changelist(p4port, client_name, p4 = None):
if p4 is None:
with P4Connection(p4port) as p4:
return get_last_synced_changelist(p4port, client_name, p4)
last_change_list = p4.run_changes("-m1", "@%s" % client_name)
if last_change_list:
return last_change_list[0]["change"]
else:
return None
```
This approach even works if you have separate functions for the two paths (which may make sense in this example, as `p4port` is apparently not used when an existing `P4Connection` is passed in). | Python handling parameters in "with" context | [
"",
"python",
"python-2.7",
""
] |
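On Python 3.7+ the hand-rolled dummy context manager from the accepted answer above ships in the stdlib as `contextlib.nullcontext`. A hedged sketch of the borrow-or-own pattern with a stand-in connection class (`FakeConnection` is invented here; P4 itself is not required):

```python
import contextlib

class FakeConnection:                 # stands in for P4Connection in this sketch
    def __enter__(self):
        self.open = True
        return self
    def __exit__(self, *exc):
        self.open = False

def use_connection(conn=None):
    # Borrow conn if given; otherwise own a fresh one for the duration.
    ctx = contextlib.nullcontext(conn) if conn is not None else FakeConnection()
    with ctx as c:
        return c

owned = use_connection()              # entered and exited inside use_connection
borrowed_src = FakeConnection().__enter__()
borrowed = use_connection(borrowed_src)
print(owned.open, borrowed.open)      # the borrowed connection stays open
```

`nullcontext` never calls `__exit__` on the borrowed object, which is exactly the "caller owns it" semantics the question asks for.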
I have to write a programme where a turtle takes 90 degree turns, chosen randomly as left or right, around the screen, until it hits a wall, takes a 180 degree turn and goes back to walking around the screen. When it's hit the wall 4 times, the loop terminates. The problem I'm having is that when it bounces off the wall it just stops walking, and the loop has clearly terminated as I can close the window by clicking on it (`wn.exitonclick`). Here's the full programme:
```
import turtle
import random
def isInScreen(w,t):
leftBound = w.window_width() / -2
rightBound = w.window_width() / 2
bottomBound = w.window_height() / -2
topBound = w.window_height() / 2
turtlex = t.xcor()
turtley = t.ycor()
stillIn = True
if turtlex < leftBound or turtlex > rightBound or turtley < bottomBound or turtley > topBound:
stillIn = False
return(stillIn)
def randomWalk(t,w):
counter = 0
while isInScreen(w,t) and counter < 4:
coin = random.randrange(0,2)
if coin == 0:
t.left(90)
else:
t.right(90)
t.forward(50)
t.left(180)
t.forward(50)
counter = counter+1
wn = turtle.Screen()
wn.bgcolor('lightcyan')
steklovata = turtle.Turtle()
steklovata.color('darkslategray')
steklovata.shape('turtle')
randomWalk(steklovata,wn)
wn.exitonclick()
```
I'm confused as to why it stops, considering once the turtle bounces back, its x and y coordinates meet the requirements for isInScreen(w,t) to be true and thus go back to walking. Any ideas?
**EDIT:** Accepted Sukrit's answer as it was the easiest to relate to what I'd already programmed and gave me a few pointers on other stuff, but Brian's answer was very useful too and I'd accept both if it was possible. Thanks a lot to both of you! | Your `counter = counter + 1` is wrong. When your `isInScreen` returns `False`, the while loop breaks and the code ends; the counter is being incremented, but you never loop over again. See the following code -
```
import turtle
import random
def isInScreen(w,t):
leftBound = w.window_width() / -2.0
rightBound = w.window_width() / 2.0
bottomBound = w.window_height() / -2.0
topBound = w.window_height() / 2.0
turtlex = t.xcor()
turtley = t.ycor()
if turtlex < leftBound or turtlex > rightBound or turtley < bottomBound or turtley > topBound:
return False
return True
def randomWalk(t,w):
counter = 0
while True:
while isInScreen(w,t):
coin = random.randrange(0,2)
if coin == 0:
t.left(90)
else:
t.right(90)
t.forward(50)
t.left(180)
t.forward(50)
counter += 1
if counter == 4:
break
wn = turtle.Screen()
wn.bgcolor('lightcyan')
steklovata = turtle.Turtle()
steklovata.color('darkslategray')
steklovata.shape('turtle')
randomWalk(steklovata,wn)
wn.exitonclick()
```
**P.S** - You don't need a variable to store `stillIn`, if the `if` condition evaluates to `True`, just `return False`, and if it doesn't `return True`. (Changes reflected in the above code). | As an alternative to the nested loops and to avoid some redundant statements, the following should produce the same result as Sukrit's answer.
```
def randomWalk(t,w):
counter = 0
while counter < 4:
if not isInScreen(w,t):
t.left(180)
counter += 1
else:
coin = random.randrange(0,2)
if coin == 0:
t.left(90)
else:
t.right(90)
t.forward(50)
```
The core issue is making sure that `isInScreen` returning false does not cause your while loop to terminate while also incrementing `counter` within the loop body. | Randomly walking turtle function not doing what I want it to | [
"",
"python",
"python-3.x",
"turtle-graphics",
""
] |
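The bounce-counting logic in these answers can be checked without a display by tracking coordinates directly. A hedged headless re-implementation, with a deterministic alternating turn sequence standing in for the random coin flips so the run is reproducible:

```python
import itertools

def walk_until_bounces(turns, width=200, height=200, step=50, max_bounces=4):
    """Headless version of the walk: no turtle, just coordinates."""
    x = y = 0
    heading = 0                               # 0=E, 1=N, 2=W, 3=S
    bounces = moves = 0
    while bounces < max_bounces:
        if -width // 2 <= x <= width // 2 and -height // 2 <= y <= height // 2:
            heading = (heading + (1 if next(turns) == 'L' else -1)) % 4
        else:
            heading = (heading + 2) % 4       # 180-degree turn back inside
            bounces += 1
        dx, dy = [(step, 0), (0, step), (-step, 0), (0, -step)][heading]
        x, y = x + dx, y + dy
        moves += 1
    return bounces, moves

# Alternating left/right turns instead of random.randrange(0, 2);
# swap in random choices to recover the original behaviour.
bounces, moves = walk_until_bounces(itertools.cycle('LR'))
print(bounces, moves)
```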
I wanted to have a look at the python deque class. When I checked [the source code](http://hg.python.org/cpython/file/791034a0ae1e/Lib/collections/__init__.py), I found the following at line 10
```
from _collections import deque, defaultdict
```
where exactly can I find this \_collections module? I searched on my copy of the python source, but couldn't spot it.
Where is this class located? | `_collections` is a builtin extension module.
You can find the source for the `_collections` module [here](http://hg.python.org/cpython/file/tip/Modules/_collectionsmodule.c).
[Setup.dist](http://hg.python.org/cpython/file/tip/Modules/Setup.dist) contains the mapping from builtin extension module names to their source files. | `_collections` is a private implementation of a class, according to this answer: ["Private" (implementation) class in Python](https://stackoverflow.com/questions/551038/private-implementation-class-in-python).
Being private, I don't think that you will be able to access its Python source but you can check out the C implementation [here](http://hg.python.org/cpython/file/20557286cc35/Modules/_collectionsmodule.c). | finding _collections in python source | [
"",
"python",
""
] |
I have two dates: a `StartDate` and a `CreatedDate`
If I say
```
CreatedDate < StartDate
```
what will happen if `StartDate` is `NULL`?
Should I use
```
isnull(startdate,0)
```
or will it just return a `NULL` or will it always be less than `StartDate` as `StartDate` is `NULL`? | Any ordinary comparisons (>, <, >=, <=, =, <>, like, in) return null if at least one of their arguments is null; so CreatedDate < StartDate returns null if StartDate is null. You can test for null explicitly:
```
((CreatedDate < StartDate) or (StartDate is null))
```
Or you can transform the StartDate value
```
(CreatedDate < NVL(StartDate, To_Date('1.1.1000', 'DD.MM.YYYY')) -- <- Oracle Syntax
``` | You have to check for `null`.
If you don't check for `null`, then your comparison will evaluate to `unknown`, which is treated as `false`. | Calculate Nulls with a date | [
"",
"sql",
"null",
""
] |
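The three-valued logic described above is easy to demonstrate with the stdlib `sqlite3` module: rows where the comparison evaluates to NULL are silently filtered out unless tested explicitly:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (CreatedDate TEXT, StartDate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [('2013-01-01', '2013-06-01'), ('2013-01-01', None)])

plain = conn.execute(
    "SELECT COUNT(*) FROM t WHERE CreatedDate < StartDate").fetchone()[0]
with_null = conn.execute(
    "SELECT COUNT(*) FROM t WHERE CreatedDate < StartDate "
    "OR StartDate IS NULL").fetchone()[0]
print(plain, with_null)   # the NULL row only matches the explicit IS NULL test
```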
after a simple query...
```
SELECT Col_ID, datetime1, datetime2, datetime3
FROM TABLE_DATETIME
WHERE datetime1 LIKE '20130805%'
OR datetime2 LIKE '20130805%'
OR datetime3 LIKE '20130805%'
```
... I have this scenario
```
Col_ID |datetime1 |datetime2 |datetime3 |
----------|--------------|--------------|--------------|
40302025 |20130805123022|NULL |NULL |
40302028 |20130805180055|NULL |NULL |
40302030 |NULL |20130805090055|NULL |
40302055 |NULL |20130805190055|NULL |
40302074 |NULL |NULL |20130805070055|
```
Now, in the same query, I want to merge datetime1, datetime2, and datetime3 into one column, which we will call "ALL_DATETIME", and then order by it to get these results...
```
Col_ID |ALL_DATETIME |
----------|--------------|
40302074 |20130805070055|
40302030 |20130805090055|
40302025 |20130805123022|
40302028 |20130805180055|
40302055 |20130805190055|
``` | You can use [`COALESCE()`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce) function for this:
```
SELECT Col_ID
, COALESCE(datetime1,datetime2,datetime3) AS ALL_DATETIME
FROM TABLE_DATETIME
WHERE ...
ORDER BY COALESCE(datetime1,datetime2,datetime3)
```
Output:
```
| COL_ID | ALL_DATETIME |
-----------------------------
| 40302074 | 20130805070055 |
| 40302030 | 20130805090055 |
| 40302025 | 20130805123022 |
| 40302028 | 20130805180055 |
| 40302055 | 20130805190055 |
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/3430a/8) | Use a `NULL`-handling function:
```
SELECT Col_ID, ifnull(datetime1, ifnull(datetime2, datetime3 )) as ALL_DATETIME
FROM TABLE_DATETIME
WHERE datetime1 LIKE '20130805%'
OR datetime2 LIKE '20130805%'
OR datetime3 LIKE '20130805%'
order by 2
``` | Union different column in one and then order by this | [
"",
"mysql",
"sql",
""
] |
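`COALESCE` is standard SQL, so the behaviour in the accepted answer can be sketched with the stdlib `sqlite3` module as well (column names shortened here for brevity):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE d (Col_ID TEXT, d1 TEXT, d2 TEXT, d3 TEXT)")
conn.executemany("INSERT INTO d VALUES (?, ?, ?, ?)", [
    ('40302025', '20130805123022', None, None),
    ('40302030', None, '20130805090055', None),
    ('40302074', None, None, '20130805070055'),
])
rows = conn.execute("""
    SELECT Col_ID, COALESCE(d1, d2, d3) AS ALL_DATETIME
    FROM d ORDER BY ALL_DATETIME
""").fetchall()
print(rows)   # earliest merged datetime first
```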
I have a jsp file that looks like this:
```
<font color="#121212">
<br>
Text 1
<br>
Text 2
<br>
</font>
```
Does anyone know a quick sed/awk command I could invoke in my shell script to replace "Text 1" and "Text 2" with predefined variables? Text1/2 are just placeholders for this question; the space in between those `<br>` tags could be filled with anything.
Update: Changing tags to allow suggestions in python as well. | Try this awk command:
```
awk '/<font /{intag=1}
/<\/font>/{intag=0 ;br=0}
intag==1 && /<br>/{br++}
{print}
br==1{print "Foo"; getline}
br==2{print "Bar"; getline}' file
```
This command will replace the line just after the 1st `<br>` with `Foo` and the line just after the 2nd `<br>` with `Bar`. | If you have some separator you can use between your blocks of replacement text, e.g. newline:
```
$ awk -v text="foo
bar" '
BEGIN {
split(text,t,/\n/)
}
/<br>/ {
if (++c in t) {
print $0 ORS t[c]
f = 1
}
else {
f = 0
}
}
!f
' file
<font color="#121212">
<br>
foo
<br>
bar
<br>
</font>
```
Otherwise:
```
$ awk -v text1="foo" -v text2="bar" '
BEGIN {
t[++n]=text1
t[++n]=text2
}
/<br>/ {
if (++c in t) {
print $0 ORS t[c]
f = 1
}
else {
f = 0
}
}
!f
' file
<font color="#121212">
<br>
foo
<br>
bar
<br>
</font>
```
Note that you could just add as many blocks of replacement text as you like in the `-v`/`BEGIN` sections if you had more text between `<br>`s to replace in the future, and the rest of the code wouldn't change - it just replaces as many blocks as are populated in the array `t`.
I see a couple of answers posted using getline. Make sure you read and fully understand all the getline caveats described at <http://awk.info/?tip/getline> if you're considering using it. IMHO this problem is not a good candidate for a solution using getline. | Change text inbetween tags - shell script | [
"",
"python",
"regex",
"bash",
"sed",
"awk",
""
] |
Let's have a look at the next snippet -
```
@event.listens_for(Pool, "checkout")
def check_connection(dbapi_con, con_record, con_proxy):
cursor = dbapi_con.cursor()
try:
cursor.execute("SELECT 1") # could also be dbapi_con.ping(),
# not sure what is better
except exc.OperationalError, ex:
if ex.args[0] in (2006, # MySQL server has gone away
2013, # Lost connection to MySQL server during query
2055): # Lost connection to MySQL server at '%s', system error: %d
# caught by pool, which will retry with a new connection
raise exc.DisconnectionError()
else:
raise
engine = create_engine('mysql://user:puss123@10.0.51.5/dbname', pool_recycle = 3600,pool_size=10, listeners=[check_connection])
session_factory = sessionmaker(bind = engine, autoflush=True, autocommit=False)
db_session = session_factory()
...
some code that may take several hours to run
...
db_session.execute('SELECT * FROM ' + P_TABLE + " WHERE id = '%s'" % id)
```
I thought that registering the checkout\_connection function under the checkout event would solve it, but it didn't.
Now the question is: how am I supposed to tell SQLAlchemy to handle connection dropouts, so that every time I call execute() it will check whether the connection is available and, if not, initiate it once again?
**----UPDATE----**
The version of SQLAlchemy is 0.7.4
**----UPDATE----**
```
def checkout_listener(dbapi_con, con_record, con_proxy):
try:
try:
dbapi_con.ping(False)
except TypeError:
dbapi_con.ping()
except dbapi_con.OperationalError as exc:
if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
raise DisconnectionError()
else:
raise
engine = create_engine(CONNECTION_URI, pool_recycle = 3600,pool_size=10)
event.listen(engine, 'checkout', checkout_listener)
session_factory = sessionmaker(bind = engine, autoflush=True, autocommit=False)
db_session = session_factory()
```
session\_factory is sent to every newly created thread
```
class IncidentProcessor(threading.Thread):
def __init__(self, queue, session_factory):
if not isinstance(queue, Queue.Queue):
raise TypeError, "first argument should be of %s" % (type(Queue.Queue))
self.queue = queue
self.db_session = scoped_session(session_factory)
threading.Thread.__init__(self)
def run(self):
self.db_session().execute('SELECT * FROM ...')
...
some code that takes alot of time
...
self.db_session().execute('SELECT * FROM ...')
```
Now when execute runs after a long period of time, I get the "MySQL server has gone away" error.
It looks something like this:
```
from sqlalchemy import create_engine, event
from sqlalchemy.exc import DisconnectionError
def checkout_listener(dbapi_con, con_record, con_proxy):
try:
try:
dbapi_con.ping(False)
except TypeError:
dbapi_con.ping()
except dbapi_con.OperationalError as exc:
if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
raise DisconnectionError()
else:
raise
db_engine = create_engine(DATABASE_CONNECTION_INFO,
pool_size=100,
pool_recycle=3600)
event.listen(db_engine, 'checkout', checkout_listener)
``` | Try the `pool_recycle` argument to `create_engine`.
From [the documentation](http://docs.sqlalchemy.org/en/rel_0_9/dialects/mysql.html#connection-timeouts):
> Connection Timeouts
>
> MySQL features an automatic connection close behavior, for connections
> that have been idle for eight hours or more. To circumvent having this
> issue, use the pool\_recycle option which controls the maximum age of
> any connection:
>
> engine = create\_engine('mysql+mysqldb://...', pool\_recycle=3600) | Python SQLAlchemy - "MySQL server has gone away" | [
"",
"python",
"mysql",
"sqlalchemy",
""
] |
I have the following query:
```
with cte as
(SELECT top 10 [1],[2]
FROM [tbl_B] where [2] > '2000-01-01' and Status_7 = 0 and Status_8 = 1
ORDER BY [2])
,
CTE1 AS
( select [1], row_number() over (order by [2]) as rn
from CTE
)
select [1] from CTE1 where rn = '10'
```
how can I put this into a variable to compare it to another query result?
If I use `set @123 = (above query)` it gives errors.
with cte as
(
SELECT top 10 [1],[2]
FROM [tbl_B]
where [2] > '2000-01-01' and Status_7 = 0 and Status_8 = 1
ORDER BY [2]
)
,CTE1 AS
(
select [1], row_number() over (order by [2]) as rn
from CTE
)
select @123 = [1] from CTE1 where rn = '10'
``` | ```
with cte as
(SELECT top 10 [1],[2]
FROM [tbl_B] where [2] > '2000-01-01' and Status_7 = 0 and Status_8 = 1
ORDER BY [2])
,
CTE1 AS
( select [1], row_number() over (order by [2]) as rn
from CTE
)
select @123 = [1] from CTE1 where rn = '10'
``` | How to store result of sql query in variable? | [
"",
"sql",
""
] |
I have a file that contains a list of files, but it adds `\n` at the end. How can I have Python write the info I need on a new line without getting `\n` in the way, so that my info will be read back as `X.acc`, not `x.acc\n`? Here is my code that writes the file:
```
def add(x):
nl = "\n"
acc = ".acc"
xy = x + acc
exyz = xy
xyz = exyz
xxx = str(xyz)
tf = open('accounts.dat',"a+")
tf.writelines(nl)
tf.writelines(xxx)
tf.close
```
Here is the code that calls upon the file:
```
import sys
tf = open('accounts.dat','r')
names = tf.readlines()
u = choicebox(msg="pick something",title = "Choose an account",choices=(names))
counter_file = open(u, 'r+')
content_lines = []
for line in counter_file:
if line == "credits =":
creds = line
else:
False
for line in counter_file:
if 'credits =' in line:
line_components = line.split('=')
int_value = int(line_components[1]) + 1
line_components[1] = str(int_value)
updated_line= "=".join(line_components)
content_lines.append(updated_line)
else:
msgbox(msg=(creds))
content_lines.append(line)
counter_file.seek(0)
counter_file.truncate()
counter_file.writelines(content_lines)
counter_file.close()
```
Thank you for your help, and sorry if this is a trivial question; I'm still new to Python :) | Your question doesn't actually make sense, because of what a "line" actually is and what that `'\n'` character means.
Files don't have an intrinsic concept of lines. A file is just a sequence of bytes. `'\n'` is the line separator (as Python represents it with universal newlines). If you want your data to show up on different "lines", you *must* put a line separator between them. That's all that the `'\n'` character is. If you open up the file in a text editor after you write it, most editors won't explicitly show the newline character by default, because it's already represented by the separation of the lines.
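A tiny, self-contained illustration of that point (the file name here is arbitrary):

```
# Writing two records separated by "\n", then reading the raw bytes back.
with open("accounts_demo.dat", "w") as f:
    for name in ["first.acc", "second.acc"]:
        f.write(name + "\n")   # the "\n" is what creates a new line

with open("accounts_demo.dat") as f:
    raw = f.read()

print(repr(raw))          # 'first.acc\nsecond.acc\n'
print(raw.splitlines())   # ['first.acc', 'second.acc']
```

An editor shows two lines precisely because the `'\n'` characters are in the file.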
---
To break down **what your code is doing**, let's look at the `add` method, and fix some things along the way.
The first thing `add` does is name a variable called `nl` and assign it the newline character. From this, I can surmise that `nl` stands for "newline", but it would be much better if that was actually the variable name.
Next, we name a variable called `acc` and assign it the `'.acc'` suffix, presumably to be used as a file extension or something.
Next, we make a variable called `xy` and assign it to `x + acc`. `xy` is now a string, though I have no idea of what it contains from the variable name. With some knowledge of what `x` is supposed to be or what these lines represent, perhaps I could rename `xy` to something more meaningful.
The next three lines create three new variables called `exyz`, `xyz`, and `xxx`, and point them all to the same string that `xy` references. There is no reason for any of these lines whatsoever, since their values aren't really used in a meaningful way.
Now, we open a file. Fine. Maybe `tf` stands for "the file"? "text file"? Again, renaming would make the code much more friendly.
Now, we call `tf.writelines(nl)`. This *writes the newline character* (`'\n'`) to the file. Since the `writelines` method is intended for writing a whole list of strings, not just a single character, it'll be cleaner if we change this call to `tf.write(nl)`. I'd also change this to write the newline at the end, rather than the beginning, so the first time you write to the file it doesn't insert an empty line at the front.
Next, we call `writelines` again, with our data variable (`xxx`, but hopefully this has been renamed!). What this actually does is break the iterable `xxx` (a string) into its component characters, and then write each of those to the file. Better replace this with `tf.write(xxx)` as well.
Finally, we have `tf.close`, which is a reference to the `close` function of the file object. It's a no-op, because what you presumably meant was to close the file, by calling the method: `tf.close()`. We could also [wrap the file up](http://effbot.org/zone/python-with-statement.htm) as a [context manager](http://docs.python.org/2/reference/datamodel.html#with-statement-context-managers), to make its use a little cleaner. Also, most of the variables aren't necessary: we can use [string formatting](http://docs.python.org/2/library/string.html#string-formatting) to do most of the work in one step. All in all, your method could look like this at the end of the day:
```
def add(x):
with open('accounts.dat',"a+") as output_file:
output_file.write('{0}.acc\n'.format(x))
```
---
So you can see, the reason the `'\n'` appears at the end of every line is because you are writing it between each line. Furthermore, this is *exactly what you have to do* if you want the lines to appear as "lines" in a text editor. Without the newline character, everything would appear all smashed together (take out the `'\n'` in my `add` method above and see for yourself!).
---
The problem you described in the comment is happening because `names` is a direct reading of the file. Looking at the [`readlines` documentation](http://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-objects), it returns a list of the lines in the file, breaking at each newline. So to clean those names up, you want line 4 of the code you posted to call [`str.strip`](http://docs.python.org/2/library/stdtypes.html#str.strip) on the individual lines. You can do that like this:
```
names = tf.readlines()
for i in range(len(names)):
names[i] = names[i].strip() # remove all the outside whitespace, including \n
```
However, it's **much** cleaner, quicker, and generally nicer to take advantage of Python's [list comprehensions](http://carlgroner.me/Python/2011/11/09/An-Introduction-to-List-Comprehensions-in-Python.html), and the fact that file objects are already iterable line-by-line. So the expression below is equivalent to the previous one, but it looks far nicer:
```
names = [line.strip() for line in tf]
``` | Just change add:
```
def add(x):
nl = "\n"
acc = ".acc"
xy = x + acc
exyz = xy
xyz = exyz
xxx = str(xyz)
tf = open('accounts.dat',"a+")
tf.writelines(xxx)
tf.writelines(nl) # Write the newline AFTER instead of before the output
tf.close() # close is a function so needs to be called by having () at the end.
```
*See the comments for what has changed.* | how to add a new line in a text file using python without \n | [
"",
"python",
"python-3.x",
""
] |
This happens in the python build:
```
#is it executable
print os.access("support/d8/d8", os.X_OK)
#is it there in the shell
os.system("test -f support/d8/d8 && echo \"found\" || echo \"not found\"")
```
and then:
```
#run it
os.system("support/d8/d8 --trace_exception with a bunch of files");
```
which outputs:
```
True
found
sh: 1: support/d8/d8: not found
```
I don't get it. It's there, and it's executable. Why is it not there when I start it?
* link to the travis build: <https://travis-ci.org/albertjan/skulpt/builds>
* and a link to the repository: <https://github.com/albertjan/skulpt> the build script is called `m` | You're running an x86\_32 bit executable [`d8`](https://github.com/albertjan/skulpt/raw/master/support/d8/d8) (despite the [comment](https://github.com/albertjan/skulpt/blob/master/support/d8/skulpt_readme.txt), by the way). If the (Travis) system is x64, and/or does not have all of the x86\_32 libraries
* `linux-gate.so.1`
* `libpthread.so.0`
* `libstdc++.so.6`
* `libm.so.6`
* `libgcc_s.so.1`
* `libc.so.6`
then the executable won't run, since the loader cannot find all required libraries. Build statically and/or for x64. | Why don't you try this:
```
os.system("./support/d8/d8 --trace_exception with a bunch of files");
```
I had a similar issue; somehow the `./` prefix is required when executing. | os.system not finding file that is really there | [
"",
"python",
"travis-ci",
""
] |
I have to sum elements in a column (SCENARIO1) that is varchar and contains data like (1,920, 270.00, 0, NULL), but when I try to convert the data into int or decimal I get this error:
"The command is wrong when converting value "4582014,00" to int type"
here is my request :
```
select sum( convert( int, SCENARIO1) )
from Mnt_Scenario_Exercice where code_pere='00000000'
```
Any help please. | Try this:
```
select sum(cast(replace(SCENARIO1, ',', '.') as decimal(29, 10)))
from Mnt_Scenario_Exercice
where code_pere = '00000000';
```
If you couldn't convert your `'4582014,00'` into decimal, there's a chance you have a different decimal separator on your server. You could check what it is, or just try `'.'` | 4582014,00 should be a decimal
Try this (I assume that your query is working); I changed the convert from int into decimal:
> select sum(convert(decimal(20,2),replace(SCENARIO1, ',', '.'))) from Mnt\_Scenario\_Exercice where code\_pere='00000000' | Sum of data in varchar column | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have the following table
```
Name | Subject | Marks
--------------------------
a M 20
b M 25
c M 30
d C 44
e C 45
f C 46
g H 20
```
Here I have a "Student" table. I want to get the name of the student who got the
max marks in each subject, like the following output:
```
Name | Subject | Marks
c M 30
f C 46
g H 20
``` | You can use the ROW\_NUMBER function to return only the "best" row per subject:
[SQL Fiddle](http://sqlfiddle.com/#!3/db6d8/7)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE Student
([Name] varchar(1), [Subject] varchar(1), [Marks] int)
;
INSERT INTO Student
([Name], [Subject], [Marks])
VALUES
('a', 'M', 20),
('b', 'M', 25),
('c', 'M', 30),
('d', 'C', 44),
('e', 'C', 45),
('f', 'C', 46),
('g', 'H', 20)
;
```
**Query 1**:
```
SELECT Name, Subject, Marks
FROM(
SELECT *, ROW_NUMBER()OVER(PARTITION BY Subject ORDER BY Marks DESC) rn
FROM dbo.Student
)X
WHERE rn = 1
```
**[Results](http://sqlfiddle.com/#!3/db6d8/7/0)**:
```
| NAME | SUBJECT | MARKS |
--------------------------
| f | C | 46 |
| g | H | 20 |
| c | M | 30 |
``` | You can use other functions and cte also to get the result..
eg : 1
```
select B.Name,
A.Subject,
B.Marks
from ( select Subject,
max(Marks) as High_Marks
from Student
group by Subject
) a
join Student b
on a.subject = b.subject
and a.high_Marks = b.Marks
```
Eg : 2 : use of cte and dense\_rank function
```
;WITH cte
AS
(
SELECT
[Name],
[Subject],
[Marks],
dense_rank() over(partition BY [Subject] order by [Marks] DESC) AS Rank
FROM Student
)
SELECT * FROM cte WHERE Rank = 1;
``` | How to get the name of a the student who got max marks in each subject? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"sql-server-2008-r2",
""
] |
We have several Python packages of our own and want to create a local PyPI repository for them using a simple interface like <https://pypi.python.org/simple/>
I want this repository to be local-only, without any mirrors, for security reasons, and it will be put under [Apache](https://en.wikipedia.org/wiki/Apache_HTTP_Server)'s control.
The `pypimirror` command looks like it has to be initialized once with the packages it needs to mirror.
How can I generate a PyPI Simple Index based on local Python packages?
Is there another simple script for this? | Take a look at [`pip2pi`](https://github.com/wolever/pip2pi). It seems to be exactly what you are looking for. | We had a similar need at my company. Basically how can we upload "closed source" packages to an index while being able to install them as if they were on PyPI?
We have sponsored a project called [devpi](https://pypi.python.org/pypi/devpi) which acts as a PyPI cache (packages you access from PyPI will be cached on your server) as well as a powerful and fast index server. The documentation is available at *[devpi: PyPI server and packaging/testing/release tool](https://devpi.net/docs/devpi/devpi/latest/%2Bd/index.html)*.
Next on the roadmap is mirroring for multi geos deployment. To kick the tires on your machine takes about 5 minutes (look at the quick start guides). Finally, devpi is compatible with both pip and easy\_install (i.e., you do not need the devpi client installed on your machine). | How can I create a local own PyPI repository index without a mirror? | [
"",
"python",
"pip",
"pypi",
""
] |
I'm trying to convert a decimal DateTime value to the DateTime datatype and check a condition in a select query:
```
SELECT * FROM CLBALTRNTBL WHERE CONVERT(DATETIME,LSTDAT) >= @sDt
AND CONVERT(DATETIME,LSTDAT) <= @eDt
```
but the following error occurs:
```
Arithmetic overflow error converting expression to data type datetime.
```
I'm doing this in a stored procedure, and `@sDt` and `@eDt` are `DateTime` variables. The `LSTDAT` column is in the format `yyyyMMdd`, i.e. `20120317`.
But if I enter the `LSTDAT` value directly, for example `20130805`, the query executes. So what is wrong with the column?
Can you help me out ? | Try this :
```
SELECT *
FROM CLBALTRNTBL
WHERE CONVERT(DATETIME, CONVERT(VARCHAR(8), LSTDAT), 112) >= @sDt
AND CONVERT(DATETIME, CONVERT(VARCHAR(8), LSTDAT), 112) <= @eDt
``` | It's way more efficient to convert your variables and compare them to the column than to convert the column.
```
SELECT *
FROM CLBALTRNTBL
WHERE LSTDAT >= CONVERT(INT,CONVERT(NVARCHAR(8),@sDt,112))
AND LSTDAT <= CONVERT(INT,CONVERT(NVARCHAR(8),@eDt,112))
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/30803/1)** | How to Convert Decimal to DateTime in where clause | [
"",
"sql",
"sql-server",
"datetime",
"decimal",
"converters",
""
] |
I have a cursor with values from a select, and afterwards I want to do one thing or another depending on whether any row was found.
```
recs_Table SYS_REFCURSOR;
begin
open recs_Table for
select * from table1, table2;
if recs_Table%found then
--do this
else
--do that
end if;
end;
```
This doesn't seem to work. Any help? Ty | You need to execute a FETCH against the cursor prior to using the %FOUND attribute. Change your code to something like
```
DECLARE
recs_Table SYS_REFCURSOR;
nTable_1_value NUMBER;
nTable_2_value NUMBER;
begin
open recs_Table for
select * from table1, table2;
FETCH recs_Table INTO nTable_1_value, nTable_2_value;
if recs_Table%found then
--do this
else
--do that
end if;
end;
```
Note that you'll probably need to add variables to the INTO clause of the FETCH statement, one for each column in TABLE1 and TABLE2. Note also that, the way this cursor is written, you'll probably get more rows returned than you might expect; because there is no join criteria specified you'll get what's called a Cartesian join, where each row in TABLE1 is joined to each row in TABLE2 - thus, the number of rows you'll get back is (# of rows in TABLE1) \* (# of rows in TABLE2).
A potentially simpler way to do this would be to use a cursor FOR loop, as follows:
```
DECLARE
bData_found BOOLEAN := FALSE;
begin
FOR aRow IN (select * from table1, table2)
LOOP
-- If the program gets here, it means a row was fetched
-- do this
bData_found := TRUE;
EXIT; -- if you only care if data was found and don't want to
-- process all the rows
END LOOP;
IF NOT bData_found THEN
-- do that
END IF;
end;
```
Share and enjoy. | we use two procedures to execute the result
```
create or replace procedure pro_sample(recs_Table out SYS_REFCURSOR) is
begin
open recs_Table for
select * from table1, table2;
end;
```
this above procedure will be used to open a cursor
```
create or replace procedure pro_sample(recs_Table out SYS_REFCURSOR) is
sam sys_refcursor;
var number;
-- if you have any variables then declare them
begin
pro_sample(sam);
fetch sam into var;
if sam%found then
--do this
else
--do that
end if;
close sam;
end;
```
the above procedure will help you to know whether the cursor contains rows or not | Best way to check if SYS_REFCURSOR is empty | [
"",
"sql",
"oracle",
"plsql",
"sys-refcursor",
""
] |
This next behaviour befuddles me, any explanation would be appreciated.
```
>>> a = (0.1457164443693023, False)
>>> print a
(0.1457164443693023, False)
>>> print a[0]
0.145716444369
```
Using python 2.7 | The only difference is the `print`. The number doesn't change, just its representation. You can reduce your problem to:
```
>>> 0.1457164443693023
0.1457164443693023
>>> print 0.1457164443693023
0.145716444369
```
(I guess (and this is only and merely a guess) this boils down to `__repr__` vs `__str__` or something along this line) | The first calls `__repr__`, the second `__str__`
```
a = (0.1457164443693023, False)
print a
>>> (0.1457164443693023, False)
print a[0]
>>> 0.145716444369
print repr(a[0])
>>> 0.1457164443693023
print str(a[0])
>>> 0.145716444369
```
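(As a rough sketch of the difference: in Python 3 both renderings of a float agree, but the old 12-significant-digit `str()` output can be imitated with `format()`; the variable names here are illustrative.)

```
x = 0.1457164443693023

old_str_style = format(x, '.12g')  # 12 significant digits, like Python 2's str()
full_repr = repr(x)                # shortest string that round-trips

print(old_str_style)  # 0.145716444369
print(full_repr)      # 0.1457164443693023
```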
For some design reason `float.__str__()` returns fewer decimals. | Python float numbers lose accuracy out of a tuple? | [
"",
"python",
"floating-accuracy",
""
] |
I have two tables to store addresses as follows:
I want to get every address with its city name and area name (CityId and AreaId may be null).
I tried:
```
SELECT [Details]
,[AddressId]
,[CityId]
,[PlaceName] as CityName
,[AreaId]
,[PlaceName] as AreaName
FROM [MyDB].[dbo].[Address] LEFT OUTER JOIN [MyDB].[dbo].[Places]
ON [CityId] = [PlaceI]
```
but that will only give me the city names. How do I get the area name, too? | You need to join twice with the Places table. Once for the cities and once for the Areas.
```
SELECT [Details]
,[AddressId]
,[CityId]
,C.[PlaceName] as CityName
,[AreaId]
,A.[PlaceName] as AreaName
FROM [MyDB].[dbo].[Address]
LEFT OUTER JOIN [MyDB].[dbo].[Places] C
ON [CityId] = C.[PlaceI]
LEFT OUTER JOIN [MyDB].[dbo].[Places] A
ON [AreaId] = A.[PlaceI]
``` | Add another join
```
SELECT [Details]
,[AddressId]
,[CityId]
,p1.[PlaceName] as CityName
,[AreaId]
,p2.[PlaceName] as AreaName
FROM [MyDB].[dbo].[Address]
LEFT OUTER JOIN [MyDB].[dbo].[Places] p1 ON [CityId] = p1.[PlaceId]
LEFT OUTER JOIN [MyDB].[dbo].[Places] p2 ON [AreaId] = p2.[PlaceId]
``` | Two joins between two tables | [
"",
"sql",
"sql-server",
""
] |
I want to split a string using Python. I have successfully done it for one variable, but am finding it hard to do it for two.
The String :
```
Paragraph 4-2 says. i am going home$ early- Yes.
```
I need the output to be
```
Paragraph 4-2 says
i am going home
early
Yes
```
The sentence should split from `.`, `$` and `-` (But when it's between 2 numbers (4-2) it shouldn't split)
How can I do this?
```
text.split('.')
```
**UPDATE**
The new output should be like :
```
Paragraph 4-2 says.
i am going home$
early-
Yes.
``` | You can do this:
```
>>> import re
>>> st='Paragraph 4-2 says. i am going home$ early- Yes.'
>>> [m.group(1) for m in re.finditer(r'(.*?[.$\-])(?:\s+|$)',st)]
['Paragraph 4-2 says.', 'i am going home$', 'early-', 'Yes.']
```
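For reuse, the same regex can be wrapped in a small helper (a sketch; the function name is made up):

```
import re

def split_sentences(text):
    # Same pattern as above: lazily capture up to each ., $ or -,
    # then require whitespace or end-of-string so "4-2" is left intact.
    return [m.group(1) for m in re.finditer(r'(.*?[.$\-])(?:\s+|$)', text)]

parts = split_sentences('Paragraph 4-2 says. i am going home$ early- Yes.')
print(parts)  # ['Paragraph 4-2 says.', 'i am going home$', 'early-', 'Yes.']
```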
If you are not going to modify the match group at all (with strip or something) you can also just use findall with the same regex:
```
>>> re.findall(r'(.*?[.$\-])(?:\s+|$)',st)
['Paragraph 4-2 says.', 'i am going home$', 'early-', 'Yes.']
```
The regex is explained [here](http://www.regex101.com/r/vW8rK2), but in summary:
```
(.*?[.$\-]) is the capture group containing:
.*? Any character (except newline) 0 to infinite times [lazy]
[.$\-] Character class matching .$- one time
(?:\s+|$) Non-capturing Group containing:
\s+ First alternate: Whitespace [\t \r\n\f] 1 to infinite times [greedy]
| or
$ Second alternate: end of string
```
Depending on your strings, you may need to change the regex to `(.*?[.$\-])(?:[ ]+|$)` if you don't want to match `\r\n\f` with the `\s` | ```
>>> import re
>>> s = 'Paragraph 4-2 says. i am going home$ early- Yes'
>>>
>>> re.split(r'(?<!\d)\s*[.$-]\s*(?!\d)', s)
['Paragraph 4-2 says', 'i am going home', 'early', 'Yes']
```
* `\s*[.$-]\s*` matches any of `.`,`$` or `-` surrounded by 0 or more spaces (`\s*`).
* `(?<!\d)` is a negative-lookbehind to ensure that the match is not preceded by a digit.
* `(?!\d)` is a negative-lookahead to ensure that the match is not followed by a digit.
You can read more about lookarounds [here](http://www.regular-expressions.info/lookaround.html). | Splitting a String using multiple delimiters | [
"",
"python",
"regex",
""
] |
How do I modify a table with SQL Server 2012 in design view? I am not able to alter the table in design view; when I'm done with my changes, it gives me the error in the image:
 | The message error is self explanatory :)
* Check the Allow nulls checkbox.
* Fill this column with data
* Then edit the table again and uncheck the Allow nulls checkbox.
If you can think of a value then add this as `default`. If not then `allow nulls` for that column. | How can I modify a table using SQL Server 2012 in design view | [
"",
"sql",
"sql-server-2012-express",
""
] |
I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the python script from the cmd like this:
> `python ex1.py`
Where do I set the "python" environment in the window's environment variable to point to either python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line? | For Windows 7, I just rename the `python.exe` from the Python 3 folder to `python3.exe` and add the path into the environment variables. Using that, I can execute `python test_script.py` and the script runs with Python 2.7 and when I do `python3 test_script.py`, it runs the script in Python 3.
To add `Python 3` to the environment variables, follow these steps -
1. Right Click on My Computer and go to `Properties`.
2. Go to `Advanced System Settings`.
3. Click on `Environment Variables` and edit `PATH` and add the path to your Python 3 installation directory.
For example,
 | No need for "tricks". Python 3.3 comes with PyLauncher "py.exe", installs it in the path, and registers it as the ".py" extension handler. With it, a special comment at the top of a script tells the launcher which version of Python to run:
```
#!python2
print "hello"
```
Or
```
#!python3
print("hello")
```
From the command line:
```
py -3 hello.py
```
Or
```
py -2 hello.py
```
`py hello.py` by itself will choose the latest Python installed, or consult the `PY_PYTHON` environment variable, e.g. `set PY_PYTHON=3.6`.
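A hedged sketch of a small test script that reports which interpreter actually ran it (the formatting used works on both 2 and 3):

```
import sys

# Report which interpreter executed this file.
version = "{0}.{1}".format(sys.version_info[0], sys.version_info[1])
sys.stdout.write("Running under Python " + version + "\n")
```

Running it with `py -2` and then `py -3` should print two different versions.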
See [Python Launcher for Windows](https://docs.python.org/3/using/windows.html#python-launcher-for-windows) | How to switch between python 2.7 to python 3 from command line? | [
"",
"python",
"windows",
"command-line",
"cmd",
""
] |
So, to start, this is finding our top 20% of highest spenders in 2010:
```
select top (20)percent o.BillEmail,SUM(o.total) as TotalSpent,
count(o.OrderID) as TotalOrders
from dbo.tblOrder o with (nolock)
where o.DomainProjectID=13
and o.BillEmail not like ''
and o.OrderDate >= '2010-01-01'
and o.OrderDate < '2011-01-01'
group by o.BillEmail
order by TotalSpent desc
```
From this, I need to find the retention rate of those top 20% spenders over the next two years.
Meaning, **which of the top 20% in 2010 stuck around and are on top in 2011, and then in 2012 as well?** Note: I'd need a count of how many were in 2010, then in 2011, then also in 2012.
I know it'd be much easier if I could create another table or pull from an excel sheet with only the top buyers listed. However, I don't have write access to our database so I have to do it all in nested queries, or whatever y'all have to suggest. I'm still a beginner so I don't know the best methods.
Thank you! | You have an interesting question. Fundamentally, it is about migration in spending quintiles from one year to the next. I would solve this by looking at all quintiles for the three years, to see where people move.
This starts with a summary of the data by year and email. The key function is `ntile()`. To be honest, I often do the calculation myself using `row_number()` and `count()`, which is why those are in the CTE (but not used subsequently):
```
with YearSummary as (
select year(OrderDate) as yr, o.BillEmail, SUM(o.total) as TotalSpent,
count(o.OrderID) as TotalOrders,
row_number() over (partition by year(OrderDate) order by sum(o.Total) desc) as seqnum,
count(*) over (partition by year(OrderDate)) as NumInYear,
ntile(5) over (partition by year(OrderDate) order by sum(o.Total) desc) as Quintile
from dbo.tblOrder o with (nolock)
where o.DomainProjectID=13 and o.BillEmail not like ''
group by o.BillEmail, year(OrderDate)
)
select q2010, q2011, q2012,
count(*) as NumEmails,
min(BillEmail), max(BillEmail)
from (select BillEmail,
max(case when yr = 2010 then Quintile end) as q2010,
max(case when yr = 2011 then Quintile end) as q2011,
max(case when yr = 2012 then Quintile end) as q2012
from YearSummary
group by BillEmail
) ys
group by q2010, q2011, q2012
order by 1, 2, 3;
```
The final step is to take the multiple rows for each email and to combine them into counts. Do note that some emails will not have any spending in certain years, so their corresponding `Quintile` will be NULL (this should actually produce more like 180 rows - 5\*6\*6 - rather than 125 rows - 5\*5\*5).
I also include sample emails in the final results (the `min()`and `max()`), which allow you to see samples for each group.
Note: For the retention rate, calculate the ratio between (1, 1, 1) -- top tile in all years -- and the total in the top quintile in 2010. | Try this:
```
;WITH top_2010 AS
(
select top (20)percent o.BillEmail,SUM(o.total) as TotalSpent,
count(o.OrderID) as TotalOrders
from dbo.tblOrder o with (nolock)
where o.DomainProjectID=13
and o.BillEmail not like ''
and o.OrderDate >= '2010-01-01'
and o.OrderDate < '2011-01-01'
group by o.BillEmail
),
top_2011 AS
(
select top (20)percent o.BillEmail,SUM(o.total) as TotalSpent,
count(o.OrderID) as TotalOrders
from dbo.tblOrder o with (nolock)
where o.DomainProjectID=13
and o.BillEmail not like ''
and o.OrderDate >= '2011-01-01'
and o.OrderDate < '2012-01-01'
group by o.BillEmail
),
top_2012 AS
(
select top (20)percent o.BillEmail,SUM(o.total) as TotalSpent,
count(o.OrderID) as TotalOrders
from dbo.tblOrder o with (nolock)
where o.DomainProjectID=13
and o.BillEmail not like ''
and o.OrderDate >= '2012-01-01'
and o.OrderDate < '2013-01-01'
group by o.BillEmail
)
SELECT top_2010.*,
ISNULL(top_2011.TotalSpent, 0) AS [TotalSpent_2011],ISNULL(top_2011.TotalOrders, 0) AS [TotalOrders_2011] ,
ISNULL(top_2012.TotalSpent, 0) AS [TotalSpent_2012],ISNULL(top_2012.TotalOrders, 0) AS [TotalOrders_2012]
FROM top_2010
LEFT JOIN top_2011 ON top_2010.BillEmail = top_2011.BillEmail
LEFT JOIN top_2012 ON top_2010.BillEmail = top_2012.BillEmail
WHERE top_2011.BillEmail IS NOT NULL OR top_2012.BillEmail IS NOT NULL
order by top_2010.TotalSpent desc
```
Please note that I'm using `LEFT JOIN` so you can see all those that were in the top 20% in 2011 **or** 2012.
if you need those that were in 2011 **AND** 2012 you can change to `INNER JOIN` | Top 20% Spender's Retention Rate | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
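The `LEFT JOIN` versus `INNER JOIN` distinction in the answer above can be checked with a quick sketch. This uses Python's built-in `sqlite3` in place of SQL Server, with two hypothetical mini top-spender tables, so the names and values are illustrative only.

```python
import sqlite3

# Hypothetical miniature versions of the yearly top-spender CTEs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE top_2010 (email TEXT, spent INTEGER);
CREATE TABLE top_2011 (email TEXT, spent INTEGER);
INSERT INTO top_2010 VALUES ('a@x.com', 900), ('b@x.com', 800);
INSERT INTO top_2011 VALUES ('a@x.com', 950);
""")

# LEFT JOIN keeps every 2010 top spender, even without a 2011 match.
left_rows = conn.execute("""
    SELECT t10.email, t11.spent
    FROM top_2010 t10 LEFT JOIN top_2011 t11 ON t10.email = t11.email
""").fetchall()

# INNER JOIN keeps only those present in BOTH years.
inner_rows = conn.execute("""
    SELECT t10.email, t11.spent
    FROM top_2010 t10 INNER JOIN top_2011 t11 ON t10.email = t11.email
""").fetchall()

print(left_rows)   # 2 rows, 'b@x.com' paired with None
print(inner_rows)  # [('a@x.com', 950)]
```

The `NULL` (`None`) produced by the `LEFT JOIN` is exactly what the `ISNULL(..., 0)` calls in the answer clean up.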
I am trying to implement a trigger to auto-increment a field on row update. Each time an existing row in the PERSON table is updated, I want consistency\_version to be incremented by one automatically.
My table:
```
CREATE TABLE PERSON (
id INT NOT NULL AUTO_INCREMENT,
consistency_version INT NOT NULL DEFAULT 0,
    name CHAR (100) NOT NULL
);
```
and Trigger:
```
CREATE TRIGGER consistency_version_trigger BEFORE UPDATE ON PERSON
FOR EACH ROW SET NEW.VERSION = @VERSION+1
```
When I execute an update statement on a row in the table, I expect the consistency\_version to increment by one.
But I get the following error:
```
Column "consistency_version" cannot be null
```
Can anyone advise what I'm doing wrong? I think @VERSION is null. How can I set this value to the version of the current row that is being updated? | My best guess is that '@version' is returning NULL. This would cause @version + 1 to result in null, and then attempt to set the not-nullable consistency\_version column to null, resulting in an error.
Have you tried using OLD instead?
`CREATE TRIGGER consistency_version_trigger BEFORE UPDATE ON PERSON`
`FOR EACH ROW SET NEW.consistency_version = OLD.consistency_version + 1` | @VERSION is a session variable. It exists and is visible only during the session which created it. So you are basically saying `SET NEW.VERSION = NULL + 1`, which is NULL.
*Assuming the builtin autoincrement feature doesn't fit your needs for some reason:*
To create an global auto incrementing value you need to use a table.
Create a table like this:
```
CREATE TABLE `consistency_version` (
`consistency_version` INT(10) UNSIGNED NOT NULL
) ENGINE=MyISAM;
INSERT INTO `consistency_version` VALUES(1);
```
And then get the next number like this:
```
UPDATE consistency_version SET consistency_version = LAST_INSERT_ID(consistency_version +1);
SET NEW.VERSION = LAST_INSERT_ID();
```
This uses LAST\_INSERT\_ID as a temporary place to hold the next number. You can get more info about this in [the manual](http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id).
It is also important for the sequence table to be MyISAM to avoid locking it for a long time during transactions. | MySQL - Trigger error on row update | [
"",
"mysql",
"sql",
"triggers",
""
] |
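The OLD-row approach from the accepted answer can be exercised end to end with a small sketch. SQLite (via Python's `sqlite3`) stands in for MySQL here, so the trigger syntax differs slightly (a `BEGIN ... END` body with an inner `UPDATE` instead of `SET NEW. ...`); the table and data are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    consistency_version INTEGER NOT NULL DEFAULT 0,
    name TEXT NOT NULL
);
-- Fire only on name changes, so the trigger's own UPDATE cannot re-fire it.
CREATE TRIGGER bump_version AFTER UPDATE OF name ON person
BEGIN
    UPDATE person SET consistency_version = OLD.consistency_version + 1
    WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO person (name) VALUES ('alice')")
conn.execute("UPDATE person SET name = 'bob' WHERE id = 1")
conn.execute("UPDATE person SET name = 'carol' WHERE id = 1")

version = conn.execute(
    "SELECT consistency_version FROM person WHERE id = 1"
).fetchone()[0]
print(version)  # 2
```

On MySQL itself, the `BEFORE UPDATE` form with `SET NEW.consistency_version = OLD.consistency_version + 1` is the more direct equivalent.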
I'm using urllib3 against private services that have self signed certificates. Is there any way to have urllib3 ignore the certificate errors and make the request anyways?
```
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001)
c.request('GET', '/')
```
When using the following:
```
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001, cert_reqs='CERT_NONE')
c.request('GET', '/')
```
The following error is raised:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 67, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 80, in request_encode_url
return self.urlopen(method, url, **urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 415, in urlopen
body=body, headers=headers)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 267, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.3/http/client.py", line 1061, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.3/http/client.py", line 1099, in _send_request
self.endheaders(body)
File "/usr/lib/python3.3/http/client.py", line 1057, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.3/http/client.py", line 902, in _send_output
self.send(msg)
File "/usr/lib/python3.3/http/client.py", line 840, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 103, in connect
match_hostname(self.sock.getpeercert(), self.host)
File "/usr/lib/python3/dist-packages/urllib3/packages/ssl_match_hostname/__init__.py", line 32, in match_hostname
raise ValueError("empty or no certificate")
ValueError: empty or no certificate
```
Using `cURL` I'm able to get the expected response from the service
```
$ curl -k https://10.0.3.168:9001/
Please read the documentation for API endpoints
``` | Try the following code:
```
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001, cert_reqs='CERT_NONE',
assert_hostname=False)
c.request('GET', '/')
```
See [Setting assert\_hostname to False will disable SSL hostname verification](https://github.com/shazow/urllib3/pull/194) | In this question I see many answers but, IMHO, too much unnecessary information that can lead to confusion.
**Just add the `cert_reqs='CERT_NONE'` parameter**
```
import urllib3
http = urllib3.PoolManager(cert_reqs='CERT_NONE')
``` | Ignore certificate validation with urllib3 | [
"",
"python",
"python-3.x",
"urllib3",
""
] |
Hi, I have a field in the database whose **value** is **null = true**, but I need to update it periodically with integers. I am running this script in the terminal:
```
getW = get_HomeTeam_myworld.w
getL = get_HomeTeam_myworld.l
if winloss == "w":
getW = getW + 1
#getW.save()
print getW
```
but it gives the following error
```
unsupported operand type(s) for +: 'NoneType' and 'int'
```
Please suggest where I am making a mistake. | It seems like the getW value is None. Add a check:
```
if winloss == "w":
getW = getW + 1 if getW else 1
#getW.save()
print getW
``` | This is right. Can you also please tell me how I can save this updated value in the DB?
This is the whole process:
```
get_HomeTeam_myworld = myworld.objects.get(team_id=gethome_teamID)
get_HomeTeam_myworld = myworld.objects.get(team_id=getaway_teamID)
getW = get_HomeTeam_myworld.w
getL = get_HomeTeam_myworld.l
if winloss == "w":
getW = getW + 1 if getW else 1
getW.save()
print getW
```
it gives me the following error
```
'int' object has no attribute 'save'
``` | update null field with integer in django Python | [
"",
"python",
"mysql",
"django",
"null",
"integer",
""
] |
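The None-safe increment from the accepted answer can also be condensed with `or`, which treats a `NULL`-backed `None` as a zero starting point. The values below are hypothetical stand-ins for the `myworld` fields.

```python
# Hypothetical values pulled from a nullable database column.
getW = None   # e.g. a myworld row where w was never set
getL = 3

winloss = "w"
if winloss == "w":
    getW = (getW or 0) + 1   # None (and 0) count as a zero starting point
else:
    getL = (getL or 0) + 1

print(getW)  # 1
```

To persist the result, assign it back to the model instance and call `.save()` on the *instance* (e.g. `obj.w = getW; obj.save()`), not on the integer itself — calling `.save()` on the int is what raises the `'int' object has no attribute 'save'` error shown in the rejected follow-up.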
Can someone explain to me what's wrong with the following script?
I have started learning programming in Python very recently so this might be very trivial for the experienced out here but please look into it and let me know what's wrong with it. The idea is to write a script that reverses a given string. I understand there is a simpler way of doing this using `s[::-1]` but I would like to do it in my own way. Does the error have anything to do with `z` not being defined in a proper way? If so, please do let me know how to fix it. Thanks!
```
def reverse(x):
y = len(x)
for i in range(y-1):
z[i] == x[y - 1 - i]
return z
``` | First, from your recent comment I deduce that your Python program is executed as a bash script. To have it execute properly, add this line to the top:
```
#!/usr/bin/env python
```
Then there's the reverse function. The others have all pointed out that you need to use the assignment operator (`=`) and not the equality operator (`==`). But that doesn't solve the problem, because `z` is undefined. But how should we define it? It can't be a string, because strings are immutable, so you can't change single characters in a string like you're attempting to do.
The pythonic solution, if you're determined to use an algorithm exactly like this, is to let `z` be a list, and use the `join` method of `string` to concatenate the characters in the list `z` to a single string at the end. But we're not out of the woods yet either. The `range(y - 1)` is a list from 0 to `y - 2`, but you want to have `y - 1` in that list as well. So we need to use `range(y)` instead.
All together this gives us the following code:
```
def reverse(x):
y = len(x)
z = list(x) # Using the characters of x will give it the correct length
for i in range(y):
z[i] = x[y - 1 - i]
return "".join(z)
``` | You used a double "==", which returns True or False for comparison; you should use a single "=".
I highly recommend reading about the operators, because it will save you a lot of time when coding:
<http://www.tutorialspoint.com/python/python_basic_operators.htm> | What is wrong with the following Python Script? | [
"",
"python",
""
] |
I have created a table using this create command:
> CREATE TABLE test\_table(id INT PRIMARY KEY,name
> VARCHAR(50),price INT)
I want to insert into this table values that are already stored in variables:
```
bookdb=# name = 'algorithms'
bookdb-# price = 500
bookdb-# INSERT INTO test_table VALUES(1,'name',price);
```
I get the following error:
> ERROR: syntax error at or near "name"
> LINE 1: name = 'algorithms'
Can anyone point out the mistake and propose a solution for the above?
Thanks in advance
**Edit:**
```
import psycopg2
import file_content
try:
conn = psycopg2.connect(database='bookdb',user='v22')
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS book_details")
cur.execute("CREATE TABLE book_details(id INT PRIMARY KEY,name VARCHAR(50),price INT)")
cur.execute("INSERT INTO book_details VALUES(1,'name',price)")
conn.commit()
except:
print "unable to connect to db"
```
I have used the above code to insert values into the table. The variables **name and price**, containing the values to be inserted, are available in the file\_content Python file, and I have imported that file. The normal INSERT statement takes values manually, but I want my code to take values which are stored in variables. | I don't have Postgres installed here, but you can try this:
```
import psycopg2
import file_content
try:
conn = psycopg2.connect(database='bookdb',user='v22')
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS book_details")
cur.execute("CREATE TABLE book_details(id INT PRIMARY KEY,name VARCHAR(50),price INT)")
    cur.execute("INSERT INTO book_details VALUES(1, %s, %s)", (name, price))
conn.commit()
except:
print "unable to connect to db"
``` | [SQL](https://en.wikipedia.org/wiki/SQL) does not support the concept of variables.
To use variables, you must use a programming language, such as [Java](http://en.wikipedia.org/wiki/Java), [C](http://en.wikipedia.org/wiki/C_%28programming_language%29), [Xojo](http://en.wikipedia.org/wiki/Xojo). One such language is [PL/pgSQL](http://en.wikipedia.org/wiki/PL/pgSQL), which you can think of as a superset of SQL. PL/PgSQL is often [bundled](http://www.postgresql.org/docs/current/static/plpgsql.html) as a part of Postgres installers, but not always.
I suggest you read some basic tutorials on SQL.
See this similar question: [How do you use script variables in PostgreSQL?](https://stackoverflow.com/questions/36959/how-do-you-use-script-variables-in-postgresql) | How do I insert data into table? | [
"",
"python",
"database",
"postgresql",
""
] |
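The same idea works with proper query parameters, letting the driver bind the variables instead of building the SQL by hand. A sketch with the built-in `sqlite3` (which uses `?` placeholders; psycopg2 takes a parameter tuple with `%s` placeholders in the same way):

```python
import sqlite3

# Values already stored in variables, as in the question.
name = 'algorithms'
price = 500

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE book_details (id INTEGER PRIMARY KEY, name TEXT, price INTEGER)"
)
# The driver substitutes and quotes the values for us.
cur.execute("INSERT INTO book_details VALUES (?, ?, ?)", (1, name, price))
conn.commit()

row = cur.execute(
    "SELECT name, price FROM book_details WHERE id = 1"
).fetchone()
print(row)  # ('algorithms', 500)
```

Parameter binding also sidesteps quoting mistakes (and SQL injection) that manual `%`-formatting can introduce.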
I have been using selenium for automatic browser simulations and web scraping in Python, and it has worked well for me. But now I have to run it behind a proxy server, so selenium opens the window but cannot open the requested page because the proxy settings are not set on the opened browser. The current code is as follows (sample):
```
from selenium import webdriver
sel = webdriver.Firefox()
sel.get('http://www.google.com')
sel.title
sel.quit()
```
How do I change the above code to work with a proxy server now as well? | You need to set desired capabilities or a browser profile, like this:
```
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", "proxy.server.address")
profile.set_preference("network.proxy.http_port", "port_number")
profile.update_preferences()
driver = webdriver.Firefox(firefox_profile=profile)
```
Also see related threads:
* [how do i set proxy for chrome in python webdriver](https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver)
* [Selenium using Python: enter/provide http proxy password for firefox](https://stackoverflow.com/questions/8885137/selenium-using-python-enter-provide-http-proxy-password-for-firefox)
* [Running Selenium Webdriver with a proxy in Python](https://stackoverflow.com/questions/17082425/running-selenium-webdriver-with-a-proxy-in-python)
* <http://krosinski.blogspot.ru/2012/11/selenium-firefox-webdriver-and-proxies.html> | The official Selenium documentation (<http://docs.seleniumhq.org/docs/04_webdriver_advanced.jsp#using-a-proxy>) provides clear and helpful guidelines about using a proxy.
For Firefox (which is the browser of choice in your sample code) you should do the following:
```
from selenium import webdriver
from selenium.webdriver.common.proxy import *
myProxy = "host:8080"
proxy = Proxy({
'proxyType': ProxyType.MANUAL,
'httpProxy': myProxy,
'ftpProxy': myProxy,
'sslProxy': myProxy,
'noProxy': '' # set this value as desired
})
driver = webdriver.Firefox(proxy=proxy)
``` | Running selenium behind a proxy server | [
"",
"python",
"selenium",
"selenium-webdriver",
"proxy",
"web-scraping",
""
] |
Is there a pythonic way to convert a structured array to vector?
For example:
I'm trying to convert an array like:
```
[(9,), (1,), (1, 12), (9,), (8,)]
```
to a vector like:
```
[9,1,1,12,9,8]
``` | ```
In [15]: import numpy as np
In [16]: x = np.array([(9,), (1,), (1, 12), (9,), (8,)])
In [17]: np.concatenate(x)
Out[17]: array([ 9, 1, 1, 12, 9, 8])
```
Another option is `np.hstack(x)`, but for this purpose, `np.concatenate` is faster:
```
In [14]: x = [tuple(np.random.randint(10, size=np.random.randint(10))) for i in range(10**4)]
In [15]: %timeit np.hstack(x)
10 loops, best of 3: 40.5 ms per loop
In [16]: %timeit np.concatenate(x)
100 loops, best of 3: 13.6 ms per loop
``` | You don't need to use any `numpy`, you can use `sum`:
```
myList = [(9,), (1,), (1, 12), (9,), (8,)]
list(sum(myList, ()))
```
result:
```
[9, 1, 1, 12, 9, 8]
``` | Python array to 1-D Vector | [
"",
"python",
"numpy",
""
] |
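If NumPy is not available at all, `itertools.chain.from_iterable` gives a pure-Python flattening of the same structure:

```python
from itertools import chain

# Flatten a list of variable-length tuples into one flat list.
x = [(9,), (1,), (1, 12), (9,), (8,)]
flat = list(chain.from_iterable(x))
print(flat)  # [9, 1, 1, 12, 9, 8]
```

Unlike the `sum(myList, ())` trick from the rejected answer, `chain.from_iterable` is linear in the total number of elements rather than quadratic.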
The query below is returning a syntax error.
Having searched thoroughly online, I cannot see why. Any ideas?
```
delete Tracks
from tracks
left join releases
on tracks.label_id=releases.label_id
where tracks.label_id = 185
and releases.id = 4394
and tracks.position = 1
and tracks.name != 'Da Antidote';
```
The syntax error is on line 1. | If I remember correctly, Postgres doesn't allow joins in `DELETE`, but you can use the `USING` keyword instead, as [described in the documentation](http://www.postgresql.org/docs/current/static/sql-delete.html):
```
DELETE FROM Tracks
USING releases
WHERE tracks.label_id=releases.label_id
AND tracks.label_id = 185
AND releases.id = 4394
AND tracks.position = 1
AND tracks.name != 'Da Antidote';
``` | ```
delete from tracks
left join releases
on tracks.label_id=releases.label_id
where tracks.label_id = 185
and releases.id = 4394
and tracks.position = 1
and tracks.name != 'Da Antidote';
``` | SQL Deleting from a join table not working | [
"",
"sql",
"postgresql",
""
] |
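For engines without `DELETE ... USING`, the same condition can be expressed with a correlated subquery. A runnable sketch using Python's `sqlite3`, with hypothetical sample rows shaped like the tables above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE releases (id INTEGER, label_id INTEGER);
CREATE TABLE tracks (name TEXT, label_id INTEGER, position INTEGER);
INSERT INTO releases VALUES (4394, 185);
INSERT INTO tracks VALUES
    ('Intro', 185, 1), ('Da Antidote', 185, 1), ('Other', 99, 1);
""")

# EXISTS plays the role of the join against releases.
conn.execute("""
    DELETE FROM tracks
    WHERE label_id = 185 AND position = 1 AND name != 'Da Antidote'
      AND EXISTS (SELECT 1 FROM releases
                  WHERE releases.label_id = tracks.label_id
                    AND releases.id = 4394)
""")
remaining = [r[0] for r in conn.execute("SELECT name FROM tracks ORDER BY name")]
print(remaining)  # ['Da Antidote', 'Other']
```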
I would like to access the result of the following shell command,
```
youtube-dl -g "www.youtube.com/..."
```
to print its output `direct url` to a file, from within a python program. This is what I have tried:
```
import youtube-dl
fromurl="www.youtube.com/..."
geturl=youtube-dl.magiclyextracturlfromurl(fromurl)
```
Is that possible?
I tried to understand the mechanism in the source but got lost: `youtube_dl/__init__.py`, `youtube_dl/youtube_DL.py`, `info_extractors` ... | It's not difficult and [actually documented](https://github.com/rg3/youtube-dl/blob/master/README.md#embedding-youtube-dl):
```
import youtube_dl
ydl = youtube_dl.YoutubeDL({'outtmpl': '%(id)s.%(ext)s'})
with ydl:
result = ydl.extract_info(
'http://www.youtube.com/watch?v=BaW_jenozKc',
download=False # We just want to extract the info
)
if 'entries' in result:
# Can be a playlist or a list of videos
video = result['entries'][0]
else:
# Just a video
video = result
print(video)
video_url = video['url']
print(video_url)
``` | For simple code,
maybe something like this, I think:
```
import os
os.system('youtube-dl [OPTIONS] URL [URL...]')
```
The above just runs a command line inside Python.
The other way is mentioned in the documentation: [Using youtube-dl on python](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#embedding-youtube-dl)
Here is the way
```
from __future__ import unicode_literals
import youtube_dl
ydl_opts = {}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
``` | How to use youtube-dl from a python program? | [
"",
"python",
"youtube-dl",
""
] |
How efficient are iterations over slice operations in Python? And if a copy is inevitable with slices, is there an alternative?
I know that a slice operation over a list is O(k), where k is the size of the slice.
```
x[5 : 5+k] # O(k) copy operation
```
However, when iterating over a part of a list, I find that the cleanest (and most Pythonic?) way to do this (without having to resort to indices) is to do:
```
for elem in x[5 : 5+k]:
print elem
```
However my intuition is that this still results in an expensive copy of the sublist, rather than simply iterating over the existing list. | You can use `itertools.islice` to get a sliced iterator from the list:
Example:
```
>>> from itertools import islice
>>> lis = range(20)
>>> for x in islice(lis, 10, None, 1):
... print x
...
10
11
12
13
14
15
16
17
18
19
```
# Update:
As noted by @user2357112 the performance of `islice` depends on the start point of slice and the size of the iterable, normal slice is going to be fast in almost all cases and should be preferred. Here are some more timing comparisons:
For **Huge lists** `islice` is slightly faster or equal to normal slice when the slice's start point is less than half the size of list, for bigger indexes normal slice is the clear winner.
```
>>> def func(lis, n):
it = iter(lis)
for x in islice(it, n, None, 1):pass
...
>>> def func1(lis, n):
#it = iter(lis)
for x in islice(lis, n, None, 1):pass
...
>>> def func2(lis, n):
for x in lis[n:]:pass
...
>>> lis = range(10**6)
>>> n = 100
>>> %timeit func(lis, n)
10 loops, best of 3: 62.1 ms per loop
>>> %timeit func1(lis, n)
1 loops, best of 3: 60.8 ms per loop
>>> %timeit func2(lis, n)
1 loops, best of 3: 82.8 ms per loop
>>> n = 1000
>>> %timeit func(lis, n)
10 loops, best of 3: 64.4 ms per loop
>>> %timeit func1(lis, n)
1 loops, best of 3: 60.3 ms per loop
>>> %timeit func2(lis, n)
1 loops, best of 3: 85.8 ms per loop
>>> n = 10**4
>>> %timeit func(lis, n)
10 loops, best of 3: 61.4 ms per loop
>>> %timeit func1(lis, n)
10 loops, best of 3: 61 ms per loop
>>> %timeit func2(lis, n)
1 loops, best of 3: 80.8 ms per loop
>>> n = (10**6)/2
>>> %timeit func(lis, n)
10 loops, best of 3: 39.2 ms per loop
>>> %timeit func1(lis, n)
10 loops, best of 3: 39.6 ms per loop
>>> %timeit func2(lis, n)
10 loops, best of 3: 41.5 ms per loop
>>> n = (10**6)-1000
>>> %timeit func(lis, n)
100 loops, best of 3: 18.9 ms per loop
>>> %timeit func1(lis, n)
100 loops, best of 3: 18.8 ms per loop
>>> %timeit func2(lis, n)
10000 loops, best of 3: 50.9 us per loop #clear winner for large index
>>> %timeit func1(lis, n)
```
For **Small lists** normal slice is faster than `islice` for almost all cases.
```
>>> lis = range(1000)
>>> n = 100
>>> %timeit func(lis, n)
10000 loops, best of 3: 60.7 us per loop
>>> %timeit func1(lis, n)
10000 loops, best of 3: 59.6 us per loop
>>> %timeit func2(lis, n)
10000 loops, best of 3: 59.9 us per loop
>>> n = 500
>>> %timeit func(lis, n)
10000 loops, best of 3: 38.4 us per loop
>>> %timeit func1(lis, n)
10000 loops, best of 3: 33.9 us per loop
>>> %timeit func2(lis, n)
10000 loops, best of 3: 26.6 us per loop
>>> n = 900
>>> %timeit func(lis, n)
10000 loops, best of 3: 20.1 us per loop
>>> %timeit func1(lis, n)
10000 loops, best of 3: 17.2 us per loop
>>> %timeit func2(lis, n)
10000 loops, best of 3: 11.3 us per loop
```
# Conclusion:
**Go for normal slices.** | Use:
```
for elem in x[5 : 5+k]:
```
It's Pythonic! Don't change this until you've *profiled* your code and determined that this is a bottleneck -- though I doubt you will ever find this to be the main source of a bottleneck.
---
In terms of speed it will probably be your best choice:
```
In [30]: x = range(100)
In [31]: k = 90
In [32]: %timeit x[5:5+k]
1000000 loops, best of 3: 357 ns per loop
In [35]: %timeit list(IT.islice(x, 5, 5+k))
100000 loops, best of 3: 2.42 us per loop
In [36]: %timeit [x[i] for i in xrange(5, 5+k)]
100000 loops, best of 3: 5.71 us per loop
```
---
In terms of memory, it is not as bad you might think. `x[5: 5+k]` is a *shallow* copy of part of `x`. So even if the objects in `x` are large, `x[5: 5+k]` is creating a new list with k elements which reference the *same* objects as in `x`. So you only need extra memory to create a list with k references to pre-existing objects. That probably is not going to be the source of any memory problems. | Efficient iteration over slice in Python | [
"",
"python",
"performance",
"iteration",
"slice",
""
] |
I read the docs from python.org, but am still confused about this problem. In one project, I have the following script structure:
dir a:
```
math.py containing func c()
main.py containing main()
```
What should I do in main.py in order to import math under dir a rather than stdlib?
import math just does not work | ***Edit:***
Sorry, my mistake... This will never work. You can either give your top level package a name that doesn't conflict with a name in the standard library, or the main script cannot be in the package directory. So basically you can either:
Rename your module to `my_math.py` and then `main.py` can be in the same directory and you can just do:
```
from my_math import c
c()
```
Or you make a package, for example folder name `test` (or any other name that doesn't conflict with a standard library package) with files: `__init__.py` and `math.py`, and in the same level as the `test` you create a `main.py` and then the code will look like:
```
from test.math import c
c()
```
Folder structure:
```
.
|-- test
| |-- __init__.py
| `-- math.py
`-- main.py
``` | This CAN work.
```
sys.path.insert(0,'path/to/math.py')
import math
```
This will make python check at the specified directory BEFORE stdlibs. This is a risky method for the same reason. I would recommend putting math.py in its OWN directory and using that path, so nothing else is changed.
I would also recommend importing this LAST if you have other modules imported as well.
```
import stuff
import morestuff
sys.path.insert(0,'path/to/math')
import math
```
Again, using "math" is not recommended. Renaming is best, especially for popular modules like "math". | What should I do to import math.py that is not stdlib | [
"",
"python",
"python-import",
""
] |
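The rename suggestion from the accepted answer can be demonstrated in a few lines: a module called `my_math` never shadows (or gets shadowed by) the stdlib's `math`. The temporary directory and module body below are purely illustrative.

```python
import math
import os
import sys
import tempfile

# Create a throwaway directory containing my_math.py with a func c().
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "my_math.py"), "w") as f:
    f.write("def c():\n    return 'local c'\n")

sys.path.insert(0, pkg_dir)
import my_math            # resolves to the local module

result = my_math.c()
print(result)             # 'local c'
print(math.sqrt(4.0))     # 2.0 -- stdlib math is untouched
```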
I need to read from a file several integers written line by line and separated by line feed and insert them into a list.
```
1
2
3
4
5
```
Currently I was able to read it using the following code, but I need also to optimize my code:
```
import sys
fd = open(sys.argv[1], 'r')
for line in fd:
line = line.rstrip('\n')
L.append(int(line))
```
Is there another way to read from a file all the lines removing the line feed characters from a performance point of view ?
Thanks. | `int()` automatically removes the whitespace characters, so there's no need for `str.rstrip`.
```
>>> int('10\r\n')
10
>>> int('10\n')
10
>>> int('10 \n')
10
```
You can also use a list comprehension here, it is faster than `list.append`:
```
import sys
with open(sys.argv[1]) as fd:
L = [int(line) for line in fd]
```
[Why the `with` statement?](http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files):
> It is good practice to use the `with` keyword when dealing with file
> objects. This has the advantage that the file is properly closed after
> its suite finishes, even if an exception is raised on the way. | You don't actually need to strip the line because `int()` already gets rid of trailing whitespace:
```
L = []
with open('nums.txt') as myfile: # With statements are more pythonic!
for line in myfile:
L.append(int(line))
print L
```
Returns:
```
[1, 2, 3, 4, 5]
```
As a result, you can then use [`map()`](http://docs.python.org/2/library/functions.html#map):
```
with open('nums.txt') as myfile:
L = map(int, myfile)
```
Hope this helps! | How to optimize the reading of a file removing all the line feed \n characters | [
"",
"python",
"performance",
""
] |
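A quick round-trip check of the accepted approach, using a temporary file in place of `sys.argv[1]`:

```python
import tempfile

# Write the sample numbers to a temp file, one per line.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("1\n2\n3\n4\n5\n")
    path = f.name

# int() strips the trailing newline itself; no rstrip needed.
with open(path) as fd:
    L = [int(line) for line in fd]

print(L)  # [1, 2, 3, 4, 5]
```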
I've got two tables:
* tags
* tags\_news(binding)
I need to delete the rows from the `tags` table whose `id` does not appear in `tags_news`.
**Example**:
tags: 1, 2
tags\_news: 2
There is no data with id=1 in tags\_news, and I need to delete that row from tags. I don't know how. Please help me. | You can use `NOT EXISTS`:
```
DELETE tn FROM dbo.TagsNews tn
WHERE NOT EXISTS(
SELECT 1 FROM dbo.Tags t
WHERE t.ID = tn.ID
)
``` | ```
delete from tags where id not in (select id from tags_news)
``` | Special delete query | [
"",
"sql",
"sql-delete",
""
] |
This request might seem slightly ridiculous, unfortunately however, it is direly needed by my small company and because of this I will be awarding the maximum bounty for a good solution.
We have a set of legacy order information stored in a .txt file. In order to import this order information into our new custom database system, we need to, for each row, append on a value from another set.
So, in my .txt file I have :
```
Trans Date,NorthTotal,NorthSoFar,SouthTotal,SouthSoFar,IsNorthWorkingDay,IsSouthWorkingDay
2012-01-01,21,0,21,0,0,0
2012-01-02,21,0,21,0,0,0
2012-01-03,21,1,21,1,1,1
...
```
Now, I have a set of locations in a .txt file also, for which I need to add two columns - city and country. Let's say :
```
City, Country
London,England
Paris,France
```
For each row in my first text file, I need to append on a row of my second text file! So, for my end result, using my sample data above, I wish to have :
```
Trans Date,NorthTotal,NorthSoFar,SouthTotal,SouthSoFar,IsNorthWorkingDay,IsSouthWorkingDay,City,Country
2012-01-01,21,0,21,0,0,0,London,England
2012-01-02,21,0,21,0,0,0,London,England
2012-01-03,21,1,21,1,1,1,London,England
2012-01-01,21,0,21,0,0,0,Paris,France
2012-01-02,21,0,21,0,0,0,Paris,France
2012-01-03,21,1,21,1,1,1,Paris,France
...
```
At the moment my only idea for this is to import both files into an SQL database and write a complicated function to append the two together (hence my tag) - surely someone can save me and think of something that will not take all day though! Please?! Thank you very much.
Edit : I am open to solutions written in all programming languages; but would prefer something which uses DOS or some kind of console/program that can be easily reran! | If you are open to using a database and importing these files (which should not be very difficult), then you do not need a "complicated function to append the two together". All you need is a simple cross join like this ... `select t1.*, t2.* from t1, t2`
See for yourself at... <http://sqlfiddle.com/#!2/0c584/1> | Here is a solution in C#. You run it like:
```
joinfiles a.txt b.txt c.txt
```
where a.txt is the first file, b.txt the second one, and c.txt the output file that will be created. It generates the output at 100 MB/s on my machine so that is probably fast enough.
```
using System;
using System.IO;
using System.Text;
namespace JoinFiles
{
class Program
{
static void Main(string[] args)
{
if (args.Length != 3)
return;
string[] file1, file2;
try
{
using (var sr1 = new StreamReader(args[0]))
using (var sr2 = new StreamReader(args[1]))
{
file1 = sr1.ReadToEnd().Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
file2 = sr2.ReadToEnd().Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
}
using (var outstream = new StreamWriter(args[2], false, Encoding.Default, 1048576))
{
outstream.WriteLine(file1[0] + "," + file2[0]);
for (int i = 1; i < file2.Length; i++)
for (int j = 1; j < file1.Length; j++)
outstream.WriteLine(file1[j] + "," + file2[i]);
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
}
``` | Elegantly appending a set of strings (.txt file) to another set of strings (.txt also)? | [
"",
"sql",
"string",
"algorithm",
"text",
"append",
""
] |
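The cross-join answer above can be sketched without a full database server by loading the two files into in-memory `sqlite3` tables (the table and column names below are hypothetical stand-ins for the two .txt files):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (trans_date TEXT, north_total INTEGER);
CREATE TABLE locations (city TEXT, country TEXT);
INSERT INTO orders VALUES ('2012-01-01', 21), ('2012-01-02', 21);
INSERT INTO locations VALUES ('London', 'England'), ('Paris', 'France');
""")

# The implicit cross join: every order row paired with every location row.
rows = conn.execute(
    "SELECT o.trans_date, o.north_total, l.city, l.country "
    "FROM orders o, locations l"
).fetchall()
print(len(rows))  # 4 = 2 orders x 2 locations
```

Writing the result back out as CSV is then a plain `csv.writer` loop over `rows`.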
I am writing like this in Django:
```
writer.writerow(['VideoName ', 'Director ', 'Cameraman ', 'Editor ', 'Reporter ', 'Tag '])
```
It is writing in CSV like this:
```
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="report.csv"'
Video, Director, Cameraman
```
But I only want:
```
Video Director Cameraman
```
I am doing this in Django. | Set a `delimiter` when you are initializing your `csv.writer`:
```
writer = csv.writer(buffer, delimiter=" ")
``` | Specify a [delimiter](http://docs.python.org/2/library/csv#csv.Dialect.delimiter) when creating the csv.writer.
In addition to that, trim the field values. (`'videoName '` -> `'videoName'`)
```
>>> import csv
>>> import sys
>>>
>>> writer = csv.writer(sys.stdout, delimiter=' ')
>>> writer.writerow(['VideoName', 'Director', 'Cameraman', 'Editor', 'Reporter', 'Tag'])
VideoName Director Cameraman Editor Reporter Tag
``` | Don't include comma while writing to CSV in Django? | [
"",
"python",
"django",
"csv",
""
] |
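The delimiter fix can be verified without Django by writing into an `io.StringIO` buffer, which stands in for the `HttpResponse`:

```python
import csv
import io

# In Django, `buffer` would be the HttpResponse object instead.
buffer = io.StringIO()
writer = csv.writer(buffer, delimiter=" ")
writer.writerow(['VideoName', 'Director', 'Cameraman', 'Editor', 'Reporter', 'Tag'])

line = buffer.getvalue().strip()
print(line)  # VideoName Director Cameraman Editor Reporter Tag
```

Note the field values are pre-trimmed; with a space delimiter, any field that itself contains a space would get quoted by the writer.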
First time writing a class here and I need a little help.
I've been trying to write a class with a method that takes a tab-delimited csv file and outputs a list of dictionaries. Each of the keys in the dictionary is a column title in the csv.
So far, this is what my class looks like:
```
import csv
class consolidate(object):
def __init__(self, file):
self.file = file
def create_master_list(self):
with(open(self,'rU')) as f:
f_d = csv.DictReader(f, delimiter = '\t')
m_l = []
for d in f_d:
m_l.append(d)
return m_l
```
When I try to pass it a file, as follows:
```
c = consolidate()
a = c.create_master_list('Abilities.txt')
```
I get the following error:
```
TypeError: __init__() takes exactly 2 arguments (1 given)
```
I know that I want to pass a file argument to the `create_master_list` function, but I'm unsure of the right syntax for doing this.
I've tried `self.file` and `file` as arguments, and neither works.
Thanks! | ## Problem
You did not supply second argument for `__init__()`:
```
class consolidate(object):
def __init__(self, file):
self.file = file
# rest of the code
```
while you are instantiating it like this:
```
c = consolidate()
```
## Solution
This should work. Change class definition to this:
```
import csv
class consolidate(object):
def __init__(self, filename):
self.filename = filename
def create_master_list(self):
with open(self.filename, 'rU') as f:
f_d = csv.DictReader(f, delimiter='\t')
m_l = []
for d in f_d:
m_l.append(d)
return m_l
```
and then use it like this:
```
c = consolidate('Abilities.txt')
a = c.create_master_list()
```
This is one way of achieving the fix.
**Note**: I also changed the naming (`self.file` suggested it is file object, while it actually is a file name, thus `self.filename`). Also keep in mind that the path is relative to from where you execute the script. | You should pass the file as a parameter to `__init__`.
```
c = consolidate ('abilities.txt')
```
Then inside `create_master_list` you should open `self.file`.
```
with (open (self.file, 'rU') ) as f:
```
Now you can call
```
a = c.create_master_list ()
``` | Passing a file to a class | [
"",
"python",
"class",
""
] |
I have a program that opens an account file. There are several lines, but I want it to update this one line: `credits = 0`. Whenever a purchase is made, I want it to add one more to the amount. This is what the file looks like:
```
['namef', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2']
credits = 0
```
This bit of info is kept inside of a text file. I don't care if you replace it (as long as it has 1 more) or whether you just update it. Please help me out :) sorry if this question is trivial | The below code snippet should give you an idea of how to go about it. This code updates the value of the counter variable present within a file **counter\_file.txt**
```
import os
counter_file = open(r'./counter_file.txt', 'r+')
content_lines = []
for line in counter_file:
if 'counter=' in line:
line_components = line.split('=')
int_value = int(line_components[1]) + 1
line_components[1] = str(int_value)
updated_line= "=".join(line_components)
content_lines.append(updated_line)
else:
content_lines.append(line)
counter_file.seek(0)
counter_file.truncate()
counter_file.writelines(content_lines)
counter_file.close()
```
Hopefully, this sheds some light on how to go about solving your problem. | You can create a general text file replacer based on a dictionary whose keys are what to look for and whose values are what to replace them with:
In the template text file put some flags where you want variables:
```
['<namef>', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2']
credits = <credit_var>
```
Then create a mapping dictionary:
```
map_dict = {'<namef>':'New name', '<credit_var>':1}
```
Then rewrite the text file doing the replacements:
```
newfile = open('new_file.txt', 'w')
for l in open('template.txt'):
for k,v in map_dict.iteritems():
l = l.replace(k,str(v))
newfile.write(l)
newfile.close()
``` | how to update a variable in a text file | [
"",
"python",
"file",
"variables",
"python-3.x",
""
] |
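A compact variant of the accepted answer's idea: read all lines, rewrite only the `credits = N` line, and write the file back. The temporary file here stands in for the account file.

```python
import tempfile

# Create a stand-in account file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("['namef', 'namel', 'email']\ncredits = 0\n")
    path = f.name

with open(path) as f:
    lines = f.readlines()

# Bump only the credits line, leaving everything else untouched.
for i, line in enumerate(lines):
    if line.startswith("credits ="):
        count = int(line.split("=")[1])
        lines[i] = "credits = %d\n" % (count + 1)

with open(path, "w") as f:
    f.writelines(lines)

with open(path) as f:
    updated = f.read()
print(updated)
```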