**Q: Converting IntVar() value to int**

I am trying to convert an IntVar() value to int:

```python
self.var1 = IntVar()
self.scale = ttk.LabeledScale(self.frame1, from_=3, to=7, variable=self.var1).grid(row=2, sticky='w')
value = int(self.var1)
```

but got an error saying `TypeError: int() argument must be a string, a bytes-like object or a number, not 'IntVar'`.

**A:** You need to invoke the `.get` method of `IntVar`, which returns the object's value as an integer.
**Q: Get the last coordinates list from a text file**

I am reading coordinates of nodes (76 nodes). Basically I split the string of all coordinates. After splitting, each result holds the node number followed by its (x, y) coordinates. Example: `['1', '3600', '2300']`. I only want the coordinates of nodes from 61 to the end node. I try to convert the node number to an integer to compare. I don't want to delete the line `while line != "EOF"` because "EOF" shows up at the end of the text file. How can I do it?

```python
def read_coordinates(self, inputfile):
    coord = []
    iFile = open(inputfile, "r")
    for i in range(6):  # skip first 6 lines
        iFile.readline()
    line = iFile.readline().strip()
    while line != "EOF":
        values = line.split()
        while int(values[0]) > 61:
            coord.append([float(values[1]), float(values[2])])
            line = iFile.readline().strip()
    iFile.close()
    return coord
```

**A:** You should use an `if` statement inside `while line != "EOF":`, like so:

```python
while line != "EOF":
    values = line.split()
    if int(values[0]) > 61:
        coord.append([float(values[1]), float(values[2])])
    line = iFile.readline().strip()
```

An alternate solution would be to read in the whole file, then use list slicing to remove the first 6 lines and a list comprehension to keep only nodes numbered after 61:

```python
with open(inputfile, 'r') as f:
    coords = [line.split(' ') for line in f]                   # read every line
coords = coords[6:]                                            # skip first 6 lines
coords = [[x, y] for nr, x, y in coords if int(nr) > 61]       # keep only nodes after 61
```
**Q: True privateness in Python**

PEP 8 states that (emphasis mine):

> We don't use the term "private" here, since no attribute is really private in Python (without a generally unnecessary amount of work).

I guess it refers to defining the actual class in some other language and then exposing only the public members to the interpreter. Is there some other way to achieve true privateness in Python? I'm asking just out of curiosity.

**A:** No, nothing is truly private in Python. If you know the method name, you can get it. I think you might come up with a clever hack, but it would be just that - a hack. No such functionality exists in the language.
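As an illustration of the answer (this sketch is not from the original thread), even double-underscore attributes remain reachable, because Python only mangles their names rather than hiding them:

```python
class Account:
    def __init__(self):
        self.__balance = 100  # "private" by convention only

acct = Account()

# Direct access fails because of name mangling...
try:
    acct.__balance
except AttributeError:
    pass

# ...but the mangled name is plainly accessible:
print(acct._Account__balance)  # prints 100 - the attribute is not actually hidden
```

This is exactly the "unnecessary amount of work" PEP 8 alludes to: any workaround can itself be worked around.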
**Q: how to write in next row using python?**

```
1234 5678 9876 542 1231; 2333 1234 5678 579
```

I want to write this data into a csv file (myfile.csv) from a .txt file (murtuz.txt) such that after the semicolon it starts from the next row. I have tried the code mentioned below; it helps me write data to the csv file but I failed to change the line after the semicolon.

```python
import csv
from StringIO import StringIO
import sys

data = open("murtuz.txt", "r").read()
data1 = StringIO(data)
print data1
reader = csv.reader(data1, delimiter=' ')
for row in reader:
    print row
out = csv.writer(open("myfile.csv", "w"), delimiter=',')
out.writerow(row)
sys.exit()
```

Need some experts' help. Thanks in advance.

**A:** `csv.reader` can accept any iterable as input. In particular, you can feed it lines yielded from a generator. You can use that generator to massage your input data, for example to split on semicolons:

```python
import csv
import textwrap
import io

def splitter(iterable):
    for line in iterable:
        for part in line.split(';'):
            yield part.strip()

f = open("murtuz.txt", "r")
# with open("murtuz.txt", "r") as f:  # for Python 2.6 or better
csvobj = csv.reader(splitter(f), delimiter=' ')
for row in csvobj:
    print(row)
f.close()
```

yields

```
['1234', '5678', '9876', '542', '1231']
['2333', '1234', '5678', '579']
```

Once you've got the reader under control, writing it out again is easy.
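For reference, the same generator idea works unchanged on Python 3; a minimal self-contained sketch (using an in-memory `StringIO` instead of the question's file, so the file name is not needed):

```python
import csv
import io

def splitter(iterable):
    # yield one logical row per semicolon-separated chunk
    for line in iterable:
        for part in line.split(';'):
            yield part.strip()

source = io.StringIO("1234 5678 9876 542 1231; 2333 1234 5678 579")
rows = [row for row in csv.reader(splitter(source), delimiter=' ')]
print(rows)  # [['1234', '5678', '9876', '542', '1231'], ['2333', '1234', '5678', '579']]
```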
**Q: Extract rows as column from pandas data frame after melt**

I'm working with pandas and I have this table:

```
ID  1-May-2016  1-Jun-2016  20-Jul-2016  Class
1   0.2         0.52        0.1          H
2   0.525       0.20        0.01         L
...
```

and I'd like to obtain this table:

```
ID  Date        Value  Class
1   1-May-2016  0.2    H
1   1-Jun-2016  0.52   H
...
2   1-May-2016  0.525  L
...
```

I tried:

```python
pandas.melt(df, id_vars=["ID"], var_name="Class")
```

and I obtain almost what I'd like:

```
ID  Class        Value
1   1-May-2016   0.2
1   1-Jun-2016   0.52
...
1   Class        L
2   Class        H
```

except that the bottom part of the table contains the information that should be considered as an "extra" column. Is this the right process/approach? If yes, how can I "shift" the bottom part of the table to be a column that contains the class of my samples?

**A:** You need to add `Class` to `id_vars` in `melt`:

```python
print (pd.melt(df, id_vars=["ID", 'Class'], var_name="Date", value_name='Vals'))

   ID Class         Date   Vals
0   1     H   1-May-2016  0.200
1   2     L   1-May-2016  0.525
2   1     H   1-Jun-2016  0.520
3   2     L   1-Jun-2016  0.200
4   1     H  20-Jul-2016  0.100
5   2     L  20-Jul-2016  0.010
```

Then use `sort_values` if necessary:

```python
print (pd.melt(df, id_vars=["ID", 'Class'], var_name="Date", value_name='Vals')
         .sort_values(['ID', 'Class']))

   ID Class         Date   Vals
0   1     H   1-May-2016  0.200
2   1     H   1-Jun-2016  0.520
4   1     H  20-Jul-2016  0.100
1   2     L   1-May-2016  0.525
3   2     L   1-Jun-2016  0.200
5   2     L  20-Jul-2016  0.010
```

Another possible solution with `stack`:

```python
print (df.set_index(["ID", 'Class'])
         .stack()
         .reset_index(name='Vals')
         .rename(columns={'level_2':'Date'}))

   ID Class         Date   Vals
0   1     H   1-May-2016  0.200
1   1     H   1-Jun-2016  0.520
2   1     H  20-Jul-2016  0.100
3   2     L   1-May-2016  0.525
4   2     L   1-Jun-2016  0.200
5   2     L  20-Jul-2016  0.010
```
**Q: Loop that will create new Pandas.DataFrame column**

Following the scikit-learn tutorial here, if we have a Pandas.DataFrame that has a column named `colors`, how can we create a loop over all of the DataFrame's columns (or a list containing the required columns) so that all categorical variables (e.g. variable `colors` that can have values `blue`, `red`, `purple`) will be replaced by `len(colors)` number of dummy variable columns `colors#blue`, `colors#red`, `colors#purple`? Just learnt Python, so I'll write my idea in some pseudocode.

Attempt (pseudocode):

```
cols_to_process = ['colors']
# Create new columns for dummy variables
// if listings.keyname in cols_to_process:
//     unique_values = list of unique values in listings[col]
//     listings = listings.join(unique_values, axis=1)
# Populate dummy variable columns
# Remove old columns that have dummy variable columns created
```

**A:** You can use the `pandas.get_dummies` function to do that:

```python
>>> import pandas as pd
>>> pd.get_dummies(listings['color'], 'color')
```
**Q: How to "listen" to a multiprocessing queue in Python**

I will start with the code, I hope it is simple enough:

```python
import Queue
import multiprocessing

class RobotProxy(multiprocessing.Process):
    def __init__(self, commands_q):
        multiprocessing.Process.__init__(self)
        self.commands_q = commands_q

    def run(self):
        self.listen()
        print "robot started"

    def listen(self):
        print "listening"
        while True:
            print "size", self.commands_q.qsize()
            command = self.commands_q.get()
            print command
            if command is "start_experiment":
                self.start_experiment()
            elif command is "end_experiment":
                self.terminate_experiment()
                break
            else:
                raise Exception("Communication command not recognized")
        print "listen finished"

    def start_experiment(self):
        #self.vision = ds.DropletSegmentation( )
        print "start experiment"

    def terminate_experiment(self):
        print "terminate experiment"

if __name__ == "__main__":
    command_q = Queue.Queue()
    robot_proxy = RobotProxy(command_q)
    robot_proxy.start()
    #robot_proxy.listen()
    print "after start"
    print command_q.qsize()
    command_q.put("start_experiment")
    command_q.put("end_experiment")
    print command_q.qsize()
    raise SystemExit
```

So basically I launch a process, and I want this process to listen to commands put on the queue. When I execute this code, I get the following:

```
after start
0
2
listening
size 0
```

It seems that I am not sharing the queue properly, or that I am making some other error. The program gets stuck forever in that `self.commands_q.get()` when in theory the queue has 2 elements.

**A:** You need to use `multiprocessing.Queue` instead of `Queue.Queue` in order to have the Queue object shared across processes. See here: Multiprocessing Queues
**Q: Python ValueError: Substring not found**

I have been working on this code and I have tried to debug it for almost a day now, but I can't seem to find where the problem lies. I stop the debugger at line 66. When I step into or over the code I get an error message:

```
Traceback (most recent call last):
  File "/home/johan/pycharm-community-4.5.3/helpers/pydev/pydevd.py", line 2358, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/johan/pycharm-community-4.5.3/helpers/pydev/pydevd.py", line 1778, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/johan/sdp/lets_learn_python/20_a_star_algorithm.py", line 87, in <module>
    a.solve()
  File "/home/johan/sdp/lets_learn_python/20_a_star_algorithm.py", line 66, in solve
    closest_child.create_children()
  File "/home/johan/sdp/lets_learn_python/20_a_star_algorithm.py", line 48, in create_children
    child = StateString(val, self)
  File "/home/johan/sdp/lets_learn_python/20_a_star_algorithm.py", line 32, in __init__
    self.dist = self.get_dist()
  File "/home/johan/sdp/lets_learn_python/20_a_star_algorithm.py", line 40, in get_dist
    dist += abs(i - self.value.index(letter))
ValueError: substring not found
```

This is the code I have been working on. This is from a tutorial by Trevor Payne.

```python
#!usr/bin/env python
from Queue import PriorityQueue

class State(object):
    def __init__(self, value, parent, start=0, goal=0):
        self.children = []
        self.parent = parent
        self.value = value
        self.dist = 0
        if parent:
            self.path = parent.path[:]
            self.path.append(value)
            self.start = parent.start
            self.goal = parent.goal
        else:
            self.path = [value]
            self.start = start
            self.goal = goal

    def get_dist(self):
        pass

    def create_children(self):
        pass

class StateString(State):
    def __init__(self, value, parent, start=0, goal=0):
        super(StateString, self).__init__(value, parent, start, goal)
        self.dist = self.get_dist()

    def get_dist(self):
        if self.value == self.goal:
            return 0
        dist = 0
        for i in range(len(self.goal)):
            letter = self.goal[i]
            dist += abs(i - self.value.index(letter))
        return dist

    def create_children(self):
        if not self.children:
            for i in xrange(len(self.goal)-1):
                val = self.value
                val = val[:i] + val[i+1] + val[i] + val[i+2]
                child = StateString(val, self)
                self.children.append(child)

class AStarSolver:
    def __init__(self, start, goal):
        self.path = []
        self.visited_queue = []
        self.priority_queue = PriorityQueue()
        self.start = start
        self.goal = goal

    def solve(self):
        start_state = StateString(self.start, 0, self.start, self.goal)
        count = 0
        self.priority_queue.put((0, count, start_state))
        while not self.path and self.priority_queue.qsize():
            closest_child = self.priority_queue.get()[2]
            closest_child.create_children()
            self.visited_queue.append(closest_child.value)
            for child in closest_child.children:
                if child.value not in self.visited_queue:
                    count += 1
                    if not child.dist:
                        self.path = child.path
                        break
                    self.priority_queue.put(child.dist, count)
        if not self.path:
            print "Goal of {0} is not possible!".format(self.goal)
        return self.path

# ================================
# MAIN
if __name__ == "__main__":
    start1 = "acbda"
    goal1 = "dabcd"
    print "Starting..."
    a = AStarSolver(start1, goal1)
    a.solve()
    for i in xrange(len(a.path)):
        print "%d) " % i + a.path[i]
```

I really hope someone can help me figure this out.

**A:** You should handle the case when `letter` doesn't exist in `self.value` (line 41):

```python
dist += abs(i - self.value.index(letter))
```

`string.index` raises an exception when the letter doesn't exist. Better use `string.find` instead; it gives -1 if the letter is not found:

```python
dist += abs(i - self.value.find(letter))
```

If you want to give another value than -1, you should test for it before using it:

```python
f = self.value.find(letter)
if f == -1:
    # DO what you want
```
**Q: Python Socket Timing out**

I am attempting to open a socket to Google on port 80, but for the life of me I can't figure out why this is timing out. If I don't set a timeout it just hangs indefinitely. I don't think my company's firewall is blocking this request. I can navigate to google.com in the browser, so there shouldn't be any hangups here.

```python
import socket

HOST = '173.194.121.39'
PORT = 80
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.settimeout(20)
data = s.recv(1024)
print(data)
s.close()
```

**A:** This will work:

```python
import socket

HOST = 'www.google.com'
PORT = 80
IP = socket.gethostbyname(HOST)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((IP, PORT))

message = b"GET / HTTP/1.1\r\n\r\n"
s.sendall(message)

data = s.recv(1024)
print(data.decode('utf-8'))
s.close()
```

What was wrong with your code:

1. Your code was directly using the IP; I tried looking up the IP by hostname using `socket.gethostbyname()` (I don't know why :P).
2. You were expecting data to be returned without sending any. I used the `s.sendall` method to send an HTTP request (since you're using Python 3, the data must be sent as bytes). I also decoded the returned data before printing.

I hope it will work for you too.
**Q: Why do I need range(len()) for a nested if?**

This works as-is, but when I didn't have `range(len(...))` and did `for i in arr` and `for j in arr`, I got `IndexError: list index out of range` on the `if` statement. Why is this?

```python
arr = [2, 2, 2, 2, 4, 5, 2, 45, 3, 4, 4]
mostFrq = 0
mostFrqAmount = 0
for i in range(len(arr)):
    amountOfCurrent = 0
    # now look at temp element
    for j in range(len(arr)):
        if (arr[j] == arr[i]):
            amountOfCurrent = amountOfCurrent + 1
    if (amountOfCurrent > mostFrqAmount):
        mostFrqAmount = amountOfCurrent
        mostFrq = arr[i]
print(mostFrq)
print(mostFrqAmount)
```

**A:** When you're iterating over a list, you get its elements. Since you're then accessing list elements at the index of that element, it won't work when it's out of bounds (and wouldn't do what you expected anyway). Instead, just use the items you're provided by the loop:

```python
mostFrq = 0
mostFrqAmount = 0
for i in arr:
    amountOfCurrent = 0
    # now look at temp element
    for j in arr:
        if (j == i):
            amountOfCurrent += 1
    if (amountOfCurrent > mostFrqAmount):
        mostFrqAmount = amountOfCurrent
        mostFrq = i
```
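As a side note (not part of the original answer), the standard library's `collections.Counter` computes this frequency count directly and avoids the nested loop entirely; a small sketch on the same data:

```python
from collections import Counter

arr = [2, 2, 2, 2, 4, 5, 2, 45, 3, 4, 4]

# most_common(1) returns a list with the single (value, count) pair
value, count = Counter(arr).most_common(1)[0]
print(value, count)  # 2 5
```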
**Q: trying to fill a new column in a dataframe with a for loop**

Based on the value of another column I'd like to fill out a new column with a for loop. Regrettably I am not getting the results I need:

```python
profit = []
# For each row in the column,
for row in df3['Result']:
    # if value is;
    if row == 'H':
        # Append a Profit/Loss
        profit.append(df3['column value H'])
    # else, if value is,
    elif row == 'D':
        # Append a Profit/Loss
        profit.append(df3['column value D'])
    # otherwise,
    else:
        # Append a Profit/Loss
        profit.append(df3['column value A'])

df3['profit'] = profit
```

**A:** I think you need a double `numpy.where`:

```python
df3['profit'] = np.where(df3['Result'] == 'H', df3['column value H'],
                np.where(df3['Result'] == 'D', df3['column value D'],
                                               df3['column value A']))
```

Sample:

```python
df3 = pd.DataFrame({'Result':['H','D','E'],
                    'column value H':[4,5,6],
                    'column value D':[7,8,9],
                    'column value A':[1,3,5]})
print (df3)

  Result  column value A  column value D  column value H
0      H               1               7               4
1      D               3               8               5
2      E               5               9               6

df3['profit'] = np.where(df3['Result'] == 'H', df3['column value H'],
                np.where(df3['Result'] == 'D', df3['column value D'],
                                               df3['column value A']))
print (df3)

  Result  column value A  column value D  column value H  profit
0      H               1               7               4       4
1      D               3               8               5       8
2      E               5               9               6       5
```

Timings:

```
In [198]: %timeit (jez(df3))
100 loops, best of 3: 7.59 ms per loop

In [199]: %timeit (wwii(df4))
1 loop, best of 3: 1.49 s per loop

In [200]: %timeit (wwii1(df5))
1 loop, best of 3: 4.48 s per loop
```

Code for testing:

```python
df3 = pd.DataFrame({'Result':['H','D','E'],
                    'column value H':[4,5,6],
                    'column value D':[7,8,9],
                    'column value A':[1,3,5]})
print (df3)
df3 = pd.concat([df3]*10000).reset_index(drop=True)
df4 = df3.copy()
df5 = df3.copy()

def jez(df3):
    df3['profit'] = np.where(df3['Result'] == 'H', df3['column value H'],
                    np.where(df3['Result'] == 'D', df3['column value D'],
                                                   df3['column value A']))
    return (df3)

def foo(series):
    # d maps Result column values to DataFrame/Series column names
    d = {'H':'column value H', 'D':'column value D'}
    try:
        return series[d[series['Result']]]
    except KeyError as e:
        return series['column value A']

def wwii(df3):
    df3['Profit'] = df3.apply(foo, axis=1)
    return df3

def wwii1(df3):
    profit = []
    for row in df3.iterrows():
        series = row[1]
        if series.Result == 'H':
            # Append a Profit/Loss
            profit.append(series['column value H'])
        # else, if value is,
        elif series.Result == 'D':
            # Append a Profit/Loss
            profit.append(series['column value D'])
        # otherwise,
        else:
            # Append a Profit/Loss
            profit.append(series['column value A'])
    df3['profit'] = profit
    return df3

print (jez(df3))
print (wwii(df4))
print (wwii1(df5))
```
**Q: Detection of available non-standard hash algorithms using hashlib in Python**

According to the Python documentation, only a few hash algorithms are guaranteed to be supported by the hashlib module (MD5 and SHA***). How would I go about detecting if other algorithms are available (like RIPEMD-160)? Of course, I could try to use it using the RIPEMD-160 example from the documentation, but I'm not sure how it would complain. Would it throw an exception, and if yes, which exception?

**A:** Just try it in a shell:

```python
>>> h = hashlib.new('ripemd161')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/hashlib.py", line 124, in __hash_new
    return __get_builtin_constructor(name)(string)
  File "/usr/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type ripemd161
```
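Beyond catching the `ValueError`, hashlib also exposes the supported names directly (an option not shown in the original answer): `hashlib.algorithms_guaranteed` for the names available on every platform, and `hashlib.algorithms_available` for everything the linked OpenSSL build additionally provides. A short sketch:

```python
import hashlib

# Names guaranteed on every Python build
print(sorted(hashlib.algorithms_guaranteed))

# Membership test avoids the try/except dance; the result depends on the OpenSSL build
print('ripemd160' in hashlib.algorithms_available)
```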
**Q: Parsing a json gives JSONDecodeError: Unterminated string**

I have a document with newline-delimited JSONs, to which I apply some functions. Everything works up until this line, which looks exactly like this:

```
{"_id": "5f114", "type": ["Type1", "Type2"], "company": ["5e84734"], "answers": [{"title": " answer 1", "value": false}, {"title": "answer 2
", "value": true}, {"title": "This is a title.", "value": true}, {"title": "This is another title", "value": true}], "audios": [null], "text": {}, "lastUpdate": "2020-07-17T06:24:50.562Z", "title": "This is a question?", "description": "1000 €.", "image": "image.jpg", "__v": 0}
```

The entire code:

```python
import json

def unimportant_function(d):
    d.pop('audios', None)
    return {k: v for k, v in d.items() if v != {}}

def parse_ndjson(data):
    return [json.loads(l) for l in data.splitlines()]

with open('C:\\path\\the_above_example.json', 'r', encoding="utf8") as handle:
    data = handle.read()
    dicts = parse_ndjson(data)

for d in dicts:
    new_d = unimportant_function(d)
    json_string = json.dumps(new_d, ensure_ascii=False)
    print(json_string)
```

The error `JSONDecodeError: Unterminated string starting at: line 1 column 260 (char 259)` happens at `dicts = parse_ndjson(data)`. Why? I also have no idea what that symbol after "answer 2" is; it didn't appear in the data but it appeared when I copy-pasted it. What is the problem with the data?

**A:** The unprintable character embedded in the "answer 2" string is a paragraph separator, which is treated as whitespace by `.splitlines()`:

```python
>>> 'foo\u2029bar'.splitlines()
['foo', 'bar']
```

(Speculation: the ndjson file might be exploiting this to represent "this string should have a newline in it", working around the file format. If so, it should probably be using a `\n` escape instead.)

The character is, however, not treated specially if you iterate over the lines of the file normally:

```python
>>> # For demonstration purposes, I create a `StringIO`
>>> # from a hard-coded string. A file object reading
>>> # from disk will behave similarly.
>>> import io
>>> for line in io.StringIO('foo\u2029bar'):
...     print(repr(line))
...
'foo\u2029bar'
```

So, the simple fix is to make `parse_ndjson` expect a sequence of lines already: don't call `.splitlines`, and fix the calling code appropriately. You can either pass the open handle directly:

```python
def parse_ndjson(data):
    return [json.loads(l) for l in data]

with open('C:\\path\\the_above_example.json', 'r', encoding="utf8") as handle:
    dicts = parse_ndjson(handle)
```

or pass it to `list` to create a list explicitly:

```python
with open('C:\\path\\the_above_example.json', 'r', encoding="utf8") as handle:
    dicts = parse_ndjson(list(handle))
```

or create the list using the provided `.readlines()` method:

```python
with open('C:\\path\\the_above_example.json', 'r', encoding="utf8") as handle:
    dicts = parse_ndjson(handle.readlines())
```
**Q: Is it possible to get the source code of a (possibly decorated) Python function body, including inline comments?**

I am trying to figure out how to get only the source code of the body of a function. Let's say I have:

```python
def simple_function(b=5):
    a = 5
    print("here")
    return a + b
```

I would want to get (up to indentation):

```
    a = 5
    print("here")
    return a + b
```

While it's easy in the case above, I want it to be agnostic of decorators, function headers, etc. However, it should still include inline comments. So for example:

```python
@decorator1
@decorator2
def simple_function(b: int = 5):
    """
    Very sophisticated docs
    """
    a = 5  # Comment on top
    print("here")  # And in line
    return a + b
```

would result in:

```
    a = 5  # Comment on top
    print("here")  # And in line
    return a + b
```

I was not able to find any utility and have been trying to play with `inspect.getsourcelines` for a few hours now, but with no luck. Any help appreciated!

Why is it different from "How can I get the source code of a Python function?": that question asks for the whole function source code, which includes decorators, docs, `def`, and the body itself. I'm interested in only the body of the function.

**A:** I wrote a simple regex that does the trick. I tried this script with classes and without; it seemed to work fine either way. It just opens whatever file you designate in the `Main` call at the bottom, rewrites the entire document with all function/method bodies doc-stringed, and then saves it as whatever you designated as the second argument. It's not beautiful, and it could probably have more efficient regex statements, but it works. The regex finds everything from a decorator (if one) to the end of a function/method, grouping tabs and the function/method body. It then uses those groups in `finditer` to construct a docstring and place it before the entire chunk it found.

```python
import re

FUNC_BODY = re.compile(r'^((([ \t]+)?@.+\n)+)?(?P<tabs>[\t ]+)?def([^\n]+)\n(?P<body>(^([\t ]+)?([^\n]+)\n)+)', re.M)
BLANK_LINES = re.compile(r'^[ \t]+$', re.M)

class Main(object):
    def __init__(self, file_in: str, file_out: str) -> None:
        # prime in/out strings
        in_txt = ''
        out_txt = ''

        # open requested file
        with open(file_in, 'r') as f:
            in_txt = f.read()

        # remove all lines that just have space characters on them
        # this stops FUNC_BODY from finding the entire file in one shot
        in_txt = BLANK_LINES.sub('', in_txt)

        last = 0  # to keep track of where we are in the file

        # process all matches
        for m in FUNC_BODY.finditer(in_txt):
            s, e = m.span()
            # make sure we catch anything that was between our last match and this one
            out_txt = f"{out_txt}{in_txt[last:s]}"
            last = e

            tabs = m.group('tabs') if not m.group('tabs') is None else ''
            # construct the docstring and inject it before the found function/method
            out_txt = f"{out_txt}{tabs}'''\n{m.group('body')}{tabs}'''\n{m.group()}"

        # save as requested file name
        with open(file_out, 'w') as f:
            f.write(out_txt)

if __name__ == '__main__':
    Main('test.py', 'test_docd.py')
```

EDIT: Apparently I "missed the entire point", so I wrote it again a different way. Now you can get the body while the code is running, and decorators don't matter at all. I left my other answer here because it is also a solution, just not a "real time" one.

```python
import re, inspect

FUNC_BODY = re.compile('^(?P<tabs>[\t ]+)?def (?P<name>[a-zA-Z0-9_]+)([^\n]+)\n(?P<body>(^([\t ]+)?([^\n]+)\n)+)', re.M)

class Source(object):
    @staticmethod
    def investigate(focus: object, strfocus: str) -> str:
        with open(inspect.getsourcefile(focus), 'r') as f:
            for m in FUNC_BODY.finditer(f.read()):
                if m.group('name') == strfocus:
                    tabs = m.group('tabs') if not m.group('tabs') is None else ''
                    return f"{tabs}'''\n{m.group('body')}{tabs}'''"

def decorator(func):
    def inner():
        print("I'm decorated")
        func()
    return inner

@decorator
def test():
    a = 5
    b = 6
    return a + b

print(Source.investigate(test, 'test'))
```
**Q: Fast Fourier Transforms on existing dataframe is showing unexpected results**

I have a .csv file with voltage data. When I plot the data against time I can see that it is a sinusoidal wave with 60 Hz frequency. Now when I try to perform an FFT using the scipy/numpy fft modules, I get a spike near 0 frequency, while logically it should be at 60 (shown below). When I tried it with a sine wave created in Python I got proper results, but I'm not getting them with my actual data. I'm sharing my code below; please let me know if I am doing something wrong. Thanks in advance.

```python
import csv
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
from scipy.fftpack import fft
from scipy.fftpack import fftfreq

df = pd.read_csv('Va_data.csv')
print(df.head())

N = df.shape[0]
frequency = np.linspace(0.0, 100, int(N/2))
freq_data = fft(df['Va'])
y = (2/N) * np.abs(freq_data[0:np.int(N/2)])

plt.plot(frequency, y)
plt.title('Frequency Domain Signal')
plt.xlabel('Frequency in Hz')
plt.ylabel('Amplitude')
plt.show()
```

Voltage data

**A:** The data should be fine, and the FFT calculation (up to a constant) is fine too. It is about how the results are plotted. To make the x-axis values represent the frequency information in terms of hertz, you need

```python
frequency = np.arange(N) / N * sampling_rate
```

and then you can crop the half of it

```python
frequency = frequency[:N//2]
```

and give it to `plt.plot(frequency, y)`. The equation for `frequency` above comes from the fact that each DFT coefficient X(k) for k = 0, ..., N-1 has an exp(-j 2pi kn/N) in it, where k/N gives you the normalized frequency. Multiplying by the sampling rate recovers the frequency corresponding to the continuous domain.

A sample:

```python
# sample x data
xs = np.linspace(0, 4, 1_000)
# sampling rate in this case
fs = 1 / np.diff(xs)[0]
# sine of it
ys = np.sin(2 * np.pi * 60 * xs)

# taking FFT
dft = np.fft.fft(ys)

# getting x-axis values to represent freq in Hz
N = len(xs)
x_as_freq = np.arange(N) / N * fs

# now plotting it
plt.plot(x_as_freq, np.abs(dft))
plt.xlabel("Frequency (Hz)")
plt.ylabel("DFT magnitude")

# to see that peak is indeed at 60Hz
plt.xticks(np.arange(0, 250, 20))
```

which gives a plot with the peak at 60 Hz.
**Q: Django Query Performance**

I have a rather performance-related question about Django queries. Say I have a table of employees with 10,000 records. Now if I'm looking to select 5 random employees of age greater than or equal to 20, let's say some 5,500 employees are 20 or older. The Django query would be:

```python
Employee.objects.filter(age__gte=20).order_by('?')[:5]
```

and the raw counterpart of this query in MySQL would be:

```sql
SELECT * FROM `database`.`employee`
WHERE `employee`.`age` >= 20
ORDER BY RAND()
LIMIT 5;
```

From the looks of the Django query, the database first returns the 5,500 records, then Python sorts these records in random (or whatever) order and a chunk of the first five records is returned, whereas the raw query returns only five records from the database directly. My question is: is there any performance difference between the two queries? If so, which one is better and why?

**A:** I did a quick check on my existing project:

```python
queryset = BlahModel.objects.order_by('?')[:5]
print queryset.query
```

The result is:

```sql
SELECT `blah_model`.`id`, `blah_model`.`date` FROM `blah_model` ORDER BY RAND() LIMIT 5;
```

So, they are the same. I wouldn't be too surprised at the result, because the Django ORM is a direct mapping between the SQL query result and Django objects, so `order_by('?')` is equal to `ORDER BY RAND()`; even the `[:5]` statement is translated to `LIMIT` in MySQL (here's the doc and doc).
**Q: Hashing Pandas dataframe breaks**

First, import:

```python
import pandas as pd
import numpy as np
import hashlib
```

Next, consider the following:

```python
np.random.seed(42)
arr = np.random.choice([41, 43, 42], size=(3, 3))
df = pd.DataFrame(arr)
print(arr)
print(df)
print(hashlib.sha256(arr.tobytes()).hexdigest())
print(hashlib.sha256(df.values.tobytes()).hexdigest())
```

Multiple executions of this snippet yield the same hash twice all the time: `ddfee4572d380bef86d3ebe3cb7bfa7c68b7744f55f67f4e1ca5f6872c2c9ba1`. However, if we consider the following:

```python
np.random.seed(42)
arr = np.random.choice(['foo', 'bar', 42], size=(3, 3))
df = pd.DataFrame(arr)
print(arr)
print(df)
print(hashlib.sha256(arr.tobytes()).hexdigest())
print(hashlib.sha256(df.values.tobytes()).hexdigest())
```

Note that there are strings in the data now. The hash of `arr` is fixed (`52db9328682317c44370b8186a5c6bae75f2a94c9d0d5b24d61f602857acd3de`) for different evaluations, but the one of the `pandas.DataFrame` changes each time. Is there any way around it, Pythonic or not?

Edit: Related links: Hashable DataFrames

**A:** A pandas DataFrame or Series can be hashed using the `pandas.util.hash_pandas_object` function, starting in version 0.20.1.
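A small sketch of that function (assuming pandas >= 0.20.1 is installed): it returns one deterministic uint64 per row, independent of the `id()`s of the Python string objects, which is what made the `df.values.tobytes()` approach unstable. The per-row hashes can then be reduced to a single digest:

```python
import hashlib
import pandas as pd

df = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})

# One deterministic uint64 per row, covering values and (optionally) the index
row_hashes = pd.util.hash_pandas_object(df, index=True)

# Reduce to a single stable digest for the whole frame
digest = hashlib.sha256(row_hashes.values.tobytes()).hexdigest()
print(digest)
```

Two identically-valued frames built independently produce the same digest, which is exactly the property the question's string example lacked.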
**Q: How to wait for RxPy parallel threads to complete**

Based on this excellent SO answer I can get multiple tasks working in parallel in RxPy; my problem is how do you wait for them all to complete? I know that using threading I can do `.join()`, but there doesn't seem to be any such option with Rx schedulers. `.to_blocking()` doesn't help either; the MainThread completes before all notifications have fired and the complete handler has been called. Here's an example:

```python
from __future__ import print_function
import os, sys
import time
import random
from rx import Observable
from rx.core import Scheduler
from threading import current_thread

def printthread(val):
    print("{}, thread: {}".format(val, current_thread().name))

def intense_calculation(value):
    printthread("calc {}".format(value))
    time.sleep(random.randint(5, 20) * .1)
    return value

if __name__ == "__main__":
    Observable.range(1, 3) \
        .select_many(lambda i: Observable.start(lambda: intense_calculation(i), scheduler=Scheduler.timeout)) \
        .observe_on(Scheduler.event_loop) \
        .subscribe(
            on_next=lambda x: printthread("on_next: {}".format(x)),
            on_completed=lambda: printthread("on_completed"),
            on_error=lambda err: printthread("on_error: {}".format(err)))

    printthread("\nAll done")
    # time.sleep(2)
```

Expected output:

```
calc 1, thread: Thread-1
calc 2, thread: Thread-2
calc 3, thread: Thread-3
on_next: 2, thread: Thread-4
on_next: 3, thread: Thread-4
on_next: 1, thread: Thread-4
on_completed, thread: Thread-4
All done, thread: MainThread
```

Actual output:

```
calc 1, thread: Thread-1
calc 2, thread: Thread-2
calc 3, thread: Thread-3
All done, thread: MainThread
```

Actual output if I uncomment the sleep call:

```
calc 1, thread: Thread-1
calc 2, thread: Thread-2
calc 3, thread: Thread-3
All done, thread: MainThread
on_next: 2, thread: Thread-4
on_next: 3, thread: Thread-4
on_next: 1, thread: Thread-4
on_completed, thread: Thread-4
```

**A:** Posting the complete solution here:

```python
from __future__ import print_function
import os, sys
import time
import random
from rx import Observable
from rx.core import Scheduler
from threading import current_thread
from rx.concurrency import ThreadPoolScheduler

def printthread(val):
    print("{}, thread: {}".format(val, current_thread().name))

def intense_calculation(value):
    printthread("calc {}".format(value))
    time.sleep(random.randint(5, 20) * .1)
    return value

if __name__ == "__main__":
    scheduler = ThreadPoolScheduler(4)

    Observable.range(1, 3) \
        .select_many(lambda i: Observable.start(lambda: intense_calculation(i), scheduler=scheduler)) \
        .observe_on(Scheduler.event_loop) \
        .subscribe(
            on_next=lambda x: printthread("on_next: {}".format(x)),
            on_completed=lambda: printthread("on_completed"),
            on_error=lambda err: printthread("on_error: {}".format(err)))

    printthread("\nAll done")
    scheduler.executor.shutdown()
    # time.sleep(2)
```
**Q: Omit one column in Google Chart**

As the title says, I want to display a Google Chart omitting one column of my data table. My table has the following structure:

```
['2016-01-05 12:45:05', 1.187, 20.375, 45.375],
['2016-01-05 13:00:04', 1.687, 21.437, 43.937],
['2016-01-05 13:15:04', 2.062, 22.062, 43.25],
```

There are four columns, but I only want to display the first, third and fourth column in my chart. My Google Chart snippet looks like this:

```python
def print_graph_script(table):
    # google chart snippet
    chart_code = """
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load("visualization", "1", {packages:["corechart"]});
    google.setOnLoadCallback(drawChart);
    function drawChart() {
        var data = google.visualization.arrayToDataTable([
            ['Time', 'Temp1', 'Temp2', 'Temp3'],
            %s
        ]);
        var options = {
            animation: {"startup": true, duration: 1500, easing: 'out'},
            title: 'Temperature',
            showAxisLines: true,
            hAxis: {
                title: '',
                slantedText: true,
                slantedTextAngle: 90,
                textStyle: {fontSize: '12'}
            },
            vAxis: {
                title: '',
                textStyle: {fontSize: '12'}
            },
            series: {
                0: { color: '#333333' },
                1: { color: '#75baff' },
                2: { color: '#eba314' },
            }
        };
        var chart = new google.visualization.AreaChart(document.getElementById('chart_div'));
        chart.draw(data, options);
    }
</script>"""
    print chart_code % (table)
```

Any ideas? Thanks in advance.

**A:** Why not just filter the table, like

```python
table_filtered = [[row[0], row[2], row[3]] for row in table]
```

and pass `table_filtered` to the print?
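A quick sketch of that filtering applied to the sample rows from the question, to show what ends up in the chart data:

```python
table = [
    ['2016-01-05 12:45:05', 1.187, 20.375, 45.375],
    ['2016-01-05 13:00:04', 1.687, 21.437, 43.937],
    ['2016-01-05 13:15:04', 2.062, 22.062, 43.25],
]

# keep the first, third and fourth columns; drop the second
table_filtered = [[row[0], row[2], row[3]] for row in table]
print(table_filtered[0])  # ['2016-01-05 12:45:05', 20.375, 45.375]
```

The chart's header row and `series` options would then need to be reduced to three columns as well.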
Python script won't let me exit after executing an exe in os.system 

def x():
    os.system('x.exe')
    sys.exit()

This is what I used in my script, but when I run the program it never gets up to sys.exit. | os.system() runs the command through the shell and blocks until it finishes, so sys.exit() is only reached once x.exe exits. sys.exit() itself is not the problem; to confirm that, try a minimal test script:

import sys
sys.exit()
print "sys.exit is not Ok!"
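If the intent is for the script to keep going (and eventually reach sys.exit()) while the external program is still running, the blocking os.system() call can be swapped for subprocess.Popen, which returns immediately. A minimal sketch; "x.exe" from the question is replaced here by a portable stand-in command purely for illustration:

```python
import subprocess
import sys

# Stand-in for os.system('x.exe'): Popen returns right away instead of blocking
proc = subprocess.Popen([sys.executable, "-c", "print('pretend this is x.exe')"])

# Code here runs while the child process is still alive; os.system() would
# not get here until the command had finished.
proc.wait()             # only call this if you do want to wait before exiting
print(proc.returncode)  # 0
```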
Python Django POST issues I am facing a problem with posting data in Django.I have defined one URL in urls.py: url(r'^lares_conf_123kmk_$', 'lares.call_sta.my_func', name='home'),My function is my_func is defined as:def my_func(request): u = request.POST.get("parsed_news", "") p2 = re.compile("(VP+)\\s[(+](\\w+)\\s(\\w+)[)+]") my_sent = u.split("##")[:-1] zeta = [] for sent in my_sent: m = p2.findall(sent) if(len(m)>=0): zeta = zeta + m return HttpResponse(str(zeta), content_type="application/x-javascript") Now I am calling the above mentioned url like following: import requests payload = {"parsed_news":"Hello World"} response = requests.post("http://127.0.0.1:8080/lares_conf_123kmk_", data=payload) print response.status_codeBut I am unable to POST any data. And the status is constantly showing 403.Kindly help. | That looks to me like you're falling foul of Django's Cross Site Request Forgery protection.You can test that theory by marking the view as exempt, using csrf_exempt. This is probably not a good idea for production (you probably want CSRF protection turned on), but should allow you to identify the problem. |
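A sketch of what marking the view exempt looks like, to confirm the diagnosis (this assumes the Django project from the question; csrf_exempt is the standard decorator in django.views.decorators.csrf, and the fragment is not runnable on its own):

```python
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def my_func(request):
    # existing view body unchanged
    ...
```

If the 403 disappears, the permanent fix is to fetch the CSRF token first and send it back with the POST, rather than leaving the view exempt.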
Change the font style of one item in a wx python ListCtrl I am having issues changing the font of a single item in a wx ListCtrl. I have 1 row and 3 columns in my ListCtrl. The code below should change the font of the item located at row = 0, col = 0 to bold, but instead it changes the font style of ALL the items in row 0. In summary, I only want the first item in the first row to be bold, not the entire row. Is it even possible to change one item's font without changing the entire row? Thank you

import wx

########################################################################
class MyForm(wx.Frame):

    #----------------------------------------------------------------------
    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, "List Control Tutorial")

        # Add a panel so it looks correct on all platforms
        panel = wx.Panel(self, wx.ID_ANY)
        self.index = 0

        self.list_ctrl = wx.ListCtrl(panel, size=(-1,100),
                                     style=wx.LC_REPORT | wx.BORDER_SUNKEN)
        self.list_ctrl.InsertColumn(0, 'Subject')
        self.list_ctrl.InsertColumn(1, 'Due')
        self.list_ctrl.InsertColumn(2, 'Location', width=125)

        btn = wx.Button(panel, label="Add Line")
        btn.Bind(wx.EVT_BUTTON, self.add_line)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.list_ctrl, 0, wx.ALL|wx.EXPAND, 5)
        sizer.Add(btn, 0, wx.ALL|wx.CENTER, 5)
        panel.SetSizer(sizer)

        line = "Line %s" % self.index
        self.list_ctrl.InsertStringItem(self.index, line)
        self.list_ctrl.SetStringItem(self.index, 1, "01/19/2010")
        self.list_ctrl.SetStringItem(self.index, 2, "USA")
        self.index += 1

        item = self.list_ctrl.GetItem(0,0)
        print "itemText", item.GetText()
        # Get its font, change it, and put it back:
        font = item.GetFont()
        font.SetWeight(wx.FONTWEIGHT_BOLD)
        item.SetFont(font)
        self.list_ctrl.SetItem(item)

    #----------------------------------------------------------------------
    def add_line(self, event):
        line = "Line %s" % self.index
        self.list_ctrl.InsertStringItem(self.index, line)
        self.list_ctrl.SetStringItem(self.index, 1, "01/19/2010")
        self.list_ctrl.SetStringItem(self.index, 2, "USA")
        self.index += 1

#----------------------------------------------------------------------
# Run the program
if __name__ == "__main__":
    app = wx.App(False)
    frame = MyForm()
    frame.Show()
    app.MainLoop() | With wx.ListCtrl you can't control styling at the sub-item level: if you change the font, it changes for the entire row. Therefore this is not possible. Here's the ticket number: http://trac.wxwidgets.org/ticket/3030
Convert google directions into a shapefile line Is it possible to take the json returned from the Google directions API and convert that information into a shapefile line that is the same as the polyline for the route?I would like to plot a trip I took on a map I am making in QGIS. | I finally figured out how to do this using Fiona and Shapely and a function I found on GitHub and modified to suit:import jsonimport fionaimport pandas as pdfrom shapely.geometry import LineString, mappingdef decode_polyline(polyline_str): '''Pass a Google Maps encoded polyline string; returns list of lat/lon pairs''' index, lat, lng = 0, 0, 0 coordinates = [] changes = {'latitude': 0, 'longitude': 0} # Coordinates have variable length when encoded, so just keep # track of whether we've hit the end of the string. In each # while loop iteration, a single coordinate is decoded. while index < len(polyline_str): # Gather lat/lon changes, store them in a dictionary to apply them later for unit in ['latitude', 'longitude']: shift, result = 0, 0 while True: byte = ord(polyline_str[index]) - 63 index+=1 result |= (byte & 0x1f) << shift shift += 5 if not byte >= 0x20: break if (result & 1): changes[unit] = ~(result >> 1) else: changes[unit] = (result >> 1) lat += changes['latitude'] lng += changes['longitude'] coordinates.append((lng / 100000.0, lat / 100000.0)) return coordinatesdef get_linestring(trip_name): with open(trip_name + '.json', 'r') as data_file: data = json.load(data_file, encoding='ISO-8859-1') the_points = [] for step in data['routes'][0]['legs'][0]['steps']: the_points += decode_polyline(step['polyline']['points']) return LineString(the_points)if __name__ == '__main__': trip_names = ['trip1', 'trip2', 'trip3'] driver = 'ESRI Shapefile' crs = {'no_defs': True, 'ellps': 'WGS84', 'datum': 'WGS84', 'proj': 'longlat'} schema = {'geometry': 'LineString', 'properties': {'route': 'str'}} with fiona.open('all_trips.shp', 'w', driver=driver, crs=crs, schema=schema) as layer: for 
trip_name in trip_names: layer.write({'geometry': mapping(get_linestring(trip_name)), 'properties': {'route': trip_name} })The code assumes that the json files contain the json response from the Google maps API and are located in the same folder as the code. The polylines for a given trip are decoded and then converted into a LineString using Shapely, and each LineString is then saved into a shapefile using Fiona. |
pandas: grouping by key to cluster messy strings I have a table that looks like this:

company_id,company_name
1,Amazon
1,Amazon Ltd
2,Google
1,Amazon
2,Gogle
3,Facebook Ltd
3,Facebook LTD
1,AMAZON
1,AMAZON LTD
2,GOOGLE
3,Facebook
3,Face book

So I have a unique identifier for each company, but their textual representation differs. What I strive for is to eliminate these inconsistencies and have something along the lines of:

company_id,company_name
1,Amazon
1,Amazon
2,Google
1,Amazon
2,Google
3,Facebook
3,Facebook
1,Amazon
1,Amazon
2,Google
3,Facebook
3,Facebook

I'm not dead set on the selection criterion - it can be the most common value within said group, it can be a random one. But what I need is something efficient, because my table has grown to contain millions of rows. My solution was to create a hash map of unique id -> name combos and then replace based on it. Something along the lines of:

dd = df.drop_duplicates().set_index('company_id').to_dict()['company_name']
df.company_name = df.company_id
df.company_name = df.company_name.replace(dd)

This works fine on smaller sets, but it gets fairly slow and memory-inefficient due to the large hash map it creates. I've also tried a groupby based on company_id, replacing all the company_names within each group by a random value, but I couldn't get it to modify the underlying dataframe (without .locing it, which is doubly inefficient). One last option that springs to mind is to create a separate frame with unique values (df.drop_duplicates('company_id')) and merge this with the original frame based on company_id, but it doesn't sound terribly efficient either.
| I've tested your solution on a fairly large DataFrame using map and it looks pretty efficient:prng = np.random.RandomState(0)df = pd.DataFrame({'company_id': prng.randint(10**6, size=10**7), 'company_name': prng.rand(10**7).astype('str')})# It has 10m unique identifiers each having 10 entries on average# ranges from 1 to 28.df.head() company_id company_name0 985772 0.40971761684427431 305711 0.5066595030510522 435829 0.450496217979638463 117952 0.217568253142201744 963395 0.07977409062048224Now you can create a mapper between the company_id and the first occurrence of the company_name for that id:%%timeitmapper = df.drop_duplicates(subset='company_id').set_index('company_id')['company_name']df['company_id'].map(mapper)1 loop, best of 3: 1.86 s per loopAs Nickil Maveli mentioned, transform is also a possibility although the performance for this particular dataset I created is not as good as map:%timeit df.groupby('company_id')['company_name'].transform('first')1 loop, best of 3: 2.33 s per loop.loc looks pretty inefficient:%timeit df.groupby('company_id').first().loc[df.company_id]1 loop, best of 3: 26.4 s per loopIn general, you might be better off using categoricals for this type of data. See another discussion here. |
How to not match substrings with regex String to match:{abc}Strings to not match:$${abc{abc}{abc}}$$How do I satisfy this requirement with regex?The context is trying to match {abc} elements for replacement with Python, but I don't want them mixed up with MathJax equations $${abc{abc}{abc}}$$ in a HTML file.I understand [^$]{.+} will work somewhat for strings such as "$${}$$", but not others with nested brackets (with content inside nested brackets) such as "$${{abc}}$$". Which shouldn't be matched.My HTML file looks like this:abc abc {element 1} abc abc {element 2} abc{element_abc} abc abc$${abc{abc}{abc}}$$ {element_3} abc abc $${mathjax{}{mathjax}}$$abc $${mathjax{}{}{}{}{{{mathjax}}}}$$ abc abcExpected resultsMatch:{element 1}{element 2}{element_abc}Don't match:$${abc{abc}{abc}}$$$${mathjax{}{mathjax}}$$$${mathjax{}{}{}{}{{{mathjax}}}}$$The search doesn't need to scan recursively for intermixed elements:{$${}$$} can match (not possible in my actual text, so a match can be made if necessary)A line of a {abc} and a $${abc}$$ such as {abc} abc $${abc}$$ may be possibleEach example is over one line only. E.g. {element 1} not {element\n 1}So each search can be an anchored search, with consideration to {abc} abc $${abc}$$Using regex 2021.11.10 via pip | If you simply need to match the expressions on individual lines, all you need is to add line anchors.^\{[^{}]+\}$If your input is a single string with multiple lines in it, you'll need to add the re.MULTILINE flag to say that ^ and $ should match at internal newlines, too.>>> import re>>> re.findall(r'^\{[^{}]+\}$', '''... {foo}... $${bar{baz}}... {quux}... ick... ''', re.MULTILINE)['{foo}', '{quux}']This is portable back to the standard Python re module, too. |
running python web application on aws I'm trying to use amazon web services with some python scripts I have created. I have used PHP in the past, and now I want to write a web application using python but without a web framework such as Django (I found many tutorials for that). While using PHP, it was enough to place my files in a dedicated directory (such as /var/www) and the server would process them. How is the same thing achieved with Python scripts? What are the Django python scripts doing? Thanks | It varies slightly based on the framework that you're using, but in general you need to point your web server at a WSGI file. For doing this with Django, see: https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/ and http://lucumr.pocoo.org/2007/5/21/getting-started-with-wsgi/ for an overview of WSGI.
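The "WSGI file" those links describe ultimately contains nothing more than a callable with a fixed signature. A minimal, framework-free sketch using the stdlib's wsgiref (names here are illustrative, not tied to any particular deployment):

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """The WSGI callable a server (Apache+mod_wsgi, gunicorn, ...) invokes per request."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from " + environ["PATH_INFO"].encode()]

# Exercise it without running a real server:
environ = {}
setup_testing_defaults(environ)  # fills in PATH_INFO='/', REQUEST_METHOD='GET', ...
body = b"".join(application(environ, lambda status, headers: None))
print(body)  # b'Hello from /'
```

For local testing, `wsgiref.simple_server.make_server('', 8000, application).serve_forever()` serves the same callable; production setups point Apache/mod_wsgi or gunicorn at it instead.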
Implementing Logistic Regression with Scipy: Why does this Scipy optimization return all zeros? I am trying to implement a one versus many logistic regression as in Andrew Ng's machine learning class, He uses an octave function called fmincg in his implementation. I have tried to use several functions in the scipy.optimize.minimize, but I keep getting all zeros in the classifier output, no matter what I put in.In the last many hours, I've checkout out a ton of resources, but the most helpful have been this stack overflow post, and this blog post. Is there any obvious or not so obvious place where my implementation has gone astray?import scipy.optimize as opdef sigmoid(z): """takes matrix and returns result of passing through sigmoid function""" return 1.0 / (1.0 + np.exp(-z))def lrCostFunction(theta, X, y, lam=0): """ evaluates logistic regression cost function: theta: coefficients. (n x 1 array) X: data matrix (m x n array) y: ground truth matrix (m x 1 array) lam: regularization constant """ m = len(y) theta = theta.reshape((-1,1)) theta = np.nan_to_num(theta) hypothesis = sigmoid(np.dot(X, theta)) term1 = np.dot(-y.T,np.log(hypothesis)) term2 = np.dot(1-y.T,np.log(1-hypothesis)) J = (1/m) * term1 - term2 J = J + (lam/(2*m))*np.sum(theta[1:]**2) return Jdef Gradient(theta, X, y, lam=0): m = len(y) theta = theta.reshape((-1,1)) hypothesis = sigmoid(np.dot(X, theta)) residuals = hypothesis - y reg = theta reg[0,:] = 0 reg[1:,:] = (lam/m)*theta[1:] grad = (1.0/m)*np.dot(X.T,residuals) grad = grad + reg return grad.flatten()def trainOneVersusAll(X, y, labels, lam=0): """ trains one vs all logistic regression. inputs: - X and Y should ndarrays with a row for each item in the training set - labels is a list of labels to generate probabilities for. 
- lam is a regularization constant outputs: - "all_theta", shape = (len(labels), n + 1) """ y = y.reshape((len(y), 1)) m, n = np.shape(X) X = np.hstack((np.ones((m, 1)), X)) all_theta = np.zeros((len(labels), n + 1)) for i,c in enumerate(labels): initial_theta = np.zeros(n+1) result, _, _ = op.fmin_tnc(func=lrCostFunction, fprime=Gradient, x0=initial_theta, args=(X, y==c, lam)) print result all_theta[i,:] = result return all_thetadef predictOneVsAll(all_theta, X): passa = np.array([[ 5., 5., 6.],[ 6., 0., 8.],[ 1., 1., 1.], [ 6., 1., 9.]])k = np.array([1,1,0,0])# a = np.array([[1,0,1],[0,1,0]])# k = np.array([0,1])solution = np.linalg.lstsq(a,k)print 'x', solution[0]print 'resid', solution[1]thetas = trainOneVersusAll(a, k, np.unique(k)) | The problem lies in your Gradient function. In numpy assignment is not copying objects, so your linereg = thetamakes reg a reference to theta, so each time you compute gradient you actually modify your current solution. It should bereg = theta.copy()I would also suggest starting from random weightsinitial_theta = np.random.randn(n+1)Now solution is no longer zeros (although I did not check each formula, so there still might be mathematical error). It is also worth noting, that for linearly separable problems, logistic regression without regularization is ill-posed (its objective is unbounded), so I suggest testing with lam>0. |
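The reg = theta pitfall described above is ordinary Python name binding, not NumPy-specific; a tiny sketch with plain lists shows the aliasing and the fix:

```python
theta = [1.0, 2.0, 3.0]

reg = theta        # no copy: reg and theta are two names for one object
reg[0] = 0.0
print(theta)       # [0.0, 2.0, 3.0] -- the "original" changed too

theta = [1.0, 2.0, 3.0]
reg = theta[:]     # a real copy (ndarray equivalent: theta.copy())
reg[0] = 0.0
print(theta)       # [1.0, 2.0, 3.0] -- untouched
```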
How to create masked array with a vector that separates sections of a 2d array? Let's say I have a standard 2d numpy array, let's call it my2darray with values. In this array there are two major sections. Let's say for each column, there is a specific row which separates "scenario1" and "scenario2". How can i create 2 masked arrays that represent the top section of my2darray and the bottom of my2darray. For example, i am interested in calculating the mean of the top half and the mean of the second half. One idea is to have a mask that is of the same shape as my2darray but that seems like a waste of memory. Is there a better idea? Let's say I have a vector, in which the length is equal to the number of rows in my2darray (in this case 6), i.e. I havemyvector=np.array([9, 15, 5,7,11,11])I am using python 2.6 with numpy 1.5.0 | Using NumPy's broadcasted comparison, we can create such a 2D mask in a vectorized manner. Rest of the work is all about sum-reduction along the first axis for which we can take help from np.einsum. Thus, we would have an implementation like so -N = my2darray.shape[0]mask = myvector <= np.arange(N)[:,None]uout = np.true_divide(np.einsum('ij,ij->j',my2darray,~mask),myvector)lout = np.true_divide(np.einsum('ij,ij->j',my2darray,mask),N-myvector)Sample run to verify results -In [184]: N = my2darray.shape[0] ...: mask = myvector <= np.arange(N)[:,None] ...: uout = np.true_divide(np.einsum('ij,ij->j',my2darray,~mask),myvector) ...: lout = np.true_divide(np.einsum('ij,ij->j',my2darray,mask),N-myvector) ...: In [185]: uoutOut[185]: array([ 6. , 4.6, 4. , 0. ])In [186]: [my2darray[:item,i].mean() for i,item in enumerate(myvector)]Out[186]: [6.0, 4.5999999999999996, 4.0, 0.0] # Loopy version resultsIn [187]: loutOut[187]: array([ 5.2 , 4. , 2.66666667, 2. 
])

In [188]: [my2darray[item:,i].mean() for i,item in enumerate(myvector)]
Out[188]: [5.2000000000000002, 4.0, 2.6666666666666665, 2.0] # Loopy version

Another potentially faster way would be to calculate the summations for the upper part once, store them, and subtract them from the sum along the first axis over the entire 2D input array; the difference gives the lower-part totals for the average. Thus, after we store N and calculate mask, we would have -

usums = np.einsum('ij,ij->j',my2darray,~mask)
uout = np.true_divide(usums,myvector)
lout = np.true_divide(my2darray.sum(0) - usums,N-myvector)
Sending email with optional attachment in django I am working on a form to send email with an optional attachment. When I try to send the email without attaching a file, I get this error:

Key 'file' not found in MultiValueDict: {}

Any idea what I am doing wrong? I would like to send it directly to the email address without uploading the file to our server.

forms.py

class jopcion(forms.Form):
    subject = forms.CharField(max_length=100)
    thecontent = forms.CharField(widget=forms.Textarea)
    file = forms.FileField(widget=forms.FileInput(attrs={'name': 'file'}), required=False)

views.py

def novo(request, template_name='mailme.html'):
    if request.method == "POST":
        formulario = jopcion(request.POST or None, request.FILES or None)
        if formulario.is_valid():
            subject = request.POST['subject']
            message = request.POST['thecontent']
            attach = request.FILES['file']
            destination = 'testing@gmail.com'
            html_content = (subject, message, attach)
            msg = EmailMultiAlternatives('Customer email address', html_content, 'from@server.com', [destination])
            msg.attach(attach.name, attach.read(), attach.content_type)
            msg.attach_alternative(html_content, 'text/html')  # set the content as html
            msg.send()  # send the email
            return render_to_response('done.html', context_instance=RequestContext(request))
    else:
        formulario = jopcion()
    ctx = {'form': formulario, 'text_dc': file}
    return render_to_response(template_name, ctx, context_instance=RequestContext(request)) | If you don't send a file, request.FILES will be an empty dictionary-like object. Documentation. Based on this, you need to check whether the key is present in that dict. For example:

if 'file' in request.FILES:
    attach = request.FILES['file']
# or
attach = request.FILES.get('file')
Using Elixir, erlport with Python 2.7.9, receiving an arity error I am trying to use Python with Elixir and I wrote the following functional code (you can find the repo I'm building here: https://github.com/arthurcolle/elixir_with_erlport)

defmodule Snake do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      # Define workers and child supervisors to be supervised
      # worker(Snake.Worker, [arg1, arg2, arg3]),
    ]

    opts = [strategy: :one_for_one, name: Snake.Supervisor]
    Supervisor.start_link(children, opts)
  end

  def py do
    {:ok, pp} = :python.start()
    :python.call(pp, :__builtin__, :print, ["hey there"])
  end
end

I can run iex -S mix run, then type in Snake.py, and I will get this output:

"hey there"
:undefined

Okay, great. Then I try to make it print out the current version of Python by swapping out the two lines above with:

{:ok, pp} = :python.start()
:python.call(pp, :sys, :version, [])

But when I run it, it gives me this arity error:

** (FunctionClauseError) no function clause matching in :erlport.call/5
    src/erlport.erl:87: :erlport.call(#PID<0.108.0>, :sys, 'version.__str__', [], [])

which doesn't make any sense to me, because my call is only an :erlport.call/4, with a single list at the end (not 2 as it is saying). | 

{:ok, pp} = :python.start_link()
:python.call(pp, :sys, String.to_atom("version.__str__"), [])
How to use list comprehensions for for-loop with additional operations I want to simplify this construction with list comprehensions:words = {}counter = 0for sentence in text: for word in sentence: if word not in words: words[word] = counter counter += 1If there was something like post-increment, it could be written like:words = {word: counter++ for sentence in text for word in sentence if word not in words}How should I do it in pythonic way?For example:text =[['aaa', 'bbb', 'ccc'],['bbb', 'ddd'],['aaa', 'ccc', 'eee']]Desired result:words = {'aaa': 1, 'bbb': 2, 'ccc': 3, 'ddd': 4, 'eee': 5}Order does not matter.UPD:I found an interesting solution:words = {}counter = (x for x in range(10**6))[words.update({word: counter.next()}) for sentence in text for word in sentence if word not in words]update method allows to check if word in dictionary already.Maybe I should use len(words) instead of counter.next(), but I thought that counter will be faster (O(1) vs. O(dict_size)). | There are many ways to do this. This one is without using any external modules, one liner:s = "a a a b b a a b a b a b"d = [[(out, out.update([(v, out.get(v, 0) + 1)])) for v in s.split()] for out in [{}]][0][0][0]print(d)Prints:{'a': 7, 'b': 5} |
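Note that the one-liner above produces occurrence counts, while the question's desired result numbers each distinct word starting from 1. A sketch of that mapping using enumerate — dict.fromkeys acts as an order-preserving set on Python 3.7+:

```python
from itertools import chain

text = [['aaa', 'bbb', 'ccc'], ['bbb', 'ddd'], ['aaa', 'ccc', 'eee']]

# first-seen order of every word across all sentences, duplicates dropped
unique_words = dict.fromkeys(chain.from_iterable(text))
words = {word: i for i, word in enumerate(unique_words, 1)}
print(words)  # {'aaa': 1, 'bbb': 2, 'ccc': 3, 'ddd': 4, 'eee': 5}
```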
Get Host name using IP address -Python I am trying to display all the connected machine names using ip address, I could get the IP address by checking the connections = socket.socket(socket.AF_INET, socket.SOCK_STREAM)s.connect((addr,80))I have tried using s.getsockname,socket.gethostname and s.getpeernamethese all are returning similiar resultsWhat should I do if I need to display the names? For example 192.168.1.1 - 192.168.1.1192.168.1.50 - 192.168.1.50192.168.1.113 - 192.168.1.113192.168.1.114 - 192.168.1.114192.168.1.139 - 192.168.1.139I need to display this like 192.168.1.1 - tom123192.168.1.50 - allec192.168.1.113 - john-pc192.168.1.114 - bob192.168.1.139 - annyI have tried with socket.gethostbyaddr("196.168.1.114") -- it is giving me an exception sayingprint socket.gethostbyaddr("196.168.1.114") socket.herror: [Errno 1] Unknown host | I do not know if this will help, but socket.getfqdn(IP_ADDRESS) returns the hostname. |
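socket.gethostbyaddr raises socket.herror when no reverse-DNS (PTR) record exists for the address — and note the question queries 196.168.1.114 while the machines listed are on 192.168.1.x, which by itself would explain the failure. A sketch that falls back to the bare IP when the lookup fails (the helper name is made up for illustration):

```python
import socket

def name_for_ip(ip):
    """Reverse-DNS name for ip, or the ip itself if no PTR record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return ip

print(name_for_ip("127.0.0.1"))  # typically 'localhost'
```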
Adding numpy 3D array across one dimension I have a numpy array with following shape:(365L, 280L, 300L)I want to sum up the array across the first dimension (365), so that I get 365 values as result.I can do np.sum(), but how to specify which axis?--EDIT:The answer should have shape: (365,) | NumPy version >= 1.7np.sum allows the use of a tuple of integer as axis argument to calculate the sum along multiple axis at once:import numpy as nparr = ... # somearraynp.sum(arr, axis=(1, 2)) # along axis 1 and 2 (remember the first axis has index 0)np.sum(arr, axis=(2, 1)) # the order doesn't matterOr directly use the sum method of the array:arr.sum(axis=(1, 2))The latter only works if arr is already a numpy array. np.sum works even if your arr is a python-list.NumPy version < 1.7:The option to use a tuple as axis argument wasn't implemented yet but you could always nest multiple np.sum or .sum calls:np.sum(np.sum(arr, axis=1), axis=1) # Nested sumsarr.sum(axis=2).sum(axis=1) # identical but more "sequential" than "nested" |
How to reset numbered sections in Sphinx? I have several documents that are independant from each others: index.rstfoo.rstbar.rstconf.pyMakefileI would like to access foo.rst from index.rst, but I would like the two subdocuments to start their numbering at 1.In index.rst I have:.. toctree:: :maxdepth: 2 :numbered: foo barBut, bar will take the number 2. and with this bar.rst I will get 2.1 Tomatoes. =====Title=====Tomatoes========Cucumbers=========and I would like this rendering: 1. Tomatoes2. CucumbersHow is that possible? | You cannot have it both ways. See Sphinx documentation for Section numbering under the toctree directive for the explanation:Section numberingIf you want to have section numbers even in HTML output, give the toplevel toctree a numbered option. For example:.. toctree:: :numbered: foo barNumbering then starts at the heading of foo. Sub-toctrees are automatically numbered (don’t give the numbered flag to those). |
Can sklearn Random Forest classifier adjust sample size by tree, to handle class imbalance? Perhaps this is too long-winded. Simple question about sklearn's random forest: For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?More details are below:In the R implementation of random forest, called randomForest, there's an option sampsize(). This allows you to balance the sample used to train each tree based on the outcome. For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize(500, 500). This means that each tree will be trained on a random sample (with replacement) from the training set with 500 true and 500 false observations. In these situations, I've found models perform much better predicting true outcomes when using a 50% cut-off, yielding much higher kappas. It doesn't seem like there is an option for this in the sklearn implementation. Is there any way to mimic this functionality in sklearn? Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach? | In version 0.16-dev, you can now use class_weight="auto" to have something close to what you want to do. This will still use all samples, but it will reweight them so that classes become balanced. |
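class_weight reweights samples rather than resampling them the way sampsize does; the "balanced" heuristic (the successor of "auto" in later sklearn versions) weights each class inversely to its frequency. A plain-Python sketch of that formula, using a 90%-false class mix as in the question:

```python
from collections import Counter

y = [False] * 90 + [True] * 10          # 90% false, 10% true

counts = Counter(y)
n_samples, n_classes = len(y), len(counts)
# weight_c = n_samples / (n_classes * count_c)
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(weights)  # False -> ~0.556, True -> 5.0
```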
mimetools.Message() to python 3 email.message.Message I am trying to port some python 2.x code to python 3. The line I'm struggling with is:

from mimetools import Message
...
headers = Message(StringIO(data.split('\r\n', 1)[1]))

I have figured out that mimetools is no longer present in python 3 and that the replacement is the email module. I tried to replace it like this:

headers = email.message_from_file(io.StringIO(data.split('\r\n', 1)[1]))

but with that I get this error:

headers = email.message_from_file(io.StringIO(data.split('\r\n', 1)[1]))
TypeError: Type str doesn't support the buffer API

I am searching for a hint to do this porting from mimetools to email correctly. The original code is not from me. It can be found here: https://gist.github.com/jkp/3136208 | Alex's own solution from his comment: data is bytes, so it has to be decoded before the header block can be split and parsed as a string:

import email

rxString = data.decode("utf-8").split('\r\n', 1)[1]
headers = email.message_from_string(rxString)
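For reference, email.message_from_string yields a dict-like headers object much as mimetools.Message did; a self-contained sketch with a made-up request string:

```python
import email

raw = "GET / HTTP/1.1\r\nHost: example.com\r\nUser-Agent: test-client\r\n\r\n"

# drop the request line, parse the remaining header block
headers = email.message_from_string(raw.split("\r\n", 1)[1])
print(headers["Host"])        # example.com
print(headers["User-Agent"])  # test-client
```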
get some substring from readline() in python with regular expression I use tcpdump to sniff my network packet and I want to get some info out of the stored file. My file have 2 separated lines but they repeated many times.23:30:43.170344 IP (tos 0x0, ttl 64, id 55731, offset 0, flags [DF], proto TCP (6), length 443)192.168.98.138.49341 > 201.20.49.239.80: Flags [P.], seq 562034569:562034972, ack 364925832, win 5840, length 403I want to get timestamp(23:30:43.170344) and id(id 55731) and offset(23:30:43.170344) from first line(all line like this on my file). and store in a different list.And get 2 separated ip (192.168.98.138.49341 and 201.20.49.239.80) and seq (seq 562034569:562034972) and ack(ack 364925832) from the second (all line like this on my file) line and store in a different list.If I can do this with a regular expression it would be best. | For the first portion, to get timestamp, id and offset.I am sure this is a crude regex. >>> import re>>> l = '23:30:43.170344 IP (tos 0x0, ttl 64, id 55731, offset 0, flags [DF], proto TCP (6), length 443)'>>> k = re.compile(r'^([0-9:]+\.[0-9]+) IP \(.* id ([0-9]+), offset ([0-9]+).*\)')>>> x = k.match(l)>>> x.groups()('23:30:43.170344', '55731', '0')>>> x.groups()[0]'23:30:43.170344'>>> x.groups()[1]'55731'>>> x.groups()[2]'0'>>> For the second part:>>> l = '192.168.98.138.49341 > 201.20.49.239.80: Flags [P.], seq 562034569:562034972, ack 364925832, win 5840, length 403'>>> k = re.compile(r'^([0-9.]+) > ([0-9.]+): .* seq ([0-9:]+), ack ([0-9]+).*')>>> x = k.match(l)>>> for y in x.groups(): print y... 192.168.98.138.49341201.20.49.239.80562034569:562034972364925832For a read up on re module:http://www.doughellmann.com/PyMOTW/re/ |
Dynamically instantiate Player class in a loop I am making a simple Python game. I have a text file with the following on each line:player name, player IP, player health, player itemsI have a loop which goes through each line in the file and get the variables for each player (each line in the text file is a player).I have a class called Player, I need one instance of this for each player.I wish to have a list which contains all the instances of Player. | Sven has a good answer but you can even do away with the first line and just doconfig = [line.split(',') for line in open("config")]Or as you may want to actually instantiate the players:config = [Player(line.split(',')) for line in open("config")]If you're going to be doing a lot more csv configs for your game, look into the csv module. |
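Putting the pieces together: the Player class below is hypothetical (the question doesn't show its signature), and the file contents are inlined via io.StringIO so the sketch is self-contained; with a real file you would pass open('config') to csv.reader instead:

```python
import csv
import io

class Player:
    # Hypothetical signature matching the file's name, IP, health, items layout
    def __init__(self, name, ip, health, items):
        self.name = name
        self.ip = ip
        self.health = int(health)
        self.items = items.split(";") if items else []

config_text = "alice,10.0.0.1,100,sword;shield\nbob,10.0.0.2,80,bow\n"

# one Player instance per line of the config
players = [Player(*row) for row in csv.reader(io.StringIO(config_text))]
print([p.name for p in players])  # ['alice', 'bob']
```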
Getting Error while writing a Large dataframe of 60K rows into csv in Pandas I am trying to build a prediction model. I am trying to do that in 2 parts:

Preprocessing the data in a python file (.ipynb) and saving this preprocessed data into a csv file
Calling this preprocessed file from step 1 in the prediction model (.ipynb) file

Preprocessing file

#saving preprocessed dataframe to csv
train.to_csv('C:/Users/Documents/Tesfile_Preprocessed.csv')

Error

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\sparse\array.py in __getitem__(self, key)
    417             return self._get_val_at(key)
    418         elif isinstance(key, tuple):
--> 419             data_slice = self.values[key]
    420         else:
    421             if isinstance(key, SparseArray):

IndexError: too many indices for array

Prediction model file

X_test = pd.read_csv('C:/Users/Documents/Tesfile_Preprocessed.csv')
predicted_dtree = fit_decision.predict(X_test)

How can I solve this issue? | A better approach is to use to_pickle:

train.to_pickle('C:/Users/Documents/Tesfile_Preprocessed.pickle')

"Too many indices" means you've given too many index values: two indices are used as if the data were a 2D array, but numpy complains because the data is not 2D (it's either 1D or None). This is not a pandas problem as such; you are trying to access a non-existing index on the underlying array. I recommend checking the array dimensions before accessing it:

self.values.shape
# or
len(self.values)

You can also try changing the sep during the export:

train.to_csv('C:/Users/Documents/Tesfile_Preprocessed.csv', sep='\t')
Drawing multiple charts using for loop in bokeh I need to draw a couple of horizontal bar charts. I applied the following for loop to draw them, but I get an errorchart_cols = 'respondent_age respondent_gender respondent_edu respondent_occupation Religion Caste_cat CM_choice Likely_winner'.split()for f in chart_cols: count = df[f].value_counts() p = figure(plot_height=400, plot_width=400, title='Chart',toolbar_location=None) p.title.align = "right" p.xaxis.axis_label = 'Number of respondents' p.yaxis.axis_label = 'Something' p.hbar(y=sorted(df[f].unique()), height=0.7, left=0, right=count, color=Category20, alpha=0.7) show(p) print('Done')Error# TODO (bev) implement thisValueError: expected an element of either String, Dict(Enum('expr', 'field', 'value', 'transform'), Either(String, Instance(Transform), Instance(Expression), Color)) or Color, got {3: ['#1f77b4', '#aec7e8', '#ff7f0e'], 4: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78'], 5: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c'], 6: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a'], 7: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728'], 8: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896'], 9: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd'], 10: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5'], 11: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b'], 12: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94'], 13: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2'], 14: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', 
'#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2'], 15: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f'], 16: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f', '#c7c7c7'], 17: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f', '#c7c7c7', '#bcbd22'], 18: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f', '#c7c7c7', '#bcbd22', '#dbdb8d'], 19: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f', '#c7c7c7', '#bcbd22', '#dbdb8d', '#17becf'], 20: ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c', '#98df8a', '#d62728', '#ff9896', '#9467bd', '#c5b0d5', '#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f', '#c7c7c7', '#bcbd22', '#dbdb8d', '#17becf', '#9edae5']}How do I draw multiple charts using for loop? The examples given in the documentation shows the same done for 2 plots using row function. | The following code works now.for f in chart_cols: count = df[f].value_counts()p = figure(plot_height=400, plot_width=400, title='Chart',toolbar_location=None)p.title.align = "right"p.xaxis.axis_label = 'Number of respondents'p.yaxis.axis_label = str(f)p.hbar(y=sorted(df[f].unique()), height=0.7, left=0, right=count, color=Category20[len(df[f].unique())], alpha = 0.7)show(p) |
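The ValueError above reveals the cause: Category20 is a dict of palettes keyed by size (3 through 20), not a single color list, which is why the fix indexes it with the number of categories. A toy, bokeh-free sketch of that lookup, with a hypothetical clamp for category counts outside the dict's range:

```python
# Toy stand-in for bokeh's Category20: a dict mapping palette size -> color list
palettes = {n: [f"#color{i}" for i in range(n)] for n in range(3, 21)}

def pick_palette(palettes, n_categories):
    # Clamp to the sizes the dict actually provides (3..20 for Category20)
    size = max(min(palettes), min(n_categories, max(palettes)))
    return palettes[size][:n_categories]

colors = pick_palette(palettes, 5)
print(len(colors))  # 5
```

In the real code this corresponds to `color=Category20[len(df[f].unique())]`, which only works while the number of unique values stays between 3 and 20.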
How can I get rows that present outliers for the column? First I needed to create a function that returns True when the z-score is lower than -3 or larger than 3, and False otherwise. Then apply that function to the dataframe. But now I want to show the rows that present outliers for the column stand_Gross.SqFt, subset by passing the outliers series. How do I do that? Everything I tried is for numeric values and this is a string function (true/false).def zscore (x): if (x > 3): return 'True' elif (x < -3): return 'True' else : return 'False'applying the function to dataframe:housing['stand_Gross.SqFt'].apply(zscore) | You do not want to use "apply" here, as it slows your code way down. Note also that a z-score outlier sits in either tail, so test the absolute value rather than only the positive side. Start withhousing['is_outlier'] = housing['stand_Gross.SqFt'].abs() > 3# print outliersprint(housing[housing['is_outlier']])You can of course simply skip the first step:outliers = housing[housing['stand_Gross.SqFt'].abs() > 3]print(outliers)
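A minimal runnable sketch of the boolean-mask filter (using `.abs()` so both tails count as outliers; the data below is made up):

```python
import pandas as pd

# Made-up z-scores for illustration
housing = pd.DataFrame({"stand_Gross.SqFt": [0.5, 3.2, -3.5, 1.0]})

# Boolean mask: True where |z| > 3, covering both tails
outliers = housing[housing["stand_Gross.SqFt"].abs() > 3]
print(len(outliers))  # 2
```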
Numpy: applying a function to axis 0 of a 2D matrix I have the function test_outlier that returns True if an (x, y) coordinate is greater than some threshold value away from a line segment that connects two points, otherwise False:import numpy as npdef test_outlier(point1: np.ndarray, point2: np.ndarray, point3: np.ndarray, threshold: float) -> bool: line_connecting_point1_point2 = np.linalg.norm(point1 - point2) distance_to_line_from_point3 = (np.cross(point2-point1, point3-point1, axis=0) / np.linalg.norm(point2-point1)).astype(float)point1 = np.array([[2],[3]])print(point1.shape)point1(2, 1)array([[2], [3]])point2 = np.array([[5],[7]])print(point2.shape)point2(2, 1)array([[5], [7]])point3 = np.array([[7],[5]])print(point3.shape)point3(2, 1)array([[7], [5]])test_outlier(point1, point2, point3, 5)FalseI'd like to pass the test_outlier() function to another function named count_outliers(), which counts the number of outliers if a variable number of (x, y) coordinates is passed in via a (2, N) sized matrix:def count_outliers(point1: np.ndarray, point2: np.ndarray, coordinates: np.ndarray, threshold: float) -> int: num_outliers = 0 if np.apply_along_axis(test_outlier(point1, point2, coordinates, threshold), 0, coordinates): num_outliers += 1 return num_outliersI'm attempting to use np.apply_along_axis() to apply test_outlier() along axis 0.Let coordinates be defined as:coordinates = np.array([[7, 3, 9, 30],[5, 17, 10, 500]])print(coordinates.shape)coordinates (2, 4)array([[ 7, 3, 9, 30], [ 5, 17, 10, 500]])Calling count_outliers(point1, point2, coordinates, 4) results in this error:ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()In the case above, the test_outlier function called from within count_outliers would be applied to the following coordinate pairs:(7, 5)(3, 17)(9, 10)(30, 500)How do I do this?Thanks! 
| This is the answer - the ValueError comes from using a whole array in a boolean context, so call test_outlier once per column instead of passing the full matrix:def count_outliers(point1: np.ndarray, point2: np.ndarray, coordinates: np.ndarray, threshold: float) -> int: num_outliers = 0 for elem in range(coordinates.shape[1]): if test_outlier(point1, point2, coordinates[:, elem, None], threshold): num_outliers += 1 return num_outliers
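If the outlier criterion really is point-to-line distance above a threshold (the question's test_outlier is cut off, so this is an assumption), the per-column loop can also be vectorized over all coordinates at once, with the 2D cross product written out by hand:

```python
import numpy as np

def count_outliers_vec(point1, point2, coordinates, threshold):
    v = (point2 - point1).ravel()   # direction of the line segment, shape (2,)
    w = coordinates - point1        # (2, N): vector from point1 to each coordinate
    # z-component of the 2D cross product == signed distance * |v|
    dist = (v[0] * w[1] - v[1] * w[0]) / np.linalg.norm(v)
    return int((np.abs(dist) > threshold).sum())

point1 = np.array([[2], [3]])
point2 = np.array([[5], [7]])
coords = np.array([[7, 3, 9, 30], [5, 17, 10, 500]])
print(count_outliers_vec(point1, point2, coords, 5))  # 2
```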
Extract month level precision from a date time column and make all day to 1 in pandas I have a time series, month level data as shown below.df: date0 1997-01-01 00:00:001 1997-02-02 00:00:002 1997-03-03 00:00:003 1997-04-02 00:00:004 1997-05-02 00:00:005 1997-06-01 00:00:006 1997-07-01 00:00:007 1997-08-31 00:00:008 1997-09-30 00:00:009 1997-10-31 00:00:0010 1997-11-30 00:00:0011 1997-12-31 00:00:0012 1998-01-01 00:00:0013 1998-02-28 00:00:0014 1998-03-31 00:00:00 from the above data I would like to extract month level precision and make all the day to 01Expected output: date0 1997-01-011 1997-02-012 1997-03-013 1997-04-014 1997-05-015 1997-06-016 1997-07-017 1997-08-018 1997-09-019 1997-10-0110 1997-11-0111 1997-12-0112 1998-01-0113 1998-02-0114 1998-03-01I tried below code:df['date'] = pd.to_datetime(df['date'], format='%Y-%m')But it did not replace the dates as shown below1997-02-02 to 1997-02-011997-04-02 to 1997-04-011997-12-31 to 1997-12-011998-02-28 to 1998-02-01 | Use:df['date'] = pd.to_datetime(df['date']) + pd.offsets.DateOffset(day=1)print (df) date0 1997-01-011 1997-02-012 1997-03-013 1997-04-014 1997-05-015 1997-06-016 1997-07-017 1997-08-018 1997-09-019 1997-10-0110 1997-11-0111 1997-12-0112 1998-01-0113 1998-02-0114 1998-03-01Or:d = pd.to_datetime(df['date'])df['date'] = d - pd.to_timedelta(d.dt.day - 1, 'd')print (df) date0 1997-01-011 1997-02-012 1997-03-013 1997-04-014 1997-05-015 1997-06-016 1997-07-017 1997-08-018 1997-09-019 1997-10-0110 1997-11-0111 1997-12-0112 1998-01-0113 1998-02-0114 1998-03-01 |
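Another idiomatic route for the same truncation is to go through Period: converting to month precision and back normalizes every date to the first of its month:

```python
import pandas as pd

s = pd.to_datetime(pd.Series(["19970202", "19971231", "19980228"]),
                   format="%Y%m%d")
# Round down to month precision, then back to a timestamp (day becomes 1)
first_of_month = s.dt.to_period("M").dt.to_timestamp()
print(first_of_month.dt.day.tolist())  # [1, 1, 1]
```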
Create a function to calculate median cost across different years I have a sample dataset which contains id and costs in diff years as the one below:Id2015-042015-052015-062015-072016-042016-052016-062016-072017-042017-052017-062017-072018-042018-052018-062018-071058500585005830057800575005770057800578005780057900584005900059500595005900058500111046001046001057001061001063001073001080001076001078001083001092001096001093001087001090001107001210490010670010790010750010610010520010570010640010670010710010720010710010750010830010920011050013505004960048900484004810048000477004750047400476004780047800476004760048100484001449800499005030050800511005120051200514005160051900524005260052300518005110050900How can I create a function in Python to find the median cost of each year belonging to their respective id? I want the function to be dynamic in terms of the start and end year so that if new data comes for different years, the code will calculate the changes accordingly. For example, if new data comes for 2019, the end date would automatically be considered as 2019 instead of 2018 and calculate its median respectively.With the current data sample given above, the result should look something like one below:Id20152016201720181058400577505815059250111051501074501087501091501210710010590010710010875013492504785047700478501450100512005215051450 | First we split the column names on - and get only the year. Then we groupby over axis=1 based on these years and take the median:df = df.set_index("Id")df = df.groupby(df.columns.str.split("-").str[0], axis=1).median().reset_index()# or get first 4 characters# df = df.groupby(df.columns.str[:4], axis=1).median().reset_index() Id 2015 2016 2017 20180 10 58400 57750 58150 592501 11 105150 107450 108750 1091502 12 107100 105900 107100 1087503 13 49250 47850 47700 478504 14 50100 51200 52150 51450 |
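On newer pandas, `groupby(..., axis=1)` is deprecated; the same column-wise median can be computed by transposing, grouping the transposed rows by their year prefix, and transposing back. A sketch on a small made-up frame:

```python
import pandas as pd

df = pd.DataFrame({"Id": [10, 11],
                   "2015-04": [1, 5], "2015-05": [3, 7],
                   "2016-04": [2, 4], "2016-05": [4, 6]}).set_index("Id")

# Group the transposed rows by the first four characters (the year);
# new year columns in the data are picked up automatically
medians = df.T.groupby(lambda col: col[:4]).median().T
print(medians)
```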
Multiple foreign key lookups I have the following models in my appAccountclass Account(CommonModel): # Accounts received from Client client = models.ForeignKey('Client', on_delete=models.RESTRICT) reference = models.CharField(db_index=True, max_length=50) def __str__(self): return f"{self.client} {self.reference}"Personclass Person(CommonModel): title = models.CharField(max_length=100,choices=choi.person_title()) name = models.CharField(db_index=True, max_length=100) birth_date = models.DateField() def __str__(self): return f"{self.title} {self.name}"AccountPersonclass AccountPerson(CommonModel): # Account -> Person link account = models.ForeignKey("core.Account", on_delete=models.RESTRICT, related_name="accountperson_account") person = models.ForeignKey("core.Person", on_delete=models.RESTRICT, related_name="accountperson_person") contact_type = models.CharField(max_length=50, choices=choi.contact_type()) def __str__(self): return f"{self.account} - {self.person} ({self.contact_type})"The AccountPerson model holds relationships between accounts and people (one person can have multiple accounts). I'm trying to return a query set containing a list of Accounts, and the Person they're linked to (if any). 
My background is SQL, so I'm thinking of a query that would hit Account -> AccountPerson --> Person, but I'm stuck.I've tried prefetch_related() but I'm only returning details in the Account table - I'm unsure of how to access Person from there and put those fields into my HTML file.Viewdef account_list(request): data = Account.objects.all().prefetch_related('accountperson_account') return render(request, 'core/account_list.html', {'data': data})account_list.htmlCode condensed for readability...{% for i in data %}<tr><td>{{i.client}}</td><td>{{i.reference}}</td>{% endfor %}...I'm currently in a position where my page loads, and I see the entries in my Account model, but that's it.UpdateI changed my view to thisdef account_list(request): data = AccountPerson.objects.all().select_related('account').select_related('person') return render(request, 'core/account_list.html', {'data': data})And I can now access fields in Account and Person in my HTML like so{% for i in data %}<tr><td>{{i.account.client}}</td><td>{{i.account.reference}}</td><td>{{i.contact_type}}</td><td>{{i.person.name}}</td>{% endfor %}I just want to check that this is the right way (or one of them)? | I'd change the datamodel slightly to be more Django-y. Django has the concept of ManyToMany fields which is what you're trying to accomplish. (https://docs.djangoproject.com/en/3.1/ref/models/fields/#django.db.models.ManyToManyField)You would define the Person model as you did and change the Account model to have a ManyToMany field (you could also switch it around, that won't matter).You can also defined the intermediate model like you intended. 
Use the through argument on the ManyToMany for this.You can use Related Manager to handle all lookups both ways: account.person and person.account (the 'account' part is set by the related_name).class Account(CommonModel): # Accounts received from Client client = models.ForeignKey('Client', on_delete=models.RESTRICT) reference = models.CharField(db_index=True, max_length=50) person = models.ManyToManyField(Person, through=AccountPerson, related_name='account') |
Convert DataFrame column date from 2/3/2007 format to 20070203 with python I have a dataframe with 'Date' and 'Value', where the Date is in format m/d/yyyy. I need to convert to yyyymmdd. df2= df[["Date", "Transaction"]]I know datetime can do this for me, but I can't get it to accept my format. example data files:6/15/2006,-4.27,6/16/2006,-2.27,6/19/2006,-6.35, | You first need to convert to datetime using pd.to_datetime, then you can format it as you wish using strftime:>>> df Date Transaction0 6/15/2006 -4.271 6/16/2006 -2.272 6/19/2006 -6.35df['Date'] = pd.to_datetime(df['Date'],format='%m/%d/%Y').dt.strftime('%Y%m%d')>>> df Date Transaction0 20060615 -4.271 20060616 -2.272 20060619 -6.35
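If some rows might fail to parse, `errors="coerce"` keeps the conversion from raising and marks bad rows as NaT instead (a small sketch with a hypothetical bad value added to the sample data):

```python
import pandas as pd

s = pd.Series(["6/15/2006", "6/16/2006", "not-a-date"])
dates = pd.to_datetime(s, format="%m/%d/%Y", errors="coerce")
out = dates.dt.strftime("%Y%m%d")  # NaT rows become NaN here
print(out.tolist())
```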
Seaborn lineplot unexpected behaviour in the range of xticks I am starting my studies on pandas and seaborn. I'm testing the lineplot, but the plot's x-axis does not show the range I expected for this attribute (num_of_elements). I expected that each value of this attribute shows up on the x-axis. Can someone explain what I'm missing on this plot? Thanks.This is the code I'm using:import pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsfrom matplotlib import tickerdf_scability = pd.DataFrame()df_scability['num_of_elements'] = [13,28,43,58,73,88,93,108,123,138]df_scability['time_minutes'] = [2,3,5,7,20,30,40,50,60,90]df_scability['dataset'] = ['Top 10 users','Top 10 users','Top 10 users','Top 10 users','Top 10 users','Top 10 users', 'Top 10 users','Top 10 users','Top 10 users','Top 10 users']dpi = 600fig = plt.figure(figsize=(3, 2),dpi=dpi)ax = sns.lineplot(x = "num_of_elements", y = "time_minutes", hue='dataset', err_style='bars', data = df_scability)ax.legend(loc='upper left', fontsize=4)sns.despine(offset=0, trim=True, left=True)ax.yaxis.set_major_locator(ticker.MultipleLocator(10))ax.set_yticklabels(ax.get_ymajorticklabels(), fontsize = 6)ax.yaxis.set_major_formatter(ticker.ScalarFormatter())ax.set_xticklabels(ax.get_xmajorticklabels(), fontsize = 6)ax.xaxis.set_major_formatter(ticker.ScalarFormatter())plt.ylabel('AVG time (min)',fontsize=7)plt.xlabel('Number of elements',fontsize=7)plt.tight_layout()plt.show()This code generates it as output for me: | The line:sns.despine(offset=0, trim=True, left=True)removes the spines from plot, so it could cause confusion. The x axis is actually going from 6.75 to 144.25:print(ax.get_xlim())# (6.75, 144.25)But only ticks for 50 and 100 values are shown.So you can fix x axis range with:ax.set_xticks(range(0, 150 + 50, 50))before call the despine. 0 is the lowest tick, 150 the highest and 50 the step among ticks. 
You can tailor them to your needs.Complete Codeimport pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsfrom matplotlib import tickerdf_scability = pd.DataFrame()df_scability['num_of_elements'] = [13,28,43,58,73,88,93,108,123,138]df_scability['time_minutes'] = [2,3,5,7,20,30,40,50,60,90]df_scability['dataset'] = ['Top 10 users','Top 10 users','Top 10 users','Top 10 users','Top 10 users','Top 10 users', 'Top 10 users','Top 10 users','Top 10 users','Top 10 users']dpi = 600fig = plt.figure(figsize=(3, 2),dpi=dpi)ax = sns.lineplot(x = "num_of_elements", y = "time_minutes", hue='dataset', err_style='bars', data = df_scability)ax.legend(loc='upper left', fontsize=4)ax.set_xticks(range(0, 150 + 50, 50))sns.despine(offset=0, trim=True, left=True)ax.yaxis.set_major_locator(ticker.MultipleLocator(10))ax.set_yticklabels(ax.get_ymajorticklabels(), fontsize = 6)ax.yaxis.set_major_formatter(ticker.ScalarFormatter())ax.set_xticklabels(ax.get_xmajorticklabels(), fontsize = 6)ax.xaxis.set_major_formatter(ticker.ScalarFormatter())plt.ylabel('AVG time (min)',fontsize=7)plt.xlabel('Number of elements',fontsize=7)plt.tight_layout()plt.show()
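The key move generalizes beyond seaborn: set the ticks explicitly before any despine/trim step so the trimmed spine keeps the ticks you asked for. A minimal matplotlib-only sketch (Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([13, 138], [2, 90])
ax.set_xticks(range(0, 150 + 50, 50))  # explicit ticks: 0, 50, 100, 150
print(list(ax.get_xticks()))
```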
I need help having it print the random names I'm trying to make it so the code prints out the random roles and the random names, but when I run the code it gives <__main__.players object at 0x7f7b87db0bb0> get the <__main__.items object at 0x7f7b87dbf3d0> as the responseI tried to add .name to the print but it would say that they have no such attributesI want it to print out the random item and role namesimport randomclass players: def __init__ (self, role, inkey): self.name = role self.key=inkeyclass items: def __init__ (self, name): self.itemname = name self.owner=""Roles=[["Hacker","Q"],["Lookout","P"],["Money Man","Z"],["Explosive expert","M"]]objectroles=[]for i in Roles: object1 = players(i[0],i[1]) objectroles.append(object1)Items=["Goggles","Headset","Torch","Explosives","Map","Laptop","Gloves","Drill"]objectitems = []for i in Items: object2 = items(i) objectitems.append(object2)templist=[]def get(): global templist item1=random.choice(objectitems) role1=random.choice(objectroles) print(role1,"get the", item1) templist.append(item1)get()def take(): global templist item1=random.choice(objectitems) role1=random.choice(objectroles) print(role1,"take the", item1) templist.append(item1)take() | Add __str__ methods to your classes to change the way that they're printed:class players: def __init__ (self, role, inkey): self.name = role self.key=inkey def __str__(self): return self.nameclass items: def __init__ (self, name): self.itemname = name self.owner="" def __str__(self): return self.itemname
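The `<__main__.players object at 0x...>` output appears because print falls back to the default repr. Defining `__str__` fixes direct printing; defining `__repr__` as well covers objects shown inside containers such as lists. A minimal sketch with a renamed illustrative class:

```python
class Player:
    def __init__(self, role, inkey):
        self.name = role
        self.key = inkey

    def __str__(self):           # used by print(obj) and str(obj)
        return self.name

    def __repr__(self):          # used when the object sits inside a container
        return f"Player({self.name!r})"

p = Player("Hacker", "Q")
print(p)      # Hacker
print([p])    # [Player('Hacker')]
```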
Writing to a global variable with different processes in Python I have a global variable to which I want to write with different processes (code below).I use a reentrant lock to avoid race conditions (writing to the same place with many threads).Interestingly it seems that the global variable doesn't get modified at all, but using only the main thread modifies it successfully (bar function).How can I write to a global variable with different processes?import multiprocessing as mpfrom multiprocessing import Poolfrom threading import RLocklock = RLock()global_counter = 0def foo(num): global global_counter print(num) # prints 3,4,5 with lock: global_counter = global_counter + 1def bar(): global global_counter with lock: global_counter = global_counter + 1if __name__ == '__main__': p = Pool(mp.cpu_count()) list(p.map(foo, [3,4,5])) print(global_counter) # prints 0 bar() print(global_counter) # prints 1 | Processes behave differently from threads, and specifically, they don't share memory. Global variables are copied into the respective process, which means their value in the main process doesn't change - which is what caused the confusion in the question.However, as stated in the documentation: https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes there are ways to share memory between processes if that is necessary.In this case, we can simply use a shared memory map using a "Value" object. Note two fixes to the original attempt: a threading lock does not synchronize across processes, so use the lock bundled with the Value via get_lock(), and print global_counter.value rather than the wrapper object itself (this relies on the fork start method, where worker processes inherit the global):import multiprocessing as mpfrom multiprocessing import Pool, Valuedef foo(num): print(num) # prints 3,4,5 with global_counter.get_lock(): global_counter.value = global_counter.value + 1def bar(): with global_counter.get_lock(): global_counter.value = global_counter.value + 1def global_test(): global global_counter global_counter = Value('d', 0) p = Pool(mp.cpu_count()) list(p.map(foo, [3,4,5])) print(global_counter.value) # prints 3.0 bar() print(global_counter.value) # prints 4.0
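A compact, runnable variant of the shared-Value pattern, using the lock bundled with the Value via get_lock(). This assumes the POSIX-only fork start method, where worker processes inherit the module-level global; under spawn (the Windows and recent macOS default) the global would not carry over this way:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")   # children inherit globals under fork
counter = ctx.Value("i", 0)    # shared int, lives in shared memory

def bump(_):
    # get_lock() returns the process-safe lock bundled with the Value
    with counter.get_lock():
        counter.value += 1

with ctx.Pool(2) as pool:
    pool.map(bump, range(5))
print(counter.value)  # 5
```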
Pandas: Subtract Date by factor until greater than another date As title suggest, I want to subtract factor from the date until date is just less than another date. op_d = {'ADate':[20200301,20200301,20200301,20200301,20200301,20200301], 'MDate':[20520801,20531001,20550405,20540701,20540910,20510701] , 'EDate':[20200201,20200201,20200205,20200101,20190910,20200401] , 'Frequency':[2,4,2,6,12,1]}df = pd.DataFrame(data=op_d)df['MDate'] = pd.to_datetime(df['MDate'], format='%Y%m%d')df['ADate'] = pd.to_datetime(df['ADate'], format='%Y%m%d')df['EDate'] = pd.to_datetime(df['EDate'], format='%Y%m%d')in the above data frame, I'm reducing months ('Frequency') from 'MDate' until it is just less then 'ADate'. Expected output is stored in 'EDate' field.My idea is to take months difference between 'Mdate' and 'ADate', then divide it by frequency and remove remainder from 'ADate' which is a lengthy process. df['tempDate'] = df.apply(lambda x: x['MDate'] - pd.DateOffset(months = x['Frequency']) , axis=1)Above code subtracts frequency only one time. is there way to run this in a while loop for each row or something like that? i.e.df['tempDate'] = df.apply(lambda x: x['MDate'] - pd.DateOffset(months = x['Frequency']) if df['tempDate'] < ['ADate'] , axis=1) | You can compose a function, then apply:def reduce(row): a,m,f = row[['ADate','MDate','Frequency']] offset = pd.DateOffset(months=f) while m > a: m -= offset return mdf['EDate'] = df.apply(reduce, axis=1)Output: ADate MDate EDate Frequency0 2020-03-01 2052-08-01 2020-02-01 21 2020-03-01 2053-10-01 2020-02-01 42 2020-03-01 2055-04-05 2020-02-05 23 2020-03-01 2054-07-01 2020-01-01 64 2020-03-01 2054-09-10 2019-09-10 125 2020-03-01 2051-07-01 2020-03-01 1 |
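The row-wise while loop can also be replaced by arithmetic: count the whole calendar months between the two dates, divide by the frequency, round up, and subtract that many steps in one go. A sketch (the final correction handles day-of-month cases where one more step is still needed):

```python
import numpy as np
import pandas as pd

def reduce_all(df):
    # whole calendar months from ADate up to MDate
    diff = ((df["MDate"].dt.year - df["ADate"].dt.year) * 12
            + (df["MDate"].dt.month - df["ADate"].dt.month))
    steps = np.ceil(diff / df["Frequency"]).astype(int)
    e = pd.Series(
        [m - pd.DateOffset(months=int(k * f))
         for m, k, f in zip(df["MDate"], steps, df["Frequency"])],
        index=df.index)
    # the day-of-month can leave us just above ADate; take one more step
    over = e > df["ADate"]
    if over.any():
        e[over] = [d - pd.DateOffset(months=int(f))
                   for d, f in zip(e[over], df.loc[over, "Frequency"])]
    return e

op_d = {"ADate": [20200301, 20200301, 20200301, 20200301, 20200301, 20200301],
        "MDate": [20520801, 20531001, 20550405, 20540701, 20540910, 20510701],
        "Frequency": [2, 4, 2, 6, 12, 1]}
df = pd.DataFrame(op_d)
for c in ("ADate", "MDate"):
    df[c] = pd.to_datetime(df[c], format="%Y%m%d")

edates = reduce_all(df).dt.strftime("%Y-%m-%d").tolist()
print(edates)
```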
Python: how to avoid double loop in this case and speed up the performance? I am new to python. How do I vectorize or apply the code below, instead of using a double for loop? It would be great if i can get a solution that significantly reduce the runtime and speed up the performance. Can this be done using vectorized or apply ?Every iteration of the double for loop basically populates an entry in the matrix CD_train. The outer for loop is sweeping through the columns, while the inner for loop is sweeping through rows.Supposep = 10N = 20K = 11RDA_Sigma_Hat_k_Det is 1 x 11 Series. # create a N x K matrix of zeros CD_train = pd.DataFrame(np.zeros((N,K)),index = np.arange(N),columns = np.arange(1,K+1)) # training class discriminantsfor c_i in np.arange(K): for s_i in range(N): # each entry is a scalar CD_train.at[s_i,c_i+1] = -0.5 * np.log(RDA_Sigma_Hat_k_Det.iloc[c_i]) - 0.5 * (X_minus_MU.iloc[:,s_i].T).dot(SX.iloc[:,s_i])The data is as follow:RDA_Sigma_Hat_k_Det.to_dict()Out[8]: {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0, 8: 1.0, 9: 1.0, 10: 1.0, 11: 1.0}SX.iloc[0:11,0:20].to_dict()Out[2]: {0: {'x.1': -0.6777063201168626, 'x.2': -0.9017176491315951, 'x.3': -0.21627653765171262, 'x.4': 1.8309852247892016, 'x.5': 0.2944516810197635, 'x.6': 1.3920553697352451, 'x.7': -0.6361820038552793, 'x.8': 0.03585340860595837, 'x.9': -0.882879689655477, 'x.10': -1.049097984029799}, 1: {'x.1': -0.35170722429987955, 'x.2': -0.8344687172494253, 'x.3': -0.24868004543878153, 'x.4': 1.292366090376059, 'x.5': 0.1599772301293251, 'x.6': 1.8643265683002037, 'x.7': -0.5839681741154302, 'x.8': 0.003921466566276677, 'x.9': -0.4741540510775525, 'x.10': -0.4667330454332436}, 2: {'x.1': 0.9094495341972316, 'x.2': -0.4913267314916866, 'x.3': -1.4395089566135637, 'x.4': -0.29226701521622966, 'x.5': -0.45278263887659, 'x.6': 1.2871062144985885, 'x.7': -1.0350956630677273, 'x.8': 0.2829058022813902, 'x.9': -0.7778711263844686, 'x.10': 0.31749458660322155}, 3: {'x.1': 0.7349564284233978, 
'x.2': 0.29755496943376786, 'x.3': -1.3341975563055901, 'x.4': 0.8331087124827273, 'x.5': -0.9324543915373287, 'x.6': 0.5169646194531178, 'x.7': -1.0100330247926, 'x.8': -0.4582513787449055, 'x.9': 0.38206961867066846, 'x.10': -1.0151564692036192}, 4: {'x.1': 0.4100022015032635, 'x.2': 0.4087743567773565, 'x.3': -0.4539022614235512, 'x.4': 0.8981593325809331, 'x.5': -1.7365284277894346, 'x.6': 0.060127120187668165, 'x.7': 0.9970665904072039, 'x.8': -1.1052933621805605, 'x.9': 0.9765796384203767, 'x.10': -0.30238465785384777}, 5: {'x.1': 0.1446055017035405, 'x.2': 0.38808237773668885, 'x.3': -0.3310389610642483, 'x.4': 0.5898193933154383, 'x.5': -1.6741433732526334, 'x.6': 0.20057378381319488, 'x.7': 0.6273926758490715, 'x.8': -1.2666337009073732, 'x.9': 0.9135745004577716, 'x.10': -0.24700639682165998}, 6: {'x.1': -0.5136619033115216, 'x.2': 0.9140035116869917, 'x.3': 0.10370810174559295, 'x.4': 0.8799451589534354, 'x.5': -2.2383815331743695, 'x.6': 0.27311217052088455, 'x.7': 0.8466907607564382, 'x.8': -0.09691624513798125, 'x.9': 1.1381312742219276, 'x.10': -0.24879279233882728}, 7: {'x.1': -0.9932567269653532, 'x.2': 0.7251892029408994, 'x.3': 0.8557395116371507, 'x.4': 1.8205771255734884, 'x.5': -1.9888413150271642, 'x.6': -0.20378913783392602, 'x.7': -0.734344003766196, 'x.8': 0.8139844172571485, 'x.9': 1.4580035131089988, 'x.10': -0.13267708372294978}, 8: {'x.1': -0.908622346320559, 'x.2': 0.5622398679956415, 'x.3': -0.5794658540984434, 'x.4': 0.41158069424635496, 'x.5': -0.4084199334281979, 'x.6': 1.4723106060926894, 'x.7': -0.8951625993649315, 'x.8': 0.7064241914392734, 'x.9': 0.2592903754614896, 'x.10': -0.3702676875062069}, 9: {'x.1': -0.6871101401885067, 'x.2': 0.300141466813851, 'x.3': -1.1721800173702455, 'x.4': 1.569481731994415, 'x.5': 0.617467630065868, 'x.6': 1.3148868732377033, 'x.7': -0.799089152643609, 'x.8': 0.32660214401990206, 'x.9': 0.6260125887310106, 'x.10': -1.0883986854074803}, 10: {'x.1': -0.0434708997293346, 'x.2': 0.23720336389848712, 
'x.3': -0.8521953779729399, 'x.4': 0.4753303019425965, 'x.5': -0.6288471261248959, 'x.6': 1.0201032166170922, 'x.7': -1.0267414503093515, 'x.8': 0.2879476878666032, 'x.9': 0.5000023128058007, 'x.10': -1.0651755436843047}, 11: {'x.1': -0.692334484672753, 'x.2': -0.9405151098328469, 'x.3': -0.12176630660609486, 'x.4': 1.7347103070438572, 'x.5': 0.21958961557560192, 'x.6': 1.6050404200684618, 'x.7': -0.5630826422194906, 'x.8': 0.049298436833192766, 'x.9': -0.7584849300882825, 'x.10': -1.0473115885126312}, 12: {'x.1': -0.25766902358344224, 'x.2': -0.8861986648510946, 'x.3': -0.4728043076326747, 'x.4': 1.2897640655721312, 'x.5': 0.17938591376299665, 'x.6': 1.8365459095610885, 'x.7': -0.6508018761824372, 'x.8': 0.4089529419117124, 'x.9': -0.7213280538539257, 'x.10': -0.17912336716930083}, 13: {'x.1': 0.8937765007444923, 'x.2': -0.43959678389001744, 'x.3': -1.515117141450058, 'x.4': -0.32609333766729653, 'x.5': -0.44862363524080323, 'x.6': 1.350384381626573, 'x.7': -1.137434769357832, 'x.8': 0.3450890578323492, 'x.9': -0.9297296640379267, 'x.10': 0.3567952879809032}, 14: {'x.1': 0.7171936571769593, 'x.2': 0.2760008245997389, 'x.3': -1.3449987255679463, 'x.4': 0.7927773280218399, 'x.5': -0.8049116133732016, 'x.6': 0.12494865724560347, 'x.7': -0.4398580040334463, 'x.8': -0.8565603399767243, 'x.9': 0.7213280538539257, 'x.10': -0.952632626102762}, 15: {'x.1': 0.4706045975205235, 'x.2': 0.5863805102097538, 'x.3': -0.6942282775109789, 'x.4': 0.7303287327275624, 'x.5': -1.6450303478021255, 'x.6': 0.46140330197488744, 'x.7': 0.5668246333508464, 'x.8': -0.9321886237549177, 'x.9': 1.028276161876873, 'x.10': -0.7257603954225089}, 16: {'x.1': 0.1717720930216222, 'x.2': 0.46481513334583135, 'x.3': -0.30808647638174114, 'x.4': 0.5585950956682996, 'x.5': -1.7864364714188756, 'x.6': 0.33021685792906585, 'x.7': 0.6858721651577027, 'x.8': -1.2851206147198204, 'x.9': 0.9426537949020508, 'x.10': -0.41314117991822313}, 17: {'x.1': -0.6181487929964522, 'x.2': 0.7846786426828188, 'x.3': 
0.2049690635801833, 'x.4': 1.0282605727773442, 'x.5': -2.1801554822733547, 'x.6': -0.16057477979530238, 'x.7': 0.8821961649795357, 'x.8': -0.25657595533638966, 'x.9': 1.1995208958265169, 'x.10': -0.05764847200192125}, 18: {'x.1': -1.003705415933846, 'x.2': 0.6777700843060359, 'x.3': 0.8597899501105342, 'x.4': 1.8218781379754525, 'x.5': -2.1149977586460285, 'x.6': -0.2439167560126478, 'x.7': -0.7280783441974141, 'x.8': 0.9030577292625762, 'x.9': 1.4951603893433556, 'x.10': -0.11838591958561101}, 19: {'x.1': -0.8281674412631627, 'x.2': 0.4398123253383578, 'x.3': -0.9318540012828176, 'x.4': 0.3048976772852978, 'x.5': -0.3737615697966416, 'x.6': 1.2022208683512918, 'x.7': -0.10568949369841144, 'x.8': 0.03417278007755406, 'x.9': 1.1623640195921603, 'x.10': -0.6310814330126395}}X_minus_MU.iloc[0:11,0:20].to_dict()Out[6]: {0: {'x.1': -0.6777063201168626, 'x.2': -0.9017176491315951, 'x.3': -0.21627653765171262, 'x.4': 1.8309852247892016, 'x.5': 0.2944516810197635, 'x.6': 1.3920553697352451, 'x.7': -0.6361820038552793, 'x.8': 0.03585340860595837, 'x.9': -0.882879689655477, 'x.10': -1.049097984029799}, 1: {'x.1': -0.35170722429987955, 'x.2': -0.8344687172494253, 'x.3': -0.24868004543878153, 'x.4': 1.292366090376059, 'x.5': 0.1599772301293251, 'x.6': 1.8643265683002037, 'x.7': -0.5839681741154302, 'x.8': 0.003921466566276677, 'x.9': -0.4741540510775525, 'x.10': -0.4667330454332436}, 2: {'x.1': 0.9094495341972316, 'x.2': -0.4913267314916866, 'x.3': -1.4395089566135637, 'x.4': -0.29226701521622966, 'x.5': -0.45278263887659, 'x.6': 1.2871062144985885, 'x.7': -1.0350956630677273, 'x.8': 0.2829058022813902, 'x.9': -0.7778711263844686, 'x.10': 0.31749458660322155}, 3: {'x.1': 0.7349564284233978, 'x.2': 0.29755496943376786, 'x.3': -1.3341975563055901, 'x.4': 0.8331087124827273, 'x.5': -0.9324543915373287, 'x.6': 0.5169646194531178, 'x.7': -1.0100330247926, 'x.8': -0.4582513787449055, 'x.9': 0.38206961867066846, 'x.10': -1.0151564692036192}, 4: {'x.1': 0.4100022015032635, 'x.2': 
0.4087743567773565, 'x.3': -0.4539022614235512, 'x.4': 0.8981593325809331, 'x.5': -1.7365284277894346, 'x.6': 0.060127120187668165, 'x.7': 0.9970665904072039, 'x.8': -1.1052933621805605, 'x.9': 0.9765796384203767, 'x.10': -0.30238465785384777}, 5: {'x.1': 0.1446055017035405, 'x.2': 0.38808237773668885, 'x.3': -0.3310389610642483, 'x.4': 0.5898193933154383, 'x.5': -1.6741433732526334, 'x.6': 0.20057378381319488, 'x.7': 0.6273926758490715, 'x.8': -1.2666337009073732, 'x.9': 0.9135745004577716, 'x.10': -0.24700639682165998}, 6: {'x.1': -0.5136619033115216, 'x.2': 0.9140035116869917, 'x.3': 0.10370810174559295, 'x.4': 0.8799451589534354, 'x.5': -2.2383815331743695, 'x.6': 0.27311217052088455, 'x.7': 0.8466907607564382, 'x.8': -0.09691624513798125, 'x.9': 1.1381312742219276, 'x.10': -0.24879279233882728}, 7: {'x.1': -0.9932567269653532, 'x.2': 0.7251892029408994, 'x.3': 0.8557395116371507, 'x.4': 1.8205771255734884, 'x.5': -1.9888413150271642, 'x.6': -0.20378913783392602, 'x.7': -0.734344003766196, 'x.8': 0.8139844172571485, 'x.9': 1.4580035131089988, 'x.10': -0.13267708372294978}, 8: {'x.1': -0.908622346320559, 'x.2': 0.5622398679956415, 'x.3': -0.5794658540984434, 'x.4': 0.41158069424635496, 'x.5': -0.4084199334281979, 'x.6': 1.4723106060926894, 'x.7': -0.8951625993649315, 'x.8': 0.7064241914392734, 'x.9': 0.2592903754614896, 'x.10': -0.3702676875062069}, 9: {'x.1': -0.6871101401885067, 'x.2': 0.300141466813851, 'x.3': -1.1721800173702455, 'x.4': 1.569481731994415, 'x.5': 0.617467630065868, 'x.6': 1.3148868732377033, 'x.7': -0.799089152643609, 'x.8': 0.32660214401990206, 'x.9': 0.6260125887310106, 'x.10': -1.0883986854074803}, 10: {'x.1': -0.0434708997293346, 'x.2': 0.23720336389848712, 'x.3': -0.8521953779729399, 'x.4': 0.4753303019425965, 'x.5': -0.6288471261248959, 'x.6': 1.0201032166170922, 'x.7': -1.0267414503093515, 'x.8': 0.2879476878666032, 'x.9': 0.5000023128058007, 'x.10': -1.0651755436843047}, 11: {'x.1': -0.692334484672753, 'x.2': -0.9405151098328469, 
'x.3': -0.12176630660609486, 'x.4': 1.7347103070438572, 'x.5': 0.21958961557560192, 'x.6': 1.6050404200684618, 'x.7': -0.5630826422194906, 'x.8': 0.049298436833192766, 'x.9': -0.7584849300882825, 'x.10': -1.0473115885126312}, 12: {'x.1': -0.25766902358344224, 'x.2': -0.8861986648510946, 'x.3': -0.4728043076326747, 'x.4': 1.2897640655721312, 'x.5': 0.17938591376299665, 'x.6': 1.8365459095610885, 'x.7': -0.6508018761824372, 'x.8': 0.4089529419117124, 'x.9': -0.7213280538539257, 'x.10': -0.17912336716930083}, 13: {'x.1': 0.8937765007444923, 'x.2': -0.43959678389001744, 'x.3': -1.515117141450058, 'x.4': -0.32609333766729653, 'x.5': -0.44862363524080323, 'x.6': 1.350384381626573, 'x.7': -1.137434769357832, 'x.8': 0.3450890578323492, 'x.9': -0.9297296640379267, 'x.10': 0.3567952879809032}, 14: {'x.1': 0.7171936571769593, 'x.2': 0.2760008245997389, 'x.3': -1.3449987255679463, 'x.4': 0.7927773280218399, 'x.5': -0.8049116133732016, 'x.6': 0.12494865724560347, 'x.7': -0.4398580040334463, 'x.8': -0.8565603399767243, 'x.9': 0.7213280538539257, 'x.10': -0.952632626102762}, 15: {'x.1': 0.4706045975205235, 'x.2': 0.5863805102097538, 'x.3': -0.6942282775109789, 'x.4': 0.7303287327275624, 'x.5': -1.6450303478021255, 'x.6': 0.46140330197488744, 'x.7': 0.5668246333508464, 'x.8': -0.9321886237549177, 'x.9': 1.028276161876873, 'x.10': -0.7257603954225089}, 16: {'x.1': 0.1717720930216222, 'x.2': 0.46481513334583135, 'x.3': -0.30808647638174114, 'x.4': 0.5585950956682996, 'x.5': -1.7864364714188756, 'x.6': 0.33021685792906585, 'x.7': 0.6858721651577027, 'x.8': -1.2851206147198204, 'x.9': 0.9426537949020508, 'x.10': -0.41314117991822313}, 17: {'x.1': -0.6181487929964522, 'x.2': 0.7846786426828188, 'x.3': 0.2049690635801833, 'x.4': 1.0282605727773442, 'x.5': -2.1801554822733547, 'x.6': -0.16057477979530238, 'x.7': 0.8821961649795357, 'x.8': -0.25657595533638966, 'x.9': 1.1995208958265169, 'x.10': -0.05764847200192125}, 18: {'x.1': -1.003705415933846, 'x.2': 0.6777700843060359, 'x.3': 
0.8597899501105342, 'x.4': 1.8218781379754525, 'x.5': -2.1149977586460285, 'x.6': -0.2439167560126478, 'x.7': -0.7280783441974141, 'x.8': 0.9030577292625762, 'x.9': 1.4951603893433556, 'x.10': -0.11838591958561101}, 19: {'x.1': -0.8281674412631627, 'x.2': 0.4398123253383578, 'x.3': -0.9318540012828176, 'x.4': 0.3048976772852978, 'x.5': -0.3737615697966416, 'x.6': 1.2022208683512918, 'x.7': -0.10568949369841144, 'x.8': 0.03417278007755406, 'x.9': 1.1623640195921603, 'x.10': -0.6310814330126395}}Update: One thing I realised is that I should always use numpy arrays when it comes to matrix operations with loops because it is a lot faster compared to pandas dataaframe / series. This speeds up the whole function a lot, although it is still slow. | From the top of my head, I don't know how to solve the problem efficiently, using pandas.DataFrame.apply() method. However, using vectors, it can be done much simpler. All you have to do is to make CD_train data frame consisting of adjusted RDA_Sigma_Hat_k_Det series as rows. After that, you use pandas.DataFrame.add() method to add series of X_minus_MU and SX columns product to every column in CD_train data frame.The code is following:CD_train = pd.DataFrame()CD_train = CD_train.append( [-0.5 * np.log(RDA_Sigma_Hat_k_Det).to_frame().T] * N, ignore_index=True)CD_train = CD_train.add(-0.5 * (X_minus_MU * SX).sum(), axis=0) |
How to separate a Pandas column that contains values stored as text and numbers into two separate columns I have a Pandas column that contains results from a survey, which are either free text or numbers from 1-5. I am retrieving these from an API in JSON format and convert them into a DataFrame. Each row represents one question with the answer of a participant, like this:

Memberid | Question | Answer
1          Q1         3
1          Q2         2
1          Q3         Test Text
2          Q1         3
2          Q2         2
2          Q3         Test Text

The column that has the results stores all of them as strings for now, so when exporting them to Excel the numbers are stored as text. My goal is to have a separate column for the text answers and leave the field they were originally in empty, so that we have separate columns for the text results and the numeric results for calculation purposes.

Memberid | Question | Numeric Answers | Freetext answers
1          Q1         3
1          Q2         2
1          Q3                           Test Text
2          Q1         3
2          Q2         2
2          Q3                           Test Text

I am generating this df from lists like this:

d = {'Memberid':memberid, 'Question':title, 'Answer':results}
df = pd.DataFrame(d)

So the first thing I tried was to convert the numeric values in the column from string to numbers via this:

df["Answer"] = pd.to_numeric(df['Answer'], errors='ignore')

The idea was that if this works, I can simply do a for loop to check if a value in the answer column is a string and then move that value into a new column. The issue is that the errors argument does not work as intended for me. When I leave it on ignore, nothing gets converted. When I change it to coerce, the numbers get converted from str to numeric, but the fields where the freetext answers were are now empty in Excel. | You can use Series.str.extract with a regex pattern: (\d+)? will extract consecutive digits, (\D+) will extract consecutive non-digit characters. The ?P<text> syntax will name your match group, making this the column heading.

df.join(df.pop('Answer').str.extract('(?P<numbers>\d+)?(?P<text>\D+)?').fillna(''))

[out]
  Memberid Question numbers       text
0        1       Q1       3
1        1       Q2       2
2        1       Q3            Test Text
3        2       Q1       3
4        2       Q2       2
5        2       Q3            Test Text
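A runnable sketch of that approach, with the sample frame reconstructed from the question's tables (the literal values are assumed from the example above):

```python
import pandas as pd

# Sample frame reconstructed from the question's tables
df = pd.DataFrame({
    "Memberid": [1, 1, 1, 2, 2, 2],
    "Question": ["Q1", "Q2", "Q3", "Q1", "Q2", "Q3"],
    "Answer":   ["3", "2", "Test Text", "3", "2", "Test Text"],
})

# Optional digit run goes to 'numbers', optional non-digit run goes to 'text';
# each cell here matches exactly one of the two named groups
parts = df.pop("Answer").str.extract(r"(?P<numbers>\d+)?(?P<text>\D+)?").fillna("")
df = df.join(parts)
print(df)
```

The `numbers` column can then be fed to `pd.to_numeric` for the calculation columns.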
creating a Field based on the data in a choice field in django I am trying to write a program for registering plates. However, we have three types of plate and I've created a choice field for these three. The thing I want to do is create a OneToOne field whose target model is based on the data of the choice field: for example, if the user chose 1 I need a OneToOne field to CarPlate, if 2 a OneToOne field to MotorPlate, and so on.

VEHICLE_CHOICES = (
    ("1", "سواری ملی"),
    ("2", "سواری منظقه ازاد انزلی"),
    ("3", "موتور سیکلت"),
)

class Vehicle(models.Model):
    vehicle_type = models.CharField(max_length=3, choices=VEHICLE_CHOICES, blank=False)

    # below is an example of what i want to do
    if vehicle_type == 1:
        plate_car = models.OneToOneField(CarPlate, on_delete=models.CASCADE, related_name="savari", blank=True)
    elif vehicle_type == 2:
        plate_anzali = models.OneToOneField(AnzaliPlate, on_delete=models.CASCADE, related_name="mantaqe", blank=True)
    else:
        plate_motor = models.OneToOneField(MotorPlate, on_delete=models.CASCADE, related_name="motor", blank=True)

The code above works but it doesn't give me the right answer. | Apparently this kind of conditional field declaration is not allowed in models, but you can declare a separate field for every one of the choices, and then in the front-end part of the project only use the field that matches the chosen choice.
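A sketch of what that answer means in code; it cannot run outside a configured Django project, VEHICLE_CHOICES is the tuple from the question, and the clean() guard is my addition rather than part of the answer. All three links are declared nullable, and only the one matching vehicle_type is filled in:

```python
from django.core.exceptions import ValidationError
from django.db import models

class Vehicle(models.Model):
    vehicle_type = models.CharField(max_length=3, choices=VEHICLE_CHOICES, blank=False)

    # One nullable link per plate type; only the one matching vehicle_type is used
    plate_car = models.OneToOneField(
        "CarPlate", on_delete=models.CASCADE, related_name="savari",
        null=True, blank=True)
    plate_anzali = models.OneToOneField(
        "AnzaliPlate", on_delete=models.CASCADE, related_name="mantaqe",
        null=True, blank=True)
    plate_motor = models.OneToOneField(
        "MotorPlate", on_delete=models.CASCADE, related_name="motor",
        null=True, blank=True)

    def clean(self):
        # Reject saves where the filled-in plate field does not match the type
        by_type = {"1": self.plate_car, "2": self.plate_anzali, "3": self.plate_motor}
        if by_type.get(self.vehicle_type) is None:
            raise ValidationError("Set the plate field that matches vehicle_type.")
```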
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Type of variable 'argmax' cannot be determined I try to speed up my python code by using numba. But after days of trying and hundreds of error messages, I still fail to get it work.My current problem is this error message:Traceback (most recent call last): File "E:/Studium/Masterarbeit/Masterarbeit/code/fast_simulation.py", line 152, in <module> epsis, ks, vn, ln = monte_carlo(n, m, alpha, epsilon_max, delta, aufloesung, fehlerposition) File "C:\Python\lib\site-packages\numba\dispatcher.py", line 401, in _compile_for_args error_rewrite(e, 'typing') File "C:\Python\lib\site-packages\numba\dispatcher.py", line 344, in error_rewrite reraise(type(e), e, None) File "C:\Python\lib\site-packages\numba\six.py", line 668, in reraise raise value.with_traceback(tb)numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)Type of variable 'argmax' cannot be determined, operation: $300unpack_sequence.5, location: E:/Studium/Masterarbeit/Masterarbeit/code/fast_simulation.py (126)File "fast_simulation.py", line 126:def monte_carlo(n: int, m: int, alpha: float, epsilon_max: float, delta: float, aufloesung: int, x_position: float): <source elided> argmax, max_value = get_max_ks(random_values)My code is@njitdef insort(a, x, lo=0, hi=None): """Insert item x in list a, and keep it sorted assuming a is sorted. If x is already in a, insert it to the right of the rightmost x. Optional args lo (default 0) and hi (default len(a)) bound the slice of a to be searched. 
""" if lo < 0: raise ValueError('lo must be non-negative') if hi is None: hi = len(a) while lo < hi: mid = (lo+hi)//2 if x < a[mid]: hi = mid else: lo = mid+1 a.insert(lo, x)@njitdef get_max_ks(data: list) -> (float, float): def f(x, data): return sqrt(len(data)) * abs(np.searchsorted(data, x, side='right') / n - x) max_value = -1000 argmax = 0.0 epsilon = 1 / (n * 10 ** 6) for i in range(n): data_i = data[i] fx1 = f(data_i, data) fx2 = f(data_i + epsilon, data) fx3 = f(data_i - epsilon, data) if fx1 > max_value: max_value = fx1 argmax = data_i if fx2 > max_value: max_value = fx2 argmax = data_i + epsilon if fx3 > max_value: max_value = fx3 argmax = data_i - epsilon return argmax, max_value@njitdef get_max_vn(data: list) -> (float, float): def g(x, data): return sqrt(len(data)) * abs(np.searchsorted(data, x, side='right') / n - x) / sqrt( x * (1 - x)) max_value = -1000 argmax = 0.0 epsilon = 1 / (n * 10 ** 6) for i in range(n): data_i = data[i] fx1 = g(data_i, data) fx2 = g(data_i + epsilon, data) fx3 = g(data_i - epsilon, data) if fx1 > max_value: max_value = fx1 argmax = data_i if fx2 > max_value: max_value = fx2 argmax = data_i + epsilon if fx3 > max_value: max_value = fx3 argmax = data_i - epsilon return argmax, max_value@njitdef get_critical_value_vn(alpha: float, n: int) -> float: loglogn = log(log(n)) an = sqrt(2 * loglogn) dn = 2 * loglogn + 0.5 * log(loglogn) - 0.5 * log(pi) return (dn - log(-0.5 * log(1 - alpha))) / an@njitdef monte_carlo(n: int, m: int, alpha: float, epsilon_max: float, delta: float, aufloesung: int, x_position: float): epsilons = np.linspace(min(0.0, epsilon_max), max(0.0, epsilon_max), aufloesung) res_ks = np.zeros(n) res_vn = np.zeros(n) res_ln = np.zeros(n) ks_critical_value = 1.2238478702170836 # TODO vn_critical_value = get_critical_value_vn(alpha, n) ln_critical_value = 2.7490859400901955 # TODO for epsilon in epsilons: for i in range(m): uniform_distributed_values = np.random.uniform(0.0, 1.0, n) random_values = [] for x in 
uniform_distributed_values: if x < max(0, x_position - delta) or x > min(1, x_position + delta): insort(random_values, x) elif max(0, x_position - delta) <= x and x <= x_position + epsilon: insort(random_values, (x - x_position - epsilon) / (1 + (epsilon / min(delta, x_position))) + x_position) else: insort(random_values, (x - x_position - epsilon) / ( 1 - (epsilon / min(delta, 1 - x_position))) + x_position) argmax, max_value = get_max_ks(random_values) vn_argmax, vn_max = get_max_vn(random_values) ks_statistic = max_value vn_statistic = vn_max ln_statistic = max_value / sqrt(argmax * (1 - argmax)) if ks_statistic > ks_critical_value: # if test dismisses H_0 res_ks[i] += 1 / m if vn_statistic > vn_critical_value: # if test dismisses H_0 res_vn[i] += 1 / m if ln_statistic > ln_critical_value: # if test dismisses H_0 res_ln[i] += 1 / m return epsilons, res_ks, res_vn, res_lnif __name__ == '__main__': # some code epsis, ks, vn, ln = monte_carlo(n, m, alpha, epsilon_max, delta, aufloesung, fehlerposition) # some other codeAny idea, how I can fix this? | I opened an issue on the numba github page for this question. The problem was that numba could not determine the type of the list random_values.The solution for this is to use a typed list as follows:random_values = typed.List.empty_list(types.float64). |
Error importing googleanalytics on Python 3.8 pip install googleanalytics and pip3 install googleanalytics both work fine, but import googleanalytics returns:

optional_warn_function.func_name = f.func_name
AttributeError: 'function' object has no attribute 'func_name' | Reinstall the package using the following command:

pip install -e git+https://github.com/dvska/gdata-python3#egg=gdata

Or use f.__name__ instead of f.func_name in the code; func_name is the Python 2 spelling of that attribute, which was renamed to __name__ in Python 3. For more information, see https://github.com/google/gdata-python-client/issues/29
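The rename behind the error can be checked in a few lines of plain Python, no third-party package needed:

```python
def greet():
    return "hi"

# Python 3 functions expose their name as __name__;
# the Python 2 alias func_name no longer exists
print(greet.__name__)               # -> greet
print(hasattr(greet, "func_name"))  # -> False
```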
Simple bot in python, i think i got the 'or' operator, and some other stuff messed up I'm trying to create a simple, very simple bot. But it's all messed up, please help.

inp = input('')
if inp == ('Hello' or 'hello' or 'hi' or 'Hi'):
    inp1 = input('Hello, How are you? \n')
else:
    sys.exit('hmmm')
if inp1 == "I'm Fine" or "i'm fine" or "i'm Fine" or "I'm fine" or "fine" or "Fine":
    input("Cool, Wassup? \n")
elif inp1 == "not fine" or "Not Fine" or "not Fine" or "Not fine":
    inp3 = input("Why?\n")
else:
    sys.exit("hmmm")
if inp3 == str:
    print("Lmao")
else:
    print('wut?')
if inp2 == str:
    print('Noice')

| or is applied between whole logical expressions, not between alternatives for one comparison. An expression like ('Hello' or 'hello') evaluates to just 'Hello', because or returns its first truthy operand, so your comparison only ever tests against 'Hello'. What you want is:

inp == "Hello" or inp == "hello"

However, you can also achieve what you want using the in operator, which checks if a value is in a collection:

inp in ('Hello', 'hello', 'hi', 'Hi')

But even better, given that you seem to want to do this case-insensitively, you can make the input lower case and only match lower-case values:

inp.lower() in ('hello', 'hi')
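Putting the answer's pieces together, a minimal version of the greeting check might look like this (the function name and replies are mine, not from the question):

```python
def greeting_reply(inp):
    # Lower-case the input once, then membership-test against lower-case options
    if inp.lower() in ("hello", "hi"):
        return "Hello, how are you?"
    return "hmmm"

print(greeting_reply("Hi"))     # -> Hello, how are you?
print(greeting_reply("HELLO"))  # -> Hello, how are you?
print(greeting_reply("bye"))    # -> hmmm
```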
Reference to the pushed button I wanted to make an easy python project with the use of tkinter. On the screen, I need 81 buttons, so I thought the easiest way to do this is a double "for" cycle, but when one of the buttons is pressed I need to configure its text, and I don't know how to refer to it. Thank you for the answers. Here is my take on the project:

from tkinter import *
root = Tk()
root.geometry("700x400")
root.title = "go"
lista = []
def buttonfunction():
    configure(text="t")
for i in range(9):
    for e in range(9):
        Button(root, text="a", command=buttonfunction).grid(row=e, column=i)

| You can assign a function with an argument using lambda, but to pass the button itself you have to do it after creating the button. You also have to use x=btn (if you run it in a loop) to copy the value from btn to a new variable x. Without this, all commands would have access to the same (last) button.

def buttonfunction(widget):
    widget["text"] = "t"

btn = Button(root, text="a")
btn["command"] = lambda x=btn: buttonfunction(x)
btn.grid(...)
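The x=btn trick works because of Python's late binding in closures: a default argument is evaluated when the lambda is created, while a free variable is looked up only when the lambda is called. The same effect can be demonstrated without any GUI:

```python
callbacks_wrong = []
callbacks_right = []
for name in ["a", "b", "c"]:
    callbacks_wrong.append(lambda: name)      # free variable: resolved at call time
    callbacks_right.append(lambda x=name: x)  # default arg: captured at creation time

# Every "wrong" callback sees the loop variable's final value
print([cb() for cb in callbacks_wrong])  # -> ['c', 'c', 'c']
print([cb() for cb in callbacks_right])  # -> ['a', 'b', 'c']
```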
How to merge multiple dataframes of different length I want to merge multiple dataframes by a common column such that all the non-matching data has NA.

D1:                  D2:
ID val1 val2         ID Target
1  x    y            1  0
1  x    y            1  1
1  a    b
1  a    c

D3:                  D4:
ID random new        ID Targetnew
1  x    y            1  1
1  x    y            1  0
1  a    b
1  a    c

So the merge will become

ID val1 val2 Target Targetnew random new
1  x    y    0      1         x      y
1  x    y    1      0         x      y
1  a    b    NA     NA        a      b
1  a    c    NA     NA        a      c

| Use this code:

df1 = D1.merge(D2, how='left', on='ID')
df2 = D3.merge(D4, how='left', on='ID')

then merge df1 and df2
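A runnable sketch of the answer's merge chain. Note that it uses frames with distinct ID values (my assumption): merging on a key where every row has ID 1, as in the question's sample, would produce a row-by-row cross product rather than the aligned table shown.

```python
import pandas as pd

d1 = pd.DataFrame({"ID": [1, 2, 3], "val1": ["x", "x", "a"], "val2": ["y", "y", "b"]})
d2 = pd.DataFrame({"ID": [1, 2], "Target": [0, 1]})
d3 = pd.DataFrame({"ID": [1, 2, 3], "random": ["x", "x", "a"], "new": ["y", "y", "b"]})
d4 = pd.DataFrame({"ID": [1, 2], "Targetnew": [1, 0]})

df1 = d1.merge(d2, how="left", on="ID")   # ID 3 has no Target -> NaN
df2 = d3.merge(d4, how="left", on="ID")   # ID 3 has no Targetnew -> NaN
merged = df1.merge(df2, how="left", on="ID")
print(merged)
```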
How to shift an entire column down (including the header) and then rename the header 'Name'? I am still learning coding and would be grateful for any help that I can get. I have a dataframe where a person's name is the column header. I would like to shift the column header down 1 row and rename the column 'Name'. The column header will be different with each dataframe, though. It won't always be the same person's name. Here is an example of one of the dataframes:

index Patrick
0     Stan
1     Frank
2     Emily
3     Tami

I am hoping to shift the entire column with names down 1 row and rename the column header 'Names', but have been unable to find how to do this during my research. Here is an example of the desired output:

index Names
0     Patrick
1     Stan
2     Frank
3     Emily
4     Tami

I have seen the option to use 'shift'; however, I do not know if this will work correctly in this case. I will post the URL below for a similar problem that has been asked. In that problem they want to shift up by one; in my problem I want to shift down by one. Shift column in pandas dataframe up by one? The code example I found online is below:

df.gdp = df.gdp.shift(-1)

The issue I see is that I will be using multiple dataframes, so the person's name (column header) will be different. I won't have the same name each time. Any help? I will continue to work on this and will post the answer if I am able to find it. Thanks in advance for any help that you may offer. | Maybe this helps:

df = pd.DataFrame({'index': range(4), 'Patrick': ['Stan', 'Frank', 'Emily', 'Tami']})
names = pd.concat([pd.Series(df.columns[1]), df.iloc[:, 1]]).reset_index(drop=True)
df = pd.DataFrame({'Names': names})
df

index Names
0     Patrick
1     Stan
2     Frank
3     Emily
4     Tami
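Since the header name changes from frame to frame, it may help to wrap the answer's idea in a small helper that reads the header generically instead of hard-coding it (the helper name is mine):

```python
import pandas as pd

def shift_header_down(df, new_name="Names"):
    # The (single) column header is itself a data value: prepend it to the column
    col = df.columns[-1]
    values = [col] + df[col].astype(str).tolist()
    return pd.DataFrame({new_name: values})

df = pd.DataFrame({"Patrick": ["Stan", "Frank", "Emily", "Tami"]})
print(shift_header_down(df))
```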
Is it possible to fix a specific size for a tkinter window? I use tkinter in Python and I want to fix the size of a window permanently, meaning that if the user tries to change it (for example with the restore button) the window size does not change.

from tkinter import *
root = Tk()
root.geometry("1800x900")
root.mainloop() | I'm not sure I've fully understood your question, but to enforce a fixed window size I set the minimum and maximum sizes to the same value, as follows.

from tkinter import Tk
root = Tk()
fixed_geometry = 180, 90
root.minsize(*fixed_geometry)
root.maxsize(*fixed_geometry)
root.mainloop()
Tensorflow Java API in windows I was trying to configure the TensorFlow API for Java on Windows. The README at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/README.md says the native library will need to be built from source for Windows, but it does not give detailed instructions for this. Has anyone had any luck getting it compiled? Are there any steps I can follow? | From the official website: We don't officially support building TensorFlow on Windows; however, you may try to build TensorFlow on Windows if you don't mind using the highly experimental Bazel on Windows or TensorFlow CMake build. Also, a related github issue: https://github.com/tensorflow/tensorflow/issues/17, where one user rongjiecomputer mentioned (in Sep 2016): Tensorflow requires Bazel to build from source. If I am not mistaken, Bazel is a build system like GNU Make, but not a compiler. Based on what I saw as I briefly scanned through the code, the source code of TensorFlow itself uses mostly the standard C++ library for things like threading, so it should be no problem to compile on Windows; the essential third-party libraries it uses all have Windows support. Therefore, I think the main problem lies in the lack of a build method for Windows rather than the source code itself. There is work on using CMake to build instead of Bazel, but it is not complete yet. If someone can translate the Bazel build rules to CMake's, I think we will be able to build it on Windows. So I assume, if Bazel on Windows works as it should, you could follow the same steps as for building it on other platforms with Bazel. I haven't tried it out myself though.
Testing my classifier on a review Okay, so I have been able to train my movie review classifier using the Naive Bayes algorithm. The task is to: Test your classifier against a negative review of The Walking Dead. http://metro.co.uk/2017/02/27/the-walking-dead-season-7-episode-11-hostiles-and-calamities-wasnt-as-exciting-as-it-sounds-6473911/#mv-a Now my book gave an example of classifying documents and it used classifier.classify(df). Now I understand this was document features and had to be tokenized etc. My question: Is there some way to test my classifier against the review just using the URL? Or do I have to highlight all the words of the review, store them as a string or document, then tokenize etc.? | Your program can read the contents of a URL like this (in Python 3, urlopen lives in urllib.request):

from urllib.request import urlopen
with urlopen("http://example.com/review.html") as rec:
    data = rec.read()

However, the URL you suggest points to an HTML document, so you'll need to "scrape" the contents (i.e., extract the body of the review and convert it to "plain text" by removing boldface etc.) before you go any further. For this you can use BeautifulSoup or something similar. (The NLTK used to have a scraping function but dropped it in favor of BeautifulSoup.) Unless you've already learned how to do this, it would indeed be simpler to grab a few test documents by copy-pasting them from your browser into a text-only editor like Notepad, which will remove all markup.
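If installing BeautifulSoup is not an option, the standard library's html.parser can do a crude markup strip; a minimal sketch (the sample HTML string is made up for illustration):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for each run of text between tags
        self.chunks.append(data)

parser = TextExtractor()
parser.feed("<html><body><p>A <b>negative</b> review of the show.</p></body></html>")
plain_text = "".join(parser.chunks)
print(plain_text)  # -> A negative review of the show.
```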
python Weather API location id

a = pywapi.get_loc_id_from_weather_com("pune")
{0: (u'TTXX0257', u'Pune, OE, Timor-leste'), 1: (u'INXX0102', u'Pune, MH, India'), 2: (u'BRPA0444', u'Pune, PA, Brazil'), 3: (u'FRBR2203', u'Punel, 29, France'), 4: (u'IDVV9705', u'Punen, JT, Indonesia'), 5: (u'IRGA2787', u'Punel, 19, Iran'), 6: (u'IRGA2788', u'Punes, 19, Iran'), 7: (u'IDYY7030', u'Punen, JI, Indonesia'), 8: (u'RSUD1221', u'Punem, UD, Russia'), 9: (u'BUXX2256', u'Punevo, 09, Bulgaria'), 'count': 10}

For the above command, I'm getting 10 results. I want a specific location like Pune,MH,India. How do I get it? | I looked at the source code of pywapi and found that the search string is quoted (URL-encoded, e.g. ',' becomes "%2C") in get_loc_id_from_weather_com. So when you call pywapi.get_loc_id_from_weather_com(" Pune,MH,India") it will request the URL http://xml.weather.com/search/search?where=Pune%2CMH%2CIndia and not http://wxdata.weather.com/wxdata/search/search?where=Pune,MH,India. And the former certainly returns no results. A solution is to modify (hack) pywapi: edit pywapi.py, find the get_loc_id_from_weather_com function, and replace the line

url = LOCID_SEARCH_URL % quote(search_string)

with

url = LOCID_SEARCH_URL % quote(search_string, ',')

And now you can:

In [2]: import pywapi
In [3]: pywapi.get_loc_id_from_weather_com("Pune,MH,India") # no spaces
Out[3]: {0: (u'INXX0102', u'Pune, MH, India'), 'count': 1}

PS: The source code of pywapi:

def get_loc_id_from_weather_com(search_string):
    """Get location IDs for place names matching a specified string. Same as get_location_ids() but different return format. Parameters: search_string: Plaintext string to match to available place names. For example, a search for 'Los Angeles' will return matches for the city of that name in California, Chile, Cuba, Nicaragua, etc as well as 'East Los Angeles, CA', 'Lake Los Angeles, CA', etc.
Returns: loc_id_data: A dictionary of tuples in the following format: {'count': 2, 0: (LOCID1, Placename1), 1: (LOCID2, Placename2)} """ # Weather.com stores place names as ascii-only, so convert if possible try: # search_string = unidecode(search_string.encode('utf-8')) search_string = unidecode(search_string) except NameError: pass url = LOCID_SEARCH_URL % quote(search_string) # change to:url = LOCID_SEARCH_URL % quote(search_string, ',') try: handler = urlopen(url) except URLError: return {'error': 'Could not connect to server'} if sys.version > '3': # Python 3 content_type = dict(handler.getheaders())['Content-Type'] else: # Python 2 content_type = handler.info().dict['content-type'] try: charset = re.search('charset\=(.*)', content_type).group(1) except AttributeError: charset = 'utf-8' if charset.lower() != 'utf-8': xml_response = handler.read().decode(charset).encode('utf-8') else: xml_response = handler.read() dom = minidom.parseString(xml_response) handler.close() loc_id_data = {} try: num_locs = 0 for loc in dom.getElementsByTagName('search')[0].getElementsByTagName('loc'): loc_id = loc.getAttribute('id') # loc id place_name = loc.firstChild.data # place name loc_id_data[num_locs] = (loc_id, place_name) num_locs += 1 loc_id_data['count'] = num_locs except IndexError: error_data = {'error': 'No matching Location IDs found'} return error_data finally: dom.unlink() return loc_id_data |
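The fix works because urllib's quote takes a second "safe" argument listing characters to leave unescaped (its default is '/'). In Python 3 the same function lives in urllib.parse:

```python
from urllib.parse import quote

# Default: commas are percent-encoded
print(quote("Pune,MH,India"))       # -> Pune%2CMH%2CIndia

# Adding ',' to the safe set keeps the commas literal, as in the patched pywapi
print(quote("Pune,MH,India", ","))  # -> Pune,MH,India
```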
Django - Leaving socket open while receiving data When I try to load in a largish (50mb) video, the server throws this error:[14/Mar/2016 02:16:13] "GET /media/media/uploads/SampleVideo_1280x720_50mb.mp4 HTTP/1.1" 200 52464391[14/Mar/2016 02:16:13] "GET /media/media/uploads/SampleVideo_1280x720_50mb.mp4 HTTP/1.1" 200 286720Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 86, in run self.finish_response() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 128, in finish_response self.write(data) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 217, in write self._write(data) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 328, in write self.flush() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 307, in flush self._sock.sendall(view[write_offset:write_offset+buffer_size])error: [Errno 41] Protocol wrong type for socket[14/Mar/2016 02:16:13] "GET /media/media/uploads/SampleVideo_1280x720_50mb.mp4 HTTP/1.1" 500 59- Broken pipe from ('127.0.0.1', 52070)As you can see the video is requested twice before the error is thrown. It seems this is caused when the socket closes before the whole video is loaded in. This is also supported by the fact small videos don't throw errors (on Chrome - everything throws an error on Safari)I'm using django 1.9 in the development server and html5 to display the videos onto the page.How would I be able to keep the socket open until all the packets have been received? And why would this even be the default behaviour? I can't think of any useful application of having the socket close before all the required data is sent. | Using AWS S3 to store and handle the media this problem was fixed on all browsers. 
It must have something to do with the development server. |
tensorflow variable declaration what does point do? When we declare the TensorFlow variables

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

what do .3 and -.3 mean in this declaration? | They initialize the variable with the given array: .3 and -.3 are just the Python float literals 0.3 and -0.3 with the leading zero omitted, so W starts as [0.3] and b as [-0.3].
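The leading-dot notation is plain Python, nothing TensorFlow-specific, which can be checked without TensorFlow installed:

```python
# A float literal may omit the zero before the decimal point
print(.3 == 0.3)    # -> True
print(-.3 == -0.3)  # -> True
print(type(.3))     # -> <class 'float'>
```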
How to get exact pattern matches? I have a file list like this: aaa.txt bbb.doc ccc.gjf ddd.exe. I want the file whose extension is gjf. For some reason, this file cannot be the first or the last file in the file list. In other words, there must be a space before and after the file. However, I tried many regular expressions and cannot get the file:

pattern = re.compile(r'\s+(.*?gjf) ')
print pattern.findall('aaa.txt bbb.doc ccc.gjf ddd.exe')
result: ['bbb.doc ccc.gjf']

pattern = re.compile(r' (.*?gjf) ')
print pattern.findall('aaa.txt bbb.doc ccc.gjf ddd.exe')
result: ['bbb.doc ccc.gjf']

What should I do? | I do not think regex is entirely necessary for this problem. (Your patterns over-match because . also matches spaces, so .*?gjf happily spans several file names.)

s = 'aaa.txt bbb.doc ccc.gjf ddd.exe'
final_data = [i for i in s.split()[1:-1] if i.endswith('.gjf')]

Output: ['ccc.gjf']

However, if you really need regex, you can try this:

import re
s = 'aaa.txt eee.gjf bbb.doc ccc.gjf ddd.exe'
final_data = re.findall("(?<!^)[a-zA-Z0-9]+\.gjf(?!$)", s)

Output: ['eee.gjf', 'ccc.gjf']
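One caveat with the answer's lookbehind pattern: (?<!^) only forbids a match starting at position 0, so on a list starting with 'aaa.gjf' it would still match the suffix 'aa.gjf'. A variant that anchors on the surrounding spaces avoids this (my suggestion, not from the answer):

```python
import re

s = "aaa.gjf bbb.doc ccc.gjf ddd.exe"

# Lookbehind/lookahead require a literal space on both sides,
# so the first and last tokens can never match, even partially
matches = re.findall(r"(?<= )\S+\.gjf(?= )", s)
print(matches)  # -> ['ccc.gjf']
```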
Pseudowire ethernet control word support in scapy Is there any support for the PW ethernet control word in scapy? I need to create a packet that contains this control word. Thank you! | I did some research into the scapy manual about how to build new layers and I have written this code. I took some example from the mpls code in scapy. I have tested it and it seems to add the PW Ethernet Control Word to the packet.

from scapy.packet import Packet, bind_layers, Padding
from scapy.fields import IntField
from scapy.layers.inet import IP
from scapy.layers.inet6 import IPv6

class PseudowireControlWord(Packet):
    name = "PseudowireControlWord"
    fields_desc = [IntField("SeqNumber", 0)]

    def guess_payload_class(self, payload):
        if len(payload) >= 1:
            ip_version = (ord(payload[0]) >> 4) & 0xF
            if ip_version == 4:
                return IP
            elif ip_version == 6:
                return IPv6
        return Padding

Also, I added a modification in the mpls.py code in scapy, in the guess_payload_class function. I think the following code needs to be added:

elif ip_version == 0:
    return PseudowireControlWord
Python Zipline : "pandas_datareader._utils.RemoteDataError" & local data It's my first post, I hope it will be well done.I'm trying to run the following ZipLine Algo with local AAPL data : import pandas as pdfrom collections import OrderedDictimport pytzfrom zipline.api import order, symbol, record, order_targetfrom zipline.algorithm import TradingAlgorithmdata = OrderedDict()data['AAPL'] = pd.read_csv('AAPL.csv', index_col=0, parse_dates=['Date'])panel = pd.Panel(data)panel.minor_axis = ['Open', 'High', 'Low', 'Close', 'Volume', 'Price']panel.major_axis = panel.major_axis.tz_localize(pytz.utc)print panel["AAPL"]def initialize(context): context.security = symbol('AAPL')def handle_data(context, data): MA1 = data[context.security].mavg(50) MA2 = data[context.security].mavg(100) date = str(data[context.security].datetime)[:10] current_price = data[context.security].price current_positions = context.portfolio.positions[symbol('AAPL')].amount cash = context.portfolio.cash value = context.portfolio.portfolio_value current_pnl = context.portfolio.pnl# code (this will come under handle_data function only) if (MA1 > MA2) and current_positions == 0: number_of_shares = int(cash / current_price) order(context.security, number_of_shares) record(date=date, MA1=MA1, MA2=MA2, Price= current_price, status="buy", shares=number_of_shares, PnL=current_pnl, cash=cash, value=value) elif (MA1 < MA2) and current_positions != 0: order_target(context.security, 0) record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="sell", shares="--", PnL=current_pnl, cash=cash, value=value) else: record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="--", shares="--", PnL=current_pnl, cash=cash, value=value)#initializing trading enviromentalgo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data)#run algoperf_manual = algo_obj.run(panel)#code#calculationprint "total pnl : " + str(float(perf_manual[["PnL"]].iloc[-1]))buy_trade = 
perf_manual[["status"]].loc[perf_manual["status"] == "buy"].count()sell_trade = perf_manual[["status"]].loc[perf_manual["status"] == "sell"].count()total_trade = buy_trade + sell_tradeprint "buy trade : " + str(int(buy_trade)) + " sell trade : " + str(int(sell_trade)) + " total trade : " + str(int(total_trade))I was inspired by https://www.quantinsti.com/blog/introduction-zipline-python/ and https://www.quantinsti.com/blog/importing-csv-data-zipline-backtesting/.I get this error :Traceback (most recent call last):File "C:/Users/main/Desktop/docs/ALGO_TRADING/_DATAS/_zipline_data_bundle /temp.py", line 51, in <module>algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data)File "C:\Python27-32\lib\site-packages\zipline\algorithm.py", line 273, in __init__self.trading_environment = TradingEnvironment()File "C:\Python27-32\lib\site-packages\zipline\finance\trading.py", line 99, in __init__self.bm_symbol,File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 166, in load_market_dataenviron,File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 230, in ensure_benchmark_datalast_date,File "C:\Python27-32\lib\site-packages\zipline\data\benchmarks.py", line 50, in get_benchmark_returnslast_dateFile "C:\Python27-32\lib\site-packages\pandas_datareader\data.py", line 137, in DataReadersession=session).read()File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 181, in readparams=self._get_params(self.symbols))File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 79, in _read_one_dataout = self._read_url_as_StringIO(url, params=params)File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 90, in _read_url_as_StringIOresponse = self._get_response(url, params=params)File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 139, in _get_responseraise RemoteDataError('Unable to read URL: {0}'.format(url))pandas_datareader._utils.RemoteDataError: Unable to read URL: 
http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv

I don't understand "http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv". I am not requesting any online data... and not the 'SPY' stock but 'AAPL'... What does this error mean to you? Thanks a lot for your help! C. | The only reference and workaround I found regarding this issue is here:

from pandas_datareader.google.daily import GoogleDailyReader

@property
def url(self):
    return 'http://finance.google.com/finance/historical'

GoogleDailyReader.url = url

Note that the request for SPY comes from zipline itself: as the traceback shows (TradingEnvironment → load_market_data → ensure_benchmark_data), zipline downloads benchmark returns for its default benchmark symbol SPY via pandas-datareader regardless of which stock you backtest, which is why the failing URL mentions SPY rather than AAPL.
which python IDE is close to R for running a line of code after the whole file I switched from R to Python. I cannot find a proper IDE that lets me run a single line after running the whole file. Spyder seems promising but the text editor is terrible. Atom is good but every time I have to run the whole .py file. Here is exactly my problem: I run machine learning and load data through a .py file, and after that I want to perform some tests without running my file again; I just need to type in a Python shell. Which Python IDE provides this? Thanks. | You don't need any special IDE to do this. Just launch your Python program with the -i option. If you are looking for an IDE that makes it easy to do this (although you don't need an IDE to do it), in PyCharm you can just click on the Terminal tab and run this command there. Example code:

someList = [1,2,3,4,5,7]
someString = "hello this is a string"
def some_function(a, b):
    return a+b

Launching the program (type this into the terminal / cmd prompt / etc.):

python -i test.py

Checking variables, using functions, etc. (all after the execution is done):

C:\Users\Admin\Desktop\2\pyprojects>python -i test.py
>>> someList
[1, 2, 3, 4, 5, 7]
>>> someString
'hello this is a string'
>>> some_function(1,2)
3
Checking whether user did like the post or not through template tag Django I am using a template tag to check whether the user previously liked the question in the feed, in order to show him a liked "red heart" img or not. I send the user profile and the pk of the question as arguments to the function; I want to know whether that user profile is one of those who liked the question. How can I implement it in the right way? That's my template tag code:

from django import template
from blog.models import UserProfile
from blog.models import Question

register = template.Library()

def likecheck(theuser, question_pk):
    question_object = Question.objects.get(pk=question_pk)
    # just crude logic of what I seek
    # if theuser in question_object.who_liked:
    #     return true
    # else:
    #     return false

register.filter('likecheck', likecheck)

Relevant piece of class in models.py:

class Question(models.Model):
    who_liked = models.ManyToManyField('UserProfile', related_name='who_liked_QUESTION', blank=True, null=True)

| I think you're just missing .all() on the end of question_object.who_liked in your pseudo code: who_liked is a ManyToManyField, so question_object.who_liked is a related manager, not an iterable of profiles. The membership test should therefore be return theuser in question_object.who_liked.all().
How to extract django objects having maximal value? Supposing I have a django model:class Score(models.Model): name = models.CharField(max_length=32) value = models.IntegerField() status = models.BooleanField() def __unicode__(self): return u"{}: {} {}".format(self.name, self.value, self.status)And the table score looks like this:id | name | value | status---+-------+-------+-------1 | john | 15 | 02 | ivan | 21 | 03 | david | 14 | 14 | john | 11 | 15 | john | 25 | 06 | david | 8 | 1I want to extract the objects with maximal value for each name given as a list of names. For example:names = ['john', 'david', 'paul']dct = process(names) # How to implement this function???for name, obj in dct.items(): print name, objshould print:john john: 25 Falsedavid david: 14 Truepaul NoneThe performance matters. I am supposed to process around 10k names having around 1k records for each name in score. What is the best way to do it with django? Does it make sense to use the django ORM in this task?I use a MySQL database.Thanks in advance. | Solution:from django.db.models import Maxdef process(names): # Finding max scores result = Score.objects.filter(name__in=names).\ values('name').annotate(Max('value')) value_max_items = set( row['value__max'] for row in result ) # Query for objects having max scores, restricted to the requested names # so rows from other names that happen to share a value can't sneak in result = Score.objects.filter(name__in=names, value__in=value_max_items) # Creating dct dct = {} for obj in result: if obj.name not in dct or obj.value > dct[obj.name].value: dct[obj.name] = obj # If some names missed, set None for them for name in names: if name not in dct: dct[name] = None return dct
Is there a more elegant or simple way to accomplish my goal? I know the brute force way to do what I want but I am pretty sure there is a much more elegant way to accomplish my task. So I’m looking for help on an approach that’s better than the brute force way.I have a spreadsheet like application with 21 rows and 5 columns on a grid. The first columns in the first row simply take user entered weight values (w1, w2, w3, w4). The 5th column sums the weight values. I have this working fine and don’t need much help.The complexity comes in for rows 2 to 20. For each row, the user enters values in columns 1 : 4 and then a weighted average for the row is calculated in column 5 (using the weights from row 1). For example, for any given row, if the user entered values go into QLineEdit widgets named va1, va2, va3, va4, then va_wa= va1*w1 +va2*w2 +va3*w3 +va4*w4.This is easy to do in code for a single row. But I’m not sure how to accomplish is for another row without copying the code over and over and changing the names for each row (the brute force way).Here’s my code:class MyForm(QtGui.QMainWindow): def __init__(self, parent=None): super(MyForm,self).__init__(parent) self.ui=Ui_MainWindow() self.ui.setupUi(self) self.ui.mdiArea.addSubWindow(self.ui.subwindow) self.ui.mdiArea.addSubWindow(self.ui.subwindow_2) QtCore.QTimer.singleShot(10, lambda: self.ui.mdiArea.setActiveSubWindow(self.ui.mdiArea.subWindowList()[0])) self.ui.wt1.editingFinished.connect(self.runBoth) self.ui.wt2.editingFinished.connect(self.runBoth) self.ui.wt3.editingFinished.connect(self.runBoth) self.ui.wt4.editingFinished.connect(self.runBoth) self.ui.ca1.editingFinished.connect(self.waCalc) self.ui.ca2.editingFinished.connect(self.waCalc) self.ui.ca3.editingFinished.connect(self.waCalc) self.ui.ca4.editingFinished.connect(self.waCalc) def runBoth(self): self.wtResult() self.waCalc() def wtResult(self): if len(self.ui.wt1.text())!=0: a=float(self.ui.wt1.text()) else: a=0 if len(self.ui.wt2.text())!=0: 
 b=float(self.ui.wt2.text()) else: b=0 if len(self.ui.wt3.text())!=0: c=float(self.ui.wt3.text()) else: c=0 if len(self.ui.wt4.text())!=0: d=float(self.ui.wt4.text()) else: d=0 sum=a+b+c+d self.ui.wt_total.setText(str(sum)) def waCalc(self): if len(self.ui.ca1.text())!=0: ca1=float(self.ui.ca1.text()) else: ca1=0 if len(self.ui.ca2.text())!=0: ca2=float(self.ui.ca2.text()) else: ca2=0 if len(self.ui.ca3.text())!=0: ca3=float(self.ui.ca3.text()) else: ca3=0 if len(self.ui.ca4.text())!=0: ca4=float(self.ui.ca4.text()) else: ca4=0 if len(self.ui.wt1.text())!=0: wt1=float(self.ui.wt1.text()) else: wt1=0 if len(self.ui.wt2.text())!=0: wt2=float(self.ui.wt2.text()) else: wt2=0 if len(self.ui.wt3.text())!=0: wt3=float(self.ui.wt3.text()) else: wt3=0 if len(self.ui.wt4.text())!=0: wt4=float(self.ui.wt4.text()) else: wt4=0 wa=(wt1*ca1)+(wt2*ca2)+(wt3*ca3)+(wt4*ca4) self.ui.ca_wa.setText(str(wa)) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) myapp=MyForm() myapp.show() app.exec_()So I've shown the example where the row has ca1, ca2, ca3,ca4,ca_wa. What would I do for the next 19 rows (other than copy the wa_Calc code 19 times and change the variables to nx1:4,nx_wa ab1:4,ab_wa,ba1:4,ba_wa ...etc.). I know there is a more elegant approach. | This is quite involved, so I'll just give you an overview and some pointers on how to complete it.The general outline is this:Create an Equation object to record the functional relationship between widgets.Write a function which takes an Equation object and recomputes the target value.Write a function which re-evaluates all of the equations which depend on a widget.Hook up all of the editingFinished callbacks to the function in #3.Step 1.
The Equation class.Creating a new Equation object might look like:eq1 = Equation("wt_total", computeSum, ["wt1", "wt2", "wt3", "wt4"])eq2 = Equation("ca_wa", computeDot, ["wt1", "wt2", "wt3", "wt4", "ca1", "ca2", "ca3", "ca4"])computeSum and computeDot might look like:def computeSum(arr): return sum(arr)def computeDot(arr): xs = arr[0:4] ys = arr[4:8] return sum([ x*y for (x,y) in zip(xs,ys) ])You will need the following slots/methods for the Equation class:eq.target -- name of the target widgeteq.argWidgets() -- return the list of widgets used in the formulaeq.compute(vals) -- run the computation function with a list of valueseq.affected(wname) -- return True if the equation depends on the widget wnameYou will need a place to store all of the equations. In the code below I use self.equations where self is a MyForm object.Step 2. - Updating a single equation.A method to update a single equation would look like: # update a single equation def update(self, eq): args = [] for wname in eq.argWidgets(): val = ...lookup and convert value stored in wname... args.append(val) result = eq.compute(args) # store result in target widget eq.targetStep 3. Update affected equations.First we develop a method to determine all of the affected equations: # return the equations affected by a change in widget wname def affected(self, wname): return [ e for e in self.equations if e.affected(wname) ]The handleEditingFinished method will be called with a widget name: def handleEditingFinished(self, wname): eqs = self.affected(wname) for e in eqs: self.update(e)Step 4. Hook up all of the callbacks.This code is untested, but hopefully the intent is clear.We just route all of the editingFinished callbacks to our handleEditingFinished method with the widget name passed as the first argument. from functools import partial def callback(self, wname): self.handleEditingFinished(wname) for e in self.equations: for wname in e.argWidgets(): w = ... get the widget named wname...
w.editingFinished.connect(partial(callback, self, wname))
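As a Qt-free sanity check of the scheme above, the sketch below stores widget values in a plain dict (the dict `store` stands in for the QLineEdit widgets; the names and values are made up for the demo):

```python
class Equation:
    """Records which widget receives a function of other widgets' values."""
    def __init__(self, target, func, args):
        self.target = target          # name of the widget receiving the result
        self.func = func              # computation over the argument values
        self.args = args              # names of the widgets the formula reads

    def argWidgets(self):
        return self.args

    def compute(self, vals):
        return self.func(vals)

    def affected(self, wname):
        return wname in self.args


def computeSum(arr):
    return sum(arr)


def computeDot(arr):
    # first half of arr holds the weights, second half the values
    half = len(arr) // 2
    return sum(x * y for x, y in zip(arr[:half], arr[half:]))


# 'store' stands in for the QLineEdit widgets: name -> current value
store = {"wt1": 1.0, "wt2": 2.0, "ca1": 10.0, "ca2": 20.0,
         "wt_total": 0.0, "ca_wa": 0.0}

equations = [
    Equation("wt_total", computeSum, ["wt1", "wt2"]),
    Equation("ca_wa", computeDot, ["wt1", "wt2", "ca1", "ca2"]),
]


def handle_edit(wname):
    # re-evaluate every equation that depends on the edited widget
    for eq in equations:
        if eq.affected(wname):
            store[eq.target] = eq.compute([store[w] for w in eq.argWidgets()])


handle_edit("wt1")   # recomputes wt_total (3.0) and ca_wa (1*10 + 2*20 = 50.0)
```

In the real form, `handle_edit` would read and write the widgets' text instead of a dict, exactly as the `update` method above sketches.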
IN clause for Oracle Prepared Statement in Python cx_Oracle I'd like to use the IN clause with a prepared Oracle statement using cx_Oracle in Python.E.g. query - select name from employee where id in ('101', '102', '103')On python side, I have a list [101, 102, 103] which I converted to a string like this ('101', '102', '103') and used the following code in python - import cx_Oracleids = [101, 102, 103]ALL_IDS = "('{0}')".format("','".join(map(str, ids)))conn = cx_Oracle.connect('username', 'pass', 'schema')cursor = conn.cursor()results = cursor.execute('select name from employee where id in :id_list', id_list=ALL_IDS)names = [x[0] for x in cursor.description]rows = results.fetchall()This doesn't work. Am I doing something wrong? | This concept is not supported by Oracle -- and you are definitely not the first person to try this approach either! You must either:create separate bind variables for each in value -- something that is fairly easy and straightforward to do in Pythoncreate a subquery using the cast operator on Oracle types as is shown in this post: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:210612357425use a stored procedure to accept the array and perform multiple queries directly within PL/SQLor do something else entirely! |
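The first option is straightforward to generate dynamically. This sketch only builds the SQL string and the bind dictionary; the commented-out execute call shows where a real cx_Oracle cursor would take them (note that Oracle also caps a single IN list at 1000 entries, so very long lists need the subquery or stored-procedure route):

```python
ids = [101, 102, 103]

# one named bind variable per value: :id0, :id1, :id2, ...
binds = {"id%d" % i: v for i, v in enumerate(ids)}
placeholders = ", ".join(":id%d" % i for i in range(len(ids)))
sql = "select name from employee where id in (%s)" % placeholders

# with a real cx_Oracle cursor this would then be:
# cursor.execute(sql, binds)
```

Binding each value individually keeps the statement safe from SQL injection, which the string-formatting approach in the question is not.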
Adapting matrix array multiplication to use Numpy Tensordot I'm trying to speed up my code to perform some numerical calculations where I need to multiply 3 matrices with an array. The structure of the problem is the following:The array has a shape of (N, 10)The first matrix is constant along the dynamic dimension of the array and has a shape of (10, 10)The other two matrices vary along the first dimension of the array and have a (N, 10, 10) shapeThe result of the calculation should be an array with (N, 10) shapeI've implemented a solution using for loops that is working, but I'd like better performance so I'm trying to use the numpy functions. I've tried using numpy.tensordot but when multiplying the dynamic matrices with the array I get a shape of (N, 10, N) instead of (N, 10)My for loop is the following:res = np.zeros(temp_rho.shape, dtype=np.complex128)for i in range(temp_rho.shape[0]): res[i] = np.dot(self.constMatrix, temp_rho[i]) res[i] += np.dot(self.dinMat1[i], temp_rho[i]) res[i] += np.dot(self.dinMat2[i], np.conj(temp_rho[i]))#temp_rho.shape = (N, 10)#res.shape = (N, 10)#self.constMatrix.shape = (10, 10)#self.dinMat1.shape = (N, 10, 10)#self.dinMat2.shape = (N, 10, 10)How should this be implemented with numpy dot products so that it returns the correct dimensions? | Here's an approach using a combination of np.dot and np.einsum -parte1 = constMatrix.dot(temp_rho.T).Tparte2 = np.einsum('ijk,ik->ij',dinMat1, temp_rho)parte3 = np.einsum('ijk,ik->ij',dinMat2, np.conj(temp_rho))out = parte1 + parte2 + parte3Alternative way to get parte1 would be with np.tensordot -parte1 = np.tensordot(temp_rho, constMatrix, axes=([1],[1]))Why doesn't numpy.tensordot work for the latter two sum-reductions? Well, we need to keep the first axis between dinMat1/dinMat2 aligned against the first axis of temp_rho/np.conj(temp_rho), which isn't possible with tensordot, because the axes that are not sum-reduced are kept as separate output axes rather than being aligned elementwise.
Therefore, when used with np.tensordot, we would end up with two separate axes of length N, one coming from the first axis of each of the two inputs.
How to restructure a pandas dataframe even in the presence of missing data If I had a pandas DataFrame which looks like this: df=pandas.DataFrame(range(8)) 00 01 12 23 34 45 56 67 7What would be the best way to restructure this frame into two columns of five rows, regardless of the fact I only have 8 numbers? so the output would be : 0 10 0 51 1 6 2 2 73 3 NaN4 4 NaN5 5 NaN | Try this:pd.DataFrame([df[0].values[:5], df[0].values[5:]]).T 0 10 0.0 5.01 1.0 6.02 2.0 7.03 3.0 NaN4 4.0 NaN And if you really want to repeat the 5 twice:pd.DataFrame([df[0].values[:6], df[0].values[5:]]).T 0 10 0.0 5.01 1.0 6.02 2.0 7.03 3.0 NaN4 4.0 NaN5 5.0 NaN
Bisection algorithm to find multiple roots Is there a way to find all the roots of a function using something on the lines of the bisection algorithm?I thought of checking on both sides of the midpoint in a certain range but it still doesn't seem to guarantee how deep I would have to go to be able to know if there is a root in the newly generated range; also how would I know how many roots are there in a given range even when I know that the corresponding values on applying the function are of opposite sign?Thanks. | The bisection algorithm can be used to find a root in a range where the function is monotonic. You can find such segments by studying the derivative function, but in the general case, no assumptions can be made as to the monotonicity of a given function over any range.For example, the function f(x) = sin(1/x) has an infinite number of roots between -1 and 1. To enumerate these roots, you must first determine the ranges where it is monotonic and these ranges become vanishingly small as x comes closer to 0. |
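When a full derivative analysis is impractical, one common heuristic (a sketch, not a guarantee: it can miss roots that fall between grid points, and it cannot bracket roots of even multiplicity where the sign does not change) is to scan the interval on a grid, bracket each sign change, and bisect each bracket:

```python
import math

def find_roots(f, a, b, steps=1000, tol=1e-10):
    """Scan [a, b] on a grid for sign changes, then bisect each bracket."""
    roots = []
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    for lo, hi in zip(xs, xs[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:                 # grid point happens to be a root
            roots.append(lo)
            continue
        if flo * fhi < 0:              # sign change -> at least one root inside
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    return roots

roots = find_roots(math.sin, 1.0, 10.0)   # expect roots near pi, 2*pi, 3*pi
```

Shrinking the grid step improves the odds of catching closely spaced roots, but as the answer notes, for a function like sin(1/x) near 0 no finite grid can enumerate them all.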
how to pass subprocess object to array I need to pass the subprocess object to an array, so I can process the contents of the path.here is my codeunder "/file/path" the content is as such:id1 name1 location1 hive 1 2014-10-01 4:02 /file/pathp = subprocess.Popen(["hdfs", "dfs", "-ls", "/file/path"], stdout=subprocess.PIPE)result = p.communicate()[0]path = []p = path.append(result)but when I print p, it returns "None" ,so what can I do if I need to pass the contents to an array, can I even do that in python?Thanks! | OK, so the results of the screen output from the command hdfs dfs -ls /some/path is going to be a string. If your intention is to actually obtain file names you'll have to parse that string. If you want to add those paths to an array that contains the path a cheap way to do that is:import subprocessbase_dir = "/user/root/input" # NOTE: This is my hdfs path, yours may differp = subprocess.Popen(["hdfs", "dfs", "-ls", base_dir], stdout=subprocess.PIPE)result = p.communicate()[0]lines = result.splitlines()paths = [word.strip() for word in ' '.join(line for line in lines).split(' ') if base_dir in word]Outputprint paths# ['/user/root/input/file1.txt', '/user/root/input/file2.txt']CAUTION This is a hacky way of doing this, but it should do what it is you are trying to accomplish. |
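Distilled down, the parsing step is: split the captured text into lines, skip the "Found N items" header, and take the last whitespace-separated field of each row. The sample output below is an assumed shape of an `hdfs dfs -ls` listing; adjust the filter if your cluster prints something different:

```python
# captured stdout from `hdfs dfs -ls` (shape assumed from typical output)
result = """Found 2 items
-rw-r--r--   1 root supergroup   12 2014-10-01 04:02 /file/path/file1.txt
-rw-r--r--   1 root supergroup   34 2014-10-01 04:03 /file/path/file2.txt"""

paths = [line.rsplit(None, 1)[-1]          # last field of the row = the path
         for line in result.splitlines()
         if line.startswith(("-", "d"))]   # keep only file/directory rows
```

Taking the last field with `rsplit(None, 1)` is more robust than searching for the base directory string, since it also works for paths that contain it as a substring elsewhere.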
How to restore a tensorflow model? I am trying to restore a model using a .ckpt file, which I got by running word2vec_optimized.py in tensorflow/models/embedding. I am not sure how to go about restoring the variables so that I can load the model and use it because all of the tf variables are encapsulated and initialized in classes in tensorflow/models/embedding/word2vec_optimized.py. Any help would be appreciated. Also if I do "restore" the .ckpt created, do I now have a Word2Vec instance or what do I actually get when I restore a model using a .ckpt? | When you call the save function on your saver you pass it the tf.Session that you were using to train the model. This contains a reference to the graph which contains all the variables. Don't confuse python variables with tensorflow variables. Even if you no longer have a variable in python which points to a tensorflow variable you created, it still exists if it is part of the computational graph. After you create your model try running the following code. for v in tf.all_variables(): print(v.name)This will print out the name of every variable you created. The saver will by default save all of these. So long as the variables have the same name when you restore them it does not matter where they were created. Just make sure you do the restore after all variables have been added to the model. When you give a variable an initializer the initialization is only run when you call sess.run(tf.initialize_all_variables()). You don't need to call this if you are just restoring the values. I often use the following code.sess = tf.Session()saver = tf.train.Saver()if 'restore' in sys.argv: saver.restore(sess, '/media/chase/98d61322-9ea7-473e-b835-8739c77d1e1e/model.chk')else: sess.run(tf.initialize_all_variables())This code works fine when I am using the tensorflow RNN classes which create variables inside of them.
generate a list of permutations that preserve a given partitioning (context: Graph Isomorphism) I'm working on a python program that tests two given networkx graphs G and H for an isomorphism by using a brute force method. Each node in each graph has been assigned a label and color attribute, and the program should test all possible bijections between graph G, for which the labeling is fixed, and graph H, for which the labeling can be varied. In addition, I only need to examine the bijections which ensure that a node of color 'i' in graph G is mapped onto a node in H which also has color 'i'. To that end, I've created a class which inherits all the methods/attributes from a nx.Graph, and written several methods of my own.So far what I've done is gone through both graphs, and created a dictionary which gives the possible valid mappings for each node in G onto H.e.g. for the graph G == 1-2-3 the coloring would be: color_g = {1: 1, 2: 2, 3:1} because '1' and '3' have the same degree, and 2 has a different degree.if graph H == 2-1-3 thencolor_h = {2:1, 1: 2, 3:1}and when I run a group_by_color function to give possible mappings from G to H I would get the following dictionarymap = {1: [2, 3], 2: [1], 3:[2, 3]}what that means is that due to the color partitioning node '1' from G could be mapped onto either '2' or '3' from H, '2' from G can only be mapped onto '1' from H, and so on.Here is the problem: I am struggling to generate a list of all valid permutations from G to H that preserve the partitioning given by the coloring, and am not sure how to do it. I am well aware of python's permutations function; in a previous version of the brute force method I didn't consider the color, which meant the list of permutations was significantly larger (and the run-time much slower), but the implementation was also easier. Now I want to speed things up by only considering permutations which are possible according to the given colorings.
the question: How can I take my map dictionary, and use it to generate the bijection functions that are color-conserving/preserving (german: 'farbe-erhaltend')? Or would you suggest a different method altogether? some other facts: the nodes in both graphs are labeled consecutively and ascendingthe 'colors' I'm using are numbers because the graphs can become arbitrarily large.I'd be grateful for any help,ItsAnApe | To answer the algorithmic part of your question: Say your partition has k cells: C_1, ..., C_k. There is a 1 to 1 correspondence between permutations of the overall set that preserve the partition and the Cartesian product P_1 x P_2 x ... x P_k where P_i is the set of permutations of the cell C_i. itertools contains a method for generating the Cartesian product. Exactly how you want to use a tuple like (p_1, p_2, ..., p_k) where each p_i is a permutation of cell C_i depends on your purposes. You could write a function to combine them into a single permutation if you want -- or just iterate over them if you are going to be using the permutations on a cell by cell basis anyway.The following is proof of concept. It represents a partition as a list of tuples, where each tuple represents a cell, and it lists all permutations of the overall set which preserve the partition. In the test case it lists the 2x6x2 = 24 permutations of {1,2,3,4,5,6,7} which preserve the partition [(1,4), (2,3,5),(6,7)]. No need to step through and filter all 7!
= 5040 permutations:#list_perms.pyimport itertoolsdef combinePermutations(perms): a = min(min(p) for p in perms) b = max(max(p) for p in perms) d = {i:i for i in range(a,b+1)} for p in perms: pairs = zip(sorted(p),p) for i,j in pairs: d[i] = j return tuple(d[i] for i in range(a,b+1))def permute(cell): return [p for p in itertools.permutations(cell)]def listGoodPerms(cells): products = itertools.product(*(permute(cell) for cell in cells)) return [combinePermutations(perms) for perms in products]#to test:myCells = [(1,4), (2,3,5), (6,7)]for p in listGoodPerms(myCells): print(p)The output when the module is run:(1, 2, 3, 4, 5, 6, 7)(1, 2, 3, 4, 5, 7, 6)(1, 2, 5, 4, 3, 6, 7)(1, 2, 5, 4, 3, 7, 6)(1, 3, 2, 4, 5, 6, 7)(1, 3, 2, 4, 5, 7, 6)(1, 3, 5, 4, 2, 6, 7)(1, 3, 5, 4, 2, 7, 6)(1, 5, 2, 4, 3, 6, 7)(1, 5, 2, 4, 3, 7, 6)(1, 5, 3, 4, 2, 6, 7)(1, 5, 3, 4, 2, 7, 6)(4, 2, 3, 1, 5, 6, 7)(4, 2, 3, 1, 5, 7, 6)(4, 2, 5, 1, 3, 6, 7)(4, 2, 5, 1, 3, 7, 6)(4, 3, 2, 1, 5, 6, 7)(4, 3, 2, 1, 5, 7, 6)(4, 3, 5, 1, 2, 6, 7)(4, 3, 5, 1, 2, 7, 6)(4, 5, 2, 1, 3, 6, 7)(4, 5, 2, 1, 3, 7, 6)(4, 5, 3, 1, 2, 6, 7)(4, 5, 3, 1, 2, 7, 6) |
Python - Grab Random Names Alright, so I have a question. I am working on creating a script that grabs a random name from a list of provided names, and generates them in a list of 5. I know that you can use the commanditems = ['names','go','here']rand_item = items[random.randrange(len(items))]This, if I am not mistaken, should grab one random item from the list. Though if I am wrong correct me, but my question is how would I get it to generate, say a list of 5 names, going down like below;randomnamesgeneratedusingcodeAlso is there a way to make it where if I run this 5 days in a row, it doesn't repeat the names in the same order?I appreciate any help you can give, or any errors in my existing code.Edit:The general use for my script will be to generate task assignments for a group of users every day, 5 days a week. What I am looking for is a way to generate these names in 5 different rotations. I apologize for any confusion. Though some of the returned answers will be helpful.Edit2:Alright so I think I have mostly what I want, thank you Markus Meskanen & mescalinum, I used some of the code from both of you to resolve most of this issue. I appreciate it greatly. Below is the code I am using now.import randomitems = ['items', 'go', 'in', 'this', 'string']rand_item = random.sample(items, 5)for item in random.sample(items, 5):print item | You could use random.choice() to get one item only:items = ['names','go','here']rand_item = random.choice(items)Now just repeat this 5 times (a for loop!)If you want the names just in a random order, use random.shuffle() to get a different result every time. |
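Putting the pieces together (Python 3 print syntax here): `random.sample(names, len(names))` gives all the names in a fresh random order with no repeats within a day, and seeding a private `random.Random` with the date (just one idea, not a requirement) additionally makes a given day's order reproducible:

```python
import random

names = ['alice', 'bob', 'carol', 'dave', 'erin']   # sample names, made up

def daily_order(all_names, day_seed):
    # a private Random seeded per day makes each day's order reproducible
    rng = random.Random(day_seed)
    return rng.sample(all_names, len(all_names))    # every name once, shuffled

for name in daily_order(names, "2015-06-01"):
    print(name)
```

Across five days the orders will almost certainly differ, though with pure randomness an occasional repeat order is possible; if repeats must be ruled out, keep the previous days' orders and re-draw on a collision.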
Python os.listdir and path with escape character I have a string variable that I have read from a file which is a path that contains an escape character i.e. dir="...Google\\ Drive"I would like to then list all the files and directories in that path with os.listdir i.e.os.listdir(dir)But I get this error (because I really only want one escape):OSError: [Errno 2] No such file or directory: '....Google\\ Drive/'I have tried using os.listdir(r'....Google\ Drive')os.listdir(dir.replace("\\","\"))os.listdir(dir.decode('string_escape'))all to no avail. I read this Python Replace \\ with \, but "...Google\ drive".decode('string_escape') != "...Google\ Drive".Any ideas? | If the path could be arbitrary, you can split the string using \\, removing any '' you may get along the way, and then do os.path.join. Example ->>> import os.path>>> l = "Google\Drive\\\\ Temp">>> os.path.join(*[s for s in l.split('\\') if s != ''])'Google\\Drive\\ Temp'Then you can use that in os.listdir() to list the directories.
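If the backslash in the file is shell-style escaping of a space (as the error message suggests), it may be simplest to undo that escaping before calling `os.listdir`; here the two-character sequence backslash-space is replaced by a plain space:

```python
# the string as read from the file: a literal backslash before the space
raw = "Google\\ Drive"

clean = raw.replace("\\ ", " ")   # -> 'Google Drive'
# os.listdir(clean)               # now matches the directory's real name
```

The filesystem itself stores the name with a plain space; the backslash only exists in shell-quoted representations of the path.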
Python os.walk topdown true with regular expression I am confused as to why the following ONLY works with topdown=False and returns nothing when set to True?The reason I want to use topdown=True is because it is taking a very long time to traverse through the directories. I believe that going topdown will reduce the time taken to produce the list. for root, dirs, files in os.walk(mypath, topdown=False): #Why doesn't this work with True? dirs[:] = [d for d in dirs if re.match('[DMP]\\d{8}$', d)] for dir in dirs: print(dir) | That is because your root directory doesn't match the regex, so after the first iteration, dirs is set to empty.If what you want is to find all subdirectories which match the pattern, you should either:use topdown = False, ordo not prune the directories
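A self-contained way to see the fix: build a small tree in a temporary directory and, with topdown=True, collect matches before pruning. Pruning only the matched directories (a variant of "do not prune") still lets the walk descend through non-matching parents while skipping the insides of matches; the directory names here are invented to fit the `[DMP]\d{8}$` pattern:

```python
import os
import re
import tempfile

pat = re.compile(r'[DMP]\d{8}$')

with tempfile.TemporaryDirectory() as base:
    # a matching dir nested under a non-matching one, plus one at the top
    os.makedirs(os.path.join(base, 'projects', 'D20150102'))
    os.makedirs(os.path.join(base, 'M20150101'))

    matches = []
    for root, dirs, files in os.walk(base, topdown=True):
        matched = [d for d in dirs if pat.match(d)]
        matches.extend(matched)
        # prune only the matched dirs; keep walking into the rest
        dirs[:] = [d for d in dirs if d not in matched]

names = sorted(matches)
```

With the question's original pruning, the first iteration would empty `dirs` (neither 'projects' nor the root matches... 'M20150101' does, but 'projects' would be discarded), so the nested match would never be reached.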
How do I iterate through a list of strings and print each item? I have a list of strings and want to print each of the strings in the list, meaning not ['word1','word2','word3'] but instead: word1, word2, word3.I tried doing this:for i in list: print list[i]but I get the message "list indices must be integers, not str"I am really confused on how I should actually do this? | for i in list: print iHere i is the list element: in other words, it takes on the values of the member strings, in order.
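The original attempt fails because `for i in list` already yields the strings themselves, so `list[i]` then tries to index the list with a string. Iterate directly, or use `enumerate` when you also need the position (Python 3 print syntax below; `words` is just a sample list, and it is better not to name your own variable `list`, which shadows the built-in):

```python
words = ['word1', 'word2', 'word3']   # sample list

for word in words:            # word is each string in turn
    print(word)

for i, word in enumerate(words):   # i is the integer index when you need it
    print(i, word)
```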
Draw breaklines(dotted line/dashed line) in Opencv How to add breakline(dotted line/dashed line) in OpenCV drawing functions like cv2.line(),cv2.rectangle() ?Is there a line type for break lines? | If the line is horizontal or vertical, you can do something like this. Kind of hacky, but gets the job done in just a few lines if you don't need anything fancy.y = 100 # vertical position of the linethickness = 2 # thickness of the linex0 = 0 # leftmost part of the linex1 = img.shape[1] # rightmost part of the lineon_time = 10 # pixels out of period that are solidperiod = 20 # period between dashescolour = (0, 0, 255)img[ y - thickness // 2:y + thickness // 2, [x for x in range(x0, x1) if x % period < on_time]] = colour |
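The dashed effect in the answer comes entirely from the column selection `x % period < on_time`; stripped of the image array, the index arithmetic can be checked on its own:

```python
x0, x1 = 0, 40        # the line spans columns 0..39
on_time, period = 10, 20

# columns that get painted: the first 10 pixels of every 20-pixel period
dash_cols = [x for x in range(x0, x1) if x % period < on_time]
# -> columns 0..9 and 20..29 are "on"; 10..19 and 30..39 are gaps
```

Adjusting `on_time` relative to `period` changes dashes into dots (small `on_time`) or long strokes (large `on_time`).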
Regex to match multiline text between two words including the words I'm editing a dictionary and trying to place every pronunciation tag [s]...[/s] after the transcription tag [c darkslategray]...[/c]. The problem is that not all the words contain both pronunciation and transcription.Here's my current regex and the part of the dictionary:(\s\[s\].*?\[\/s\])(?s)(\s.*?\[c darkslategray\].*?\[\/c\])Then replace with $2$1 to move tags.contrast [s]contra62.wav[/s] [b]con·trast[/b] [c blue][b]I[/b][/c] [m1]({{<vr>}}[p]or[/p] [b]A[/b]{{</vr>}})[c darkslategray]/kənˈtræst, [i]Brit[/i] kənˈtrɑːst/[/c] [p]verb[/p] [m2][b]1[/b] \[[p]no obj[/p]\] [b]:[/b] to be different especially in a way that is very obvious[/m]repellency [s]repell01.wav[/s] [m1][b]re·pel·len·cy[/b] [c darkslategray]/rıˈpɛlənsi/[/c] [p]noun[/p] \[[p]noncount[/p]\][/m] [m2][*][ex]a fabric known for its water [i]repellency[/i][/ex][/*][/m]labyrinth [s]labyri01.wav[/s]charge card [m1][p]noun[/p], [p]pl[/p] [b]⋯ cards[/b] \[[p]count[/p]\] [m2][b]:[/b] ↑<<credit card>>[/m]Antarctic [s]gganta10.wav[/s] ↑<<antarctic>>ant [s]ant00001.wav[/s] [m1][c darkslategray]/ˈænt/[/c] [p]noun[/p], [p]pl[/p] [b]ants[/b] \[[p]count[/p]\] [m2][b]:[/b] a kind of small insect that lives in an organized social group[/m] [m3][*][ex]a colony of [i]ants[/i] = an [i]ant[/i] colony[/ex][/*][/m]ring [s]ring0004.wav[/s]Regex101 Example: https://regex101.com/r/cG3yK3/5As you can see, the first two matches are fine, but the third match is not what I'm looking for. It captures the pronunciation of one word and transcription of another word. Is there any way to fix it? | You regex should have a negative lookahead to make sure no nested [s]...[/s] is matched. Use this regex:(\s\[s\].*?\[\/s\])(?s)(\s(?:(?!\[s\].*?\[\/s\]).)*?\[c darkslategray\].*?\[\/c\])Updated RegEx Demo |
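Applied from Python, the same substitution looks like the sketch below; the sample dictionary entry is shortened, and `re.S` replaces the inline `(?s)` so the DOTALL flag applies to the whole pattern:

```python
import re

pattern = re.compile(
    r'(\s\[s\].*?\[/s\])'               # group 1: the pronunciation tag
    r'(\s(?:(?!\[s\].*?\[/s\]).)*?'     # group 2: text with no nested [s]...[/s] ...
    r'\[c darkslategray\].*?\[/c\])',   # ...up to and including the transcription tag
    re.S)

text = "contrast [s]contra62.wav[/s] [b]contrast[/b] [c darkslategray]/k/[/c] rest"
swapped = pattern.sub(r'\2\1', text)
# -> 'contrast [b]contrast[/b] [c darkslategray]/k/[/c] [s]contra62.wav[/s] rest'
```

The tempered dot `(?:(?!\[s\].*?\[/s\]).)*?` is what stops group 2 from running across a following entry's own pronunciation tag.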
Returning a string containing HTML in Python I am just wondering if anyone could help me to get this function to work. I am wishing to return a string which contains an HTML list for each of the items given in a list. def returnString(l):hi = []hi.append(l)ol = "<ol>"for i in hi: ol += "<li>"+i+"</li>"ol += "</ol>"return ol | This depends on the type of items in the list but assuming that they can be converted to string a possibility would be to do the following:for i in hi: ol += "<li>"+str(i)+"</li>"ol += "</ol>"return ol |
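Beyond the `str()` conversion there is a second bug worth noting: the function appends the whole list `l` into `hi`, so the loop runs once with `i` bound to the entire list rather than to each item. Iterating the input directly (shown with normal indentation) fixes both problems:

```python
def returnString(items):
    # build an HTML ordered list from any iterable of items
    ol = "<ol>"
    for item in items:
        ol += "<li>" + str(item) + "</li>"
    ol += "</ol>"
    return ol

returnString(['word1', 'word2', 3])  # -> '<ol><li>word1</li><li>word2</li><li>3</li></ol>'
```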
I am having a syntax error I am having a syntax error at if first2 == 1:import timename = raw_input("What is your name? ")print "Hello, " + nametime.sleep(1.5)print "Welcome to Kill the Dragon."time.sleep(2)print "In this game, you will choose an option, and if you make the right choices, you will slay the dragon."time.sleep(5)print "You are walking through a forest on a cold, damp, windy night."time.sleep(3)print "You see a castle, and as you are looking for a shelter, you decide to try your luck there."time.sleep(3)print "You are greeted by a knight in shining armor. He gives you a questioning look. What do you say?"time.sleep(1)first = int(raw_input("1: Say you are looking for shelter from the storm " "\n2: Say you are lost and need food "))time.sleep(2)if first == 1: print "Ok, " + name + ",we could all use some help from this bad storm outside. Please let me lead you to a good place." time.sleep(4) print "After a good dinner, you ask if there is anything you can do to help." time.sleep(2)print "Well... There is one thing you can do. A dragon located in Eucalyptus Cave has been causing many problems lately./nIf you kill the dragon, we will give you a large reward."time.sleep(1)first2 = int(raw_input("1. Tell the knight you will kill the dragon.\n2. Tell the knight you will not kill the dragon. ")if first2 == 1: print "Oh, good. If you had declined, we would have thrown you into the dungeons.if first2 == 2: print "You will not kill the dragon for us? Off to the dungeons it is!" time.sleep(1.2) print "SLAM!"if first == 2: print "Ugg, I hate filthy peasants! Maybe if you kill the dragon living in that cave over there, we will let you stay." time.sleep(4) second2 = int(raw_input("1: Insist on getting inside the castle" + "\n2: Ask the knight for armor, a weapon, and a steed")) if second2 == 1: print "The knight tells you to get lost, and that the deal is off." if second2 == 2: print "The knight gives you things, and you set out to slay the dragon." 
 time.sleep(3) second3 = raw_input ("Once you arrive at the cave, you see two ways to go. Should you go right or left? ") if second3 == "right": print "You are greeted by the carcusses of many older knights who died trying to battle the dragon. \nYou wish you didn't see it, and turn back to go the other way." second3 = "left" if second3 == "left": print "You are greeted by the sleeping, green, slimy, two-headed dragon. \n He is sleeping, but he smells you and wakes up. \nHe is about to stand up. \nWhat do you do? " | This string is never terminated:if first2 == 1: print "Oh, good. If you had declined, we would have thrown you into the dungeons.Add a quotation mark at the end:if first2 == 1: print "Oh, good. If you had declined, we would have thrown you into the dungeons."There is also a missing closing parenthesis on the line just above it, which is why Python reports the syntax error at if first2 == 1:. The int( call is opened but never closed, so add a second ) at the end:first2 = int(raw_input("1. Tell the knight you will kill the dragon.\n2. Tell the knight you will not kill the dragon. "))
Multivariate time series forecasting with 3 months dataset I have 3 months of data (each row corresponding to each day) generated and I want to perform a multivariate time series analysis for the same : the columns that are available are - Date Capacity_booked Total_Bookings Total_Searches %VariationEach Date has 1 entry in the dataset and has 3 months of data and I want to fit a multivariate time series model to forecast other variables as well. So far, this was my attempt and I tried to achieve the same by reading articles. I did the same - df['Date'] = pd.to_datetime(Date , format = '%d/%m/%Y')data = df.drop(['Date'], axis=1)data.index = df.Datefrom statsmodels.tsa.vector_ar.vecm import coint_johansenjohan_test_temp = datacoint_johansen(johan_test_temp,-1,1).eig#creating the train and validation settrain = data[:int(0.8*(len(data)))]valid = data[int(0.8*(len(data))):]freq=train.index.inferred_freqfrom statsmodels.tsa.vector_ar.var_model import VARmodel = VAR(endog=train,freq=train.index.inferred_freq)model_fit = model.fit()# make prediction on validationprediction = model_fit.forecast(model_fit.data, steps=len(valid))cols = data.columnspred = pd.DataFrame(index=range(0,len(prediction)),columns=[cols]) for j in range(0,4): for i in range(0, len(prediction)): pred.iloc[i][j] = prediction[i][j]I have a validation set and prediction set. However the predictions are way worse than expected. The plots of the dataset are - 1. % VariationCapacity_BookedTotal bookings and searches The output that I am receiving are - Prediction dataframe - Validation Dataframe - As you can see that predictions are way off what is expected. Can anyone advise a way to improve the accuracy. Also, if I fit the model on whole data and then print the forecasts, it doesn't take into account that new month has started and hence to predict as such. How can that be incorporated in here. any help is appreciated. 
EDITLink to the dataset - DatasetThanks | One way to improve your accuracy is to look at the autocorrelation of each variable, as suggested in the VAR documentation page:https://www.statsmodels.org/dev/vector_ar.htmlThe bigger the autocorrelation value is for a specific lag, the more useful this lag will be to the process. Another good idea is to look at the AIC criterion and the BIC criterion to verify your accuracy (the same link above has an example of usage). Smaller values indicate a higher probability that you have found the true estimator.This way, you can vary the order of your autoregressive model and see the one that provides the lowest AIC and BIC, both analyzed together. If AIC indicates the best model is with lag of 3 and the BIC indicates the best model has a lag of 5, you should analyze the values of 3, 4 and 5 to see which gives the best results. The best scenario would be to have more data (as 3 months is not much), but you can try these approaches to see if they help.
Extending array by repeating values if another array is not continuous I am tracking some particles on a flat surface using the TrackPy plugin. This results in a dataframe with positions in x and y and a corresponding frame number, here illustrated by a simple list:

x = [80.1, 80.2, 80.1, 80.2, 80.3]
y = [40.1, 40.2, 40.1, 40.2, 40.3]
frame = [1, 2, 3, 4, 5]

However, due to the experimental setup, one may lose the particle for a single frame, resulting in:

x = [80.1, 80.2, 80.1, 80.2, 80.3]
y = [40.1, 40.2, 40.1, 40.2, 40.3]
frame = [1, 2, 3, 4, 6]

Now I would like to extend all lists so that frame is continuous and x, y repeat the previous value if no frame is present in the original data, resulting in something like:

x = [80.1, 80.2, 80.1, 80.2, 80.2, 80.3]
y = [40.1, 40.2, 40.1, 40.2, 40.2, 40.3]
frame = [1, 2, 3, 4, 5, 6] | You can use Pandas, which internally utilizes NumPy arrays:

import numpy as np
import pandas as pd

df = pd.DataFrame({'x': x, 'y': y}, index=frame)
df = df.reindex(np.arange(df.index.min(), df.index.max()+1)).ffill()

Result

print(df)
      x     y
1  80.1  40.1
2  80.2  40.2
3  80.1  40.1
4  80.2  40.2
5  80.2  40.2
6  80.3  40.3

You can then extract your result to lists:

x = df['x'].tolist()
y = df['y'].tolist()
frame = df.index.tolist()
How to join a dataframe and dictionary on two rows I have a dictionary and a dataframe. The dictionary contains a mapping of one letter to one number, and the dataframe has a row containing these specific letters and another row containing these specific numbers, adjacent to each other (not that it necessarily matters).

I want to update the row containing the numbers by matching each letter in the row of the dataframe with the letter in the dictionary and then replacing the corresponding number (number in the same column as the letter) with the value of that letter from the dictionary.

df = pd.DataFrame(np.array([[4, 5, 6], ['a', 'b', 'c'], [7, 8, 9]]))
dict = {'a':2, 'b':3, 'c':5}

Let's say dict is the dictionary and df is the dataframe. I want the result to be df2.

df2 = pd.DataFrame(np.array([[3, 2, 5], ['b', 'a', 'c'], [7, 8, 9]]))

df
   0  1  2
0  4  5  6
1  a  b  c
2  7  8  9

dict
{'a': 2, 'b': 3, 'c': 5}

df2
   0  1  2
0  2  3  5
1  a  b  c
2  7  8  9

I do not know how to use merge or join to fix this; my initial thoughts are to make the dictionary a dataframe object but I am not sure where to go from there. | It's a little weird, but:

df = pd.DataFrame(np.array([[4, 5, 6], ['a', 'b', 'c'], [7, 8, 9]]))
d = {'a': 2, 'b': 3, 'c': 5}
df.iloc[0] = df.iloc[1].map(lambda x: d[x] if x in d.keys() else x)
df
#    0  1  2
# 0  2  3  5
# 1  a  b  c
# 2  7  8  9

I couldn't bring myself to redefine dict to be a particular dictionary. :D

After receiving a much-deserved smackdown regarding the speed of apply, I present to you the theoretically faster approach below:

df.iloc[0] = df.iloc[1].map(d).where(df.iloc[1].isin(d.keys()), df.iloc[0])

This gives you the dictionary value of d (df.iloc[1].map(d)) if the value in row 1 is in the keys of d (.where(df.iloc[1].isin(d.keys()), ...)), otherwise gives you the value in row 0 (...df.iloc[0]).

Hope this helps!
Python 3: JSON File Load with Non-ASCII Characters Just trying to load this JSON file (with non-ASCII characters) as a Python dictionary with Unicode encoding, but still getting this error:

return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 90: ordinal not in range(128)

JSON file content =

"tooltip": {
    "dxPivotGrid-sortRowBySummary": "Sort\"{0}\"byThisRow",
}

import sys
import json

data = []
with open('/Users/myvb/Desktop/Automation/pt-PT.json') as f:
    for line in f:
        data.append(json.loads(line.encode('utf-8','replace')))

| You have several problems as near as I can tell. First is the file encoding. When you open a file without specifying an encoding, the file is opened with whatever sys.getfilesystemencoding() is. Since that may vary (especially on Windows machines) it's a good idea to explicitly use encoding="utf-8" for most json files. Because of your error message, I suspect that the file was opened with an ascii encoding.

Next, the file is decoded from utf-8 into python strings as it is read by the file system object. The utf-8 line has already been decoded to a string and is already ready for json to read. When you do line.encode('utf-8','replace'), you encode the line back into a bytes object which json.loads (that is, "load string") can't handle.

Finally, "tooltip":{ "navbar":"Operações de grupo"} isn't valid json, but it does look like one line of a pretty-printed json file containing a single json object. My guess is that you should read the entire file as 1 json object.

Putting it all together you get:

import json

with open('/Users/myvb/Desktop/Automation/pt-PT.json', encoding="utf-8") as f:
    data = json.load(f)

From its name, it's possible that this file is encoded as a Windows Portuguese code page. If so, the "cp860" encoding may work better.
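The answer's point about explicit encodings can be demonstrated end to end with a throwaway file (the temp path and the sample string below are made up for illustration):

```python
import json
import os
import tempfile

payload = {"tooltip": {"navbar": "Operações de grupo"}}

# Write the file as UTF-8 -- the encoding the answer recommends.
path = os.path.join(tempfile.mkdtemp(), "pt-PT.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(payload, f, ensure_ascii=False)

# Reading it back with an explicit encoding works regardless of the
# platform default.
with open(path, encoding="utf-8") as f:
    data = json.load(f)
print(data["tooltip"]["navbar"])  # Operações de grupo

# Forcing the ascii codec reproduces the question's UnicodeDecodeError.
err = None
try:
    with open(path, encoding="ascii") as f:
        f.read()
except UnicodeDecodeError as e:
    err = e
print(type(err).__name__)  # UnicodeDecodeError
```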
TypeError: unhashable type: 'list' in Django/djangorestframework first of all I know there are some answers about this TypeError but none of them resolved my case. I did the research and that is why I am posting this question.I got sutck at error saying TypeError: unhashable type:'list' in Django/djangorestframework.I am not even sure where the error is located at because traceback is not that clear to me.Here is my code:serializers.py:from rest_framework import serializersfrom .models import Userclass UserSerializer(serializers.ModelSerializer): """User model serializer""" class Meta: model = User fields = ('id', 'email', 'username', 'password') extra_kwargs = { 'password': { 'write_only': True, 'style': {'input': 'password'} } } def create(self, validated_data): """Create and return a new user""" user = User.objects.create_user(email=validated_data['email'], username=validated_data['username'], password=validated_data['password']) return userTraceback:Traceback (most recent call last): File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner self.run() File "C:\Python36\lib\threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "C:\Python36\lib\site-packages\django\utils\autoreload.py", line 54, in wrapper fn(*args, **kwargs) File "C:\Python36\lib\site-packages\django\core\management\commands\runserver.py", line 117, in inner_run self.check(display_num_errors=True) File "C:\Python36\lib\site-packages\django\core\management\base.py", line 390, in check include_deployment_checks=include_deployment_checks, File "C:\Python36\lib\site-packages\django\core\management\base.py", line 377, in _run_checks return checks.run_checks(**kwargs) File "C:\Python36\lib\site-packages\django\core\checks\registry.py", line 72, in run_checks new_errors = check(app_configs=app_configs) File "C:\Python36\lib\site-packages\django\contrib\auth\checks.py", line 50, in check_user_model if not cls._meta.get_field(cls.USERNAME_FIELD).unique: File 
"C:\Python36\lib\site-packages\django\db\models\options.py", line 551, in get_field return self._forward_fields_map[field_name]TypeError: unhashable type: 'list'models.py:from django.db import modelsfrom django.contrib.auth.models import BaseUserManager, AbstractBaseUser, PermissionsMixinfrom django.utils import timezoneclass UserManager(BaseUserManager): """Manager for User model""" def create_user(self, email, username, password=None): """Function for creating a user""" if not email: return ValueError("Email must be provided") email = self.normalize_email(email) user = self.model(email=email, username=username) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, username, password): """Function for creating superusers""" user = self.create_user(email, username, password) user.is_staff = True user.is_superuser = True user.save(using=self._db) return userclass User(AbstractBaseUser, PermissionsMixin): """User database model""" email = models.EmailField(max_length=255) username = models.CharField(max_length=50) created_at = models.DateTimeField(default=timezone.now) upvotes = models.IntegerField(default=0) is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=False) objects = UserManager() USERNAME_FIELD = ['email'] REQUIRED_FIELDS = ['username'] def __str__(self): return self.usernameI think the problem is in my serializer but if you need me to provide any other file please comment and I will update the question. | The error you get is because the underlying code tries to get the username field for the User model, but you have set it to a list instead of a string, which means that it cannot find the specified field.Change USERNAME_FIELD = ['email'] to USERNAME_FIELD = 'email' |
How call result of a function in another function in a class? I have a code like this, in Python 2.7 :class App(ttk.frame): def __init__(self, master=None): ttk.Frame.__init__(self, master) self.grid() self.createWidgets() def createWidgets(self): self.okButton = ttk.Button(self, text = "OK", command = self.function2) self.okButton.grid(column = 1, row = 1) def function1(self, arg1, arg2): # function create fields in frame self.arg1 = arg1 self.arg2 = arg2 def function2(self): #function calcule things with values of fields when Ok button is click doing_thing_to(x, y, z, w)app = App()app.function1("x", "y") # Create first fieldapp.function1("z", "w") # Create another fieldmainloop()When function2 is calling I have a error message : global name x, y not define.I try to putx.get(); y.get()in function2 but have the same error.I try to putreturn arg1, arg2in function1 but have the same issue.How call result of a function in another function in a class ?EDIT : the full code because I don't know how to simplify to be understanding :( function champ et champdouble have the function1 role and function callback has the function2 role.#!/usr/bin/env python# -*- coding: utf-8 -*-from os import getcwd, pathimport Tkinter as tkfrom Tkinter import *import tkFileDialog as filedialogimport shutilimport ttkimport pandas as pdimport numpy as npimport matplotlib.pyplot as pltimport matplotlibmatplotlib.style.use('ggplot')wd = getcwd() # working directoryclass Application(ttk.Frame): def __init__(self, master=None): ttk.Frame.__init__(self, master) self.grid() self.createWidgets() def createWidgets(self): self.quitButton = ttk.Button(self, text='Quitter', command=self.quit) self.quitButton.grid(column=5, row=10, sticky=W) self.okButton = ttk.Button(self, text="clic !", command=self.callback) self.okButton.grid(column=4, row=10, sticky=W) def champ(self, nom, defaut, col, ran, lab, collab, ranlab, largeur=7): self.nom = nom self.defaut = defaut self.col = col self.ran = ran self.lab = 
lab self.collab = collab self.ranlab = ranlab self.largeur = largeur self.nom = StringVar() nom = ttk.Entry(mainframe, width=largeur, textvariable=nom) nom.insert(0, defaut) if nom.bind('<FocusIn>'): nom.delete(0, "end") nom.grid(column=col, row=ran, sticky=W) ttk.Label(mainframe, text=lab).grid(column=collab, row=ranlab, sticky=E) def champdouble(self, nom1, defaut1, nom2, defaut2, col, ran, lab, lab2, collab, ranlab, largeur=7): self.nom1 = nom1 self.defaut1 = defaut1 self.nom2 = nom2 self.defaut2 = defaut2 self.col = col self.ran = ran self.lab = lab self.lab2 = lab2 self.collab = collab self.ranlab = ranlab self.largeur = largeur nom1 = StringVar() nom1 = ttk.Entry(mainframe, width=largeur, textvariable=nom1) nom1.insert(0, defaut1) nom1.grid(column=col, row=ran, sticky=W) ttk.Label(mainframe, text=lab).grid(column=collab, row=ranlab, sticky=E) nom2 = StringVar() nom2 = ttk.Entry(mainframe, width=largeur, textvariable=nom2) nom2.insert(0, defaut2) nom2.grid(column=col+2, row=ran, sticky=W) ttk.Label(mainframe, text=lab2).grid(column=collab+2, row=ranlab, sticky=E) def on_entry_click(self, event): """function that gets called whenever entry is clicked""" global dirname if file1.get() == 'Choisissez un fichier...': file1.delete(0, "end") # delete all the text in the entry dirinit = r'C:/' dirname = filedialog.askopenfilename(parent=mainframe, initialdir=dirinit, title='Sélectionnez le fichier') file1.insert(0, dirname) #Insert blank for user input def on_entry_click1(self, event): """function that gets called whenever entry is clicked""" global dirname2 if file2.get() == 'Choisissez un fichier...': file2.delete(0, "end") # delete all the text in the entry dirinit = r'C:/' dirname2 = filedialog.askopenfilename(parent=mainframe, initialdir=dirinit, title='Sélectionnez le fichier') file2.insert(0, dirname2) #Insert blank for user input def callback(self): def traitement(fichier, debut, nif): deb = int(debut.get()) fin = int(nif.get()) df = pd.read_csv(fichier, sep = 
'\t', engine = 'python', header = deb, skipfooter = fin) # Lecture des fichiers df = df.rename(columns={'$Relations :NumZoneE': 'NumZoneE'}) # Renommage des entêtes de colonnes df = df[(df.NumZoneE != df.NumZoneA)] # supression des intrazonaux df = df[(df.NumZoneE <= 1289)] # supression des zones superieures a 1289 df = df[(df.NumZoneA <= 1289)] df['OD_possible']=np.where(df['JRTA'] < 999999, 'oui', 'non') # creation d'une colonne OD_possible df = pd.merge(df, dvol, on = ['NumZoneE', 'NumZoneA']) # jointure des tables avec dvol dfg = df.groupby('OD_possible') # groupage selon oui ou non return dfg # Chemin d'acces vers les fichiers à traiter dvol = r'c:\ceat_echange\1704_Test_maj_horaire_RERD_Sc2012\090721_DVOL_km.txt' # Traitement de dvol dvol = pd.read_csv(dvol, sep = '\t') # Lecture dvol = dvol.rename(columns = {'ZONEO': 'NumZoneE', 'ZONED': 'NumZoneA'}) # Renommage entete dvol = dvol[(dvol.DVOL != 0)] # Suppression intrazonaux fig = plt.figure() gss_oui = traitement(dirname, file1_deb, file1_fin).get_group('oui') gss_non = traitement(dirname, file1_deb, file1_fin).get_group('non') gac_oui = traitement(dirname2, file2_deb, file2_fin).get_group('oui') gac_non = traitement(dirname2, file2_deb, file2_fin).get_group('non') plt.hist([gss_oui[self.cettecolonne], gac_oui[self.cettecolonne]], range = (int(self.range1), int(self.range2)), bins = int(self.bins), label = [self.legend1l, self.legend2l]) plt.legend(loc = 'best') plt.title(self.titre) plt.xlabel(self.axeXl, labelpad = 5) plt.ylabel(self.axeYl) plt.savefig(path.join(wd, self.sortiel)) plt.show() plt.close()if __name__ == '__main__': app = Application() style = ttk.Style() style.configure("BW.TEntry", foreground="grey", background="white") style.configure("BW1.TEntry", foreground="black", background="white") app.master.title('Comparaison de fichiers') mainframe = ttk.Frame(app, padding="3 3 12 12") mainframe.grid(column=0, row=0, sticky=(N, W, E, S)) mainframe.columnconfigure(0, weight=1) 
mainframe.rowconfigure(0, weight=1) # construction du champ file1 file1 = StringVar() file1_deb = StringVar() file1_fin = StringVar() file1 = ttk.Entry(mainframe, width=20, style="BW.TEntry") file1.insert(0, 'Choisissez un fichier...') file1.grid(column=2, row=1, sticky=W) file1.bind('<FocusIn>', app.on_entry_click) ttk.Label(mainframe, text="Fichier n° 1 : ").grid(column=1, row=1, sticky=E) file1_deb = ttk.Entry(mainframe, width=5, textvariable=file1_deb) file1_deb.insert(0, "26") file1_deb.grid(column=4, row=1, sticky=W) ttk.Label(mainframe, text="ligne de début").grid(column=3, row=1, sticky=E) file1_fin = ttk.Entry(mainframe, width=5, textvariable=file1_fin) file1_fin.insert(0, "1307") file1_fin.grid(column=6, row=1, sticky=W) ttk.Label(mainframe, text="lignes de fin à supprimer").grid(column=5, row=1, sticky=E) # construction du champ file2 file2 = StringVar() file2_deb = StringVar() file2_fin = StringVar() file2 = ttk.Entry(mainframe, width=20, style="BW.TEntry") file2.insert(0, 'Choisissez un fichier...') file2.grid(column=2, row=2, sticky=W) file2.bind('<FocusIn>', app.on_entry_click1) ttk.Label(mainframe, text="Fichier n° 2 : ").grid(column=1, row=2, sticky=E) file2_deb = ttk.Entry(mainframe, width=5, textvariable=file2_deb) file2_deb.insert(0, "26") file2_deb.grid(column=4, row=2, sticky=W) ttk.Label(mainframe, text="ligne de début").grid(column=3, row=2, sticky=E) file2_fin = ttk.Entry(mainframe, width=5, textvariable=file2_fin) file2_fin.insert(0, "1307") file2_fin.grid(column=6, row=2, sticky=W) ttk.Label(mainframe, text="lignes de fin à supprimer").grid(column=5, row=2, sticky=E) app.champ("cettecolonne", "JRTA", 2, 3, "Champ à comparer :", 1, 3, 20) app.champ("titre", "Titre du graphique", 2, 4, "Titre du graphique :", 1, 4, 20) app.champdouble("range1", 0, "range2", 100, 2, 5, "Xmin :", "Xmax :", 1, 5) app.champ("bins", 20, 2, 6, "Nombre d'intervalle :", 1, 6, 5) app.champdouble("legend1", "file1", "legend2", "file2", 2, 7, "Légende du fichier n°1 
:", "Légende du fichier n°2 :", 1, 7, 20) app.champ("axeX", "Axe des X", 2, 8, "Nom de l'axe des x :", 1, 8, 20) app.champ("axeY", "Axe des Y", 2, 9, "Nom de l'axe des y :", 1, 9, 20) app.champ("sortie", "image.png", 2, 10, "Nom du .png sauvegardé :", 1, 10, 20) for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5) mainloop() | You save the values "x" and "y" into self.arg1 and self.arg2 in function1(), so you must refer to them by those names in function2() also:class App(Object) def function1(self, arg1, arg2): self.arg1 = arg1 self.arg2 = arg2 def function2(self): print(self.arg1, self.arg2)app = App()app.function1("x", "y")app.function2()Notice that since you're using a class, you must create an instance first with app = App() (I renamed your app class to App, it's a good idea to start class names with uppercase).At this point it's probably better to use Python's "magical" __init__, which allows you to pass these to the instance when it's created, instead of calling a separate function1():class App(Object) def __init__(self, arg1, arg2): self.arg1 = arg1 self.arg2 = arg2 def function(self): print(self.arg1, self.arg2)app = App("x", "y") # This calls App.__init__(app, "x", "y")app.function() |
Ubuntu "git pull --rebase" gets errors of "can't stat objects" - can you suggest what is the problem I've been installing packages on my VM, (python / dev 3.6 oriented especially), and it seems I corrupted some setup, so now I get the following errors:git pull --rebaseAuto packing the repository in background for optimum performance.See "git help gc" for manual housekeeping.error: The last gc run reported the following. Please correct the root causeand remove .git/gc.log.Automatic cleanup will not be performed until the file is removed.error: Could not stat '.git/objects/4f/6716241438e21094af08213c05290a34cffdd7'error: Could not stat '.git/objects/4f/abf345fc90d14f6f0026cf91bcc4c2fd5c58b8'and lot more of themcould you suggest how to fix it? | "git gc" solved the problem (see git docs) |
flow of for loop in python I am just new to Python. I wrote code to fetch array values from the user; for this reason I asked a question yesterday on Stack Overflow. Darius Morawiec and Austin gave me the best solution, but I don't understand the flow of the for loop. I googled it, but I don't understand those explanations. Can anybody explain the control flow of the "for" loop for the code given below? Thank you

arr = [[int(input("Enter value for {}. row and {}. column: ".format(r + 1, c + 1))) for c in range(n_cols)] for r in range(n_rows)]

yesterday's conversation: Link | Despite sharing the same keywords, that's not a for loop; it's a list comprehension nested in another list comprehension. As such, you need to evaluate the inner list first:

[
    [int(input("Enter value for {}. row and {}. column: ".format(r + 1, c + 1)))
     for c in range(n_cols)]
    for r in range(n_rows)
]

If you were to "unroll" the inner one, it would look like

[
    [
        int(input("Enter value for {}. row and {}. column: ".format(r + 1, 1))),
        int(input("Enter value for {}. row and {}. column: ".format(r + 1, 2))),
        # ...
        int(input("Enter value for {}. row and {}. column: ".format(r + 1, n_cols))),
    ]
    for r in range(n_rows)
]

and further unrolling the outer one would result in

[
    [
        int(input("Enter value for {}. row and {}. column: ".format(1, 1))),
        int(input("Enter value for {}. row and {}. column: ".format(1, 2))),
        # ...
        int(input("Enter value for {}. row and {}. column: ".format(1, n_cols))),
    ],
    [
        int(input("Enter value for {}. row and {}. column: ".format(2, 1))),
        int(input("Enter value for {}. row and {}. column: ".format(2, 2))),
        # ...
        int(input("Enter value for {}. row and {}. column: ".format(2, n_cols))),
    ],
    # ...
    [
        int(input("Enter value for {}. row and {}. column: ".format(n_rows, 1))),
        int(input("Enter value for {}. row and {}. column: ".format(n_rows, 2))),
        # ...
        int(input("Enter value for {}. row and {}. column: ".format(n_rows, n_cols))),
    ]
]
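An equivalent way to see the control flow is to rewrite the nested comprehension as explicit nested for loops. To keep the example runnable without typing, input() is replaced here by a fixed sequence of values:

```python
n_rows, n_cols = 2, 3

# Stand-in for user input: a fixed sequence instead of input().
values = iter(range(1, n_rows * n_cols + 1))
arr = [[next(values) for c in range(n_cols)] for r in range(n_rows)]
print(arr)  # [[1, 2, 3], [4, 5, 6]]

# The same thing as explicit loops: the outer loop runs once per row,
# and for each row the inner loop runs once per column.
values = iter(range(1, n_rows * n_cols + 1))
arr2 = []
for r in range(n_rows):        # outer "for r in range(n_rows)"
    row = []
    for c in range(n_cols):    # inner "for c in range(n_cols)"
        row.append(next(values))
    arr2.append(row)
print(arr2 == arr)  # True
```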
OpenCV Python rotate image by X degrees around specific point I'm having a hard time finding examples for rotating an image around a specific point by a specific (often very small) angle in Python using OpenCV. This is what I have so far, but it produces a very strange resulting image, though it is rotated somewhat:

def rotateImage(image, angle):
    if image != None:
        dst_image = cv.CloneImage(image)
        rotate_around = (0, 0)
        transl = cv.CreateMat(2, 3, cv.CV_32FC1)
        matrix = cv.GetRotationMatrix2D(rotate_around, angle, 1.0, transl)
        cv.GetQuadrangleSubPix(image, dst_image, transl)
        cv.GetRectSubPix(dst_image, image, rotate_around)
    return dst_image

|
import numpy as np
import cv2

def rotate_image(image, angle):
    image_center = tuple(np.array(image.shape[1::-1]) / 2)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
    return result

Assuming you're using the cv2 version, that code finds the center of the image you want to rotate, calculates the transformation matrix and applies it to the image.
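To see why passing a different center rotates around that point, the 2x3 matrix that cv2.getRotationMatrix2D builds can be reproduced in plain NumPy (the formula follows the OpenCV documentation; rotation_matrix_2d is a made-up helper name):

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Same 2x3 affine matrix that cv2.getRotationMatrix2D returns."""
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))
    b = scale * np.sin(np.radians(angle_deg))
    return np.array([[a,  b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

M = rotation_matrix_2d((100.0, 50.0), 30.0)

# The chosen center maps onto itself -- that is exactly what
# "rotate around a specific point" means.
center_h = np.array([100.0, 50.0, 1.0])  # homogeneous coordinates
print(M @ center_h)                      # stays at (100, 50)
```

Any other point is rotated about that fixed center, so replacing image_center in rotate_image with your own point gives the rotation the question asks for.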
Regex not returning specific match I am trying to extract a link from a script tag on a website.currently my regex returns the whole block for some reason..This is the content of the script tag I want to get the link from:<script type="text/javascript">var key = '';var url = 'http://stream1.song365.me/h1/20160129/1772422101/The%20Beatles%20-%20There%27s%20a%20Place%20%28Studio%20Outtake%20Takes%205%20%26%206%29_(song365.cc).mp3';var hqurl = 'http://stream1.song365.me/h1/20160129/1772422101/The%20Beatles%20-%20There%27s%20a%20Place%20%28Studio%20Outtake%20Takes%205%20%26%206%29_(song365.cc).mp3';$(document).ready(function(){ $("div[rel='digg']").click(function(){ var method = $(this).attr("method"); var v = parseInt($(this).find('em').html()); var p = this; $.post("/track/digg/2788951/" + method, function(data){ if(data.status==0) { alert("today you have been digg it!") } else { $(p).find('em').html(data.number); } }, "JSON") }) if(url.length!=0) { $("#download-link").attr("href", url + "?key=" + key).css("display","");; } if(hqurl.length!=0) { $("#download-link-hq").attr("href", hqurl + "?key=" + key).css("display",""); }});</script>This is the code I currently have:request = requests.get(dummy_link) data = request.text soup = BeautifulSoup(data, 'html.parser') link = soup.findAll(text=re.compile('var hqurl.*?mp3'))It is returning the whole script tag, but I only want the link assigned to the hqurl variable.Current code with help from @alecxe:request = requests.get('https://www.song365mp3.biz/download/the-beatles-there039s-a-place-studio-outtake-takes-5-amp-6-2788951.html') data = request.text soup = BeautifulSoup(data, 'html.parser') pattern = re.compile("var hqurl = '(.*?mp3)';$", re.MULTILINE | re.DOTALL) link = soup.find("script", text=pattern) print(pattern.search(link.text).group(1))But throws error: print((link.text).group(1))AttributeError: 'ResultSet' object has no attribute 'text' | Pre-compile the pattern and reuse for both locating the element and 
extracting the link:

pattern = re.compile(r"var hqurl = '(.*?mp3)';", re.MULTILINE | re.DOTALL)
link = soup.find("script", text=pattern)
print(pattern.search(link.text).group(1))

Note that I've improved the expression and added a capturing group that saves the actual link, which we then access via .group(1).

Prints:

http://stream1.song365.me/h1/20160129/1772422101/The%20Beatles%20-%20There%27s%20a%20Place%20%28Studio%20Outtake%20Takes%205%20%26%206%29_(song365.cc).mp3
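The capturing group can be tried on its own against a trimmed stand-in for the page's script body (the URL below is shortened for the example):

```python
import re

# Stand-in for the <script> contents fetched from the page.
script_text = """var key = '';
var hqurl = 'http://stream1.song365.me/h1/example_(song365.cc).mp3';
"""

pattern = re.compile(r"var hqurl = '(.*?mp3)';", re.MULTILINE | re.DOTALL)
match = pattern.search(script_text)
print(match.group(0))  # the whole matched statement
print(match.group(1))  # just the captured URL
```

This also shows why the original findAll call failed: findAll returns a ResultSet (a list of tags), which has no .text attribute, while find returns the single tag whose text the pattern can then be applied to.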
pandas_datareader.data not returning all stock values from start to end date I am trying to get stock data from yahoo using pandas_datareader.data and i keep getting missing sections of data. here is what i have coded. all i want to do right now is return all the data for the dates between the start and end dates import pandas as pd import pandas_datareader.data as webfrom datetime import datetimeibm = web.DataReader('IBM', 'yahoo', datetime(2015,1,1)datetime(2016,1,1))and right now this is returning this:dataI am confused why i am getting the ellipse with all the missing data when i try to create my set. any help would be greatly appreciated! | This is how pandas displays the result (as explained here). pandas omits rows that exceed the pd.set_option('max_rows', X) setting (default is 50 I believe). You can see the limit using pd.options.display.max_rows. Try ibm.info() and you should see that there are more rows than displayed.Your query results in:<class 'pandas.core.frame.DataFrame'>DatetimeIndex: 252 entries, 2015-01-02 to 2015-12-31Data columns (total 6 columns):Open 252 non-null float64High 252 non-null float64Low 252 non-null float64Close 252 non-null float64Volume 252 non-null int64Adj Close 252 non-null float64dtypes: float64(5), int64(1)memory usage: 13.8 KBNoneBut displays as (not rows x columns info at the bottom despite elipsis): Open High Low Close Volume \Date 2015-01-02 161.309998 163.309998 161.000000 162.059998 5525500 2015-01-05 161.270004 161.270004 159.190002 159.509995 4880400 2015-01-06 159.669998 159.960007 155.169998 156.070007 6146700 2015-01-07 157.199997 157.199997 154.029999 155.050003 4701800 2015-01-08 156.240005 159.039993 155.550003 158.419998 4236800 2015-01-09 158.419998 160.339996 157.250000 159.110001 4484800 2015-01-12 159.000000 159.250000 155.759995 156.440002 4182800 2015-01-13 157.259995 159.970001 155.679993 156.809998 4377500 2015-01-14 154.860001 156.490005 153.740005 155.800003 4690300 2015-01-15 156.690002 156.970001 
154.160004 154.570007 4248400 2015-01-16 153.820007 157.630005 153.820007 157.139999 5756000 2015-01-20 156.699997 157.330002 154.029999 156.949997 8392800 2015-01-21 153.029999 154.500000 151.070007 152.089996 11897100 2015-01-22 151.940002 155.720001 151.759995 155.389999 6120100 2015-01-23 155.029999 157.600006 154.889999 155.869995 4834800 2015-01-26 158.259995 159.460007 155.770004 156.360001 7888100 2015-01-27 154.940002 155.089996 152.589996 153.669998 5659600 2015-01-28 154.000000 154.529999 151.550003 151.550003 4495900 2015-01-29 151.380005 155.580002 149.520004 155.479996 8320800 2015-01-30 153.910004 155.240005 153.039993 153.309998 6563600 2015-02-02 154.000000 154.660004 151.509995 154.660004 4712200 2015-02-03 154.750000 158.600006 154.750000 158.470001 5539400 2015-02-04 157.210007 158.710007 156.699997 156.960007 3678500 2015-02-05 157.289993 158.589996 157.149994 157.910004 5253600 2015-02-06 157.339996 158.080002 156.229996 156.720001 3225000 2015-02-09 156.000000 157.500000 155.399994 155.750000 3057700 2015-02-10 156.740005 158.559998 155.080002 158.559998 4440600 2015-02-11 157.759995 159.089996 157.169998 158.199997 3626700 2015-02-12 158.720001 159.500000 158.089996 158.520004 3333100 2015-02-13 158.779999 160.800003 158.639999 160.399994 3706900 ... ... ... ... ... ... 
2015-11-18 134.789993 135.910004 134.259995 135.820007 4149200 2015-11-19 136.210007 137.740005 136.009995 136.740005 4753600 2015-11-20 137.369995 138.919998 137.250000 138.500000 5176400 2015-11-23 138.529999 138.869995 137.119995 138.460007 5137900 2015-11-24 137.649994 139.339996 137.309998 138.600006 3407700 2015-11-25 138.369995 138.429993 137.380005 138.000000 3238200 2015-11-27 138.000000 138.809998 137.210007 138.460007 1415800 2015-11-30 138.610001 139.899994 138.520004 139.419998 4545600 2015-12-01 139.580002 141.399994 139.580002 141.279999 4195100 2015-12-02 140.929993 141.210007 139.500000 139.699997 3725400 2015-12-03 140.100006 140.729996 138.190002 138.919998 5909600 2015-12-04 138.089996 141.020004 137.990005 140.429993 4571600 2015-12-07 140.160004 140.410004 138.809998 139.550003 3279400 2015-12-08 138.279999 139.059998 137.529999 138.050003 3905200 2015-12-09 137.380005 139.839996 136.229996 136.610001 4615000 2015-12-10 137.029999 137.850006 135.720001 136.779999 4222300 2015-12-11 135.229996 135.440002 133.910004 134.570007 5333800 2015-12-14 135.309998 136.139999 134.020004 135.929993 5143800 2015-12-15 137.399994 138.970001 137.279999 137.789993 4207900 2015-12-16 139.119995 139.649994 137.789993 139.289993 4345500 2015-12-17 139.350006 139.500000 136.309998 136.750000 4089500 2015-12-18 136.410004 136.960007 134.270004 134.899994 10026100 2015-12-21 135.830002 135.830002 134.020004 135.500000 5617500 2015-12-22 135.880005 138.190002 135.649994 137.929993 4263800 2015-12-23 138.300003 139.309998 138.110001 138.539993 5164900 2015-12-24 138.429993 138.880005 138.110001 138.250000 1495200 2015-12-28 137.740005 138.039993 136.539993 137.610001 3143400 2015-12-29 138.250000 140.059998 138.199997 139.779999 3943700 2015-12-30 139.580002 140.440002 139.220001 139.339996 2989400 2015-12-31 139.070007 139.100006 137.570007 137.619995 3462100 Adj Close Date 2015-01-02 153.863588 2015-01-05 151.442555 2015-01-06 148.176550 2015-01-07 147.208134 
2015-01-08 150.407687 2015-01-09 151.062791 2015-01-12 148.527832 2015-01-13 148.879114 2015-01-14 147.920202 2015-01-15 146.752415 2015-01-16 149.192426 2015-01-20 149.012033 2015-01-21 144.397834 2015-01-22 147.530934 2015-01-23 147.986654 2015-01-26 148.451876 2015-01-27 145.897925 2015-01-28 143.885151 2015-01-29 147.616379 2015-01-30 145.556132 2015-02-02 146.837859 2015-02-03 150.455161 2015-02-04 149.021536 2015-02-05 149.923486 2015-02-06 149.837432 2015-02-09 148.910029 2015-02-10 151.596622 2015-02-11 151.252431 2015-02-12 151.558385 2015-02-13 153.355812 ... ... 2015-11-18 133.161622 2015-11-19 134.063613 2015-11-20 135.789160 2015-11-23 135.749949 2015-11-24 135.887208 2015-11-25 135.298946 2015-11-27 135.749949 2015-11-30 136.691151 2015-12-01 138.514746 2015-12-02 136.965669 2015-12-03 136.200937 2015-12-04 137.681377 2015-12-07 136.818611 2015-12-08 135.347970 2015-12-09 133.936153 2015-12-10 134.102824 2015-12-11 131.936088 2015-12-14 133.269455 2015-12-15 135.093050 2015-12-16 136.563691 2015-12-17 134.073412 2015-12-18 132.259616 2015-12-21 132.847878 2015-12-22 135.230309 2015-12-23 135.828370 2015-12-24 135.544053 2015-12-28 134.916580 2015-12-29 137.044105 2015-12-30 136.612715 2015-12-31 134.926379 [252 rows x 6 columns] |
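The display limit is easy to verify on synthetic data: lowering display.max_rows truncates only the printed representation, never the DataFrame itself (a sketch, assuming a pandas version where the option is spelled display.max_rows):

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})

pd.set_option("display.max_rows", 10)
text = repr(df)

print(".." in text)                       # middle rows are elided
print("[100 rows x 1 columns]" in text)   # the summary line still appears
print(len(df))                            # all 100 rows are really there
```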
double loop in psp/python/html from a mysql query I'm trying to program a script that will take in a user input of a place they want to go, starting with the country, then take the user's input and update my list of which state is in the country, then which city is in the state AND country. I'm using Python/PSP for my backend and HTML for my front end. I'm having trouble getting this page update to work on my site. Can anyone help?

<div id="content">
<p>Where would you like to go?</p>
<form method="post" action="http://localhost/list2.psp">
Places to go:
<select name = "location">
<%
cursor.execute("select distint country from location") or die(mysql_error())
result = cursor.fetchall()
for record in result:
cursor.execute("select * from location where country='%s'" % record[0]) or die(mysql_error())
result2 = cursor.fetchall()
%>
<optgroup label="<%=record[0]%>
<%
for record2 in result2:
%>
<option value="<%= record2[0] %>"><%= record[1] %>, <%= record[2]%>, <%= record[3]%></option>
<%
print 'hi'
%>
</optgroup>
</select><br>
<input type="submit" value="Search">
</form>
</div>

it gives me this error on my webpage:

cursor.execute("select * from location where country='%s'" % record[0]) or die(mysql_error())
     ^
IndentationError: expected an indented block

is my syntax wrong? | I have never heard of Python Server Pages, wow. Anyway, in Python indentation matters; you can think of indentation as the replacement for curly braces in C-style languages.

for record in result:
    cursor.execute("select * from location where country='%s'" % record[0]) or die(mysql_error())
    result2 = cursor.fetchall()
    for record2 in result2:
        pass  # next loop, further indented

So in your for loop, everything that belongs to the loop has to be indented at the same level.
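The error itself is easy to reproduce outside PSP by compiling two small snippets: one with the loop body flush left (as in the question) and one properly indented:

```python
src_bad = "for record in [1, 2]:\nvalue = record"   # body not indented
err = None
try:
    compile(src_bad, "<psp>", "exec")
except IndentationError as e:
    err = e
print(type(err).__name__)  # IndentationError, same as the question

src_ok = "for record in [1, 2]:\n    value = record"  # body indented
exec(compile(src_ok, "<psp>", "exec"))  # compiles and runs fine
```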
Take indices of non zero elements of matrix I create the matrix "adjacency_matrix" with the following code:

n = int(input())
# Initialize matrix
adjacency_matrix = []
# For user input
for i in range(ROWS):
    a = []
    for j in range(COLUMNS):
        a.append(int(input()))
    adjacency_matrix.append(a)

I want to save the indices of the non-zero elements of the above matrix. For example, n = 3;

adjacency_matrix =
2 3 0
0 0 1
1 5 0

I want to save the rows of the non-zero elements in "li_r":

li_r = [0 0 1 2 2]

and save the columns of the non-zero elements in "li_c":

li_c = [0 1 2 0 1]

My goal is to take (0,0) (0,1) (1,2) (2,0) (2,1) as the row and column of each non-zero element and give them to g.addEdge(u,v) in my code, I mean I want to have

g.addEdge(0,0)
g.addEdge(0,1)
g.addEdge(1,2)
g.addEdge(2,0)
g.addEdge(2,1)

I wrote this code, but it does not work:

for i in range(n):
li_r = []; li_c = []
for j in range(n):
if adjacency_matrix[i][j]!=0:
li_r.append(i)
li_c.append(j)

for i in range(len(li_r)):
    for j in range(len(li_c)):
        g.addEdge(li_r[i], li_c[j])

I am new to Python; if possible, please help me rewrite my code. | Try this list comprehension. It returns a list of positions (x, y), and you can iterate over each one and insert them into g.addEdge():

adjacency_matrix = [[2,3,0],[0,0,1],[1,5,0]]
pos = [(count1, count2) for count1, lst in enumerate(adjacency_matrix) for count2, num in enumerate(lst) if num != 0]

output

[(0, 0), (0, 1), (1, 2), (2, 0), (2, 1)]

g.addEdge()

for i in pos:
    g.addEdge(*i)
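If NumPy is an option, np.nonzero collects the same two index lists in one call (g.addEdge below refers to the question's graph object and is only sketched in a comment):

```python
import numpy as np

adjacency_matrix = np.array([[2, 3, 0],
                             [0, 0, 1],
                             [1, 5, 0]])

# Row indices and column indices of every non-zero entry, in row-major
# order -- exactly the li_r / li_c the question asks for.
li_r, li_c = np.nonzero(adjacency_matrix)
print(li_r.tolist())  # [0, 0, 1, 2, 2]
print(li_c.tolist())  # [0, 1, 2, 0, 1]

# Feeding each (row, col) pair to the graph would then be:
# for u, v in zip(li_r, li_c):
#     g.addEdge(u, v)
```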