| questions | answers |
|---|---|
Create list of tuples from 2d array I'm looking to create a list of tuples from a 2xn array where the first row is an ID and the second row is that ID's group assignment. I'd like to create a list of the IDs organized by their group assignments. For example:array([[ 0., 1., 2., 3., 4., 5., 6.], [ 1., 2., 1., 2., 2., 1., 1.]])In the above example, ID 0 is assigned to group 1, ID 1 to group 2 and so on. The output list would look like:a=[(0,2,5,6),(1,3,4)]Does anyone have any creative, quick ways to do this?Thanks! | The standard (sorry, not creative -- but reasonably quick) numpy way would be an indirect sort:import numpy as npdata = np.array([[ 0., 1., 2., 3., 4., 5., 6.], [ 1., 2., 1., 2., 2., 1., 1.]])index = np.argsort(data[1], kind='mergesort') # mergesort is a bit # slower than the default # algorithm but is stable, # i.e. if there's a tie # it will preserve order# use the index to sort both parts of datasorted = data[:, index]# the group labels are now in blocks, we can detect the boundaries by# shifting by one and looking for mismatchsplit_points = np.where(sorted[1, 1:] != sorted[1, :-1])[0] + 1# could convert to int dtype here if desiredresult = map(tuple, np.split(sorted[0], split_points))# That's Python 2. In Python 3 you'd have to explicitly convert to list:# result = list(result)print(result)Prints:[(0.0, 2.0, 5.0, 6.0), (1.0, 3.0, 4.0)] |
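For reference, a self-contained Python 3 sketch of the same indirect-sort approach, with `map` replaced by a list comprehension so the result prints as a list of tuples:

```python
import numpy as np

data = np.array([[0., 1., 2., 3., 4., 5., 6.],
                 [1., 2., 1., 2., 2., 1., 1.]])

# stable sort on the group row, so ties keep their original order
index = np.argsort(data[1], kind='mergesort')
sorted_data = data[:, index]

# group labels are now contiguous; boundaries are where the label changes
split_points = np.where(sorted_data[1, 1:] != sorted_data[1, :-1])[0] + 1

result = [tuple(chunk) for chunk in np.split(sorted_data[0], split_points)]
print(result)  # [(0.0, 2.0, 5.0, 6.0), (1.0, 3.0, 4.0)]
```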
How can we get response of next loading web page I am writing a scraper to get all the movie list available on hungama.comI am requesting "http://www.hungama.com/all/hungama-picks-54/4470/" url to get the response.When you go to this url, this will show 12 movies on the screen but as you sroll down the movie count gets increasing by auto reload.I am parsing the html source page with below coderesponse.css('div.movie-block-artist.boxshadow.clearfix1>div>div>a::text').extract()but I only get 12 items whereas there are more movie items. how can I get all the movies available. Please help. | There seems to be an ajax request as a lazy load feature with url http://www.hungama.com/all/hungama-picks-54/4470/2/?ajax_call=1&_country=IN which fetches movies .In the above url change 2 to 3 (http://www.hungama.com/all/hungama-picks-54/4470/3/?ajax_call=1&_country=IN) and so on for getting next movies detail. |
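A small sketch of how that pagination could be driven; the URL pattern comes from the answer above, while the page count is a placeholder you would have to determine yourself (for example, by requesting pages until an empty response comes back):

```python
def page_urls(base, n_pages):
    # build the lazy-load endpoint for each page number
    # (pattern taken from the answer; n_pages is a hypothetical limit)
    return ['%s%d/?ajax_call=1&_country=IN' % (base, page)
            for page in range(1, n_pages + 1)]

urls = page_urls('http://www.hungama.com/all/hungama-picks-54/4470/', 3)
print(urls[1])  # http://www.hungama.com/all/hungama-picks-54/4470/2/?ajax_call=1&_country=IN
```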
How to compare time stamps in log lines that also has version numbers? I am trying to find the version number log line with the most recent time stamp, and I am currently trying to do it by using parse_version. log line examples: 2018-05-08T15:25:02.053Z 00000000-0000-0000-0000-000000000000 > LVL:2 RC: version: 2.11.0.104512018-05-08T21:27:14.2049217Z> <INFO >: Version: 2.10.0.239602018-05-08T21:18:53.0428568Z> <INFO >: Version: 2.12.1.26051Although the last log line has the latest version number, I hope to store and display the second log line even though it has the lower version number because it has the latest time stamp. Currently, using the parse_version comparison I am printing the last log line(I'm guessing because the combined value between time stamp and version is bigger overall). The below is my current code: for line in f: #For simplicity sake, I won't include my code above this line because it's just for looping through the folder to find the log lines #0strip out \x00 from read content, in case it's encoded differently line = line.replace('\x00', '') #Regular expressions for finding the log lines in the folder RE2 = r"^.+INFO.+Version.+" RE3 = r"^.+RC: version" latest_version_line = '' #Find the general matches, and get the version line with the latest time stamp pattern2 = re.compile('('+RE2+'|'+RE3+')', re.IGNORECASE) for match2 in pattern2.finditer(line): if parse_version(line) > parse_version(latest_version_line): latest_version_line = line print(latest_version_line)Thanks in advance! 
| Since you're looking to store and display the second-latest version number, you can simply use another variable to retain the current latest version number before it gets replaced by the new latest version number.Change: if parse_version(line) > parse_version(latest_version_line): latest_version_line = line print(latest_version_line)to: if parse_version(line) > parse_version(latest_version_line): second_latest_version_line = latest_version_line latest_version_line = line print(latest_version_line) print(second_latest_version_line) |
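Note that parse_version compares the whole line, so timestamp and version digits get mixed together. Since the stated goal is the latest timestamp, a sketch that compares only an extracted timestamp may be closer to the intent; ISO-8601 timestamps in the same format compare correctly as plain strings (the log lines below are taken from the question, and the regex deliberately stops at whole seconds so the differing fractional-second lengths don't matter):

```python
import re

lines = [
    "2018-05-08T15:25:02.053Z 00000000-0000-0000-0000-000000000000 > LVL:2 RC: version: 2.11.0.10451",
    "2018-05-08T21:27:14.2049217Z> <INFO >: Version: 2.10.0.23960",
    "2018-05-08T21:18:53.0428568Z> <INFO >: Version: 2.12.1.26051",
]

# capture YYYY-MM-DDTHH:MM:SS; comparing these strings orders by time
timestamp = re.compile(r'^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})')
latest = max(lines, key=lambda line: timestamp.match(line).group(1))
print(latest)
```

Here `latest` is the second log line, as desired, even though it carries the lower version number.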
Manually compute the length of a string I have a homework assignment asking for string length calculation without using built-in functions.What I had in mind is to use a counter:s = 0while name[s] != "": s += 1but I'm stuck with how to solve the string index out of range error...or is there actually another way? | you have two simple options :Either add a try/except clause:s = 0try: while(name[s]): s += 1except IndexError: passprint(s)Or use an iterator:s = 0for _ in name: s += 1print(s) |
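The while-loop in the question can never see an empty string: indexing past the end raises IndexError rather than returning "", which is why the try/except is needed. The iterator form sidesteps the problem entirely; a runnable sketch:

```python
def string_length(name):
    # count characters by iterating, never indexing past the end
    count = 0
    for _ in name:
        count += 1
    return count

print(string_length("hello"))  # 5
print(string_length(""))       # 0
```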
Machine learning multi-classification: Why use 'one-hot' encoding instead of a number I'm currently working on a classification problem with tensorflow, and I'm new to the world of machine learning, but I don't get something.I have successfully trained models that output the y tensor like this:y = [0,0,1,0]But I can't understand the principle behind it...Why not just train the same model to output classes such as y = 3 or y = 4?This seems much more flexible, because I can imagine having a multi-classification problem with 2 million possible classes, and it would be much more efficient to output a number between 0-2,000,000 than to output a tensor of 2,000,000 items for every result.What am I missing? | Ideally, you could train your model to classify input instances, producing a single output. Something like y=1 means input=dog, y=2 means input=airplane. An approach like that, however, brings a lot of problems:How do I interpret the output y=1.5?Why am I regressing a number as if I were working with continuous data, when in reality I'm working with discrete data?In fact, what you are doing is treating a multi-class classification problem like a regression problem.This is logically wrong (unless you're doing binary classification, in which case a positive and a negative output are everything you need).To avoid these (and other) issues, we use a final layer of neurons and associate a high activation with the right class.The one-hot encoding represents the fact that you want to force your network to have a single high-activation output when a certain input is present.This way, every input=dog will have 1, 0, 0 as output and so on.In this way, you're correctly treating a discrete classification problem, producing a discrete and well-interpretable output (in fact you'll always extract the output neuron with the highest activation using tf.argmax; even though your network hasn't learned to produce the perfect one-hot encoding, you'll be able to extract without doubt the most likely correct output). |
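As a small illustration of the encoding itself, with NumPy as a stand-in (the `one_hot` helper here is hypothetical, not a TensorFlow API):

```python
import numpy as np

def one_hot(labels, num_classes):
    # each label selects one row of the identity matrix
    return np.eye(num_classes)[labels]

y = one_hot(np.array([2, 0, 1]), num_classes=4)
print(y)
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]
#  [0. 1. 0. 0.]]

# recovering the class index mirrors tf.argmax at inference time
print(np.argmax(y, axis=1))  # [2 0 1]
```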
List comprehension to create list of strings from two lists I have two lists of strings and I want to use them to create a list of strings.m1 = ["Ag", "Pt"]m2 = ["Ir", "Mn"]codes = []for i in range (len (m1) ): codes.append('6%s@32%s' %(m1[i], m2[i] ) )print codesFor example codes could have the elements ["6Ag@32Ir", "6Pt@32Mn"]I want to turn the following into a list comprehension. | Non-zip method:>>> m1 = ["Ag", "Pt"]>>> m2 = ["Ir", "Mn"]>>> ['6%s@32%s' %(m1[i], m2[i]) for i in range(min(len(m1), len(m2)))]['6Ag@32Ir', '6Pt@32Mn']Turn that into a generator:>>> ('6%s@32%s' %(m1[i], m2[i]) for i in xrange(min(len(m1), len(m2))))Python 2, you can do:>>> ['6%s@32%s' % (x, y) for x, y in map(None, m1, m2)]['6Ag@32Ir', '6Pt@32Mn']Or, you do not even need to unpack the tuple:>>> ['6%s@32%s' % t for t in map(None, m1, m2)] |
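In Python 3 the idiomatic version pairs the two lists with zip, which also stops at the shorter list, just like the min-of-lengths trick above:

```python
m1 = ["Ag", "Pt"]
m2 = ["Ir", "Mn"]

# zip yields (m1[i], m2[i]) pairs until the shorter list runs out
codes = ['6%s@32%s' % (a, b) for a, b in zip(m1, m2)]
print(codes)  # ['6Ag@32Ir', '6Pt@32Mn']
```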
python sort dictionary by value array I have a dictionary with an array as elements.Say:masterListShort = {'a': [5, 2, 1, 2], 'b': [7, 2, 4, 1], 'c': [2, 0, 1, 1]}I would like to reverse sort this dictionary by the first element of the values.I would then like to write my output to a tab delimited file like this:<key> <value1> <value2> <value3> etc.My current code where I write my dictionary to file looks like this:# write the masterListShort to fileoutFile2 = open('../masterListShort.tsv', 'w')for item in sorted(masterListShort): tempStr = '\t'.join(map(str, masterListShort[item])) outFile2.write(str(item) + '\t' + tempStr + '\n')outFile2.close()This code works fine, it just does not sort the list.I want my output to be written in a tab delimited file format.So:b 7 2 4 1c 5 2 1 2a 2 0 1 1I have found the following commands so far, and was wondering if i could apply them to my code:import operatorsorted(myDict, key=operator.itemgetter(1)) | You'll need to sort the values then, on the first index (so 0 for zero-based indexing), and tell sorted() to reverse the order:import operatorsorted(myDict.values(), key=operator.itemgetter(0), reverse=True)Without the dict.values() call you are trying to sort the keys instead.Demo:>>> import operator>>> myDict = {'a': [5, 2, 1, 2], 'b': [7, 2, 4, 1], 'c': [2, 0, 1, 1]}>>> sorted(myDict.values(), key=operator.itemgetter(0), reverse=True)[[7, 2, 4, 1], [5, 2, 1, 2], [2, 0, 1, 1]]If you wanted to output key-value pairs, then use dict.items() and use lambda to access the first index on just the value:sorted(myDict.items(), key=lambda i: i[1][0], reverse=True)Demo:>>> sorted(myDict.items(), key=lambda i: i[1][0], reverse=True)[('b', [7, 2, 4, 1]), ('a', [5, 2, 1, 2]), ('c', [2, 0, 1, 1])]In both cases, there is actually not that much point on sorting just by the first element; you can just leave the lists to sort naturally:sorted(myDict.values(), reverse=True) # sort just the valuessorted(myDict.items(), key=operator.itemgetter(1), 
reverse=True) # sort itemsas a list of lists is sorted in lexicographical ordering; if the first elements are equal, then two lists are ordered based on the second element, etc.To write this out to a tab-delimited file, use the csv module; it'll take care of converting values to strings, writing the tab delimiters and handle newlines:import csvwith open('../masterListShort.tsv', 'wb') as outfh: writer = csv.writer(outfh, delimiter='\t') for key, values in sorted(myDict.items(), key=operator.itemgetter(1), reverse=True): writer.writerow([key] + values) |
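Putting the sort and the csv writer together in Python 3 (writing to an in-memory buffer here so the sketch is self-contained; for a real file, Python 3 opens csv output with `newline=''` rather than `'wb'`):

```python
import csv
import io

myDict = {'a': [5, 2, 1, 2], 'b': [7, 2, 4, 1], 'c': [2, 0, 1, 1]}

buffer = io.StringIO()  # stand-in for open('masterListShort.tsv', 'w', newline='')
writer = csv.writer(buffer, delimiter='\t')
for key, values in sorted(myDict.items(), key=lambda kv: kv[1], reverse=True):
    writer.writerow([key] + values)

print(buffer.getvalue())
# b	7	2	4	1
# a	5	2	1	2
# c	2	0	1	1
```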
Merge dictionaries without overwriting previous value where value is a list I am aware of Merge dictionaries without overwriting values, merging "several" python dictionaries, How to merge multiple dicts with same key?, and How to merge two Python dictionaries in a single expression?.My problem is slightly different, however.Lets say I have these three dictionariesdict_a = {'a': [3.212], 'b': [0.0]}dict_b = {'a': [923.22, 3.212], 'c': [123.32]}dict_c = {'b': [0.0]}The merged result should beresult_dict = {'a': [3.212, 3.212, 923.22], 'b': [0.0, 0.0], 'c': [123.32]}This code here works, but would nest lists within the listsresult_dict = {}dicts = [dict_a, dict_b, dict_c]for d in dicts: for k, v in d.iteritems(): result_dict.setdefault(k, []).append(v)Using extend instead of append would prevent the nested lists, but doesn't work if a key doesn't exist yet. So basically it should do a update without the overwriting, as in the other threads.Sorry, it was a mistake on my side. It was too late yesterday and I didn't notice the line that threw the error wasn't the one I thought it did, therefore assumed my dictionaries would already have the above structure.In fact, mgilson was correct assuming that it was related to a TypeError. To be exact, an 'uniterable float'. | I'm pretty sure that .extend works here ...>>> dict_a = {'a': [3.212], 'b': [0.0]}>>> dict_b = {'a': [923.22, 3.212], 'c': [123.32]}>>> dict_c = {'b': [0.0]}>>> result_dict = {}>>> dicts = [dict_a, dict_b, dict_c]>>> >>> for d in dicts:... for k, v in d.iteritems():... result_dict.setdefault(k, []).extend(v)... >>> result_dict{'a': [3.212, 923.22, 3.212], 'c': [123.32], 'b': [0.0, 0.0]}The magic is in the dict.setdefault method. If the key doesn't exist, setdefault adds a new key with the default value you provide. It then returns that default value which can then be modified.Note that this solution will have a problem if the item v is not iterable. Perhaps that's what you meant? e.g. 
if dict_a = {'a': [3.212], 'b': 0.0}. In this case, I think you'll need to resort to catching the TypeError: type object is not iterable in a try-except clause:for d in dicts: for k, v in d.iteritems(): try: result_dict.setdefault(k, []).extend(v) except TypeError: result_dict[k].append(v) |
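A Python 3 version of the same idea (`items()` instead of `iteritems()`), including the try/except fallback for a value that is a bare float rather than a list:

```python
dict_a = {'a': [3.212], 'b': [0.0]}
dict_b = {'a': [923.22, 3.212], 'c': [123.32]}
dict_c = {'b': 0.0}  # deliberately a bare float, not a list

result = {}
for d in (dict_a, dict_b, dict_c):
    for k, v in d.items():
        try:
            result.setdefault(k, []).extend(v)
        except TypeError:          # v was not iterable
            result[k].append(v)

print(result)  # {'a': [3.212, 923.22, 3.212], 'b': [0.0, 0.0], 'c': [123.32]}
```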
Passing nested list as an argument to a method The code below works fine:class p: def __init__(self): self.log={ 'name':'', 'id':'', 'age':'', 'grade':'' } def parse(self,line): self.log['id']=line[0] self.log['name']=line[1] self.log['age']=line[2] self.log['grade']=line[3].replace('\n',"") return self.logobj=p()with open(r"C:\Users\sksar\Desktop\Azure DE\Datasets\dark.csv",'r') as fp: line=fp.read()data=[i.split(',') for i in line.split('\n')]for i in data: a=obj.parse(i) print(a)Input:1,jonas,23,A2,martha,23,BOutput:{'name': 'jonas', 'id': '1', 'age': '23', 'grade': 'A'}{'name': 'martha', 'id': '2', 'age': '23', 'grade': 'B'}Question is: When I make the method call (a=obj.parse(i)) outside the loop, earlier inputs are overwritten and I get the following as output: {'name': 'martha', 'id': '2', 'age': '23', 'grade': 'B'}, simply missing the previous records.How can I call the method (parse) and feed it the data without having to iterate through the nested input list? Simply put: how do I get the desired output without the for loop? | I don't get why you are trying to avoid an explicit loop. I mean, even if you don't see it in your code, if there is something being iterated, there will be a loop somewhere, and if so, "explicit is better than implicit".In any case, check this:with open(r"C:\Users\sksar\Desktop\Azure DE\Datasets\dark.csv",'r') as fp: [print(obj.parse(x.split(','))) for x in fp.readlines()] |
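The overwriting itself comes from parse returning the same self.log dictionary on every call, so every collected record points at one shared object. A sketch that returns a fresh dict per call avoids that (field names from the question; the class shape is a simplified stand-in, not the asker's exact code):

```python
class Parser:
    FIELDS = ('id', 'name', 'age', 'grade')

    def parse(self, line):
        # build and return a *new* dict each call; reusing one shared
        # self.log dict is what made earlier records appear overwritten
        return dict(zip(self.FIELDS, (field.strip() for field in line)))

obj = Parser()
raw = "1,jonas,23,A\n2,martha,23,B"
records = [obj.parse(row.split(',')) for row in raw.split('\n')]
print(records)
```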
convert numpy matrix into pyspark rdd I have a 2d numpy array. How do I create a pyspark rdd from that where each row in the matrix is an entry in the rdd?Such that:rddData.take(1)[0] == list(aaData[0])where aaData is the numpy 2d array (matrix) and rddData is the rdd created from aaData? | Just parallelize it:mat = np.arange(100).reshape(10, -1)rdd = sc.parallelize(mat)np.all(rdd.first() == mat[0])## True |
Python file-IO and zipfile. Trying to loop through all the files in a folder and then loop through the texts in respective file using Python Trying to extract all the zip files and giving the same name to the folder where all the files are gonna be.Looping through all the files in the folder and then looping through the lines within those files to write on a different text file.This is my code so far:#!usr/bin/env python3import globimport osimport zipfilezip_files = glob.glob('*.zip')for zip_filename in zip_files: dir_name = os.path.splitext(zip_filename)[0] os.mkdir(dir_name) zip_handler = zipfile.ZipFile(zip_filename, "r") zip_handler.extractall(dir_name)path = dir_namefOut = open("Output.txt", "w")for filename in os.listdir(path): for line in filename.read().splitlines(): print(line) fOut.write(line + "\n")fOut.close()This is the error that I encounter:for line in filename.read().splitlines():AttributeError: 'str' object has no attribute 'read' | You need to open the file and also join the path to the file; also, using splitlines and then adding a newline back to each line is redundant:path = dir_namewith open("Output.txt", "w") as fOut: for filename in os.listdir(path): # join filename to path to avoid file not being found with open(os.path.join(path, filename)) as f: for line in f: fOut.write(line)You should always use with to open your files as it will close them automatically. If the files are not large you can simply fOut.write(f.read()) and remove the inner loop.You also set path = dir_name, which means path will be set to whatever the last value of dir_name was in your first loop, which may or may not be what you want. You can also use iglob to avoid creating a full list: zip_files = glob.iglob('*.zip'). |
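A corrected, self-contained variant wrapped in a function (the signature is hypothetical) so that extraction and merging happen per archive instead of only for the last one; it assumes the zips contain plain text files:

```python
import glob
import os
import zipfile

def extract_and_merge(folder, out_path):
    # extract every *.zip in `folder` into a same-named directory,
    # then concatenate the extracted text files into one output file
    with open(out_path, 'w') as f_out:
        for zip_filename in sorted(glob.glob(os.path.join(folder, '*.zip'))):
            dir_name = os.path.splitext(zip_filename)[0]
            os.makedirs(dir_name, exist_ok=True)
            with zipfile.ZipFile(zip_filename) as zf:
                zf.extractall(dir_name)
            for filename in sorted(os.listdir(dir_name)):
                # open the extracted file itself, not the name string
                with open(os.path.join(dir_name, filename)) as f:
                    f_out.write(f.read())
```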
Multiprocesses python with shared memory I have an object that connects to a websocket remote server. I need to do a parallel process at the same time. However, I don't want to create a new connection to the server. Since threads are the easier way to do this, this is what I have been using so far. However, I have been getting a huge latency because of GIL. Can I achieve the same thing as threads but with multiprocesses in parallel?This is the code that I have:class WebSocketApp(object): def on_open(self): # Create another thread to make sure the commands are always been read print "Creating thread..." try: thread.start_new_thread( self.read_commands,() ) except: print "Error: Unable to start thread"Is there an equivalent way to do this with multiprocesses?Thanks! | You sure can, use something along the lines of:from multiprocessing import Processclass WebSocketApp(object): def on_open(self): # Create another thread to make sure the commands are always been read print "Creating thread..." try: p = Process(target = WebSocketApp.read_commands, args = (self, )) # Add other arguments to this tuple p.start() except: print "Error: Unable to start thread"It is important to note, however, that as soon as the object is sent to the other process the two objects self and self in the different threads diverge and represent different objects. 
If you wish to communicate you will need to use something like the included Queue or Pipe in the multiprocessing module.You may need to keep a reference to all the processes (p in this case) in your main thread in order to be able to communicate that your program is terminating (as a still-running child process will appear to hang the parent when it dies), but that depends on the nature of your program.If you wish to keep the object the same, you can do one of a few things:Make all of your object properties either single values or arrays and then do something similar to this:from multiprocessing import Process, Value, Arrayclass WebSocketApp(object): def __init__(self): self.my_value = Value('d', 0.3) self.my_array = Array('i', [4, 10, 4]) # -- Snip --And then these values should work as shared memory. The types are very restrictive though (you must specify their types).A different approach is to use a manager: from multiprocessing import Process, Managerclass WebSocketApp(object): def __init__(self): self.my_manager = Manager() self.my_list = self.my_manager.list() self.my_dict = self.my_manager.dict() # -- Snip --And then self.my_list and self.my_dict act as a shared-memory list and dictionary respectively.However, the types for both of these approaches can be restrictive, so you may have to roll your own technique with a Queue and a Semaphore. But it depends what you're doing.Check out the multiprocessing documentation for more information. |
Can't embed graph into tkinter I am trying to embed a graph into a tkinter window. The import code looks like this:import matplotlibmatplotlib.use('TkAgg')from numpy import arange, sin, pifrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg# implement the default mpl key bindingsfrom matplotlib.backend_bases import key_press_handlerfrom matplotlib.figure import FigureAnd this is the error I get:Traceback (most recent call last): File "C:/Users/Álvaro/Desktop/Mates/codi/matplotlib.practica/insertar_graf_tkinter.py", line 5, in <module> from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg File "C:\Users\Álvaro\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 13, in <module> import matplotlib.backends.tkagg as tkagg File "C:\Users\Álvaro\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\backends\tkagg.py", line 9, in <module> from matplotlib.backends import _tkaggImportError: DLL load failed: The specified module could not be found.I use Python 3.5.1 on Windows 8. The matplotlib module was installed via pip. | I did what the second answer by @Volodia to this question said and worked fine. Problem solved. |
Creating 2d histogram from 2d numpy array I have a numpy array like this:[[[0,0,0], [1,0,0], ..., [1919,0,0]],[[0,1,0], [1,1,0], ..., [1919,1,0]],...,[[0,1019,0], [1,1019,0], ..., [1919,1019,0]]]To create I use function (thanks to @Divakar and @unutbu for helping in other question):def indices_zero_grid(m,n): I,J = np.ogrid[:m,:n] out = np.zeros((m,n,3), dtype=int) out[...,0] = I out[...,1] = J return outI can access this array by command:>>> out = indices_zero_grid(3,2)>>> outarray([[[0, 0, 0], [0, 1, 0]], [[1, 0, 0], [1, 1, 0]], [[2, 0, 0], [2, 1, 0]]])>>> out[1,1]array([1, 1, 0])Now I wanted to plot 2d histogram where (x,y) (out[(x,y]) is the coordinates and the third value is number of occurrences. I've tried using normal matplotlib plot, but I have so many values for each coordinates (I need 1920x1080) that program needs too much memory. | If I understand correctly, you want an image of size 1920x1080 which colors the pixel at coordinate (x, y) according to the value of out[x, y].In that case, you could useimport numpy as npimport matplotlib.pyplot as pltdef indices_zero_grid(m,n): I,J = np.ogrid[:m,:n] out = np.zeros((m,n,3), dtype=int) out[...,0] = I out[...,1] = J return outh, w = 1920, 1080out = indices_zero_grid(h, w)out[..., 2] = np.random.randint(256, size=(h, w))plt.imshow(out[..., 2])plt.show()which yieldsNotice that the other two "columns", out[..., 0] and out[..., 1] are not used. This suggests that indices_zero_grid is not really needed here.plt.imshow can accept an array of shape (1920, 1080). This array has a scalar value at each location in the array. The structure of the array tells imshow where to color each cell. Unlike a scatter plot, you don't need to generate the coordinates yourself. |
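If the occurrence counts still need to be accumulated from raw (x, y) hits first, np.histogram2d does the binning without any plotting, and imshow can then render the resulting counts array directly (a tiny hypothetical 3x2 grid is used here instead of 1920x1080):

```python
import numpy as np

# hypothetical stream of (x, y) hits on a 3x2 grid
xs = np.array([0, 0, 1, 2, 2, 2])
ys = np.array([0, 1, 1, 0, 0, 1])

counts, xedges, yedges = np.histogram2d(xs, ys, bins=[3, 2],
                                        range=[[0, 3], [0, 2]])
print(counts)
# [[1. 1.]
#  [0. 1.]
#  [2. 1.]]
```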
Loop over multiple files to merge according their names I'm a new python. I have for loop function gives me a folder contains 100 files "the data inside is numbers and the nuber of raws are the same" like the follow:A_0.20_1_.txt for example A_0.20_1_ B_0.20_1_ B_0.20_1_.txt 1 4A_0.20_2_.txt 2 5B_0.20_2_.txt 3 6A_0.40_1_.txtB_0.40_1_.txtA_0.40_2_.txtB_0.40_2_.txtand so on.....These files saved in a folder named outputI need to merge the two files form the output folder into one file like:merged_A_B_0.20_1_.txt for example merged_A_B_0.20_1_merged_A_B_0.20_2_.txt 1 4 2 5merged_A_B_0.40_1_.txt 3 6merged_A_B_0.40_2_.txtand so on.....I tried to use the following code:filename_list = [f for f in os.listdir(r'C:\Users\output\')if os.path.isfile(f)] columns = []for filename in filename_list: f=open(filename) x = np.array([float(raw) for raw in f.readlines()]) columns.append(x)columns = np.vstack(columns).Tnp.savetxt('filename_out.txt', columns) But it doesn't work and give me error Traceback (most recent call last):File "<ipython-input-6-5df3067f04e7>", line 1, in <module>runfile('C:/Users/user/Downloads/combine 2 files new2.py', wdir='C:/Users/user/Downloads')File "C:\ProgramData\Anaconda3\lib\site- packages\spyder\utils\site\sitecustomize.py", line 866, in runfileexecfile(filename, namespace)File "C:\ProgramData\Anaconda3\lib\site- packages\spyder\utils\site\sitecustomize.py", line 102, in execfileexec(compile(f.read(), filename, 'exec'), namespace)File "C:/Users/user/Downloads/combine 2 files new2.py", line 22, in <module> columns = np.vstack(columns).TFile "C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\shape_base.py", line 230, in vstackreturn _nx.concatenate([atleast_2d(_m) for _m in tup], 0)ValueError: need at least one array to concatenate Please, any help? 
| Your code is actually working the problem is you don't pass a full path to os.path.isfile() so it does not return True and your list of files is emptyimport numpy as npimport osfile_path = r"C:\Users\output"filename_list = []for file in os.listdir(file_path): file = os.path.join(file_path, file) if os.path.isfile(file): filename_list.append(file)columns = []for filename in filename_list: with open(filename, 'r') as f: x = np.array([float(raw) for raw in f.readlines()]) columns.append(x)columns = np.vstack(columns).Tnp.savetxt('filename_out.txt', columns)This writes the data of all files into ONE file with one column for each file |
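For the actual pairing of A_*/B_* files by their shared suffix, which the question's naming scheme implies, a small grouping sketch; the file names come from the question, and splitting on the first underscore is an assumption about that scheme:

```python
def group_by_suffix(names):
    # map each shared suffix ('0.20_1_.txt', ...) to its A/B file names
    groups = {}
    for name in names:
        prefix, _, suffix = name.partition('_')
        groups.setdefault(suffix, {})[prefix] = name
    return groups

names = ['A_0.20_1_.txt', 'B_0.20_1_.txt', 'A_0.40_1_.txt', 'B_0.40_1_.txt']
print(group_by_suffix(names))
```

Each group then holds the pair of files to stack side by side into one `merged_A_B_...` output.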
Safely viewing lists in ipython When I ask an ipython notebook to display (via evaluate) a large np.array ipython uses ellipses to summarize the data. However if I ask ipython to display a large list, no such safe guard is in place and my poor ipython notebook struggles. Are there any magics or other tools I can use? I run an ipython notebook in emacs. | could you not just test the length of the list yourself? Or wrap the lists as generators?>>> def guard(XS,N):... if len(XS) > N:... return "list too long" # or whatever you want... else:... return XS... >>> guard([1,2,3,4],2)'list too long'>>> guard([1,2,3,4],6)[1, 2, 3, 4]>>> |
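Besides a hand-rolled guard, the standard library's reprlib produces abbreviated representations out of the box, which may be the simplest safety net for inspecting large lists:

```python
import reprlib

big = list(range(100000))
# reprlib truncates long containers with an ellipsis instead of
# printing all 100000 elements
print(reprlib.repr(big))
```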
installing matplotlib, error: Setup script exited with error: command 'gcc' failed with exit status 1 I'm on cygwin and using easy_install to install matplotlib. and i get the above error. I have attached the installation process so far. what is going wrong? $ easy_install matplotlibSearching for matplotlibReading http://pypi.python.org/simple/matplotlib/Reading http://matplotlib.orgReading http://matplotlib.sourceforge.netReading http://sourceforge.net/project/showfiles.php?group_id=80706Reading http://sourceforge.net/project/showfiles.php?group_id=80706&package_id=82474Reading http://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.1.0/Reading http://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.1.1/Reading https://sourceforge.net/project/showfiles.php?group_id=80706&package_id=278194Reading https://sourceforge.net/project/showfiles.php?group_id=80706&package_id=82474Reading https://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-0.99.1/Reading https://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-0.99.3/Reading https://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.0Reading https://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.0.1/Best match: matplotlib 1.3.1Downloading https://downloads.sourceforge.net/project/matplotlib/matplotlib/matplotlib-1.3.1/matplotlib-1.3.1.tar.gzProcessing matplotlib-1.3.1.tar.gzWriting /tmp/easy_install-ocoQwv/matplotlib-1.3.1/setup.cfgRunning matplotlib-1.3.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ocoQwv/matplotlib-1.3.1/egg-dist-tmp-P_dVeG============================================================================Edit setup.cfg to change the build optionsBUILDING MATPLOTLIB matplotlib: yes [1.3.1] python: yes [2.7.5 (default, Oct 2 2013, 22:34:09) [GCC 4.8.1]] platform: yes [cygwin]REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.6.2] dateutil: yes [dateutil was not found. 
It is required for date axis support. pip/easy_install may attempt to install it after matplotlib.] tornado: yes [using tornado version 3.2] pyparsing: yes [using pyparsing version 2.0.1] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] freetype: yes [version 16.1.10] png: yes [version 1.5.14]OPTIONAL SUBPACKAGES sample_data: yes [installing] toolkits: yes [installing] tests: yes [using nose version 1.3.0]OPTIONAL BACKEND EXTENSIONS macosx: no [Mac OS-X only] qt4agg: yes [installing, Qt: 4.8.4, PyQt4: 4.10.2] gtk3agg: no [Requires pygobject to be installed.] gtk3cairo: no [Requires pygobject to be installed.] gtkagg: no [The C/C++ header for pygtk (pygtk/pygtk.h) could not be found. You may need to install the development package.] tkagg: yes [installing, version 81008] wxagg: no [requires wxPython] gtk: no [The C/C++ header for pygtk (pygtk/pygtk.h) could not be found. You may need to install the development package.] 
agg: yes [installing]
cairo: yes [installing, version 1.10.0]
windowing: no [Microsoft Windows only]

OPTIONAL LATEX DEPENDENCIES
dvipng: yes [version 1.14]
ghostscript: yes [version 9.06]
latex: yes [version 3.1415926]
pdftops: yes [version 0.22.5]

The build then emits a long run of gcc warnings (deprecated string-constant conversions in npy_3kcompat.h and file_compat.h, unused variables in _png.cpp and _path.cpp, possibly-uninitialized x0/y0/x1/y1 in agg_clip_liang_barsky.h) before failing with:

lib/matplotlib/tri/_tri.h:821:33: error: expected unqualified-id before numeric constant
 const unsigned long _M, _A, _C;
lib/matplotlib/tri/_tri.cpp: In constructor 'RandomNumberGenerator::RandomNumberGenerator(long unsigned int)':
lib/matplotlib/tri/_tri.cpp:2180:28: error: expected identifier before numeric constant
 : _M(21870), _A(1291), _C(4621), _seed(seed % _M)
lib/matplotlib/tri/_tri.cpp:2180:28: error: expected '{' before numeric constant
lib/matplotlib/tri/_tri.cpp: At global scope:
lib/matplotlib/tri/_tri.cpp:2180:28: error: expected unqualified-id before numeric constant
error: Setup script exited with error: command 'gcc' failed with exit status 1 | I recommend installing Python(x,y): you will have almost all you need, or at least very good, well-known libs (including matplotlib, numpy, scipy and many others), and it works out of the box, with no need to hunt down dependencies.
Is it possible to check for correct python syntax of a given file/string from Java I am writing a piece of code in java, part of this code deals with handling python code. I was just interested if anyone has come across a way of checking if the python code is syntactically correct during runtime. I don't actually need to run the python code, as i'm writing a program that generates small snippets of it for teaching purposes as part of a project.Is using a system commands the only way to achieve this? | For Python 2, this can be done with Jython:new org.python.util.PythonInterpreter().compile("python code here")and an exception will be thrown if it finds a problem (likely org.python.core.PySyntaxError) |
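If the check only needs to happen on the Python side (or you want to sanity-check what Jython should report), CPython's built-in compile() does the same job without executing anything; this is a standalone sketch, separate from the Jython answer above:

```python
def is_valid_python(source):
    """Return True if `source` parses as Python syntax, without executing it."""
    try:
        compile(source, "<string>", "exec")
        return True
    except SyntaxError:
        return False

print(is_valid_python("x = 1 + 2"))  # True
print(is_valid_python("x = = 2"))    # False
```

Jython's PythonInterpreter.compile raises the equivalent PySyntaxError for the same inputs.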
Given array size of four, minus from element 0 and plus to element 1, 2 and 3 I am just looking for ideas to carry out the following action, or a name I can Google to learn about it. Given an array

a = [10, 5, 3, 6]

my target is to subtract 3 from a[0] and add it back to a[1], a[2] and a[3] in every possible way. Example (a = [10, 5, 3, 6]), possible new results:

a = [7, 8, 3, 6]
a = [7, 5, 6, 6]
a = [7, 5, 3, 9]
a = [7, 6, 5, 6]
a = [7, 6, 4, 7]

and so on. What kind of implementation can I use? This example only subtracts 3 from a[0]; my ultimate goal is to do the same for a[1], a[2], a[3] and a variable amount: subtract from one element and distribute it over the others. |

import itertools

def my_combinations(my_list, k):
    for item in itertools.product(range(k+1), repeat=len(my_list) - 1):
        if sum(item) == k:
            yield item, [my_list[0] - k] + [n + i for n, i in zip(my_list[1:], item)]

spam = [10, 5, 3, 6]
for item, lst in my_combinations(spam, 3):
    print(f'{item} --> {lst}')

output:

(0, 0, 3) --> [7, 5, 3, 9]
(0, 1, 2) --> [7, 5, 4, 8]
(0, 2, 1) --> [7, 5, 5, 7]
(0, 3, 0) --> [7, 5, 6, 6]
(1, 0, 2) --> [7, 6, 3, 8]
(1, 1, 1) --> [7, 6, 4, 7]
(1, 2, 0) --> [7, 6, 5, 6]
(2, 0, 1) --> [7, 7, 3, 7]
(2, 1, 0) --> [7, 7, 4, 6]
(3, 0, 0) --> [7, 8, 3, 6]
How to organize images in directory into classes dependent on dataframe column values? I have a directory of images from this kaggle comp. Images with the same animal in them have the same prefix in their name, and then followed by -{num} where num is the number image of that specific animal. So:abc-1.jpgabc-2.jpgdef-1.jpg...abg-1.jpgabg-2.jpgabg-3.jpgpoc-1.jpgqrs-1.jpgSo as you can see there can be different numbers of images of each.Then I have a dataframe (or .csv) that has 1 column which is the prefix of each animal's filename and another column which is a class [0,1,2,3,4,5], and a final column that is the number of images that exist for each animalUPDATE: assume we already have the number of images of each animalanimal class num_imagesabc 0 2def 0 1abg 2 3poc 1 1qrs 4 1I want to organize the images into directories: dir0, dir1, dir2, dir3, dir4, dir5, based on the class that image corresponds with.Here's one way I imagine doing this task: (definitely not the best way)I was able to get bash command that organizes the images into directories based on the prefix: for file in *.jpg; do mkdir -p -- "${file%%-*}" && mv -- "$file" "${file%%-*}"; doneThen somehow looping through each animal prefix in the dataframe and appending the corresponding {num}'s and placing them in a directory named: dir + {class} | Basic CommandThis answer uses the same approach as Inder's answer but inside a single awk command which could be faster. Not that it would matter in this case... Here we assume a file as given in your example as input, see next section for alternative input formats.awk 'NR>1 { system("mkdir -p dir"$2"; mv "$1"-* dir"$2) }' dataframe.csvIn your example this executes the following bash commands:mkdir -p dir0mv abc-* dir0mkdir -p dir0mv def-* dir0mkdir -p dir2mv abg-* dir2mkdir -p dir1mv poc-* dir1mkdir -p dir4mv qrs-* dir4The -p option of mkdir won't cause an error if the directory already exists. 
With mv abc-* dir0 we move all files starting with abc- into the directory dir0.Using Your Actual Input FormatFrom the comments it seems that your actual file has a different format then the example you showed us. The example had columns separated by whitespaceanimal classabc 0def 0abg 2...but your actual file seems to be a real csv with columns separated by commas. Furthermore, the file seems to have windows line endings (\r\n instead of \n).animal,class\rabc,0\rdef,0\rabg,2\r...You can use this format by adapting awk's special variables FS (for field separator) and RS (for record separator):awk -F, -v RS='\r?\n' 'NR>1 { system("mkdir -p dir"$2"; mv "$1"-* dir"$2) }' dataframe.csv |
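For reference, since the question starts from a dataframe, the same organization step can be done from Python with the standard library alone; the animal-to-class mapping and the temporary directory below are stand-ins for the real data:

```python
import os
import shutil
import tempfile

# Stand-in for the dataframe's animal -> class mapping
classes = {"abc": 0, "def": 0, "abg": 2, "poc": 1, "qrs": 4}

# Stand-in image directory with empty placeholder files
root = tempfile.mkdtemp()
for name in ["abc-1.jpg", "abc-2.jpg", "def-1.jpg", "abg-1.jpg"]:
    open(os.path.join(root, name), "w").close()

# Move each file into dir{class} based on its prefix before the '-'
for fname in os.listdir(root):
    prefix = fname.split("-")[0]
    if prefix in classes:
        dest = os.path.join(root, "dir%d" % classes[prefix])
        os.makedirs(dest, exist_ok=True)
        shutil.move(os.path.join(root, fname), dest)

print(sorted(os.listdir(os.path.join(root, "dir0"))))
```

The os.listdir snapshot is taken once before the loop, so the freshly created dirN directories are not re-iterated.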
How to use subprocess to write to file I am trying to get adb logcat output and save it to a file. I tried Popen and call as below:

f = open("/Users/log.txt")
subprocess.call(["adb logcat"], stdout=f)
f_read = f.read()
print f_read

But I get this error:

File "testPython.py", line 198, in getadbLogs
  subprocess.call(["adb logcat"], stdout=f)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 522, in call
  return Popen(*popenargs, **kwargs).wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__
  errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
  raise child_exception
OSError: [Errno 2] No such file or directory

I am not sure what I am doing wrong. Is it possible to get adb logcat logs using subprocess? I checked that the file path is right. | Two things are wrong. First, the OSError comes from the argument list: ["adb logcat"] makes subprocess look for a single executable literally named "adb logcat". The program and its argument must be separate list items, ["adb", "logcat"]. Second, you opened f in read mode (r, the default), so it cannot be used for writing; open it in w mode:

f = open("/Users/log.txt", 'w')
subprocess.call(["adb", "logcat"], stdout=f)
f.close()

f = open("/Users/log.txt")
f_read = f.read()
print f_read
f.close()

Using with to close the files automatically is simpler:

with open("/Users/log.txt", 'w') as f:
    subprocess.call(["adb", "logcat"], stdout=f)
with open("/Users/log.txt") as f:
    f_read = f.read()
print f_read
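As a self-contained illustration of the same redirection, a stand-in command (echo, available on any Unix-like system) replaces adb logcat here; note the program and its argument are separate list elements, and the file is opened in write mode:

```python
import subprocess
import tempfile
import os

log_path = os.path.join(tempfile.mkdtemp(), "log.txt")

# Redirect the command's stdout into the file, opened for writing
with open(log_path, "w") as f:
    subprocess.call(["echo", "hello from subprocess"], stdout=f)

# Reopen for reading to inspect what was captured
with open(log_path) as f:
    print(f.read().strip())  # hello from subprocess
```

Unlike echo, adb logcat streams until killed, so in real use the call blocks until the adb process exits.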
Swig: Pass a vector from c++ to python I want to write some code in C++ which returns a vector to python. I tried the following example, but it returns the following object.<Swig Object of type 'std::vector< float > *' at 0x100331f90>How can I convert this to a list so that I can use it in python?My code:/* File: example.i */%module example%{#define SWIG_FILE_WITH_INIT#include "example.h"%}std::vector<float> Test(int n);./* File: example.cpp */#include <vector>std::vector<float> Test(int n){ std::vector<float> a(4); a[1] = 1; a[2] = 24234; return a;}./* File: example.h http://www.swig.org/Doc1.3/Python.html#Python_nn4*/#include <vector>std::vector<float> Test(int n); | I found an answer to this problem. It was kind of hard to find the right keywords for this problems.The solution is explained here.%include <std_vector.i> //Takes care of vector<type>What's still missing are the following three linesnamespace std{ %template(FloatVector) vector<float>;}In total my example.i now looks like this:/* File: example.i */%module example%include "std_vector.i"%{#define SWIG_FILE_WITH_INIT#include "example.h"%}namespace std{ %template(FloatVector) vector<float>;}std::vector<float> Test(int n); |
How to select only integers in a list of strings/integers? I have a list (actually an iterable) which was created using this function of python's itertools library:comb = [c for i in range(len(menu)+1) for c in combinations(menu, i)]To give you an idea menu is a list in this format [ ["name of food", grams of sugar] ]:menu = [ ["cheesecake", 13], ["pudding", 24], ["bread", 13], .........]So comb is essentially a list that contains all of the possible combinations of menu sublists. I have to iterate through comb create ALL of the possible item combinations, whose total sugar content will equal EXACTLY (NOT LESS, NOT MORE, EXACTLY) max_sugar = 120. So I figured that I could iterate over each possible combination in comb and check with an if statement if the sum of the sugar of the items in this combination equals EXACTLY max_sugar. If that is the case I want to output the names of the menu items in this combination. Otherwise I want to continue through the other combinations in this manner:for e in comb: for l in e: if sum(sugars of items in this combination) == max_sugar: # pseudo-code print items in this combination #pseudo codeI guess the problem I am having is to access only the sugar values of each item in l and check the condition and if it is TRUE print the names. I am not proficient in python list comprehensions but I have improved a lot in the past few days!flag = 0num_comb = 1comb = [c for i in range(len(menu)+1) for c in combinations(menu, i)]for e in comb: if sum(l[1] for l in e) == targetSugar: print "The combination number " + str(num_comb) + " is:\n" print([l[0] for l in e]) print "\n\n\n" num_comb += 1 flag = 1if flag == 0: print "there are no combinations of dishes for your sugar intake... Sorry! 
:D " | As you were alluding to, you can use a list comprehension to iterate through all menu combinations, and restrict to those meals with exactly the sugar amount you are looking for:>>> # input data>>> menu = [ ["cheesecake", 13], ["pudding", 24], ["bread", 13] ]>>> max_sugar = 26>>> # construct all combinations of menu items>>> comb = [c for i in range(1, len(menu)+1) for c in combinations(menu, i)]>>> list(comb)[(['cheesecake', 13],), (['pudding', 24],), (['bread', 13],), (['cheesecake', 13], ['pudding', 24]), (['cheesecake', 13], ['bread', 13]), (['pudding', 24], ['bread', 13]), (['cheesecake', 13], ['pudding', 24], ['bread', 13])]>>> # restrict to meals with exactly max_sugar>>> meals = [ e for e in comb if sum( sugar for _, sugar in e) == max_sugar ]>>> meals[(['cheesecake', 13], ['bread', 13])]The only tricky part is when you're iterating through each combination, each element e is a list containing a name and the sugar count. Thus you can measure the amount of sugar in a combination e using:sum( sugar for _, sugar in e) == max_sugarBuilding off of this, if you only wanted to return the names of the foods in each meal, you could use:>>> [ [name for name, sugar in m] for m in meals ][['cheesecake', 'bread']] |
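The same selection, condensed into a reusable function with the sugar restriction applied inline (the function and variable names here are mine, not from the original post):

```python
from itertools import combinations

def exact_sugar_meals(menu, target):
    """All combinations of menu items whose sugar content sums to exactly `target`."""
    meals = []
    for i in range(1, len(menu) + 1):
        for combo in combinations(menu, i):
            if sum(sugar for _, sugar in combo) == target:
                meals.append(tuple(name for name, _ in combo))
    return meals

menu = [["cheesecake", 13], ["pudding", 24], ["bread", 13]]
print(exact_sugar_meals(menu, 26))  # [('cheesecake', 'bread')]
```

Because the check runs while combinations are generated, there is no need to materialize the full comb list first.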
Sending email in python, message missing I am deleting several folders which are 30 days old and want to mail myself the list of all those deleted folders using gmail. Currently it deletes the folder without any trouble but the message in the email is blank along with subject. What am I missing?import osimport timeimport shutilimport smtplibfrom email.MIMEMultipart import MIMEMultipartfrom email.MIMEBase import MIMEBasefrom email.MIMEText import MIMETextfrom email import encoderssender = "my_email@gmail.com"receivers = ["my_email@gmail.com"]username = 'my_email'password = 'passwd'numdays = 86400*30now = time.time()directory=os.path.join("/home","/Downloads/trial/")for r,d,f in os.walk(directory): for dir in d: timestamp = os.path.getmtime(os.path.join(r,dir)) if now-numdays > timestamp: try: print "removing ",os.path.join(r,dir) shutil.rmtree(os.path.join(r,dir)) except Exception,e: print e pass else: print "Deleted folders are: %s" % os.path.join(r,dir)msg = MIMEMultipart()msg['To'] = 'my_email@gmail.com'subject = "Deleted Folders : %s" % os.path.join(r,dir)msg['Subject'] = subjecttry:mailserver = smtplib.SMTP("smtp.gmail.com", 587)mailserver.ehlo()mailserver.starttls()mailserver.ehlo()mailserver.login(sender, password)mailserver.sendmail(sender, to, msg.as_string())mailserver.close()print "Successfully Sent email"except smtplib.SMTPException:print"Error: Unable to send email"I am receiving the email with subject as : Deleted Folders : /home/Downloads/trial/4/4My objective was to have a message/content/body of the email, with all the folders deleted. 
I see the output I want in the stdout i.eremoving /home/Downloads/trial/1Deleted folders are: /home/Downloads/trial/1removing /home/Downloads/trial/2Deleted folders are: /home/Downloads/trial/2removing /home/Downloads/trial/3Deleted folders are: /home/Downloads/trial/3 | Try this:deleted_folders = []for r,d,f in os.walk(directory): for dir in d: timestamp = os.path.getmtime(os.path.join(r,dir)) if now-numdays > timestamp: try: shutil.rmtree(os.path.join(r,dir)) deleted_folders.append("Deleted folders are: %s" % os.path.join(r,dir)) # Bad, it's almost never appropriate to catch all Exceptions # In this case, OSError seems better except Exception,e: passbody = MIMEText("\n".join(deleted_folders), "plain")msg.attach(body) |
Speed up pandas dataframe iteration I have a dataframe with date and values, Date PriceJun 30 95.60Jun 29 94.40Jun 28 93.59Jun 27 92.04Jun 24 93.40Jun 23 96.10Jun 22 95.55Jun 21 95.91Jun 20 95.10Jun 17 95.33Jun 16 97.55Jun 15 97.14Jun 14 97.46Jun 13 97.34Jun 10 98.83Jun 9 99.65Jun 8 98.94Jun 7 99.03Jun 6 98.63Jun 3 97.92Jun 2 97.72There is a function which iterate through dateframe,indic_up = [False, False,False, False]i = 4while i+4 <= df.index[-1]: if (df.get_value(i, 'value') > df.get_value(i-1, 'value')) or (df.get_value(i, 'value') > df.get_value(i-2, 'value')) or (df.get_value(i, 'value') > df.get_value(i-3, 'value')) or (df.get_value(i, 'value') > df.get_value(i-4, 'value')):indic_up.append(True) else:indic_up.append(False) i = i+1The logic of this function is if value of today greater than yesterday,day before yesterday or before that then it's true or false.This functions seems to be very slow to me, So how i can rewrite this function like these for index, row in df.iterrows():row['a'], indexor for idx in df.index:df.ix[idx, 'a'], idxor can i achieve more fast by converting dataframe into numpy array? | Let's invite Scipy too!The Idea : Compare the current element with the previous 4 values by calculating the minimum in that interval and comparing with the current one. If it matches, we have basically failed all the comparisons and thus choose False. So, codewise, just compare the current element with the minimum in that interval. This is where scipy comes in with its minimum_filter.Implementation :from scipy.ndimage.filters import minimum_filter# Extract values from relevant column into a NumPy array for further procesingA = df['value'].values# Look for no match with interval-ed min & look for NOT matching for True as o/pindic_up_out = A != minimum_filter(A,footprint=np.ones((5,)),origin=2)# Set first four as False because those would be invalid with a 5 elem runwayindic_up_out[:4] = 0 |
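If SciPy is unavailable, the same previous-four-minimum comparison can be written with plain NumPy (sliding_window_view requires NumPy 1.20+); a sketch on the first few prices from the question:

```python
import numpy as np

A = np.array([95.60, 94.40, 93.59, 92.04, 93.40, 96.10])

# Window k covers A[k:k+4], i.e. the four values preceding A[k+4];
# dropping the last window aligns the mins with positions 4..n-1
prev_min = np.lib.stride_tricks.sliding_window_view(A, 4).min(axis=1)[:-1]

# First four positions have no full lookback window, so they are False
indic_up = np.concatenate([np.zeros(4, dtype=bool), A[4:] > prev_min])
print(indic_up)  # [False False False False  True  True]
```

A value exceeds at least one of its four predecessors exactly when it exceeds their minimum, which is the same equivalence the minimum_filter answer relies on.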
How to retrieve the integer value of tkinter ttk Scale widget in Python? Could anyone advise how to retrieve and update a Label widget with the value from a Scale widget in Python? Currently it shows a very large real number. I have tried to type cast the value but this only works when I print to idle. I tried slider.get() but the label is blank. Also tried int(slider.get()) which works when I print to idle.from tkinter import *from tkinter import ttkroot = Tk()root.title("Playing with Scales")mainframe = ttk.Frame(root, padding="24 24 24 24")mainframe.grid(column=0, row=0, sticky=(N, W, E, S))slider = IntVar()ttk.Scale(mainframe, from_=0, to_=100, length=300, variable=slider).grid(column=1, row=4, columnspan=5)ttk.Label(mainframe, textvariable=slider).grid(column=1, row=0, columnspan=5)for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)root.mainloop()Here you can see the very large real value: | I don't see a way to control the resolution of the Scale widget, but it's fairly easy to modify the string it produces. 
According to these ttk docs the Scale widget returns a float, but in my experiments it returns a string, which is slightly annoying.Rather than setting the Label text directly from the slider variable attached to the Scale widget we can bind a command to the Scale widget that converts the string holding the Scale's current value back to a float, and then format that float with the desired number of digits; the code below displays 2 digits after the decimal point.import tkinter as tkfrom tkinter import ttkroot = tk.Tk()root.title("Playing with Scales")mainframe = ttk.Frame(root, padding="24 24 24 24")mainframe.grid(column=0, row=0, sticky=('N', 'W', 'E', 'S'))slider = tk.StringVar()slider.set('0.00')ttk.Scale(mainframe, from_=0, to_=100, length=300, command=lambda s:slider.set('%0.2f' % float(s))).grid(column=1, row=4, columnspan=5)ttk.Label(mainframe, textvariable=slider).grid(column=1, row=0, columnspan=5)for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)root.mainloop()If you just want the Label to display the integer value then change the initialization of slider toslider.set('0')and change the callback function tolambda s:slider.set('%d' % float(s))I've made a few other minor changes to your code. Primarily, I replaced the "star" import with import tkinter as tk; the star import brings in over 130 names into your namespace, which creates a lot of unnecessary clutter, as I mentioned in this recent answer. |
Pandas means in column for subset in another column I have a dataframe called houses: transaction_id house_id date_sale sale_price boolean_2015 \ 0 1 1 31 Mar 2016 £880,000 True 3 4 2 31 Mar 2016 £450,000 True 4 5 3 31 Mar 2016 £680,000 True 6 7 4 31 Mar 2016 £1,850,000 True postcode 0 EC2Y 3 EC2Y 4 EC1Y 6 EC2Y and I was wondering how to compute averages of sale_price based on each postcodeso the output is Average0 EC1Y £1232201 EC2Y £434930I did this with averages = data.groupby(['postcode'], as_index=False).mean() but this did not return sale_priceany thoughts? | You can first replace £, to empty string and then convert to_numeric column sale_price. Last cast to string by astype if need add £ to column sale_price:data.sale_price = pd.to_numeric(data.sale_price.str.replace('[£,]',''))averages = data.groupby(['postcode'], as_index=False)['sale_price'].mean()averages.sale_price = '£' + averages.sale_price.astype(str) print (averages) postcode sale_price0 EC1Y £6800001 EC2Y £1060000 |
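A self-contained version of that answer on the question's data; note that on recent pandas versions str.replace needs regex=True for the character class to be treated as a pattern:

```python
import pandas as pd

df = pd.DataFrame({
    "postcode": ["EC2Y", "EC2Y", "EC1Y", "EC2Y"],
    "sale_price": ["£880,000", "£450,000", "£680,000", "£1,850,000"],
})

# Strip the currency symbol and thousands separators, then convert to numbers
df["sale_price"] = pd.to_numeric(df["sale_price"].str.replace("[£,]", "", regex=True))

averages = df.groupby("postcode", as_index=False)["sale_price"].mean()
print(averages)
```

This yields 680000 for EC1Y and 1060000 for EC2Y; prepending the £ back is then just a string cast, as in the answer above.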
What does "TypeError: [foo] object is not callable" mean? I am trying to iterate through a list of Facebook post IDs, and I am getting the following error:

TypeError: 'list' object is not callable

Here is my code:

MCTOT_postIDs = [["126693553344_10155053097028345"],["126693553344_10155050947628345"],["126693553344_10155048566893345"],["126693553344_10155044677673345"],["126693553344_10155042089618345"],["126693553344_10155035937853345"],["126693553344_10155023046098345"]]

g = facebook.GraphAPI()
g.access_token = g.get_app_access_token(APP_ID, APP_SECRET)

for x in MCTOT_postIDs():
    g.get_object('fields="message, likes, shares')

I know I am making a basic error somewhere, but I can't seem to figure it out. Thanks! | The error message says:

TypeError: 'list' object is not callable

So look again at your code: in the for ... in loop, you are trying to call your list as if it were a function. You are doing

for x in MCTOT_postIDs():

when you should be doing

for x in MCTOT_postIDs:

A list is not callable, and () is the call syntax (meaning: execute the function). Remove the parentheses and it should work.

EDIT: As for the other error, g.get_object(...) requires one more argument that you are not passing. You pass the fields, but you must also pass an ID: the x of your loop, which contains the ID. It should probably go like

g.get_object('fields="message, likes, shares', x)

or maybe g.get_object('fields="message, likes, shares', x[0]) if you need to pass a string rather than a one-element list (your list is a list of single-element lists), but that should be a topic for a new question...
AttributeError: 'module' object has no attribute 'DatePickerCtrl' In trying to learn, I am running code developed by others who indicate is it working. It does not work for me. I am attempting for the 1st time to use wx.DatePickerCtrl. After running my code, I get the following error:test8000.py", line 12, in __init__ self.datepick = wx.DatePickerCtrl(self.panel,-1, pos=(20,15),AttributeError: 'module' object has no attribute 'DatePickerCtrl'I import wx upfront, assuming that is where DatePickerCtrl modules resides.Thanks for any help - I am truly new at this. I am using wxPython_Phoenix 3.0.3.dev78406 on a Windows 7 platform. Michael | I know this is old, but in case anyone else comes across it.On wxPython Phoenix (what you're using), the DatePickerCtrl is part of the wx.adv module.On wxPython Classic, the DatePickerCtrl is part of the wx module. Your code was most likely written for WxPython Classic.You can find a link to old installs here. |
Threading in python - processing multiple large files concurrently I'm new to python and I'm having trouble understanding how threading works. By skimming through the documentation, my understanding is that calling join() on a thread is the recommended way of blocking until it completes. To give a bit of background, I have 48 large csv files (multiple GB) which I am trying to parse in order to find inconsistencies. The threads share no state. This can be done single threadedly in a reasonable ammount of time for a one-off, but I am trying to do it concurrently as an exercise.Here's a skeleton of the file processing:def process_file(data_file): with open(data_file) as f: print "Start processing {0}".format(data_file) line = f.readline() while line: # logic omitted for brevity; can post if required # pretty certain it works as expected, single 'thread' works fine line = f.readline() print "Finished processing file {0} with {1} errors".format(data_file, error_count)def process_file_callable(data_file): try: process_file(data_file) except: print >> sys.stderr, "Error processing file {0}".format(data_file)And the concurrent bit:def partition_list(l, n): """ Yield successive n-sized partitions from a list. 
"""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

partitions = list(partition_list(data_files, 4))
for partition in partitions:
    threads = []
    for data_file in partition:
        print "Processing file {0}".format(data_file)
        t = Thread(name=data_file, target=process_file_callable, args=(data_file,))
        threads.append(t)
        t.start()
    for t in threads:
        print "Joining {0}".format(t.getName())
        t.join(5)
    print "Joined the first chunk of {0}".format(map(lambda t: t.getName(), threads))

I run this as:

python -u datautils/cleaner.py > cleaner.out 2> cleaner.err

My understanding is that join() should block the calling thread waiting for the thread it's called on to finish; however, the behaviour I'm observing is inconsistent with my expectation. I never see errors in the error file, but I also never see the expected log messages on stdout. The parent process does not terminate unless I explicitly kill it from the shell. If I check how many prints I have for Finished ... it's never the expected 48, but somewhere between 12 and 15. However, having run this single-threadedly, I can confirm that the multithreaded run is actually processing everything and doing all the expected validation; it just does not seem to terminate cleanly. I know I must be doing something wrong, but I would really appreciate it if you could point me in the right direction. | I can't see exactly where the mistake in your code is, but I can recommend refactoring it a little. First of all, threading in Python is not truly concurrent: because of the Global Interpreter Lock, only one thread can execute Python bytecode at a time. That's why I recommend the multiprocessing module:

from multiprocessing import Pool, cpu_count

pool = Pool(cpu_count())  # cpu_count() returns the number of CPUs
for partition in partition_list(data_files, 4):
    res = pool.map(process_file_callable, partition)
    print res

Secondly, you are reading the file in an unpythonic way:

with open(...) as f:
    line = f.readline()
    while line:
        ...  # do(line)
        line = f.readline()

Here is the pythonic way:

with open(...) as f:
    for line in f:
        ...  # do(line)

"This is memory efficient, fast, and leads to simple code." (c) PyDoc

By the way, I have one hypothesis about what may be happening to your program when multithreaded: it becomes slower, because unordered access to the hard disk drive is significantly slower than ordered access. You can try to check this hypothesis using iostat or htop, if you are using Linux. If your app does not finish its work and doesn't do anything visible in a process monitor (no CPU or disk activity), it means you have some kind of deadlock or blocked access to a shared resource.
Check for a stale element using selenium 2? Using selenium 2, is there a way to test if an element is stale?Suppose I initiate a transition from one page to another (A -> B). I then select element X and test it. Suppose element X exists on both A and B.Intermittently, X is selected from A before the page transition happens and not tested until after going to B, raising a StaleElementReferenceException. It's easy to check for this condition:try: visit_B() element = driver.find_element_by_id('X') # Whoops, we're still on A element.click() except StaleElementReferenceException: element = driver.find_element_by_id('X') # Now we're on B element.click()But I'd rather do:element = driver.find_element_by_id('X') # Get the elment on Avisit_B()WebDriverWait(element, 2).until(lambda element: is_stale(element))element = driver.find_element_by_id('X') # Get element on B | I don't know what language you are using there but the basic idea you need in order to solve this is:boolean found = falseset implicit wait to 5 secondsloop while not found try element.click() found = truecatch StaleElementReferenceException print message found = false wait a few secondsend loopset implicit wait back to defaultNOTE: Of course, most people don't do it this way. Most of the time people use the ExpectedConditions class but, in cases where the exceptions need to be handled betterthis method ( I state above) might work better. |
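The pseudocode above, written as a generic Python retry helper. StaleElementReferenceException here is a stand-in class so the sketch runs without a browser; with real Selenium you would import it from selenium.common.exceptions instead:

```python
import time

class StaleElementReferenceException(Exception):
    """Stand-in for selenium.common.exceptions.StaleElementReferenceException."""

def click_with_retry(find_element, attempts=3, delay=0.1):
    """Re-find and click an element, retrying when it has gone stale."""
    for _ in range(attempts):
        element = find_element()  # re-locate on every attempt
        try:
            element.click()
            return True
        except StaleElementReferenceException:
            time.sleep(delay)  # give the page transition time to finish
    return False
```

Here find_element is a zero-argument callable, e.g. lambda: driver.find_element_by_id('X'), so the element is re-located on every attempt; re-finding is essential, because a stale reference can never become valid again.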
How can I connect to MySQL on Heroku? I have a Django project that uses MySQL as its database. On my development machine the MySQL database runs on my local machine as shown here:DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'xxx', 'USER': 'root', 'PASSWORD': 'xxxx', 'HOST': 'localhost', #'PORT': '3306', }}I have successfully deployed the project to Heroku, but it keeps showing that 2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)since the setting is the local MySQL and the views.py contain functions that import local database. I still cannot figure out how to have the website to connect to this database.Is there a way to make this work with Heroku, or do I need to find another host? | juanmhidalgo's answer is a good start, and can be generalized for arbitrary environment variables. But if you only care about the database variable there's another solution.By default, Heroku provides a PostgreSQL database and sets your application's DATABASE_URL with a connection string that can be used to connect to it.As far as I know, the various MySQL addons also set this variable but it would be helpful to know which one you've selected so we can confirm. It may need to be set manually, based on another environment variable.Assuming that the DATABASE_URL environment variable is properly set you can use dj-database-url to set your database up directly, optionally providing a fallback to use when the variable isn't available (e.g. on your development box), in your settings.py file:import dj_database_urlDATABASES['default'] = dj_database_url.config( default='mysql://root:<password>@localhost:3306/<database>',) |
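Under the hood, dj-database-url just splits a URL like the default above into the fields Django's DATABASES dict needs; the standard library shows the idea (the password and database name here are placeholders, not real credentials):

```python
from urllib.parse import urlparse

url = urlparse("mysql://root:secret@localhost:3306/mydb")

# Roughly what dj_database_url.config() produces for a mysql:// URL
db_settings = {
    "ENGINE": "django.db.backends.mysql",
    "NAME": url.path.lstrip("/"),
    "USER": url.username,
    "PASSWORD": url.password,
    "HOST": url.hostname,
    "PORT": str(url.port),
}
print(db_settings)
```

On Heroku the URL would come from os.environ["DATABASE_URL"] rather than a literal string.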
pyramid-arima auto_arima order selection I am working on time-series forecasting (daily entries) using pyramid-arima's auto_arima in Python, where y is my target and x_features are all exogenous variables. I want the best-order model based on the lowest AIC, but auto_arima returns only a few order combinations. With the 1st configuration (start_p = start_q = 0 and max_p = 0, max_q = 3) it returns all 4 combinations, but the 2nd configuration (start_p = start_q = 0 and max_p = 3, max_q = 3) returns only 7 combinations; it didn't give (0,1,2), (0,1,3) and others, which leads to the wrong model being selected based on AIC. All other parameters are at their defaults, e.g. max_order = 10. Is there anything I am missing or have done wrong? Thank you in advance. | You pass error_action='ignore', so probably (0,1,2) and (0,1,3) (and other orders) raised errors during fitting, which is why they don't appear in the results. (I don't have enough reputation to write a comment, sorry.)
how to get soup.find_all to work in BeautifulSoup? I'm trying to scrape information a page consisting names of attorneys using BeaurifulSoup#importing librariesfrom urllib.request import urlopen from bs4 import BeautifulSoupimport requestsFollowing is an example of each attorney's names that are nested in HTML tags </a> <div class="person-info search-person-info people-search-person-info"> <div class="col person-name-position"> <a href="https://www.foxrothschild.com/richard-s-caputo/"> Richard S. Caputo </a>I tried using the following script to extract the name of each of the attorneys using 'a' as the tag and "col person-name-position" as the class. But it does not seem to work. Instead it prints out an empty list.page=requests.get("https://www.foxrothschild.com/people/?search%5Bname%5D=&search%5Bkeyword%5D=&search%5Boffice%5D=&search%5Bpeople-position%5D=&search%5Bpeople-bar-admission%5D=&search%5Bpeople-language%5D=&search%5Bpeople-school%5D=Villanova+University+School+of+Law&search%5Bpractice-area%5D=") #insert page heresoup=BeautifulSoup(page.content,'html.parser')#print(soup.prettify())find_name=soup.find_all('a',class_='col person-name-position')print(find_name) | You need to change your soup.find_all to div since the class goes with div and not apage=requests.get("https://www.foxrothschild.com/people/search%5Bname%5D=&search%5Bkeywod%5D=&search%5Boffice%5D=&search%5Bpeople-position%5D=&search%5Bpeople-bar-admission%5D=&search%5Bpeople-language%5D=&search%5Bpeople-school%5D=Villanova+University+School+of+Law&search%5Bpractice-area%5D=") #insert page heresoup=BeautifulSoup(page.content,'html.parser')#print(soup.prettify())find_name=soup.find_all('div',class_='col person-name-position')print(find_name) |
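A minimal reproduction of the fix against the HTML snippet from the question (requires the beautifulsoup4 package): the class attribute belongs to the div, so the search targets div and the text is read from inside it.

```python
from bs4 import BeautifulSoup

html = """
<div class="person-info search-person-info people-search-person-info">
  <div class="col person-name-position">
    <a href="https://www.foxrothschild.com/richard-s-caputo/">
      Richard S. Caputo
    </a>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# Search div (not a) for the class, then extract the stripped text
names = [div.get_text(strip=True)
         for div in soup.find_all("div", class_="col person-name-position")]
print(names)  # ['Richard S. Caputo']
```

If the live page renders the list with JavaScript, requests.get will still return an empty result even with the right selector; that is worth checking against page.content before debugging the selector further.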
Python click module is_flag not working as expected According to click documentation there are two ways to specific a boolean flag. The "on/off" method:@click.option('--shout/--no-shout', default=False)and the "is_flag" method:@click.option('--shout', is_flag=True)I am not getting this behavior. First the code.import click@click.command()@click.option('--one/--no-one', default=False, help='on/off')@click.option('--two', is_flag=False, help='is_flag')def foo(one, two): print(one, two)if __name__ == '__main__': foo()The session below shows that the second option, "two", is a string argument not a boolean flag.$ python foo.py --helpUsage: foo.py [OPTIONS]Options: --one / --no-one on/off --two TEXT is_flag --help Show this message and exit.$ python foo.py --twoError: --two option requires an argument$ python foo.py --two aFalse a$ python foo.py --oneTrue None$ python foo.py --no-oneFalse NoneAm I using this incorrectly?I am using click 6.7 and:$ python --versionPython 3.5.3 :: Anaconda 4.4.0 (x86_64)$ uname -aDarwin tardis 16.7.0 Darwin Kernel Version 16.7.0: Fri Apr 27 17:59:46 PDT 2018; root:xnu-3789.73.13~1/RELEASE_X86_64 x86_64 | You have specified is_flag=False which means it is not a flag. Change the click option for two to:@click.option('--two', is_flag=True, help='is_flag') |
Error Installing PyInstaller for Python 3.7 on Windows 10 I used the command pip install pyinstaller to installer PyInstaller for Python 3.7 on Windows 10, but the Command Prompt gave me the following errors:ModuleNotFoundError: No module named 'pywintypes'...ModuleNotFoundError: No module named 'cffi'...During handling of the above exception, another exception occurred:...SyntaxError: invalid syntax----------------------------------------`Command "python setup.py egg_info" failed with error code 1 in C:\Users\MUHAMM~1\AppData\Local\Temp\pip-install-6_q2lzs2\pyinstaller\I installed the midule cffi, then tried to install pywintypes but it was not found.Any help? Thanks in advance. | I got that problem.The solution waspython -m pip install pip==18.1then justpython -m pip install -U pyinstaller |
Faster way to split a numpy array according to a threshold Suppose I have a random numpy array:X = np.arange(1000)and a threshold:thresh = 50I want to split X in two partitions X_l and X_r in such a way that every element in X_l is less or equal to thresh while in X_r each element is greater than thresh. After that these two partitions are given to a recursive function.Using numpy I create a boolean array and I use it to partition X:Z = X <= threshX_l, X_r = X[Z == 0], X[Z == 1]recursive_call(X_l, X_r)This is done several times, is there a way to make things faster? Is it possible to avoid creating a copy of the partitions at each call? | X[~Z] is faster than X[Z==0]:In [13]: import numpy as npIn [14]: X = np.random.random_integers(0, 1000, size=1000)In [15]: thresh = 50In [18]: Z = X <= threshIn [19]: %timeit X_l, X_r = X[Z == 0], X[Z == 1]10000 loops, best of 3: 23.9 us per loopIn [20]: %timeit X_l, X_r = X[~Z], X[Z]100000 loops, best of 3: 16.4 us per loopHave you profiled to determine that this is really the bottleneck in your code? If your code is spending only 1% of its time doing this splitting operation, then however much you optimize this operation will have no more than a 1% impact on the overall performance. You might benefit more by rethinking your algorithm or data structures than optimizing this one operation. And if this is really the bottleneck, you might also do better by rewriting this piece of code in C or Cython...When you have numpy arrays of size 1000, there is a chance that using Python lists/sets/dicts might be quicker. The speed benefit of NumPy arrays sometimes does not become apparent until the arrays are quite large. You might want to rewrite your code in pure Python and benchmark the two versions with timeit. Hm, let me rephrase that. It's not really the size of the array which makes NumPy quicker or slower. 
It's just that having small NumPy arrays is sometimes a sign that you are creating lots of small NumPy arrays, and the creation of a NumPy array is significantly slower than the creation of, say, a Python list:In [21]: %timeit np.array([])100000 loops, best of 3: 4.31 us per loopIn [22]: %timeit []10000000 loops, best of 3: 29.5 ns per loopIn [23]: 4310/295.Out[23]: 14.610169491525424Also, when you code in pure Python, you might be more likely to use dicts and sets for which there is no direct NumPy equivalent. That might lead you to an alternative algorithm which is quicker.
Why is \d+ only matching one digit in this python regexp? Regexp: editClassification/(?P<pk>[\d+])String to match: foo/editClassification/10pythex example | Because \d+ is within a character class ([...]); [\d+] matches exactly one character that is either a digit or +. You were supposed to write (?P<pk>\d+) instead. |
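A quick runnable check of the difference, using the URL fragment from the question:

```python
import re

s = "foo/editClassification/10"

# Inside a character class, [\d+] matches exactly one character:
# a single digit or a literal '+'
inside_class = re.search(r'editClassification/(?P<pk>[\d+])', s)
# Outside a class, \d+ matches one or more digits
outside_class = re.search(r'editClassification/(?P<pk>\d+)', s)

print(inside_class.group('pk'))   # '1'  -- only the first digit
print(outside_class.group('pk'))  # '10' -- the whole number
```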
Django views: Object value not recognized I am trying to learn django from official tutorial.I am stuck with a strange issue, it may be trivial but I am not able to figure it out -I am following this tutorial :Tutorial 3My problem is that when I try to access http://hello.djangoserver:8080/poll, I am getting following output:But the output should be the value of object, I don't know what is going wrong ?? All Previous steps described in the tutorial are working fine for me.Here is Console Output:(InteractiveConsole)>>> from poll.models import Question, Choice>>> Question.objects.get(pk=1)<Question: What's up?>>>> quit()/srv/www/hello/poll/views.py# Create your views here. from django.http import HttpResponsefrom django.template import RequestContext, loaderfrom poll.models import Questiondef index(request): latest_question_list = Question.objects.order_by('-pub_date')[:5] template = loader.get_template('poll/index.html') context = RequestContext(request, { 'latest_question_list': latest_question_list, }) return HttpResponse(template.render(context)) def detail(request, question_id): return HttpResponse("You're looking at question %s." % question_id)def results(request, question_id): response = "You're looking at the results of question %s." return HttpResponse(response % question_id)def vote(request, question_id): return HttpResponse("You're voting on question %s." 
% question_id)/srv/www/hello/poll/urls.py from django.conf.urls import patterns, url from poll import views urlpatterns = patterns('', url(r'^$', views.index, name='index'), # ex: /polls/5/ url(r'^(?P<question_id>\d+)/$', views.detail, name='detail'), # ex: /polls/5/results/ url(r'^(?P<question_id>\d+)/results/$', views.results, name='results'), # ex: /polls/5/vote/ url(r'^(?P<question_id>\d+)/vote/$', views.vote, name='vote'),/srv/www/hello/hello/urls.py from django.conf.urls import patterns, include, url # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', url(r'^poll/', include('poll.urls')),/srv/www/hello/poll/templates/poll/index.html {% if latest_question_list %} <ul> {% for question in latest_question_list %} <li><a href="/poll/{ question.id }}/">{ question.question_text }}</a></li> {% endfor %} </ul> {% else %} <p>No polls are available.</p> {% endif %} }" }"Some relevant constructs from settings.py - /srv/www/hello/hello/settings.py: DEBUG = True DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite_db', # Or path to database file if using sqlite3. # The following settings are not used with sqlite3: 'USER': 'root', 'PASSWORD': '<my password>', 'HOST': '', # Empty for localhost through domain sockets or #'127.0.0.1' for localhost through TCP. 'PORT': '', # Set to empty string for default. } } MEDIA_ROOT = '' MEDIA_URL = '' STATIC_ROOT = '' STATIC_URL = '/static/' STATICFILES_DIRS = ( # Put strings here, like "/home/html/static" or "C:/www/django/static". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. 
) STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', # 'django.contrib.staticfiles.finders.DefaultStorageFinder', ) TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', # 'django.template.loaders.eggs.Loader', ) ROOT_URLCONF = 'hello.urls' WSGI_APPLICATION = 'hello.wsgi.application' TEMPLATE_DIRS = ( ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'poll', # Uncomment the next line to enable the admin: 'django.contrib.admin', # Uncomment the next line to enable admin documentation: 'django.contrib.admindocs', )/srv/www/hello/poll/models.py :from django.db import modelsfrom django.utils import timezoneimport datetime# Create your models here.class Question(models.Model): question_text = models.CharField(max_length=200) pub_date = models.DateTimeField('date published')def __unicode__(self): # Python 3: def __str__(self): return self.question_textdef was_published_recently(self): return self.pub_date >= timezone.now() - datetime.timedelta(days=1)was_published_recently.admin_order_field = 'pub_date' was_published_recently.boolean = Truewas_published_recently.short_description = 'Published recently?'Server running command: $ python manage.py runserver hello.djangoserver:8080 | You've forgotten one of the curly braces in /srv/www/hello/poll/templates/poll/index.html. |
monospaced font for coding with splitted underlines Are there any monospaced fonts with separate underlines, like this: with support of Cyrillic script? Consolas' underlines are not separate and Adobe Source Code Pro doesn't support Cyrillic script right now. Or maybe can I enable Adobe Source Code Pro for Latin script and Consolas for Cyrillic script in sublime text 2?I need it for coding in python in sublime text 2. | Try Monofur. It has the separate underscores, and has Cyrillic glyphs. |
How to add hours to current time in python I am able to get the current time as below:from datetime import datetimestr(datetime.now())[11:19]Result'19:43:20'Now, I am trying to add 9 hours to the above time, how can I add hours to current time in Python? | from datetime import datetime, timedeltanine_hours_from_now = datetime.now() + timedelta(hours=9)#datetime.datetime(2012, 12, 3, 23, 24, 31, 774118)And then use string formatting to get the relevant pieces:>>> '{:%H:%M:%S}'.format(nine_hours_from_now)'23:24:31'If you're only formatting the datetime then you can use:>>> format(nine_hours_from_now, '%H:%M:%S')'23:24:31'Or, as @eumiro has pointed out in comments - strftime |
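A self-contained version of the same idea, using a fixed starting time so the result is reproducible:

```python
from datetime import datetime, timedelta

start = datetime(2012, 12, 3, 14, 24, 31)      # fixed value, for reproducibility
nine_hours_later = start + timedelta(hours=9)

print('{:%H:%M:%S}'.format(nine_hours_later))  # 23:24:31
```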
How can I execute simple interactive program in spyder? I wrote typical guess-number game:import randomsecret = random.randint(1, 99) guess = 0tries = 0print("Hey you on board! I am the dreadfull pirat Robert, and I have a secret!")print("that is a magic number from 1 to 99. I give you 6 tries.")while guess != secret & tries < 6: guess = input() if guess < secret: print("You! Son of a Biscuit Eater! It is too little! YOU Scurvy dog!") elif guess > secret: print("Yo-ho-ho! It is generous, right? BUT it is still wrong! The number is too large, Savvy? Shiver me timbers!") tires = tries + 1if guess == secret: print("Enough! You guessed it! Now you know my secret and I can have a peaceful life. Take my ship, and be new captain")else: print("You are not lucky enough to live! You do not have ties. But before you walk the plank...") print("The number was ", secret) print("Sorry pal! This number became actually you death punishment. Dead men tell no tales! Yo Ho Ho!")However spyder executes it all without a stop for inputing number by a user and I have got just this output: Hey you on board! I am the dreadfull pirat Roberth, and I have a secret! that is a magic number from 1 to 99. I give you 6 tries. You are not lucky enough to live! You do not have ties. But before you walk the plank... The number was 56 Sorry pal! This number became actually you death punishment. Dead men tell no tales! Yo Ho Ho!I tried to call cmd -> spyder and execute it there (by copy-paste) but I got a lot of mistakes like: print("The number was ", secret) File "", line 1 print("The number was ", secret)However, executing this code line by (at least all lines with print) line is not a problem.How can I execute my code interactively, so that user could give a number and then game continues? | Couple of issues with your code, in the tires=tries+1 you've probably made a code typo. 
Second, guess reads in a string so you will need to convert guess into an int to do integer comparisons, use something like guess=int(guess).The reason you aren't seeing this is because your condition in the while loop does not execute as true, run guess != secret & tries < 6 in the interpreter and you'll see that the condition is false. Instead you should use and as this is a logical operator the & is a bitwise logical operator (they are not the same). while guess != secret and tries < 6: is the appropriate line of code you should substitute. |
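A minimal demonstration of why & silently breaks the loop condition, with values chosen to mirror the game's first iteration:

```python
guess, secret, tries = 0, 56, 0

# '&' binds tighter than comparison operators, so this parses as
# guess != (secret & tries) < 6 -- a chained comparison that is False here,
# which is why the while body never runs
print(guess != secret & tries < 6)    # False

# 'and' combines the two intended tests
print(guess != secret and tries < 6)  # True
```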
Comparing each element of two arrays So I have two arrays and i want the amount of elements smaller than the individual elements of the other arrays. So i have two arrays like this:array1 = np.array([4.20, 3.52, 9.44, 12.00, 10.50, 7.30, 9.44])array2 = np.array([3.8600000000000003, 5.75, 8.37, 9.969999999999999, 11.25]And then in the output i want an array where the first element in the array is the amount of elements in array1 smaller than the first element in array2. And then the second element in the output is the amount of elements in array1 smaller than the second element in array2.So I want the following output:output = [1, 2, 3, 5, 6]I hope this makes sense. I have tried to make two for loops where i appends the number as such:for i in range(len(array1)): for j in range(len(array2)): if GPA[i] < thres[j]: number1 += 1 else: number1 = 0failed.append(number1)But it just gives an output that makes no sense. | You can iterate over each element in the second array, and use that element to create a mask, then sum up all of the True values:output = []for el in array2: output.append(np.sum(array1 < el))Output:[1, 2, 3, 5, 6]Your approach is almost correct. I cleaned it up a bit. You can iterate over the elements themselves instead of their indexes.failed = []for el2 in array2: number1 = 0 for el1 in array1: if el1 < el2: number1 += 1 failed.append(number1)Output:[1, 2, 3, 5, 6]I would still use the first answer though as it should be a bit quicker. |
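If numpy is not required, the same counts can be obtained with the standard library by sorting once and binary-searching: bisect_left returns the number of sorted elements strictly less than its argument.

```python
from bisect import bisect_left

array1 = [4.20, 3.52, 9.44, 12.00, 10.50, 7.30, 9.44]
array2 = [3.86, 5.75, 8.37, 9.97, 11.25]

sorted1 = sorted(array1)
# one O(log n) lookup per element of array2 instead of a full pass
output = [bisect_left(sorted1, el) for el in array2]
print(output)  # [1, 2, 3, 5, 6]
```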
Total number of unique values for each person in a large file I have this unique list:unique_list = {'apple', 'banana', 'coconut'}I want to find how many of the elements occur exactly in my large text file. I just need the number, not the names. For example, if only 'apple' and 'banana' are found for a particular person, then it should return 2. For each person (name and family name), I need to get how many of these unique fruit does this person have. In a large file, this might be difficult. I need the fastest way to do it. Let's say I get names from the text file:people = {'cody meltin', 'larisa harris', 'harry barry'}The text file is as below:Name Fruit unitcody melton apple 3cody melton banana 5cody melton banana 7larisa harris apple 8larisa harris apple 5The output should look like this:{'cody meltin':2, 'larisa harris':1, 'harry barry':0}I do not want to use any packages, just built-ins and basic libraries. | you can leverage python's basic library - collectionsfrom collections import Counterdict(Counter(pd.Series(['cody', 'cody ', 'cody ', 'melton', 'melton', 'harry'])))Output{'cody ': 2, 'melton': 2, 'cody': 1, 'harry': 1}In my example above, I passed a pd.Series as its argument, but in your case, you can pass df['name'] to it, which is a pd.Series object. |
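Since the question asks for built-ins only, a dict of sets does the whole job; the rows below are a hypothetical stand-in for the parsed text file:

```python
unique_fruits = {'apple', 'banana', 'coconut'}
people = ['cody melton', 'larisa harris', 'harry barry']

# hypothetical parsed rows: (name, fruit)
rows = [
    ('cody melton', 'apple'),
    ('cody melton', 'banana'),
    ('cody melton', 'banana'),
    ('larisa harris', 'apple'),
    ('larisa harris', 'apple'),
]

seen = {}  # name -> set of distinct fruits from unique_fruits
for name, fruit in rows:
    if fruit in unique_fruits:
        seen.setdefault(name, set()).add(fruit)

result = {name: len(seen.get(name, set())) for name in people}
print(result)  # {'cody melton': 2, 'larisa harris': 1, 'harry barry': 0}
```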
Saving the DateTime format applied in the .csv file in Pandas I have imported a csv file in pandas that contains fields that look like 'datetime' but initially parsed as 'object'. I make the required conversion from 'datetime' to 'object' using 'df.X = pd.to_datetime(df.X)'. Now, when I try to save these changes by writing this out to a new .csv file and importing that, the format is still 'object'. Is there anyway to fix it's datatype so that on importing it I don't have to perform the conversion everytime? My dataset is quite big and conversion takes some time, which I want to save. | Date parsing can be expensive, so pandas doesn't parse dates by default. You need to specify parse_dates argument when call read_csvdf = pd.read_csv('my_file.csv', parse_dates=['date_column']) |
Convert decimal numpy array in 8 numpy arrays after binary transformation I'm reading an image with OpenCV in gray scale, so i have a numpy array with values from 0 to 255.I have to convert it to binary first.From: [dec, dec, dec, dec, dec, dec] To: [bin, bin, bin, bin, bin, bin].After that i have to build 8 numpy arrays with the bits of binary numpy array.[bin[0], bin[0], bin[0], bin[0], bin[0], bin[0]][bin[1], bin[1], bin[1], bin[1], bin[1], bin[1]][bin[2], bin[2], bin[2], bin[2], bin[2], bin[2]][bin[3], bin[3], bin[3], bin[3], bin[3], bin[3]][bin[4], bin[4], bin[4], bin[4], bin[4], bin[4]][bin[5], bin[5], bin[5], bin[5], bin[5], bin[5]]Regards!Found a solution by this way.import numpy as npdef toBin(dec): binary = [] st = 0 while(st < 8): binary.append(dec%2) dec = dec//2 st = st + 1 return binary#Original image is an np.array object with (300x300) shape.imgA = np.array([[42,0,52,234],[232,123,2,243],[24,231,245,21],[21,213,241,233]])colsA = imgB.shape[0]rowsA = imgB.shape[1]cont = 0new = []binA = []for row in imgA: for col in row: new.append(list(reversed(toBin(col)))) cont = cont + 1 if cont == colsA: cont = 0 binA.append(new) new = []npBinA = np.array(binA)print(npBinA)Output:Output in Python ShellSorry if my question was not correctly formulated.Thanks!. | I think you need:x = [10,2,4,5,7,8]# convert decimal to binaryb = [bin(i)[2:] for i in x]arr1 = []for i in b: arr1.append([i]*6)print(arr1)output[['1010', '1010', '1010', '1010', '1010', '1010'], ['10', '10', '10', '10', '10', '10'], ['100', '100', '100', '100', '100', '100'], ['101', '101', '101', '101', '101', '101'], ['111', '111', '111', '111', '111', '111'], ['1000', '1000', '1000', '1000', '1000', '1000']] |
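The bit planes themselves can be built without string round-trips: shifting and masking gives each plane directly. A pure-Python illustration of the idea, using the sample values from the question:

```python
pixels = [42, 0, 52, 234]

# plane number `bit` holds bit `bit` (0 = least significant) of every pixel
bit_planes = [[(p >> bit) & 1 for p in pixels] for bit in range(8)]

print(bit_planes[0])  # [0, 0, 0, 0] -- least-significant bits
print(bit_planes[1])  # [1, 0, 0, 1]
```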
Why am I getting infinite loop for the this code for any input value of range 1 to 10**9? Can somebody point out why am I getting infinite loop in this? I mean it shows error for maximum recursion depth reached?For value of '1' it shows correct output.def beautiful(n): new=str(n+1) new.rstrip('0') return int(new)def check(n): if n==1: temp.extend(list(range(1,10))) return else: temp.append(n) return check(beautiful(n)) n=int(input())temp=[]check(n)print(len(set(temp))) | Unless I've completely misunderstood this, it has been over-complicated in the extreme. It's as simple as this:Note: no recursion, no string manipulationdef check(n): result = [] while n > 1: if (n := n + 1) % 10 == 0: n //= 10 result.append(n) return resultprint(check(17))Output:[18, 19, 2, 3, 4, 5, 6, 7, 8, 9, 1] |
writing scraped data to csv file I am scraping out the 'h2' and 'h3' tags from some html pages and want to write them to a csv file under particular columns. How to create columns and then insert rows under them using python scrapy.My code is:def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//ul/li') f = open("fquestdata.csv","wb") for site in sites: quest = site.select('//h2').extract() ans = site.select('//h3').extract() f.write(ans)but it gives an error that says:exceptions.TypeError: must be string or buffer, not list | why don't you use custom Csv item exporter ? suggested readingor write your own code suggested reading |
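The TypeError comes from passing a list to file.write; the standard csv module accepts rows (sequences) directly. A sketch with in-memory data standing in for the scraped fields:

```python
import csv
import io

# hypothetical scraped (question, answer) pairs
rows = [('What is X?', 'X is ...'), ('What is Y?', 'Y is ...')]

buf = io.StringIO()  # stands in for open("fquestdata.csv", "w", newline='')
writer = csv.writer(buf)
writer.writerow(['question', 'answer'])  # column headers
writer.writerows(rows)                   # one row per pair

print(buf.getvalue())
```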
Can't split django views into subfolders I'm following the directions from the first answer here:Django: split views.py in several filesI created a 'views' folder in my app and moved my views.py file inside and renamed it to viewsa.py. I also created an __init__.py file in the 'views' folder.My folder structure:myproject/ myproject/ ... ... myapp/ __init__.py urls.py views/ __init__.py viewsa.pyThe first problem is in myapp/views/__init__.py I tried to do this:from viewsa import *and I get an "unresolved reference viewsa" errorI can however do this in the same file instead (in other words, it doesn't throw an error):from . import viewsaBut I can't find any way to import these sub directory views into myapp/urls.py even following the directions in the link above. What am I doing wrong? | Use relative import, in your __init__.py:from .viewsa import *(notice the dot in .viewsa)
Python 3.7.0: How do I format datetime to mm-dd-yy hh:mm:ss? How do I format datetime to mm-dd-yy hh:mm:ss? I did it using the following code:import datetimet = datetime.datetime.now()s = str(format(t.second, '02d'))m = str(format(t.minute, '02d'))h = str(format(t.hour, '02d'))d = str(format(t.day, '02d'))mon = str(format(t.month, '02d'))y = str(t.year)x = '-'z = ':'print(mon + x + d + x + y + ' ' + h + z + m + z + s)but the problem is, first of all, the year prints in YYYY instead of YY (I only want the last two digits of the year). Second of all, I'm sure there's an easier way to print datetime in mm-dd-yy hh:mm:ss instead of doing it manually like I did.Please help. Thanks. | Like this:import datetimenow = datetime.datetime.now()now.strftime('%m-%d-%y %H:%M:%S')Also see docs - https://docs.python.org/3/library/datetime.html |
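With a fixed datetime the format string is easy to verify end to end:

```python
from datetime import datetime

t = datetime(2018, 7, 4, 9, 5, 2)       # fixed value for a reproducible result
print(t.strftime('%m-%d-%y %H:%M:%S'))  # 07-04-18 09:05:02
```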
How to use properly Tensorflow Dataset with batch? I am new to Tensorflow and deep learning, and I am struggling with the Dataset class. I tried a lot of things and I can’t find a good solution.What I am tryingI have a large amount of images (500k+) to train my DNN with. This is a denoising autoencoder so I have a pair of each image. I am using the dataset class of TF to manage the data, but I think I use it really badly.Here is how I load the filenames in a dataset:class Data:def __init__(self, in_path, out_path): self.nb_images = 512 self.test_ratio = 0.2 self.batch_size = 8 # load filenames in input and outputs inputs, outputs, self.nb_images = self._load_data_pair_paths(in_path, out_path, self.nb_images) self.size_training = self.nb_images - int(self.nb_images * self.test_ratio) self.size_test = int(self.nb_images * self.test_ratio) # split arrays in training / validation test_data_in, training_data_in = self._split_test_data(inputs, self.test_ratio) test_data_out, training_data_out = self._split_test_data(outputs, self.test_ratio) # transform array to tf.data.Dataset self.train_dataset = tf.data.Dataset.from_tensor_slices((training_data_in, training_data_out)) self.test_dataset = tf.data.Dataset.from_tensor_slices((test_data_in, test_data_out))I have a function to call at each epoch that will prepare the dataset. 
It shuffles the filenames, and transforms filenames to images and batch data.def get_batched_data(self, seed, batch_size): nb_batch = int(self.size_training / batch_size) def img_to_tensor(path_in, path_out): img_string_in = tf.read_file(path_in) img_string_out = tf.read_file(path_out) im_in = tf.image.decode_jpeg(img_string_in, channels=1) im_out = tf.image.decode_jpeg(img_string_out, channels=1) return im_in, im_out t_datas = self.train_dataset.shuffle(self.size_training, seed=seed) t_datas = t_datas.map(img_to_tensor) t_datas = t_datas.batch(batch_size) return t_datasNow during the training, at each epoch we call the get_batched_data function, make an iterator, and run it for each batch, then feed the array to the optimizer operation.for epoch in range(nb_epoch): sess_iter_in = tf.Session() sess_iter_out = tf.Session() batched_train = data.get_batched_data(epoch) iterator_train = batched_train.make_one_shot_iterator() in_data, out_data = iterator_train.get_next() total_batch = int(data.size_training / batch_size) for batch in range(total_batch): print(f"{batch + 1} / {total_batch}") in_images = sess_iter_in.run(in_data).reshape((-1, 64, 64, 1)) out_images = sess_iter_out.run(out_data).reshape((-1, 64, 64, 1)) sess.run(optimizer, feed_dict={inputs: in_images, outputs: out_images})What do I need ?I need to have a pipeline that loads only the images of the current batch (otherwise it will not fit in memory) and I want to shuffle the dataset in a different way for each epoch.Questions and problemsFirst question, am I using the Dataset class in a good way? I saw very different things on the internet, for example in this blog post the dataset is used with a placeholder and fed during the learning with the datas. It seems strange because the data are all in an array, so loaded in memory. 
I don't see the point of using tf.data.dataset in this case.I found solution by using repeat(epoch) on the dataset, like this, but the shuffle will not be different for each epoch in this case.The second problem with my implementation is that I have an OutOfRangeError in some cases. With a small amount of data (512 like in the exemple) it works fine, but with a bigger amount of data, the error occurs. I thought it was because of a bad calculation of the number of batch due to bad rounding, or when the last batch has a smaller amount of data, but it happens in batch 32 out of 115... Is there any way to know the number of batch created after a batch(n) call on dataset?Sorry for this loooonng question, but I've been struggling with this for a few days. | As far as I know, Official Performance Guideline is the best teaching material to make input pipelines. I want to shuffle the dataset in a different way for each epoch.Using shuffle() and repeat(), you can get different shuffle pattern for each epochs. You can confirm it with the following codedataset = tf.data.Dataset.from_tensor_slices([1,2,3,4])dataset = dataset.shuffle(4)dataset = dataset.repeat(3)iterator = dataset.make_one_shot_iterator()x = iterator.get_next()with tf.Session() as sess: for i in range(10): print(sess.run(x))You can also use tf.contrib.data.shuffle_and_repeat as the mentioned by the above official page.There are some problems in your code outside of creating data pipelines. You confuse graph construction with graph execution. You are repeating to create data input pipeline, so there are many redundant input pipelines as many as epochs. 
You can observe the redundant pipelines by Tensorboard.You should place your graph construction code outside of loop as the following code (pseudo code)batched_train = data.get_batched_data()iterator = batched_train.make_initializable_iterator()in_data, out_data = iterator_train.get_next()for epoch in range(nb_epoch): # reset iterator's state sess.run(iterator.initializer) try: while True: in_images = sess.run(in_data).reshape((-1, 64, 64, 1)) out_images = sess.run(out_data).reshape((-1, 64, 64, 1)) sess.run(optimizer, feed_dict={inputs: in_images, outputs: out_images}) except tf.errors.OutOfRangeError: passMoreover there are some unimportant inefficient code. You loaded a list of file path with from_tensor_slices(), so the list was embedded in your graph. (See https://www.tensorflow.org/guide/datasets#consuming_numpy_arrays for detail)You would be better off using prefetch, and decreasing sess.run call by combining your graph. |
How to make my multiplication table more neat? My multiplication that I created is not formatted/organized neatly - I want it to have lines that separate the numbers.My code:n = int(input("Enter a positive interger between 1 and 9: "))for row in range(1, n+1): print(*(f"{row*col:5}" for col in range(1, n+1)))It would then display something like this:Enter a positive integer between 1 and 9: 9 1 2 3 4 5 6 7 8 9 2 4 6 8 10 12 14 16 18 3 6 9 12 15 18 21 24 27 4 8 12 16 20 24 28 32 36 5 10 15 20 25 30 35 40 45 6 12 18 24 30 36 42 48 54 7 14 21 28 35 42 49 56 63 8 16 24 32 40 48 56 64 72 9 18 27 36 45 54 63 72 81But I want something like this:Enter a positive integer between 1 and 9: 9 1 2 3 4 5 6 7 8 9------------------------------------------------------ 2 | 4 6 8 10 12 14 16 18 3 | 6 9 12 15 18 21 24 27 4 | 8 12 16 20 24 28 32 36 5 | 10 15 20 25 30 35 40 45 6 | 12 18 24 30 36 42 48 54 7 | 14 21 28 35 42 49 56 63 8 | 16 24 32 40 48 56 64 72 9 | 18 27 36 45 54 63 72 81 | Use tabulate:import pandas as pdfrom tabulate import tabulaten = int(input("Enter a positive interger between 1 and 9: "))data = []for row in range(1, n+1): tmp = [row*col for col in range(1, n+1)] data.append(tmp)df = pd.DataFrame(data, columns=[*(range(1, n+1))])df.loc[0] = df.loc[0]+1print(tabulate(df, headers= "firstrow", floatfmt=".0f", showindex=False))Printout:Enter a positive interger between 1 and 9: 9 1 2 3 4 5 6 7 8 9--- --- --- --- --- --- --- --- --- 2 4 6 8 10 12 14 16 18 3 6 9 12 15 18 21 24 27 4 8 12 16 20 24 28 32 36 5 10 15 20 25 30 35 40 45 6 12 18 24 30 36 42 48 54 7 14 21 28 35 42 49 56 63 8 16 24 32 40 48 56 64 72 9 18 27 36 45 54 63 72 81 |
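If pulling in pandas and tabulate feels heavy, plain f-strings can draw the separators shown in the question. A sketch; n stands in for the user's input:

```python
n = 4

lines = ["    " + "".join(f"{col:5}" for col in range(1, n + 1))]  # header row 1..n
lines.append("-" * len(lines[0]))                                  # horizontal rule
for row in range(2, n + 1):
    # row label, vertical bar, then the products of that row
    lines.append(f"{row:3} |" + "".join(f"{row * col:5}" for col in range(1, n + 1)))

print("\n".join(lines))
```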
Read multiple web sockets at the same time and plot data in Python I'm fairly new to scripting in general and I'm pretty sure this is trivial but i can't seem to find a solution. I want to use the python websockets library to listen to multiple websockets in order to get ticker information about crypto prices. How to get real time bid / ask / price from GDAX websocket feed provides a good start for obtaining the feed for one currency. The problem is that the run_forever() does not allow me to show two feeds at the same time as i have no way to interrupt it. | The GDAX websocket allows you to subscribe to multiple pairs.As seen below I subscribe to both the BTC-USD and ETH-USD pairs. I assume you can subscribe to unlimited pairs.import websocketfrom json import dumps, loadstry: import threadexcept ImportError: import _thread as threaddef on_message(ws, message): parsed_msg = loads(message) print(parsed_msg["product_id"], parsed_msg["price"])def on_open(ws): def run(*args): params = { "type": "subscribe", "channels": [{"name": "ticker", "product_ids": ["BTC-USD", "ETH-USD"]}] } ws.send(dumps(params)) thread.start_new_thread(run, ())if __name__ == "__main__": websocket.enableTrace(True) ws = websocket.WebSocketApp("wss://ws-feed.gdax.com", on_open=on_open, on_message = on_message) ws.run_forever()If for some reason GDAX did not allow this, you could open multiple web sockets in multiple threads, but in this case its not necessary. |
Spark python most repeated value for each key I have an RDD with format: (date, city). And the data inside is something like this:day1, city1day1, city2day1, city2day2, city1[...]I need to obtain the most "repeated" city by each day, ie I need the following result:day1, city2day2, city1day3, ...Can you help me how to do it in Python?I tried to do it like a simple wordcount: rdd.map(lambda x: (x[0], [1]. \map(lambda y:y,1). \reduceByKey(lambda a,b: a+b). \takeOrdered(1, lambda s:-1*s[1]))).collect()But of course it doesn't work... Thanks in advance. | It is just a modified wordcount:rdd.map(lambda x: (x, 1)) \ .reduceByKey(lambda x, y: x + y) \ .map(lambda ((day, city), count): (day, (city, count))) \ .reduceByKey(lambda x, y: max(x, y, key=lambda x: x[1]))Note that the tuple-parameter unpacking in the second map (lambda ((day, city), count): ...) works only in Python 2; in Python 3, take a single argument and index into it, e.g. lambda t: (t[0][0], (t[0][1], t[1])).
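The same group-then-max logic can be sanity-checked without a Spark cluster using collections.Counter, with hypothetical sample data mirroring the question:

```python
from collections import Counter

records = [('day1', 'city1'), ('day1', 'city2'),
           ('day1', 'city2'), ('day2', 'city1')]

counts = Counter(records)  # (day, city) -> number of occurrences
best = {}                  # day -> (city, count) with the highest count so far
for (day, city), n in counts.items():
    if day not in best or n > best[day][1]:
        best[day] = (city, n)

result = {day: city for day, (city, _) in best.items()}
print(result)  # {'day1': 'city2', 'day2': 'city1'}
```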
Extracting html-as-text between break tags using regex Have a series of elements in a list which are extracted from html -- each with break tags (<br>...</br>). I used this code below with one element, and will apply to a loop, but it throws an error SyntaxError: unexpected EOF while parsing on the single element. import refirstElementText = '<td align="center" bgcolor="#e0e0e0" nowrap="" valign="middle"><b>Season</b></td>'re.search(r'<br>.(.*?)</br>', firstElementText ).group(1)Looking to return Season from the search. | It's because of your HTML:firstElementText = '<td align="center" bgcolor="#e0e0e0" nowrap="" valign="middle"><b>Season</b></td>'Has no <br>. Change it to firstElementText = '<td align="center" bgcolor="#e0e0e0" nowrap="" valign="middle"><br>Season</br></td>'Works for me fine. And, your RegEx should look like that:re.search(r'<br>(.*?)</br>', firstElementText ).group(1)You see the "missing" dot between > and (?That will ignore the first character that is in the group.Following works for me fine:>>> import re>>> firstElementText = '<td align="center" bgcolor="#e0e0e0" nowrap="" valign="middle"><br>Season</br></td>'>>> re.search(r'<br>(.*?)</br>', firstElementText ).group(1)'Season' >>> Python 3.4.2.BTW there is no <br></br> out there.It should be <br /> because it breaks a line, and doesn't affect it in any other ways...And as you can read in the comments: https://stackoverflow.com/a/1732454/2588818
equation Solution Set in python i have a problem that i should make a programme to solve this equation a2 + b2 + c2 = dand sort the solution in Lexicographical order the order comes without make it and if there is no solution print -1 so i write my Code and i use three nested loops d=int(raw_input())p=0for a in range(d+1): a for b in range(d+1): b for c in range(d+1): c if a**2+b**2+c**2==d: p=1 print a,b,c breakif p==0: print -1my problem in Time Limit exceeded The input range is 10^5 simple input and output to be clear 6output1 1 21 2 12 1 1any ideas To Avoid that Time Limit? | notice that your break statement breaks only out of the most inner for loop.you may want to use p for that and make checks at the end of each for loop, i.e (it wasn't checked...):d=int(raw_input())p=0for a in range(d+1): a for b in range(d+1): b for c in range(d+1): c if a**2+b**2+c**2==d: p=1 lst = [a,b,c] lst.sort() for num in lst: print num, break if p==1: break if p==1: breakif p==0: print -1 |
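Breaking out of the nested loops fixes the output, but the triple loop is O(d³) and cannot finish for d near 10**5. Fixing a and b and testing whether the remainder is a perfect square drops the work to O(d) and produces the triples already in lexicographic order. A sketch (assumes Python 3.8+ for math.isqrt):

```python
import math

def solutions(d):
    out = []
    for a in range(math.isqrt(d) + 1):
        for b in range(math.isqrt(d - a * a) + 1):
            rest = d - a * a - b * b
            c = math.isqrt(rest)
            if c * c == rest:       # rest is a perfect square -> valid c
                out.append((a, b, c))
    return out  # the caller prints -1 if this comes back empty

print(solutions(6))  # [(1, 1, 2), (1, 2, 1), (2, 1, 1)]
```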
python regular expression "12x4x67" match only the second group of numbers all i am a little stuck on this regular expression (Python beginner) I have a string here "12x4x67" and I need to split the numbers up into variables, for example: length, width and height. I have successfully gotten the first group. Now I need to match the second group. Here's a link to the regex tester I am using with the example I made.Here is my regex:\d+It only matches 340 in 340x9x20. | No regular expression needed:length, width, height = "12x4x67".split('x')Or if you prefer dealing with integers:length, width, height = [int(s) for s in "12x4x67".split('x')] |
shortest distance from plane to origin using a plane equation Suppose I have a plane equation ax+by+cz=d, how can I go about finding the shortest distance from the plane to the origin?I am going in reverse of this post. In this post, they start out with a point P0 and the normal. In my case, I only have the plane equationDistance from origin to plane (shortest)Here is what I have so far. #calculate the distance from plane to origin dist=math.sqrt(new_a**2+new_b**2+new_c**2) x_dist=(a*d)/dist y_dist=(b*d)/dist z_dist=(c*d)/dist | The normal of your plane is [a, b, c]. For the plane ax + by + cz = d, the shortest distance from the origin is |d| divided by the length of that normal: distance = abs(d) / sqrt(a**2 + b**2 + c**2). The closest point on the plane itself is the normal scaled by d / (a**2 + b**2 + c**2). Note that your code divides by dist only once, which gives a point whose length is |d| rather than |d| / sqrt(a**2 + b**2 + c**2).
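A minimal sketch of that computation (function and variable names are my own):

```python
import math

def plane_origin_distance(a, b, c, d):
    """Shortest distance from the plane a*x + b*y + c*z = d to the origin."""
    norm = math.sqrt(a * a + b * b + c * c)
    return abs(d) / norm

def closest_point(a, b, c, d):
    """Point on the plane nearest the origin: the normal scaled by d/|n|^2."""
    t = d / (a * a + b * b + c * c)
    return (a * t, b * t, c * t)

print(plane_origin_distance(0, 0, 2, 6))  # plane 2z = 6, i.e. z = 3 -> 3.0
print(closest_point(0, 0, 2, 6))          # -> (0.0, 0.0, 3.0)
```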
How can I add to the initial definition of a python class inheriting from another class? I'm trying to define self.data inside a class inheriting from a classclass Object(): def __init__(self): self.data="1234"class New_Object(Object): # Code changing self.data hereBut I ran into an issue.class Object(): def __init__(self): self.data="1234"So I have the beginning class here, which is imported from elsewhere, and let's say that the class is a universal one so I can't modify the original at all.In the original, the instance is referred to as "self" inside the class, and it is defined as self inside the definition __init__.class New_Object(Object): # Code changing self.data hereSo if I wanted to inherit from the class Object, but define self.data inside New_Object, I thought I would have to define __init__ in New_Object, but this overrides the __init__ from ObjectIs there any way I could do this without copypasting the __init__ from Object? | Use super to call the original implementation, then add or change attributes afterwards:class New_Object(Object): def __init__(self): super(New_Object, self).__init__() self.info = 'whatever'In Python 3 the call can simply be written super().__init__().
How do I select an ndb property with a string? With a data model like thisclass M(ndb.Model): p1 = ndb.StringProperty() p2 = ndb.StringProperty() p3 = ndb.StringProperty()I'm trying to set the property values with a loop something like thislist = ["a","b","c", "d"]newM = M( id = "1234" )for p in ['p1','p2','p3']: newM[p] = choice(list)newM.put()But I get an error ERROR 'M' object does not support item assignmentIs there a way to do this without explicitly defining each property? | Python has setattr, which will do what you want. Inside your loop:setattr(newM, p, choice(list))
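The same pattern, sketched with a plain class since ndb is App Engine-specific (the model and values below are stand-ins, not the original code):

```python
import random

class M:
    def __init__(self):
        self.p1 = None
        self.p2 = None
        self.p3 = None

choices = ["a", "b", "c", "d"]
m = M()
for name in ("p1", "p2", "p3"):
    # setattr(obj, "attr", value) is the dynamic form of obj.attr = value
    setattr(m, name, random.choice(choices))

print(m.p1, m.p2, m.p3)
```

As a side note, naming a variable list (as in the question) shadows the built-in list type, so a different name is safer.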
python performance tips for looping calculation Python 2.7, Windows 7.I'm looking for tips on how to make a calculation heavy script run faster. First an idea of what I'm doing:Starting with a given color, I want to generate a list of 30 more colors (rgb values) that are maximally distinctive to the human eye from one another, and with the front of the list more distinctive than the end. Currently, I estimate that the script will take ~48 hours to complete. I could let it run over the weekend, but I figured I would take the opportunity to learn something about python performance. An overview what the code does:gen_colours() contains a loop that runs 30 times. Each time 4 processes run multi(n, l, g) which contains the big loop iterating over each r, g, and b value between 0 and 255 (r value is split between processes so it loops 64 times). The inner most loop contains another loop that checks the rgb value against rgb values already found by calling compute_dist([r, g, b], c). Anyways, without completely restructuring my code, things to help speed it up would be cool. 
Also, running all four cpus at max for 48 hours...issues there?Code:from math import sqrt, pow, atan2, atan, sin, cos, exp, radians, degreesfrom fractions import Fractionimport timeimport multiprocessingdef to_xyz(rgb): r = rgb[0] / 255.0 g = rgb[1] / 255.0 b = rgb[2] / 255.0 f = Fraction(12, 5) if r > 0.04045: r = ((r + 0.055) / 1.055) ** f else: r /= 12.92 if g > 0.04045: g = ((g + 0.055) / 1.055) ** f else: g /= 12.92 if b > 0.04045: b = ((b + 0.055) / 1.055) ** f else: b /= 12.92 r *= 100 g *= 100 b *= 100 # Observer = 2 degrees, Illuminant = D65 x = r * 0.4124 + g * 0.3576 + b * 0.1805 y = r * 0.2126 + g * 0.7152 + b * 0.0722 z = r * 0.0193 + g * 0.1192 + b * 0.9505 return [x, y, z]def to_lab(xyz): x = xyz[0] y = xyz[1] z = xyz[2] # Observer= 2deg, Illuminant= D65 x /= 95.047 y /= 100.0 z /= 108.883 f = Fraction(1, 3) if x > 0.008856: x **= f else: x = 7.787 * x + 0.13793103448 if y > 0.008856: y **= f else: y = 7.787 * y + 0.13793103448 if z > 0.008856: z **= f else: z = 7.787 * z + 0.13793103448 L = 116 * y - 16 a = 500 * (x - y) b = 200 * (y - z) return [L, a, b]def compute_dist(rgb1, rgb2): """ Compute the apparent difference in colours using CIEDE2000 standards """ xyz1 = to_xyz(rgb1) xyz2 = to_xyz(rgb2) lab1 = to_lab(xyz1) lab2 = to_lab(xyz2) a1 = lab1[1] a2 = lab2[1] b1 = lab1[2] b2 = lab2[2] L1 = lab1[0] L2 = lab2[0] c1 = sqrt(a1 * a1 + b1 * b1) c2 = sqrt(a2 * a2 + b2 * b2) c = (c1 + c2) / 2 crs = c ** 7 x = 0.5 - 0.5 * sqrt(crs / (crs + 6103515625)) temp = (1 + x) * a1 c1 = sqrt(temp * temp + b1 * b1) h1 = hue(temp, b1) temp = (1 + x) * a2 c2 = sqrt(temp * temp + b2 * b2) h2 = hue(temp, b2) dL = L2 - L1 dc = c2 - c1 if c1 * c2 == 0: dh = 0 else: temp = round(h2 - h1, 12) if abs(temp) <= 180: dh = h2 - h1 else: if temp > 180: dh = h2 - h1 - 360 else: dh = h2 - h1 + 360 dh = sqrt(c1 * c2) * sin(radians(dh / 2)) dh += dh lav = (L1 + L2) / 2 cav = (c1 + c2) / 2 if c1 * c2 == 0: htot = h1 + h2 else: temp = abs(round(h1 - h2, 12)) if temp > 180: if h2 + h1 
< 360: htot = h1 + h2 + 360 else: htot = h1 + h2 - 360 else: htot = h1 + h2 htot /= 2 T = 1 - 0.17 * cos(radians(htot - 30)) + 0.24 * cos(radians(2 * htot)) + 0.32 * cos(radians(3 * htot + 6)) - 0.20 * cos(radians(4 * htot - 63)) htotdtme = (htot / 25) - 11 xPH = 30 * exp(-htotdtme * htotdtme) cavrs = cav ** 7 scocp = sqrt(cavrs / (cavrs + 6103515625)) xRC = scocp + scocp lavmf = lav - 50 lavmfs = lavmf * lavmf SL = 1 + 0.015 * lavmfs / sqrt(20 + lavmfs) SC = 1 + 0.045 * cav SH = 1 + 0.015 * cav * T RT = -sin(radians(xPH + xPH)) * xRC dL /= SL dc /= SC dh /= SH dE = sqrt(dL * dL + dc * dc + dh * dh + RT * dc * dh) return dEdef hue(a, b): # Function returns CIELAB-Hue value c = 0 if a >= 0 and b == 0: return 0 if a < 0 and b == 0: return 180 if a == 0 and b > 0: return 90 if a == 0 and b < 0: return 270 if a > 0 and b > 0: c = 0 elif a < 0: c = 180 elif b < 0: c = 360 return degrees(atan(b / a)) + cdef multi(p, l, q): f = 0 n = [] s = p * 64 e = (p + 1) * 64 for r in xrange(s, e): for g in xrange(256): for b in xrange(256): s = 1000 # smallest dist for c in l: # compare to existing colours d = compute_dist([r, g, b], c) if d < s: s = d if s > f: n = [r, g, b] f = s q.put(f) q.put(n)def gen_colours(start_col=[68, 68, 68]): out = open('colour_output.txt', 'w') l = [start_col] if __name__ == '__main__': q0 = multiprocessing.Queue() q1 = multiprocessing.Queue() q2 = multiprocessing.Queue() q3 = multiprocessing.Queue() for h in xrange(30): # create 30 more colours p0 = multiprocessing.Process(target=multi, args=[0, l, q0]) p1 = multiprocessing.Process(target=multi, args=[1, l, q1]) p2 = multiprocessing.Process(target=multi, args=[2, l, q2]) p3 = multiprocessing.Process(target=multi, args=[3, l, q3]) p0.start() p1.start() p2.start() p3.start() p0.join() p1.join() p2.join() p3.join() d0 = q0.get() d1 = q1.get() d2 = q2.get() d3 = q3.get() c0 = q0.get() c1 = q1.get() c2 = q2.get() c3 = q3.get() d = [d0, d1, d2, d3] c = [c0, c1, c2, c3] m = max(d) i = d.index(m) n = c[i] 
l.append(n) out.write("[" + str(n[0]) + ", " + str(n[1]) + ", " + str(n[2]) + "]\n") print "\nnew colour added: " + str(l) out.close() print "Done"gen_colours()Any tips?Edit:An obvious improvement is the fact that I'm calculating Lab values on found rgb colors every time. I added a list to store Lab values for these so that it doesn't need to do this each loop. This reduced time by about 1/4. Not a Python performance improvement that I'm looking for however. | I'm sure a color that is R:100 G:100 B:101 would not be a "maximally distinctive" solution if color R:100 G:100 B:100 is chosen already. One quick improvement you could make is to omit checking colors which are similar (ie. R and G values which are the same that have a B value within a given range). |
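Building on that idea, one concrete pruning (my own sketch, not part of the answer above): inside the search loop, the running minimum s over already-chosen colours can only decrease, so as soon as it drops to or below the best score f found so far, the candidate colour cannot win and the remaining comparisons can be skipped. Illustrated here with an abstract distance function rather than the full CIEDE2000 code:

```python
def best_candidate(candidates, chosen, dist):
    """Return the candidate whose nearest chosen colour is farthest away,
    abandoning a candidate as soon as it can no longer beat the best so far."""
    best_score = -1.0
    best = None
    for cand in candidates:
        s = float("inf")              # distance to nearest already-chosen colour
        for c in chosen:
            d = dist(cand, c)
            if d < s:
                s = d
                if s <= best_score:   # cannot beat the current best: prune
                    break
        if s > best_score:
            best_score = s
            best = cand
    return best

# toy example with 1-D "colours" and absolute-difference distance
print(best_candidate([1, 5, 9], [0, 10], lambda a, b: abs(a - b)))  # 5
```

With many chosen colours this skips most of the inner loop for hopeless candidates, without changing the result.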
Python + WSGI - Can't import my own modules from a directory? I'm new to Python and I have looked around on how to import my custom modules from a directory/ sub directories. Such as this and this.This is my structure,index.py__init__.pymodules/ hello.py HelloWorld.py moduletest.pyindex.py,# IMPORTS MODULESimport helloimport HelloWorldimport moduletest# This is our application object. It could have any name,# except when using mod_wsgi where it must be "application"def application(environ, start_response): # build the response body possibly using the environ dictionary response_body = 'The request method was %s' % environ['REQUEST_METHOD'] # HTTP response code and message status = '200 OK' # These are HTTP headers expected by the client. # They must be wrapped as a list of tupled pairs: # [(Header name, Header value)]. response_headers = [('Content-Type', 'text/plain'), ('Content-Length', str(len(response_body)))] # Send them to the server using the supplied function start_response(status, response_headers) # Return the response body. # Notice it is wrapped in a list although it could be any iterable. return [response_body]__init__.py, from modules import moduletestfrom modules import hellofrom modules import HelloWorldmodules/hello.py,def hello(): return 'Hello World from hello.py!'modules/HelloWorld.py,# define a classclass HelloWorld: def __init__(self): self.message = 'Hello World from HelloWorld.py!' def sayHello(self): return self.messagemodules/moduletest.py,# Define some variables:numberone = 1ageofqueen = 78# define some functionsdef printhello(): print "hello"def timesfour(input): print input * 4# define a classclass Piano: def __init__(self): self.type = raw_input("What type of piano? ") self.height = raw_input("What height (in feet)? ") self.price = raw_input("How much did it cost? ") self.age = raw_input("How old is it (in years)?
") def printdetails(self): print "This piano is a/an " + self.height + " foot", print self.type, "piano, " + self.age, "years old and costing\ " + self.price + " dollars."But through the Apache WSGI, I get this error, [wsgi:error] [pid 5840:tid 828] [client 127.0.0.1:54621] import hello [wsgi:error] [pid 5840:tid 828] [client 127.0.0.1:54621] ImportError: No module named helloAny idea what have I done wrong?EDIT:index.py__init__.pymodules/ hello.py HelloWorld.py moduletest.py User/ Users.py | You should have an __init__.py file in the modules/ directory to tell Python that modules is a package. It can be an empty file.If you like, you can put this into that __init__.pyto simplify importing your package's modules:__all__ = ['hello', 'HelloWorld', 'moduletest']From Importing * From a Package Now what happens when the user writes from sound.effects import *? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package, and imports them all. This could take a long time and importing sub-modules might have unwanted side-effects that should only happen when the sub-module is explicitly imported. The only solution is for the package author to provide an explicit index of the package. The import statement uses the following convention: if a package’s __init__.py code defines a list named __all__, it is taken to be the list of module names that should be imported when from package import * is encountered. It is up to the package author to keep this list up-to-date when a new version of the package is released. Package authors may also decide not to support it, if they don’t see a use for importing * from their package. |
How to create colour map from 3 arrays in python I'm trying to create a colour plot in python of two arrays t1 and t2 with the colours being set by a third one v, but I can't get the colour bar to be in terms of the v array, it is instead in terms of t1. This is my code: import matplotlib.pyplot as plt import numpy as np t1 = [75, 76, 77, 78] t2 = [75, 76, 77, 78] v = [0.5, 0.5, 0.8, 0.8] image_data = np.column_stack([t1, t2, v]) plt.imshow(image_data) plt.colorbar() plt.show()It produces this figure:Any help would be much appreciated. | You cannot use imshow to plot x and y coordinates with the colour as a third value. imshow displays a matrix as an image: an X*Y grid where every value represents a colour.What you probably want is scatter.E.g. you can try:import matplotlib.pyplot as pltt1 = [0,1,2,3]t2 = [0, 10, 20, 30]v = [0.5, 0.5, 0.8, 0.8]plt.scatter(t1, t2, c=v, cmap='Greens')plt.colorbar()plt.show()You can check which colormap is most suitable for you.
Generate all possible 2 and 3 string combinations from a list in python I have a following list:mylist = ['car', 'truck', 'ship']Currently I am able to only get all the possible combinations of 2 strings using this:from itertools import combinationsprint(list(combinations(mylist,2)))which gives me:[('car', 'truck'), ('car', 'ship'), ('truck', 'ship')]However, one combination is actually all the 3 strings. Thus I would like my outcome to be:[('car', 'truck'), ('car', 'ship'), ('truck', 'ship'), ('car', 'truck', 'ship')] | This is an adjusted case of the powerset. Typically the code for the powerset in Python looks like this:from itertools import chain, combinationsdef powerset(it): yield from chain.from_iterable(combinations(it, r) for r in range(len(it)+1))You can change it, though, to only accept results within a certain range. In your case, that's sizes 2 through 3; remember that range excludes its upper bound, so it has to be range(2, 4):from itertools import chain, combinationsdef adjusted_powerset(it): yield from chain.from_iterable(combinations(it, r) for r in range(2, 4))If you need it to become more general, play with the range parameters. A nice template would be to create a powerset helper:from itertools import chain, combinationsdef powerset_helper(it, start, stop): yield from chain.from_iterable(combinations(it, r) for r in range(start, stop+1))def powerset(it): yield from powerset_helper(it, 0, len(it))def adjusted_powerset(it): yield from powerset_helper(it, 2, 3)
Pointing of variables in Python Suppose I create a function A that is used as an argument in another function, so B(A). Function A points to an entry in a (SciPy) array C. If I change the array C from within B, then the value in the array will be changed globally, so that also A notices that change. Here is an example:def pointing_test2(inputs, fs): local = inputs print (local is initial) print fs(0) local[0] += 1.0 print fs(0)initial = sc.array([1.0])func = lambda x: initial[0]pointing_test2(initial, func)------- Output -------True1.02.0[ 2.]One can avoid that the array C is changed globally by making a copy locally within B, like so:[...] local = inputs.copy()[...]------- Output -------False1.01.0[ 1.]What I want to achieve is midway! I would like to have the function A to point to the copy of the array locally, so within B! In that fashion, the output I desire would be the following:------- Output -------False1.02.0[ 1.]How can I achieve this? | The most straightforward way is to write A so that B passes it the array to work on (hence: the locally modified one). Is there any reason you don't want to do that?def func(arr): return arr[0](Incidentally your func ignores its argument). |
Looping Two Values from a JSON Dictionary in Python3 I know this should be simple, but in the dozens of questions I've read, there is no answer for this. I did a lot of reading about comprehensions here, but it's going a bit over my head on this.I am trying to create a list of names and IDs from a Twitter API response. Right now, it only gives me the name and ID for the first result in the dictionary. I want a loop that gives me a list of all names and IDs.You can see an example of the JSON response here at the Twitter API docs.My code:twitterlists = twitter.show_owned_lists()print("List name: "+json.dumps(twitterlists['lists'][0]['name'], skipkeys = True))print("List ID: "+json.dumps(twitterlists['lists'][0]['id'], skipkeys = True))Current response:List name: "listname"List ID: 12345Desired response:1. List name: listname. List ID: 123.2. List name: listname2. List ID: 124.3. List name: listname3. List ID: 125.I also believe I can combine the two lines getting 'name' and 'id' into one, but I got an error along the way and was able to get part way there with the two lines. Happy to be more efficient if you have any suggestions! Thanks! | Accessing a list with [0] just gives you the first element. You want to iterate over the whole list instead:twitterlists = twitter.show_owned_lists()['lists']for i in twitterlists: print(f"List name: {i['name']}. List ID: {i['id']}.")
How to make a repeating generator in Python How do you make a repeating generator, like xrange, in Python? For instance, if I do:>>> m = xrange(5)>>> print list(m)>>> print list(m)I get the same result both times, the numbers 0..4. However, if I try the same with yield:>>> def myxrange(n):... i = 0... while i < n:... yield i... i += 1>>> m = myxrange(5)>>> print list(m)>>> print list(m)The second time I try to iterate over m, I get nothing back, an empty list.Is there a simple way to create a repeating generator like xrange with yield, or generator comprehensions? I found a workaround on a Python tracker issue, which uses a decorator to transform a generator into an iterator. This restarts every time you start using it, even if you didn't use all the values last time through, just like xrange. I also came up with my own decorator, based on the same idea, which actually returns a generator, but one which can restart after throwing a StopIteration exception:@decorator.decoratordef eternal(genfunc, *args, **kwargs): class _iterable: iter = None def __iter__(self): return self def next(self, *nargs, **nkwargs): self.iter = self.iter or genfunc(*args, **kwargs) try: return self.iter.next(*nargs, **nkwargs) except StopIteration: self.iter = None raise return _iterable()Is there a better way to solve the problem, using only yield and/or generator comprehensions? Or something built into Python? So I don't need to roll my own classes and decorators?UpdateThe comment by u0b34a0f6ae nailed the source of my misunderstanding: xrange(5) does not return an iterator, it creates an xrange object. xrange objects can be iterated, just like dictionaries, more than once.My "eternal" function was barking up the wrong tree entirely, by acting like an iterator/generator (__iter__ returns self) rather than like a collection/xrange (__iter__ returns a new iterator). | Not directly.
Part of the flexibility that allows generators to be used for implementing co-routines, resource management, etc, is that they are always one-shot. Once run, a generator cannot be re-run. You would have to create a new generator object.However, you can create your own class which overrides __iter__(). It will act like a reusable generator:def multigen(gen_func): class _multigen(object): def __init__(self, *args, **kwargs): self.__args = args self.__kwargs = kwargs def __iter__(self): return gen_func(*self.__args, **self.__kwargs) return _multigen@multigendef myxrange(n): i = 0 while i < n: yield i i += 1m = myxrange(5)print list(m)print list(m) |
I am facing an error while I try to load a file in python 3 f = open(path,'r',encoding='utf8')This is the code I'm trying to run but it outputs 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte as the error. What might be the reason for this? | The byte 0x80 at position 0 means the file is simply not valid UTF-8: it was most likely written in a different encoding (for example cp1252 or latin-1), or it is not a text file at all (an unexpected byte at position 0 often indicates a binary format). Open it with the file's real encoding, or, if you only need to get past undecodable bytes, pass errors='replace' (or errors='ignore') to open().
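A sketch reproducing the error and showing both workarounds (the sample bytes and filename are invented for illustration):

```python
import os
import tempfile

# create a file whose first byte (0x80) is not valid UTF-8
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "wb") as f:
    f.write(b"\x80abc")

# option 1: decode with a single-byte encoding that accepts every byte value
with open(path, "r", encoding="latin-1") as f:
    text = f.read()
print(repr(text))   # '\x80abc'

# option 2: keep utf-8 but substitute undecodable bytes with U+FFFD
with open(path, "r", encoding="utf-8", errors="replace") as f:
    text2 = f.read()
print(repr(text2))  # '\ufffdabc'
```

Option 1 never fails but may produce mojibake if the guess is wrong; option 2 preserves the decodable parts and marks the rest.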
Python Image using Numpy I am trying to show 2 images using PYQT Numpy format. But the 2nd image comes after 1 image closes. I want to show both the image simultaneously. ImageAddress = 'D:\\Boot.PNG'ImageItself = Image.open(ImageAddress)ImageNumpyFormat = np.asarray(ImageItself)plt.imshow(ImageNumpyFormat)plt.title('Decision Tree')plt.axis('off')plt.draw()plt.pause(20)plt.close()ImageAddress = 'D:\\Internwt.PNG'ImageItself = Image.open(ImageAddress)ImageNumpyFormat = np.asarray(ImageItself)plt.imshow(ImageNumpyFormat)plt.title('Decision Tree')plt.axis('off')plt.draw()plt.pause(20)plt.close() | I assume plt comes from matplotlib. Instead of your first plt.close(), use plt.figure(2) to open a second figure. Also, you probably don't need plt.draw() at all, instead, end the program with plt.show() so it waits until you close the plots. |
Better code prediction in PyCharm I'm trying various IDE for development on Python. Basic requirement better code-prediction and git association. I really liked PyCharm, but code-prediction is somewhat better in PyDev. Here is a comparison of code prediction side-by-side (Left - PyDev, Right - PyCharm)Here is What I've tried so far -1) Restart, Re-install2) Enabled Collect run-time types information for code insight(UPDATE)Code is being predicted, when I'm using Python Console in PyCharm, but not in editor. | You will be surprised but if you are using windows try VS community edition with Python plugin. intellisense just works. |
parsing the Json for the Optional fields I have the JSON in the below format:{ "type":"MetaModel", "attributes":[ { "name":"Code", "regexp":"^[A-Z]{3}$" }, { "name":"DefaultDescription", }, ]}The attributes["regexp"] is optional. When I try to access the field like attribute["regexp"] , I'm getting the error asKeyError: 'regexp'The assumption is, if the field is not there then it will be considered as NULL.How can I access the optional fields? | Use get, a method of dictionaries that will return None if a key doesn't exist:foo = json.loads(the_json_string)value = foo.get('regexp')if value: # do something with the regular expressionYou can also just catch the exception:value = Nonetry: value = foo['regexp']except KeyError: # do something, as the value is missing passif value: # do something with the regular expression |
looping through the folder I need to solve a trivial task, running a sequence of commands in a loop:1) take an input .dcd file from the folder2) perform some operations with the file3) save the results in a listMy code (which is not working!) looks like# make LIST OF THE input DCD FILES path="./inputs/"dirs=os.listdir(path)for traj in dirs: trajectory = command(traj)It correctly determines the names of the inputs but reports that every file is empty.Alternatively, I've used the script below to loop through the files using a digit variable assigned to the name of each file (which is not good in my current task because I need to keep the name of each input file, avoiding digits!)# number of input filesn=3for i in xrange (1,n+1): trajectory = command('./inputs/file_%d.dcd' %(i))In the last case all dcd files were correctly loaded (in contrast to the first example)! So the question is: what should I fix in the first example? | os.listdir() gives you only the base filenames relative to the directory. No path is included.Prefix your filenames with the path:for traj in dirs: trajectory = command(os.path.join(path, traj))
Google Drive Resumable Upload Failing I am trying to upload a file using the Google Drive resumable upload api[https://developers.google.com/drive/web/manage-uploads#resumable] and i'm always getting a 400 status code with Invalid Upload request in the step 3 of the process.For the step 1(Starting a resumable session), I get the session uri and it's when I upload the contents I'm getting a bad request error.REQUEST HEADERS:{ "X-Upload-Content-Length": 249159, "X-Upload-Content-Type": "application/pdf", "content-type": "application/json", "Authorization": "Bearer ya..."}REQUEST BODY:{ "parents": [ { "id": "0B..." } ], "title": "New file.pdf"}And i got the response back with the session uri as https://www.googleapis.com/upload/drive/v2/files?uploadType=resumable&upload_id=AEnB2UpNCcQgsbuqKer6z25jVRX4ijweQkpAJ1aQ1nEQPGi_ODtvob9vSwQk2oyjzyUNQs-b2HenMst92HQo6WMKmQyO1lgTwwFor the step 3(Uploading the file)Here are the request headers.{ "Content-Length": 249159, "Accept-Encoding": "gzip, deflate", "Accept": "*/*", "User-Agent": "python-requests/2.3.0 CPython/2.7.8 Darwin/14.0.0", "Content-Range": "bytes 0-249159/249159", "Content-Type": "application/pdf", "Authorization": "Bearer ya..."}And the request body contains the data who's length is again the same. The data is a chunk obtained from another url using python requests module. The following is the response for the above request from google{ "error": { "errors": [ { "domain": "global", "reason": "badRequest", "message": "Invalid Upload Request" } ], "code": 400, "message": "Invalid Upload Request" }}It works with other clouds(the same method) except for Google Drive. What am I missing here? | I finally nailed it myself. It was a bug in the code where the content-range for the last chunk was off by 1 byte. |
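To make the off-by-one concrete: Content-Range uses inclusive byte indices, so a chunk of length N starting at offset 0 ends at byte N-1, not N. A small helper (my own sketch, not from the original post) that builds the header correctly:

```python
def content_range(offset, chunk_len, total):
    """Content-Range value for a chunk starting at `offset`.
    The end index is inclusive, hence the -1."""
    end = offset + chunk_len - 1
    return "bytes %d-%d/%d" % (offset, end, total)

total = 249159
print(content_range(0, total, total))
# -> bytes 0-249158/249159   (valid)
# the broken header from the question was "bytes 0-249159/249159",
# whose end index points one past the last byte of the file
```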
Python - Prime finder/calculator error Okay, I know my code is extremely inefficient and longer than it be, but I am very new to python and only know a few basic functions. restart = Truenumtocheck = 2 #number to be tested for being a primewhile 0==0: #forever loop if restart == True: testnum = 2 #used to test the 'numtocheck' varible calculated = numtocheck/testnum if (calculated).is_integer(): numtocheck = numtocheck+1 else: testnum = testnum+1 if numtocheck == testnum: print (numtocheck) #Should be a prime numtocheck = numtocheck+1 restart = True else: restart = FalseThe output is almost perfectly full of primes, however there are several which appear such as '35' or '95' which keep appearing every time I run the code. I have no idea what is happening, and so any help would be much appreciated :) | What you want to do is probably the following:restart = Truenumtocheck = 2 #number to be tested for being a primewhile 0==0: #forever loop if restart == True: testnum = 2 #used to test the 'numtocheck' varible calculated = numtocheck % testnum #modulo computation if (calculated == 0): numtocheck = numtocheck+1 else: testnum = testnum+1 if numtocheck == testnum: print (numtocheck) #Should be a prime numtocheck = numtocheck+1 restart = True else: restart = FalseHope this helps... the algorithm is probably not exactly what you expect, but this is another subject. |
XSLT matching with namespaces I am trying to build bottle.py templates from RelaxNG definitions using Python 3.6 and lxml (which means XSLT 1.0 and XPath 1.0). I cannot find the trick to getting the name of the starting template, which in this example is 'AddressBook'. I need this from rng:grammar/rng:start/rng:start/rng:ref/@name instead of rng:define/@name.<?xml version="1.0" ?><grammar xmlns="http://relaxng.org/ns/structure/1.0"><start><ref name="AddressBook"/></start><define name="AddressBook"><element name="addressbook"> <zeroOrMore> <element name="card"> <element name="name"> <text/> </element> <element name="email"> <text/> </element> </element> </zeroOrMore></element></define></grammar>This is my xslt:<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0" xmlns:rng="http://relaxng.org/ns/structure/1.0" exclude-result-prefixes="rng"><xsl:output type="text"/><xsl:template match="/"> <div>start <xsl:apply-templates select="rng:start"/> done</div></xsl:template><xsl:template match="rng:start"> <div>Start <h1><xsl:value-of select="rng:ref/@name" /></h1> End</div></xsl:template>The first match works and I get Start End as a result of this transformation. What am I missing here? | Your problem is not with namespaces, but with paths. Your instruction:<xsl:apply-templates select="rng:start"/>does not do anything because start is not a child of the current node (which is the / root node matched by <xsl:template match="/">).Either change it to:<xsl:apply-templates select="rng:grammar/rng:start"/>or - preferably, IMHO - start with:<xsl:template match="/rng:grammar">P.S. xsl:output takes a method attribute, not type. And if you're outputting XML/HTML, do not make your output method text. |
What does struct.calcsize (python) actually calculate? Following the manual about struct.calcsize: struct.calcsize(fmt) returns the size of the struct (and hence of the string) corresponding to the given format. But I do not get why struct.calcsize('hll') is not struct.calcsize('h') plus two times struct.calcsize('l'). See below. Any idea?In [216]: struct.calcsize('hll')Out[216]: 24In [217]: struct.calcsize('h')Out[217]: 2In [218]: struct.calcsize('l')Out[218]: 8 | This is due to padding. In native mode (no prefix character on the format), struct inserts pad bytes so that each field starts at a multiple of its own alignment, just as a C compiler would: here the 2-byte h is followed by 6 pad bytes so that the first 8-byte l starts on an 8-byte boundary, giving 2 + 6 + 8 + 8 = 24. Accesses to memory addresses that are multiples of the machine word length (e.g. 8 bytes for 64-bit machines) tend to be faster, which is why C compilers pad their structs unless told otherwise; the struct module does the same for interoperability. It is configurable: prefixing the format with '=', '<' or '>' switches to standard sizes with no alignment padding.
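A short illustration. The native-mode numbers in the comments are from a 64-bit Linux build (matching the question) and will vary by platform; the '=' results are the same everywhere:

```python
import struct

# native mode: fields are padded to their natural alignment (platform-dependent)
print(struct.calcsize("hll"))    # 24 on a typical 64-bit Linux build: 2 + 6 pad + 8 + 8

# standard mode ('='): no padding, and standard sizes (h=2, l=4)
print(struct.calcsize("=hll"))   # 2 + 4 + 4 = 10 on every platform

# field order matters in native mode: putting the short last needs no padding
print(struct.calcsize("llh"))    # 8 + 8 + 2 = 18 on the same 64-bit build
```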
How to find out size of crawled webpage in Scrapy? I am learning Scrapy.I want to find out the size of a crawled webpage, or the size of a response in KB or MB etc., using Scrapy.I can find out the length of the content of a crawled webpage usingresponse.bodyWhat is the simplest way to find out how much data is getting downloaded per request? I tried to understand this solution, which is similar to my requirement, but I am not able to understand this code. parse(self, response): url=response.url content=response.body #download_size= | You can read the Content-Length response header, when the server provides one: def parse(self, response): url = response.url # transferred size in bytes, as reported by the server download_size = int(response.headers['Content-Length'])Note that this header can be missing (e.g. with chunked transfer encoding), and that it reflects the size on the wire, which may be compressed; len(response.body) gives the size of the decompressed body that Scrapy actually holds in memory.
Obfuscate strings in Python I have a password string that must be passed to a method. Everything works fine but I don't feel comfortable storing the password in clear text. Is there a way to obfuscate the string or to truly encrypt it? I'm aware that obfuscation can be reverse engineered, but I think I should at least try to cover up the password a bit. At the very least it wont be visible to a indexing program, or a stray eye giving a quick look at my code.I am aware of pyobfuscate but I don't want the whole program obfuscated, just one string and possibly the whole line itself where the variable is defined.Target platform is GNU Linux Generic (If that makes a difference) | If you just want to prevent casually glancing at a password, you may want to consider encoding/decoding the password to/from base64. It's not secure in the least, but the password won't be casually human/robot readable.import base64# Encode password (must be bytes type)encoded_pw = base64.b64encode(raw_pw)# Decode password (must be bytes type)decoded_pw = base64.b64decode(encoded_pw) |
BeautifulSoup HTTPResponse has no attribute encode I'm trying to get beautifulsoup working with a URL, like the following:from urllib.request import urlopenfrom bs4 import BeautifulSouphtml = urlopen("http://proxies.org")soup = BeautifulSoup(html.encode("utf-8"), "html.parser")print(soup.find_all('a'))However, I am getting an error: File "c:\Python3\ProxyList.py", line 3, in <module> html = urlopen("http://proxies.org").encode("utf-8")AttributeError: 'HTTPResponse' object has no attribute 'encode'Any idea why? Could it be to do with the urlopen function? Why is it needing the utf-8?There clearly seems to be some differences with Python 3 and BeautifulSoup4, regarding the examples that are given (which seem to be out of date or wrong now)... | urlopen() returns an HTTPResponse object, which has no encode() method; you have to call .read() on it first to get the body. And there is no need to encode at all: .read() already returns bytes, and BeautifulSoup accepts bytes directly, detecting the document's encoding itself:soup = BeautifulSoup(html.read(), "html.parser")
constructing Page URL that I can reach after inserting an item number into search box I'm writing a python script to scrape an online shopping website. Every item on this website has an item number, and after inserting an item number into the search box I'm redirected to the item page. When I looked at the URL of that page there was no clue about the item number in it, so I cannot simply substitute another item number to go directly to its item page without first going through the website portal. Any clue how to construct this URL? Is there a general method, or does it depend on the website?Say my website is ebay; to reach this page searching for cisco 262 on ebay there are 2 ways:open ebay and then insert cisco 262 into the search boxuse this URL cisco 262 search result on ebay. As we can see from the URL, we can replace "cisco++262" with whatever we want to search for, so we can go directly to the search result without going first to the main page of eBay and typing into the search box. My question is that it's not always clear in the URL where to put what you want to search for, so is there any way to work out how to construct the URL when it's not obvious?Updatehere is the base url of website I want to scrapeand here is the page url after inserting this value "CHG2020324" into its search boxanother url after inserting this "CHG2022230" into the search boxso as you can see there is no clue where to put the item number so we can reconstruct the url ... any help with url inspecting or constructing. | If I understand your question correctly, you want to go straight to the search result pages using a series of search strings.
If so, then - at least in the case of ebay (it will likely be different for each site) - you can use f-strings together with a base url to achieve that:base_url = 'https://www.ebay.com/sch/i.html?_from=R40&_nkw='search_strings = ['ryzen', 'cisco 262'] #examples of a single word and phrase search stringsfor srchstr in search_strings: src = srchstr.replace(' ','+') #note that ebay represents a phrase as "word1+word2"; other sites will do it differently print(f'{base_url}"{src}"')Output:https://www.ebay.com/sch/i.html?_from=R40&_nkw="ryzen"https://www.ebay.com/sch/i.html?_from=R40&_nkw="cisco+262"And these take you to the search result pages for the respective search strings.
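Rather than hand-replacing spaces with `+`, the standard library can do the escaping; this sketch builds the same kind of search URLs (the `_from` and `_nkw` parameter names are taken from the ebay URL in the answer and may differ on other sites):

```python
from urllib.parse import urlencode

base_url = "https://www.ebay.com/sch/i.html"
for term in ["ryzen", "cisco 262"]:
    # urlencode percent-/plus-escapes each value for us
    url = f"{base_url}?{urlencode({'_from': 'R40', '_nkw': term})}"
    print(url)  # e.g. → https://www.ebay.com/sch/i.html?_from=R40&_nkw=cisco+262
```

`urlencode` uses `quote_plus` under the hood, so a space becomes `+` exactly as in the hand-built URL, and characters like `&` or `#` in a search term are escaped safely too.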
How to create multiprocess with recursive function I'm trying to build a recursive function that calls itself in a new process. The new process should not stop the parent process nor wait for it to finish, which is why I don't use join(). Do you have another way to create a recursive function with multiprocessing?I use the following code:import multiprocessing as mpimport concurrent.futuresimport timedef do_something(c, seconds, r_list): c += 1 # c is a counter that all processes should use # such that no more than 20 processes are created. print(f"Sleeping {seconds} second(s)...") if c < 20: P_V = mp.Value('d', 0.0, lock=False) p = mp.Process(group=None, target=do_something, args=(c, 1, r_list,)) p.start() if not p.is_alive(): r_list.append(P_V.value) time.sleep(seconds) print(f"Done Sleeping...{seconds}") return f"Done Sleeping...{seconds}"if __name__ == '__main__': C = 0 # C is a counter that all processes should use # such that no more than 20 processes are created. Result_list = [] # results that come from all processes are saved here Result_list.append(do_something(C, 1, Result_list))Notice that results from all processes should be compared at the end.In fact, this code is working well, but the child processes, which are created in the recursive method, do not print anything, the list "Result_list" contains only one item from the first call, and C=0 at the end. Any idea why? | Here's a simplified example of what I think you're trying to do (side note: launching processes recursively is a great way to accidentally create a "fork bomb".
It is far more common to create multiple processes in some sort of loop instead)from multiprocessing import Process, Queuefrom time import sleepfrom os import getpiddef foo(n_procs, return_Q, arg): if __name__ == "__main__": #don't actually run the body of foo in the "main" process, just start the recursion Process(target=foo, args=(n_procs, return_Q, arg)).start() else: n_procs -= 1 if n_procs > 0: Process(target=foo, args=(n_procs, return_Q, arg)).start() sleep(arg) print(f"{getpid()} done sleeping {arg} seconds") return_Q.put(f"{getpid()} done sleeping {arg} seconds") #put the result to a queue so we can get it in the main processif __name__ == "__main__": q = Queue() foo(10, q, 2) sleep(10) #do something else in the meantime results = [] #while not q.empty(): #usually better to just know how many results you're expecting as q.empty can be unreliable for _ in range(10): results.append(q.get()) print("mp results:") print("\n".join(results))
matplotlib.cpp, Python 3.10 from C++ I'm learning how to use Matplotlib from within C++ according to the readthedocs. I have installed Python 3.10 from scratch and copied matplotlibcpp.h from Cryoris/matplotlib-cpp into my working directory. Now, when compiling one of the examples,#include <vector>#include "matplotlibcpp.h"namespace plt = matplotlibcpp;int main() { std::vector<double> x = {1, 2, 3, 4}; std::vector<double> y = {1, 4, 9, 16}; // plt::plot(x, y); // https://matplotlib-cpp.readthedocs.io/en/latest/docs.html plt::plot(x, y, "r*"); // Red stars as markers, no line plt::show();in Visual Studio 2022 according to pratikmahamuni1843, with the required include and library folders set, I get an error message (screenshot omitted). I kind of understand the error message C2668, but I cannot figure out how to change the code. The issue is with the formatting string.Update 1:@hyde: Right, I figured I can set a compiler flag to ISO:C++20. That explains almost all of the errors I got with the original lava's version.@hyde, @kiner_shah: With that compiler flag, lava's version gives No overload, here!@hyde: Cryoris's version still gives the same error message as above, no change.Update 2:@timonvanderberg: Modifying your suggestion to plt::plot(x, y, std::string{"r*"}), with or without the std::string, gives the following error with Cryoris/matplotlib-cpp (screenshot omitted).Update 3:Found the way to increase verbosity. With Cryoris, the error message was about an ambiguous call to an overloaded function. The log reads as follows (log omitted); I am still learning and thus cannot decipher what it is telling me. | As I understand it (correct me if I'm wrong), matplotlibcpp.h contains a couple of plot overloads, each of which forwards to the first one, the one that finally calls plot_base.
Now, the compiler cannot decide between this forwarded-to overload and another one further below.// @brief standard plot function supporting the args (x, y, s, keywords) // line 519// ...template <typename VectorX, typename VectorY>bool plot(const VectorX &x, const VectorY &y, const std::string &s = "", const std::map<std::string, std::string> &keywords = {}) { return detail::plot_base(detail::_interpreter::get().s_python_function_plot, x, y, s, keywords);}// enable plotting of multiple triples (x, y, format) // line 1953template <typename A, typename B, typename... Args>bool plot(const A &a, const B &b, const std::string &format, Args... args) { return plot(a, b, format) && plot(args...);}I have renamed all of the overloads except the first forwarded-to one.// enable plotting of multiple triples (x, y, format) // renamed, Bjorntemplate <typename A, typename B, typename... Args>bool plot1(const A &a, const B &b, const std::string &format, Args... args) { return plot(a, b, format) && plot(args...);}I dislike having to rename all my (standard) plot commands, and someone more knowledgeable might develop a proper solution.
Django admin.py readonly_fields not working I need to show in the Django admin the date the object was created, but the date field should not be modifiable.Here is my model in models.py:from django.db import modelsfrom django.utils import timezoneclass MyModel(models.Model): name = models.CharField(max_length=200) date_of_creation = models.DateField(default=timezone.now)Here is my admin.py:from django.contrib import adminfrom .models import MyModelclass MyModelAdmin(admin.ModelAdmin): readonly_fields = ('date_of_creation',)admin.site.register(MyModel)The "save() override to set the date of creation" isn't the case here.I already tried this snippet and the django documentation, but can't find my mistake.Please note that I'm a beginner in Django, and this is my first App.Thank you in advance. | You need to register the admin class as well as the model. admin.site.register(MyModel, MyModelAdmin)
Why do pygame and pyglet show different results on the screen with the SAME matrices? Why do I see different results when I run this code with use_pyglet being True vs. False?The matrices and viewport are the same in both cases, so I'm really confused.import ctypesimport numpyuse_pyglet = False # change this to True to see the differenceif use_pyglet: import pyglet from pyglet.gl import * window = pyglet.window.Window(resizable=True, config=pyglet.gl.Config(double_buffer=True))else: import pygame, pygame.locals from pyglet.gl import * pygame.init() pygame.display.set_mode((640, 480), pygame.locals.DOUBLEBUF | pygame.locals.OPENGL)a = (ctypes.c_int * 4)(); glGetIntegerv(GL_VIEWPORT, a); print numpy.asarray(a)a = (ctypes.c_float * 16)(); glGetFloatv(GL_PROJECTION_MATRIX, a); print numpy.asarray(a).reshape((4, 4)).Ta = (ctypes.c_float * 16)(); glGetFloatv(GL_MODELVIEW_MATRIX, a); print numpy.asarray(a).reshape((4, 4)).Tdef on_draw(): glClearColor(1, 1, 1, 1) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glColor4d(0, 0, 0, 1) glBegin(GL_LINE_STRIP) glVertex2d(0, 0) glVertex2d(100, 100) glEnd()if use_pyglet: on_draw = window.event(on_draw) pyglet.app.run()else: while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() break on_draw() pygame.display.flip() pygame.time.wait(20)PyGame:Pyglet: | The matrices and viewport are the same in both cases, so I'm really confused.They actually aren't. The thing is that at the point where you check it they haven't been changed yet. If you instead move the check into on_draw. Then you'll notice that GL_PROJECTION_MATRIX for Pyglet will output:[[ 0.003125 0. 0. -1. ] [ 0. 0.00416667 0. -1. ] [ 0. 0. -1. -0. ] [ 0. 0. 0. 1. ]]While for Pygame it will output:[[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]]The solution would be to setup the projection matrix yourself. 
Thus ensuring that it will always be the same.glMatrixMode(GL_PROJECTION)glLoadIdentity()glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)glMatrixMode(GL_MODELVIEW)glLoadIdentity()How you want to set up the projection matrix of course depends on the desired result.
Django User password not getting hashed for custom users I am currently implementing the authentication for a Django application I am writing. Following the code of the Thinkster Django course, I implemented the whole registration process, but I cannot log in, because the password is not getting hashed when registering a user. Here is my custom User model and the create_user function.class UserManager(BaseUserManager): def create_user(self, username, email, password=None): if username is None: raise TypeError('Users must have a username.') if email is None: raise TypeError('Users must have an email address.') user = self.model(username=username, email=self.normalize_email(email)) user.set_password(password) user.save() return user def create_superuser(self, username, email, password): if password is None: raise TypeError('Superusers must have a password.') user = self.create_user(username, email, password) user.is_superuser = True user.is_staff = True user.save() return userclass User(AbstractBaseUser, PermissionsMixin): username = models.CharField(db_index=True, max_length=255, unique=True) email = models.EmailField(db_index=True, unique=True) is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=False) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['username'] objects = UserManager()As you can see, I am explicitly calling the set_password function, but I cannot figure out
My Serializer, where I create the user is as follows:class RegistrationSerializer(serializers.ModelSerializer): password = serializers.CharField( max_length=128, min_length=8, write_only=True )token = serializers.CharField(max_length=255, read_only=True)class Meta: model = User fields = ['email', 'username', 'password', 'token'] def create(self, validated_data): return User.objects.create_user(**validated_data)Please note that instead of return User.objects.create_user(**validated_data), I also tried doing return get_user_model().objects.create_user(**validated_data), as it was a suggestion at another question, but that did not work either.I also post my view, in case something is wrong there, but I really don't think that's the case.class RegistrationAPIView(APIView): permission_classes = (AllowAny,) renderer_classes = (UserJSONRenderer,) serializer_class = RegistrationSerializer def post(self, request): user = request.data.get('user', {}) serializer = self.serializer_class(data=user) serializer.is_valid(raise_exception=True) serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED)Side Note: In case that is relevant, I am sending requests with Postman and all the responses I get back seem completely right. But when I browse my SQLite, I see that the password was not hashed. That also results in the users not being able to log in, because in the login process the password gets hashed and then compared with the one in the database.Side Note 2: When I register a user via the command line with python manage.py createsuperuser, it gets a hashed password and everything works as I would normally expect. | You need to use set_password method like this in serializer:def create(self, validated_data): user = User(email=validated_data['email'], username=validated_data['username']) user.set_password(validated_data['password']) user.save() return user |
(x,y) pair for max z value in list I have a list of lists such as:nodes =[[nodeID,x,y,z],....]I want to find:xi,yi for zi=zmax given zmax= max z for same x,yand store the (xi,yi,zi) in another list.I can do this using:nodes=[[literal_eval(x) for x in item] for item in nodes]maxz_levels=[]for i,row in enumerate(nodes): fe=0 maxz=0 nodeID,x,y,z=row for j,line in enumerate(nodes): nodeID2,x2,y2,z2=line if x==x2 and y==y2 and z2>maxz: maxz=z2 if len(maxz_levels)==0: maxz_levels.append([x, y, maxz]) else: for row2 in maxz_levels: if row2[0]==x and row2[1]==y: fe=1 if fe==0: maxz_levels.append([x, y, maxz])but it takes ages... so I thought of using a dictionary but I'm not finding an easy way to do what I want. My code is:dic1=defaultdict(list) for nodeID,x,y,z in nodes: dic1[(x,y)].append((nodeID,z))for key in dic1: dic1[key].sort( key=lambda x:float(x[1]) )for j,row in enumerate(nodes): nodeID,x,y,z=row z_levels=[item[1] for item in dic1[(x,y)]] #How to find easily and quickly the max of z_levels and the associated (x,y) coordinates?Any ideas? ThanksEDIT:example:nodes = [['1','1','1','2'],['2','1','1','3'],['3','0','0','5'],['4','0','0','4'],['5','1','2','4'],['6','0','0','40'],['7','0','10','4'],['8','10','0','4'],['9','0','0','4'],['10','2','1','4']]I want to find:maxz_levels = [[1, 1, 3], [0, 0, 40], [1, 2, 4], [0, 10, 4], [10, 0, 4], [2, 1, 4]] | You could use the max function with a key:maxz = max(list_, key=lambda x: float(x[3]))This will assign maxz to the item of the list list_ with the maximum value at index 3 (the z value; the float() is needed because your sample data stores the numbers as strings, and comparing them as strings would rank '5' above '40'). You can then extract the `xi` and `yi` values: xi, yi = (maxz[1], maxz[2])If you want to sort your nodes list by z, you could use the sorted function together with a key: maxz_levels = sorted(nodes, key=lambda x: float(x[3]), reverse=True)and then take the first item.Alright, I think I finally got your question.
So here's a functional attempt:maxz_levels = []for i in set([(i[1], i[2]) for i in nodes]): m = sorted(filter(lambda x: x[1] == i[0] and x[2] == i[1], nodes), key=lambda x: float(x[3]))[-1] maxz_levels.append((m[1], m[2], m[3]))Explanation:The for loop loops through a list of all (x, y) combinations in nodesThe first line in the loop sorts all items in nodes with the current (x, y) values by their z value (converted to float, since the sample data stores numbers as strings) and takes the last one (the one with the biggest z value).The second line in the loop then adds this node to the list of maximum nodes.
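The repeated sort per (x, y) group can also be avoided entirely: a dictionary keyed on (x, y) makes this a single pass over nodes, keeping only the row with the largest z seen so far. A sketch using the question's sample data (z is compared as float since the sample stores numbers as strings):

```python
nodes = [['1','1','1','2'], ['2','1','1','3'], ['3','0','0','5'], ['4','0','0','4'],
         ['5','1','2','4'], ['6','0','0','40'], ['7','0','10','4'], ['8','10','0','4'],
         ['9','0','0','4'], ['10','2','1','4']]

best = {}  # (x, y) -> (x, y, z) with the largest z seen so far
for node_id, x, y, z in nodes:
    key = (x, y)
    if key not in best or float(z) > float(best[key][2]):
        best[key] = (x, y, z)

# convert to the numeric (xi, yi, zi) triples the question asks for
maxz_levels = [(int(x), int(y), float(z)) for x, y, z in best.values()]
print(maxz_levels)
```

This is O(n) instead of O(n²) like the original double loop, which is what made it "take ages".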
Threads, wxpython and statusbar I'm doing a program in which I'm using a wxStatusBar, when a download starts I start a child thread like this:def OnDownload(self, event): child = threading.Thread(target=self.Download) child.setDaemon(True) child.start()Download is another function without parameters (except self). I would like to update my statusbar from there with some information about the downloading progress, but when I try to do so I often get Xwindow, glib and segfaults errors. Any idea to solve this?Solved: I just needed to include wx.MutexGuiEnter() before changing something in the GUI inside the thread and wx.MutexGuiLeave() when finished. For exampledef Download(self): #stuff that doesn't affect the GUI wx.MutexGuiEnter() self.SetStatusText("This is a thread") wx.MutexGuiLeave()And that's all :D | Most people get directed to the wxPython wiki:http://wiki.wxpython.org/LongRunningTasksI also wrote up a little piece on the subject here:http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/I don't think I've ever seen your solution before though. |
Scrapy: downloader/response_count vs response_received_count I am using scrapy to crawl multiple websites, and I want to analyze the crawling rate.The stats dumped at the end contain a downloader/response_count value and a response_received_count value. The former is systematically greater than the latter.Why is there a difference, and what element of the crawler increments the two values in the stats collector? | CoreStats is the Extension responsible for response_received_count, and DownloaderStats is the Middleware responsible for downloader/response_count.The CoreStats extension connects the signals.response_received signal to incrementing the value of response_received_count, so it should count every response that you get (even bad statuses), whilst the DownloaderStats middleware processes the response in a specific order; as we can see here, its order is 850, so previous Downloader Middlewares (ones set with a number lower than 850) could drop or even get errors processing the response, and the downloader/response_count would never be increased.
Using Python Requests to simulate clicking a 'show more' button I am not sure what code to use for clicking the show more button. I want to get a list of universities that are working on a certain topic. Below is one of the websites: http://www.sciencedirect.com/science/article/ Your help will be truly appreciated. Thanks | You shouldn't have to simulate, in Python, an actual "click" of the "show more" button to accomplish web-scraping."Show more" buttons in websites are usually tied to some JavaScript that either reveals a hidden element already in the HTML (see Bootstrap's collapse class for a typical example) or fires off a request to some web service (e.g. a REST API) for information to insert in the DOM.Either way, you can scrape that data. For the former, find the hidden element in the DOM (view the page's source [Ctrl + U] and search the HTML [Ctrl + F]), and use your typical webscraping tools. For the latter, use something like Google Dev Tools' Network tab to inspect the API request when you click "show more" and then try to replicate that request with Python.In the specific example you've given, it appears the data you want is stored in an HTML <script> tag as a JSON object. Search the HTML for the word "affiliation".
What does "-" (dash) after color do in matplotlib? I have this code that draws a graph.plt.plot([1,7], [1,1], 'k-', linewidth=2)In lines like this, there is 'k-', where k represents the color black.However, the code works without the dash, so 'k' alone is fine too.Why is that dash - there? What does it do?I couldn't find anything that explains this. I even read the documentation below.https://matplotlib.org/api/colors_api.htmlimport matplotlib.pyplot as pltplt.title("Dijkstra")#plt.plot( [x1, x2], [y1, y2], color, linewidth )plt.plot([1,7,1,7,9], [5,5,1,1,3], 'ro')plt.plot([1,1], [1,5], 'r', linewidth=2)plt.annotate('A', [1,5])plt.plot([1,7], [1,1], 'k-', linewidth=2)plt.plot([7,7], [1,5], 'k-', linewidth=2)plt.plot([1,7], [5,5], 'k-', linewidth=2)plt.plot([1,7], [1,5], 'k-', linewidth=2)plt.plot([7,9], [5,3], 'k-', linewidth=2)plt.plot([7,9], [1,3], 'k-', linewidth=2)plt.axis([0, 10, 0, 6]) # Set axis valuesplt.show() | The dash is the symbol for a solid line. Since it is the default line style, omitting it does not alter the plot.For more information, see the line-style reference in the matplotlib documentation.
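To see what the suffix of the format string controls, compare a few styles directly; this small sketch uses the headless Agg backend so no window is needed:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

solid, = plt.plot([1, 7], [1, 1], 'k-')    # black solid line
dashed, = plt.plot([1, 7], [2, 2], 'k--')  # black dashed line
dotted, = plt.plot([1, 7], [3, 3], 'k:')   # black dotted line
print(solid.get_linestyle(), dashed.get_linestyle(), dotted.get_linestyle())  # → - -- :
```

Omitting the dash, as in `'k'`, falls back to the default linestyle, which is also solid; that is exactly why the question's plot looks identical with or without the `-`.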
Group by with multiple conditions in pandas I want to aggregate rows, using different conditions for two columns.When I do df.groupby('[a]').agg('count'), I get output 1When I do df.groupby('[a]').agg('mean'), I get output 2Is there a way to do an aggregation that shows output 1 for column [b] and output 2 for column [c]? | The code below should work:# Import librariesimport pandas as pdimport numpy as np# Create sample dataframedf = pd.DataFrame({'a': ['A1', 'A1', 'A2', 'A3', 'A4', 'A3'], 'value': [1,2,3,4,5,6]})# Calculate count, mean temp1 = df.groupby(['a']).count().reset_index().rename(columns={'value':'count'})temp2 = df.groupby(['a'])['value'].mean().reset_index().rename(columns={'value':'mean'})# Add columns to existing dataframedf.merge(temp1, on='a', how='inner').merge(temp2, on='a', how='inner')# Add columns to a new dataframedf2 = temp1.merge(temp2, on='a', how='inner')df2
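As an alternative to building and merging two intermediate frames, named aggregation (available since pandas 0.25) computes both statistics in a single groupby. The toy frame below is an assumption, since the question's actual columns [b] and [c] aren't shown:

```python
import pandas as pd

# hypothetical stand-in for the question's frame: 'b' to be counted, 'c' to be averaged
df = pd.DataFrame({'a': ['A1', 'A1', 'A2', 'A3'],
                   'b': [1, 2, 3, 4],
                   'c': [10.0, 20.0, 30.0, 40.0]})

# one pass: count of b, mean of c, per group of a
out = df.groupby('a').agg(b=('b', 'count'), c=('c', 'mean')).reset_index()
print(out)
```

Each keyword argument names an output column and pairs a source column with an aggregation function, so different columns can use different conditions in one call.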
How do I search files in a folder to be moved to another folder in a certain condition in Python 3? I will try to explain a little more clearly: I am trying to figure out how to use shutil and os modules on Python 3.8.5 to be able to take a look at a folder, determine if its contents have been created and/or modified within the last 24 hours... and then if they have, move those files to another folder.I am going to try to link up the code I have here, I'm still pretty new at using Stackoverflow, so I apologize:import shutilimport osshutil.copystat(' \Users\aaron\Desktop\checkFiles\File B.txt', "\Users\aaron\Desktop\needToCopy\"," follow_symlinks=True)This code keeps giving me invalid syntax errors. I don't know what I'm doing wrong, I have even looked at docs.python.org, but, since I am very new to coding, it was pretty greek to me. | I am not sure, what are you trying to achieve by using shutil.copystat. It only copies the stats and permissions onto the path. (If your File B.txt is read only, the needToCopy will be also read only)In order to find out creation and modification times, consult this great answer.I would approach the check for 24 hour time window modification like this:import os, timeDIR_PATH = "."for filename in os.listdir(DIR_PATH): if os.path.getmtime(filename) >= (time.time() - 60*60*24): print(filename)And for the moving part, there is (for example) shutil.move. So it could look like this:import os, time, shutilSRC_PATH = "."TARGET_PATH = "../"for filename in os.listdir(SRC_PATH): if os.path.getmtime(filename) >= (time.time() - 60*60*24): shutil.move(os.path.join(SRC_PATH, filename), TARGET_PATH) |
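Putting the two pieces together (the mtime check and the move) might look like the sketch below; temporary directories stand in for the real folders. Note also that the syntax errors in the question come from the path strings themselves: a Windows path like "\Users\...\needToCopy\" needs a raw string (r"...") or forward slashes, because the trailing backslash escapes the closing quote.

```python
import os
import shutil
import tempfile
import time

src = tempfile.mkdtemp()  # stands in for r"C:\Users\aaron\Desktop\checkFiles"
dst = tempfile.mkdtemp()  # stands in for r"C:\Users\aaron\Desktop\needToCopy"

# create a file; it is brand new, so its mtime is within the last 24 hours
with open(os.path.join(src, "File B.txt"), "w") as f:
    f.write("data")

cutoff = time.time() - 24 * 60 * 60  # 24 hours ago
for name in os.listdir(src):
    path = os.path.join(src, name)
    if os.path.getmtime(path) >= cutoff:      # modified within the last day?
        shutil.move(path, os.path.join(dst, name))

print(os.listdir(dst))  # → ['File B.txt']
```

`os.path.getmtime` covers the "modified" case; on Windows `os.path.getctime` reports creation time if that distinction matters.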
How to create a column based on the value of an element of an array indicated by the other column? I have one large data frame and I want to create a column based on the position of an array that is indicated by the other column. In the example below, I want to create a column that assigns the value based on the param array, where the position is indicated by the column "type".>>> import pandas as pd>>> d = {'type': [3, 4, 1, 2, 5]}>>> df = pd.DataFrame(data=d)>>> param = [0.3, 0.4, 0.1, 0.25, 0.75]>>> df type0 21 32 03 14 4The desired outcome will be>> df type outcome0 2 0.101 3 0.252 0 0.303 1 0.404 4 0.75I couldn't think of a good way to do it without converting the frame into arrays and getting the result through loops. I have tried to create dummy variables for the types and conduct matrix multiplication but since there are 100+ types in my data, it will increase the computation time drastically. Any help will be much appreciated! | df['outcome'] = [param[i-1] for i in df.type.values] |
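The same lookup can be vectorised with numpy fancy indexing, avoiding the Python-level loop on large frames (using the 1-based types from the question's dict, matching the answer's param[i-1]):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'type': [3, 4, 1, 2, 5]})
param = np.array([0.3, 0.4, 0.1, 0.25, 0.75])

# index the param array by the whole type column at once (1-based, hence the -1)
df['outcome'] = param[df['type'].to_numpy() - 1]
print(df['outcome'].tolist())  # → [0.1, 0.25, 0.3, 0.4, 0.75]
```

Since the question mentions 100+ types, this stays a single O(n) array operation regardless of how many distinct types there are, with no dummy-variable matrix needed.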
values_list() in query_set is showing only numbers but not the names of countries I selected my field with 'values_list' which contains name of countries. But what I get is country_id numbers. I already used flat=True but it did not help any.my models.py: class Report(models.Model): author = models.ForeignKey(User, on_delete=models.CASCADE) church_name = models.CharField(max_length=255) area = models.ForeignKey(Area, on_delete=models.CASCADE, null=True, blank=True) country = models.ForeignKey(Country, on_delete=models.CASCADE, null=True, blank=True) water_baptism = models.IntegerField(default=0) holy_ghost = models.IntegerField(default=0) def __str__(self): return '%s' % self.area def get_absolute_url(self): return reverse('users:report_view')my views.py:def report_cities(request, pk): countries = Report.objects.values_list('country', flat=True) context = { 'countries':countries } return render(request, 'users/report_countries.html', context)my html <h1>Report Countries</h1> {% for country in countries %} {{country}} {% endfor %} | Use Report.objects.values_list('country__name', flat=True) (assuming that is the country name field on the country model). By default django will list the object id if no field is specified.E.G if your country model was class Country(models.Model): name = Charfield() another_field = ...The above query would now return a list of the names. |
How to convert Json array list with multiple possible values into columns in a dataframe using pyspark I am using the Google Admin Report API via the Python SDK in Databricks (Spark + Python 3.5).It returns data in the following format (Databricks pyspark code): dbutils.fs.put("/tmp/test.json", '''{ "userEmail": "rod@test.com", "parameters": [ { "intValue": "0", "name": "classroom:num_courses_created" }, { "boolValue": true, "name": "accounts:is_disabled" }, { "name": "classroom:role", "stringValue": "student" } ]}''', True)There are 188 parameters and for each param it could be an int, bool, date or string. Depending on the field type the Api returns the value in the appropriate value (e.g. intValue for an int field and boolValue for a boolean).I am writing out this JSON untouched into my datalake and processing it later by loading it into a spark dataframe:testJsonData = sqlContext.read.json("/tmp/test.json", multiLine=True)This results in a dataframe with this schema:userEmail:string parameters:array element:struct boolValue:booleanintValue:stringname:stringstringValue:stringIf I display the dataframe it shows as{"boolValue":null,"intValue":"0","name":"classroom:num_courses_created","stringValue":null} {"boolValue":true,"intValue":null,"name":"accounts:is_disabled","stringValue":null} {"boolValue":null,"intValue":null,"name":"classroom:role","stringValue":"student"}As you can see, it has inferred nulls for the typeValues that do not exist.The end state that I want is columns in a dataframe like:and the pivoted columns would be typed correctly (e.g classroom:num_courses_created would be of type int - see yellow columns above)Here is what I have tried so far:from pyspark.sql.functions import explodetempDf = testJsonData.select("userEmail", explode("parameters").alias("parameters_exploded"))explodedColsDf = tempDf.select("userEmail", "parameters_exploded.*")This results in a dataframe with this 
schema:userEmail:stringboolValue:booleanintValue:stringname:stringstringValue:stringI then pivot the rows into columns based on the Name field (which is ""classroom:num_courses_created", "classroom:role" etc (there are 188 name/value parameter pairs):#turn intValue into an Int columnexplodedColsDf = explodedColsDf.withColumn("intValue", explodedColsDf.intValue.cast(IntegerType()))pivotedDf = explodedColsDf.groupBy("userEmail").pivot("name").sum("intValue")Which results in this dataframe:userEmail:stringaccounts:is_disabled:longclassroom:num_courses_created:longclassroom:role:longwhich is not correct as the types for the columns are wrong.What I need to do is somehow look at all the typeValues for a parameter column (there is no way of knowing the type from the name or inferring it - other than in the original Json where it returns just the typeValue that is relevant) and whichever one is not null is the type of that column. Each param only appears once so the string, bool, int and date values just need to be outputed for the email key, not aggregated.This is beyond my current knowledge however I was thinking a simpler solution might be to go back all the way to the beginning and pivot the columns before I write out the Json so it would be in the format I want when I load it back into Spark, however I was reluctant to transform the raw data at all. I also would prefer not to handcode the schema for 188 fields as I want to dynamically pick which fields I want so it needs to be able to handle that. | The code below converts the example JSON provided to a dataframe(without using PySpark). 
Import Librariesimport numpy as npimport pandas as pdAssign variablestrue = Truefalse = FalseAssign JSON to a variabledata = [{"userEmail": "rod@test.com", "parameters": [ { "intValue": "0", "name": "classroom:num_courses_created" }, { "boolValue": true, "name": "accounts:is_disabled" }, { "name": "classroom:role", "stringValue": "student" } ]},{"userEmail": "EMAIL2@test.com", "parameters": [ { "intValue": "1", "name": "classroom:num_courses_created" }, { "boolValue": false, "name": "accounts:is_disabled" }, { "name": "classroom:role", "stringValue": "student2" } ]}]Function to convert dictionary to columnsdef get_col(x): y = pd.DataFrame(x, index=[0]) col_name = y.iloc[0]['name'] y = y.drop(columns=['name']) y.columns = [col_name] return yIterate through the JSON listdf = pd.DataFrame()for item in range(len(data)): # Initialize empty dataframe trow = pd.DataFrame() temp = pd.DataFrame(data[item]) for i in range(temp.shape[0]): # Read each row x = temp.iloc[i]['parameters'] trow = pd.concat([trow,get_col(x)], axis=1) trow['userEmail'] = temp.iloc[i]['userEmail'] df = df.append(trow) # Rearrange columns, drop those that are not neededdf = df[['userEmail', 'classroom:num_courses_created', 'accounts:is_disabled', 'classroom:role']]Output: (screenshot omitted)Previous edit:Convert JSON/nested dictionaries to a dataframetemp = pd.DataFrame(data)# Initialize empty dataframedf = pd.DataFrame()for i in range(temp.shape[0]): # Read each row x = temp.iloc[i]['parameters'] temp1 = pd.DataFrame([x], columns=x.keys()) temp1['userEmail'] = temp.iloc[i]['userEmail'] # Convert nested key:value pairs y = x['name'].split(sep=':') temp1['name_' + y[0]] = y[1] # Combine to dataframe df = df.append(temp1, sort=False)# Rearrange columns, drop those that are not neededdf = df[['userEmail', 'intValue', 'stringValue', 'boolValue', 'name_classroom', 'name_accounts']]Output: (screenshot omitted)Edit-1: Based on the screenshot in the updated question, the code below should work.Assign variables
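If the goal is just to collapse each parameters entry to a name → typed-value pair, the core of that collapse can be sketched in plain Python plus pandas: for each parameter, pick whichever typed key (intValue / boolValue / stringValue) is present and use the name as the column.

```python
import pandas as pd

# one record from the question; Python's True replaces the JSON literal true
record = {
    "userEmail": "rod@test.com",
    "parameters": [
        {"intValue": "0", "name": "classroom:num_courses_created"},
        {"boolValue": True, "name": "accounts:is_disabled"},
        {"name": "classroom:role", "stringValue": "student"},
    ],
}

row = {"userEmail": record["userEmail"]}
for p in record["parameters"]:
    # whichever key is not "name" carries the typed value for this parameter
    value = next(v for k, v in p.items() if k != "name")
    row[p["name"]] = value

df = pd.DataFrame([row])
print(df.columns.tolist())
```

Note that the int fields still arrive as strings ("0" here), mirroring the API's payload; they can be cast afterwards per column, much as the question does with intValue.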
Members in A Guild I have this code that is supposed to send a list of members in a server that the bot is in, only with a guild id. Here is my code:@client.command(name='members')async def _members(ctx, guild_id: int): guild = client.get_guild(guild_id) for m in guild.fetch_members(limit=None): await ctx.send(f"{m}") await ctx.send("Done!")But it doesn't seem to be working and I can't figure out why. I'm seriously stupid sometimes.Here is the error I'm getting:172.18.0.1 - - [05/Nov/2020 20:59:40] "HEAD / HTTP/1.1" 200 -Ignoring exception in command members:Traceback (most recent call last): File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 85, in wrapped ret = await coro(*args, **kwargs) File "main.py", line 154, in _members for m in guild.fetch_members(limit=None):TypeError: 'MemberIterator' object is not iterableThe above exception was the direct cause of the following exception:Traceback (most recent call last): File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/bot.py", line 903, in invoke await ctx.command.invoke(ctx) File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 859, in invoke await injected(*ctx.args, **ctx.kwargs) File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 94, in wrapped raise CommandInvokeError(exc) from excdiscord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: 'MemberIterator' object is not iterableIn addition, is there a way to remove all the bots from the list? | According to the API References:Retrieves an AsyncIterator that enables receiving the guild’s members.Also it says:Note: This method is an API call. 
For general usage, consider members instead.But if you want to use fetch_members instead of guild.members, you can do 2 things to prevent this error.You can use async for member in guild.fetch_members(limit=None):orfor member in await guild.fetch_members(limit=150).flatten():.But I suggest you do what the API Reference says and use guild.members.Also, if you send messages in a for loop, the bot will spam many messages; you should use the str.join() method to send all members in one message. Here's what I mean:@client.command(name='members')async def _members(ctx, guild_id: int): guild = client.get_guild(guild_id) await ctx.send(', '.join(str(m) for m in guild.members))Or if you still want to send separate messages, you can use this (note that guild.members is a property, not a method, so it is not called):@client.command(name='members')async def _members(ctx, guild_id: int): guild = client.get_guild(guild_id) for m in guild.members: await ctx.send(f'{m}') await ctx.send('DONE!')As for excluding bots from the list: every Member has a bot attribute, so you can filter with [m for m in guild.members if not m.bot].
Can one matplotlib style file inherit values from another? Suppose I have a matplotlib style file base.mplstyle with several specifications:

legend.fancybox: True
legend.numpoints: 1
legend.frameon: True
legend.framealpha: 0.8
legend.shadow: True
text.color: white
text.usetex: True
figure.figsize: 9, 9

Suppose I want to create another style file small_figure.mplstyle, which has all identical settings save for just one:

legend.fancybox: True
legend.numpoints: 1
legend.frameon: True
legend.framealpha: 0.8
legend.shadow: True
text.color: white
text.usetex: True
figure.figsize: 2, 2  # <--- the only difference

Instead of copy-pasting a bunch of parameter values, is there any easy way to tell small_figure.mplstyle to inherit defaults from base.mplstyle? I'm thinking something concise, like

# This is not a valid .mplstyle file
plt.style.use("base")
figure.figsize: 2, 2

| Style sheets are designed to be combined, so you can pass the styles you want to combine as a list. Note, however, that values set by styles further to the right in the list override those set earlier:

import matplotlib.pyplot as plt

plt.style.use(['base', 'small_figure'])
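A minimal sketch of the combining behaviour, assuming the style list is applied left to right — plain dicts stand in here for the parsed .mplstyle files:

```python
# Plain dicts stand in for parsed .mplstyle files; later updates win,
# mirroring how a style list is applied left to right.
base = {"figure.figsize": (9, 9), "text.color": "white", "legend.shadow": True}
small_figure = {"figure.figsize": (2, 2)}

merged = {}
for style in (base, small_figure):
    merged.update(style)

print(merged["figure.figsize"])  # (2, 2) -- the override
print(merged["text.color"])      # white  -- inherited from base
```

So small_figure.mplstyle only needs to contain the keys that differ; everything else falls through to base.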
How to Extract Numbers from String Column in Pandas with decimal? I need to extract numbers from a string column.

df:

Product
tld los 16OZ
HSJ14 OZ
hqk 28.3 OZ
rtk .7 OZ
ahdd .92OZ
aje 0.22 OZ

I need to extract the numbers from column "Product", decimals included.

df_Output:

Product       Numbers
tld los 16OZ  16
HSJ14 OZ      14
hqk 28.3 OZ   28.3
rtk .7 OZ     0.7
ahdd .92OZ    0.92
aje 0.22 OZ   0.22

What I tried:

df['Numbers'] = df['Product'].str.extract('([0-9]+[,./]*[0-9]*)')  # misses values like .7

| If the data is as simple as presented, replace every character other than digits and dots with the empty string:

df['Numbers'] = df['Product'].str.replace('[^\d\.]', '', regex=True).astype(float)

        Product  Numbers
0  tld los 16OZ    16.00
1      HSJ14 OZ    14.00
2   hqk 28.3 OZ    28.30
3     rtk .7 OZ     0.70
4    ahdd .92OZ     0.92
5   aje 0.22 OZ     0.22
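The strip-everything-but-digits-and-dots idea can be checked with the stdlib re module alone, using sample strings from the question:

```python
import re

def extract_number(text):
    # Remove every character that is not a digit or a dot, then parse as float.
    return float(re.sub(r"[^\d.]", "", text))

samples = ["tld los 16OZ", "HSJ14 OZ", "rtk .7 OZ", "aje 0.22 OZ"]
print([extract_number(s) for s in samples])  # [16.0, 14.0, 0.7, 0.22]
```

This only holds while each string contains exactly one number and no stray dots; strings like "v1.2 pack of 3" would need a stricter pattern.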
Inserting data into xlsx I'm running a Python script whose result is a group of values. Let's say the result is a unique date and time:

date = now.strftime("%d/%m/%Y")
time = now.strftime("%H:%M:%S")

I would like to write the data into an xlsx file, but the problem is that the data is rewritten every single time the script runs. How should I write the date so that it is added to the xlsx instead of rewriting the first line of it? That's the code I am using and I'm not sure how to change it:

worksheet.write(1, 0, date)
worksheet.write(1, 1, time)

The result I would like to get at the end should be something like the following:

Date        Time
20/03/2022  00:24:36
20/03/2022  00:55:36
21/03/2022  15:24:36
22/03/2022  11:24:36
23/03/2022  22:24:36

| You can open the Excel file in append mode and then keep inserting data. Refer to the snippet below (XlsxWriter itself can only create new files, so this switches to pandas with the openpyxl engine):

with pd.ExcelWriter("existing_file_name.xlsx", engine="openpyxl", mode="a") as writer:
    df.to_excel(writer, sheet_name="name")

Note that with mode="a" pandas appends a new sheet by default; to add rows to an existing sheet, also pass if_sheet_exists="overlay" to ExcelWriter (pandas 1.4+) and a suitable startrow to to_excel.
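The append pattern itself can be sketched with the stdlib csv module — an in-memory buffer stands in here for a file opened with mode "a" (CSV rather than xlsx, purely to illustrate appending instead of rewriting):

```python
import csv
import io
from datetime import datetime

buf = io.StringIO()  # stands in for open("log.csv", "a", newline="")

def log_run(f, now):
    # Each run appends one new row instead of overwriting row 1.
    csv.writer(f).writerow([now.strftime("%d/%m/%Y"), now.strftime("%H:%M:%S")])

log_run(buf, datetime(2022, 3, 20, 0, 24, 36))
log_run(buf, datetime(2022, 3, 20, 0, 55, 36))
print(buf.getvalue())
```

Each call adds a new line, which is exactly the behaviour the hard-coded row index 1 in worksheet.write(1, ...) was preventing.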
Python extracting only the first href link for every nth occurrence in the for loop I am trying simple web scraping using Python, but there is a problem fetching the link names: there are 2 to 3 href links in the same class btn as mentioned below, whereas I need only the first one to be printed for every new occurrence in the loop.

#!/usr/bin/python3
from bs4 import BeautifulSoup
import requests

url = "https://www.learndatasci.com/free-data-science-books/"

# Getting the webpage, creating a Response object.
response = requests.get(url)

# Extracting the source code of the page.
data = response.text

# Passing the source code to BeautifulSoup to create a BeautifulSoup object for it.
soup = BeautifulSoup(data, 'lxml')

# Extracting all the <a> tags into a list.
tags = soup.find_all('a', class_='btn')

# Extracting URLs from the attribute href in the <a> tags.
for tag in tags:
    print(tag.get('href'))

Output from the above code:

http://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf
http://www.amazon.com/gp/product/0136042597/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0136042597&linkCode=as2&tag=learnds-20&linkId=3FRORB7P56CEWSK5
http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
http://amzn.to/1WePh0N
http://www.e-booksdirectory.com/details.php?ebook=9575
http://amzn.to/1FcalRp

While the desired output is:

http://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf
http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
http://www.e-booksdirectory.com/details.php?ebook=9575

| BeautifulSoup has excellent CSS support, so just use that to pick every odd item:

soup = BeautifulSoup(data, 'lxml')
for tag in soup.select('a.btn:nth-of-type(odd)'):

Demo:

>>> for tag in soup.select('a.btn:nth-of-type(odd)'):
...     print(tag['href'])
...
http://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf
http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
http://www.e-booksdirectory.com/details.php?ebook=9575
... etc

You do have a parent <div class="book"> element per group of links that you could make use of:

for tag in soup.select('.book a.btn:first-of-type'):

which would work for any number of links per book.
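If the page reliably alternates two links per book, plain list slicing on the find_all result gives the same every-other selection with no CSS at all (the URLs below are placeholders standing in for the scraped hrefs):

```python
# Placeholder hrefs standing in for [tag.get('href') for tag in tags].
hrefs = [
    "http://example.com/book1.pdf",  # first link of book 1
    "http://example.com/buy1",       # second link of book 1
    "http://example.com/book2.pdf",
    "http://example.com/buy2",
]

print(hrefs[::2])  # every other element, starting from index 0
```

Unlike the parent-div selector, though, slicing breaks as soon as one book has three links, so the `.book a.btn:first-of-type` approach is the more robust of the two.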