| questions | answers |
|---|---|
python 3 complex mathematics wrong answer So I was trying to get e^(pi*I)=-1, but python 3 gives me another, weird result:print(cmath.exp(cmath.pi * cmath.sqrt(-1)))Result:(-1+1.2246467991473532e-16j)This should in theory return -1, no? | (Partial answer to the revised question.)In theory, the result should be -1, but in practice the theory is slightly wrong.The cmath unit uses floating-point variables to do its calculations--one float value for the real part of a complex number and another float value for the imaginary part. Therefore the unit experiences the limitations of floating point math. For more on those limitations, see the canonical question Is floating point math broken?.In brief, floating point values are usually mere approximations to real values. The value of cmath.pi is not actually pi, it is just the best approximation that will fit into the floating-point unit of many computers. So you are not really calculating e^(pi*I), just an approximation of it. The returned value has the exact, correct real part, -1, which is somewhat surprising to me. The imaginary part "should be" zero, but the actual result agrees with zero to 15 decimal places, or over 15 significant digits compared to the start value. That is the usual precision for floating point.If you require exact answers, you should not be working with floating point values. Perhaps you should try an algebraic solution, such as the sympy module.(The following was my original answer, which applied to the previous version of the question, where the result was an error message.)The error message shows that you did not type what you thought you typed. Instead of cmath.exp on the outside of the expression, you typed math.exp. The math version of the exponential function expects a float value. 
You gave it a complex value (cmath.pi * cmath.sqrt(-1)) so Python thought you wanted to convert that complex value to float.When I type the expression you give at the top of your question, with the cmath properly typed, I get the result(-1+1.2246467991473532e-16j)which is very close to the desired value of -1. |
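To see why this is expected behaviour rather than a wrong answer, here is a minimal sketch using only the standard library (the sympy suggestion from the answer is noted in a comment; sympy is assumed to be installed separately):

```python
import cmath

# Evaluate Euler's identity in floating point.  The imaginary part of
# the result is a rounding artifact on the order of machine epsilon
# (about 1.22e-16), not a wrong answer: cmath.pi is only the closest
# double to the true value of pi.
z = cmath.exp(cmath.pi * 1j)

# The real part rounds to exactly -1.0 in double precision, while the
# imaginary part is within one machine epsilon of zero.
# For an exact symbolic result, a CAS can be used instead, e.g.
#   sympy.exp(sympy.I * sympy.pi)  ->  -1 exactly
```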
Django display radio button choice I want to display the value fields of my User model as selectable choice radio buttons, any idea how to do this?...template.html....currently they are displayed as input fields?! | Here is an example of gender choices displayed as radio buttons.MODELS.PY ********************************************************#GENDER CHOICES OPTIONS GENDER_CHOICES = ( ('M', 'Male'), ('F', 'Female'), ('O', 'Other'), )gender = models.CharField(max_length=3, choices=GENDER_CHOICES, default="N/A")*****************************************************************FORMS.PY ********************************************************class UserQForm(UserCreationForm): """ This is a form used to create a user. """ # Form representation of an image QUserPictureProfile = forms.ImageField(label="Profile Picture", allow_empty_file=True, required=False) password1 = forms.CharField(label="Password", max_length=255, widget=forms.PasswordInput()) password2 = forms.CharField(label="Confirmation", max_length=255, widget=forms.PasswordInput()) #GENDER CHOICES OPTIONS GENDER_CHOICES = ( ('M', 'Male'), ('F', 'Female'), ('O', 'Other'), ) gender = forms.ChoiceField(widget=forms.RadioSelect(), choices=GENDER_CHOICES) class Meta: model = QUser fields = ('QUserPictureProfile', 'gender', 'email', 'first_name', 'last_name', 'date_of_birth', 'password1', 'password2','phone_number',) def clean(self): """ Clean form fields. """ cleaned_data = super(UserQForm, self).clean() password1 = cleaned_data.get("password1") password2 = cleaned_data.get("password2") if password1 and password2 and password1 != password2: raise forms.ValidationError("Passwords do not match!") |
How to fit a set of 3D data points using a third or higher degree of polynomial surface regression? I have input data points (x,y,z), all positive, and need to fit them to a surface. More specifically, I have to create a grid from the x and y data points and evaluate the data points on this grid to obtain a surface of z-values to plot.How can I do a 3rd or higher polynomial regression to fit a surface to my data points?The degree of the polynomial regression should preferably be an input value. | Here is a non-linear 3D surface fitter with 3D scatter plot, 3D surface plot, and contour plot. This should be all of the graphs.import numpy, scipy, scipy.optimizeimport matplotlibfrom mpl_toolkits.mplot3d import Axes3Dfrom matplotlib import cm # to colormap 3D surfaces from blue to redimport matplotlib.pyplot as pltgraphWidth = 800 # units are pixelsgraphHeight = 600 # units are pixels# 3D contour plot linesnumberOfContourLines = 16def SurfacePlot(func, data, fittedParameters): f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100) matplotlib.pyplot.grid(True) axes = Axes3D(f) x_data = data[0] y_data = data[1] z_data = data[2] xModel = numpy.linspace(min(x_data), max(x_data), 20) yModel = numpy.linspace(min(y_data), max(y_data), 20) X, Y = numpy.meshgrid(xModel, yModel) Z = func(numpy.array([X, Y]), *fittedParameters) axes.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=1, antialiased=True) axes.scatter(x_data, y_data, z_data) # show data along with plotted surface axes.set_title('Surface Plot (click-drag with mouse)') # add a title for surface plot axes.set_xlabel('X Data') # X axis data label axes.set_ylabel('Y Data') # Y axis data label axes.set_zlabel('Z Data') # Z axis data label plt.show() plt.close('all') # clean up after using pyplot or else there can be memory and process problemsdef ContourPlot(func, data, fittedParameters): f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100) axes = f.add_subplot(111) 
x_data = data[0] y_data = data[1] z_data = data[2] xModel = numpy.linspace(min(x_data), max(x_data), 20) yModel = numpy.linspace(min(y_data), max(y_data), 20) X, Y = numpy.meshgrid(xModel, yModel) Z = func(numpy.array([X, Y]), *fittedParameters) axes.plot(x_data, y_data, 'o') axes.set_title('Contour Plot') # add a title for contour plot axes.set_xlabel('X Data') # X axis data label axes.set_ylabel('Y Data') # Y axis data label CS = matplotlib.pyplot.contour(X, Y, Z, numberOfContourLines, colors='k') matplotlib.pyplot.clabel(CS, inline=1, fontsize=10) # labels for contours plt.show() plt.close('all') # clean up after using pyplot or else there can be memory and process problemsdef ScatterPlot(data): f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100) matplotlib.pyplot.grid(True) axes = Axes3D(f) x_data = data[0] y_data = data[1] z_data = data[2] axes.scatter(x_data, y_data, z_data) axes.set_title('Scatter Plot (click-drag with mouse)') axes.set_xlabel('X Data') axes.set_ylabel('Y Data') axes.set_zlabel('Z Data') plt.show() plt.close('all') # clean up after using pyplot or else there can be memory and process problemsdef func(data, a, alpha, beta): t = data[0] p_p = data[1] return a * (t**alpha) * (p_p**beta)if __name__ == "__main__": xData = numpy.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]) yData = numpy.array([11.0, 12.1, 13.0, 14.1, 15.0, 16.1, 17.0, 18.1, 90.0]) zData = numpy.array([1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.0, 9.9]) data = [xData, yData, zData] initialParameters = [1.0, 1.0, 1.0] # these are the same as scipy default values in this example # here a non-linear surface fit is made with scipy's curve_fit() fittedParameters, pcov = scipy.optimize.curve_fit(func, [xData, yData], zData, p0 = initialParameters) ScatterPlot(data) SurfacePlot(func, data, fittedParameters) ContourPlot(func, data, fittedParameters) print('fitted parameters', fittedParameters) modelPredictions = func(data, *fittedParameters) absError = 
modelPredictions - zData SE = numpy.square(absError) # squared errors MSE = numpy.mean(SE) # mean squared errors RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE Rsquared = 1.0 - (numpy.var(absError) / numpy.var(zData)) print('RMSE:', RMSE) print('R-squared:', Rsquared) |
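Note that the answer above fits a fixed power-law model, while the question asked for a polynomial surface whose degree is an input value. A minimal least-squares sketch of that (function and variable names are my own):

```python
import numpy as np

def fit_poly_surface(x, y, z, degree):
    """Least-squares fit of z = sum c_ij * x**i * y**j over i + j <= degree.

    Returns the coefficient vector and a predict(x, y) callable.
    """
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
    # Design matrix: one column per monomial term.
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

    def predict(xq, yq):
        return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))

    return coeffs, predict

# Example: recover a known quadratic surface from noise-free samples.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = rng.uniform(0, 1, 50)
z = 1.0 + 2.0*x - 3.0*y + 0.5*x*y + x**2
coeffs, predict = fit_poly_surface(x, y, z, degree=2)
```

To plot the fitted surface, `predict` can be evaluated on the `meshgrid` arrays exactly as `func` is in the answer's plotting functions.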
Geod ValueError : undefined inverse geodesic I want to compute the distance between two lon / lat points using the Geod class from the pyproj library.from pyproj import Geodg = Geod(ellps='WGS84')lonlat1 = 10.65583081724002, -7.313341167341917lonlat2 = 10.655830383300781, -7.313340663909912_, _, dist = g.inv(lonlat1[0], lonlat1[1], lonlat2[0], lonlat2[1])I get the following error :ValueError Traceback (most recent call last)<ipython-input-5-8ba490aa5fcc> in <module>()----> 1 _, _, dist = g.inv(lonlat1[0], lonlat1[1], lonlat2[0], lonlat2[1])/usr/lib/python2.7/dist-packages/pyproj/__init__.pyc in inv(self, lons1, lats1, lons2, lats2, radians) 558 ind, disfloat, dislist, distuple = _copytobuffer(lats2) 559 # call geod_inv function. inputs modified in place.--> 560 _Geod._inv(self, inx, iny, inz, ind, radians=radians) 561 # if inputs were lists, tuples or floats, convert back. 562 outx = _convertback(xisfloat,xislist,xistuple,inx)_geod.pyx in _geod.Geod._inv (_geod.c:1883)()ValueError: undefined inverse geodesic (may be an antipodal point)Where does this error message come from ? | Those two points are only a few centimetres apart. It looks like pyproj / Geod doesn't cope well with points which are that close together. That's a bit strange, since simple plane geometry is more than adequate at such distances. Also, that error message is a bit suspicious, since it's suggesting that the two points are antipodal, i.e., diametrically opposite, which is clearly not the case! OTOH, maybe the antipodal point it mentions is some intermediate point that arises somehow in the calculation... Still, I'd be rather hesitant in using a library that behaves like this. Given this defect, I suspect that pyproj has other flaws. In particular, it probably uses the old Vincenty's formulae for its ellipsoid geodesic calculations, which are known to be unstable when dealing with near-antipodal points, and not particularly accurate over large distances. I recommend using the modern algorithms of C. 
F. F. Karney. Dr Karney is a major contributor to the Wikipedia articles on geodesics, in particular Geodesics on an ellipsoid, and his geographiclib is available on PyPi, so you can easily install it using pip. See his SourceForge site for further information, and geographiclib binding in other languages.FWIW, here's a short demo of using geographiclib to compute the distance in your question. from geographiclib.geodesic import GeodesicGeo = Geodesic.WGS84lat1, lon1 = -7.313341167341917, 10.65583081724002lat2, lon2 = -7.313340663909912, 10.655830383300781d = Geo.Inverse(lat1, lon1, lat2, lon2)print(d['s12'])output0.07345528623159624That figure is in metres, so those two points are a little over 73mm apart.If you'd like to see geographiclib being used to solve a complex geodesic problem, please see this math.stackexchange answer I wrote last year, with Python 2 / 3 source code on gist.Hopefully, this is no longer an issue, since pyproj now uses code from geographiclib. |
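As the answer notes, plane geometry is more than adequate at centimetre scales. A standard-library sketch of that fallback (an equirectangular approximation on a mean-radius sphere; it is a rough approximation only, and geographiclib remains the right tool for real geodesic work):

```python
import math

def short_distance_m(lat1, lon1, lat2, lon2):
    """Planar (equirectangular) approximation of distance in metres.

    Reasonable only for very short separations; use geographiclib's
    Geodesic.Inverse for anything serious.
    """
    R = 6371008.8  # mean Earth radius, metres (assumed constant)
    mid = math.radians((lat1 + lat2) / 2.0)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    return R * math.hypot(dlat, dlon * math.cos(mid))

# The two points from the question, a few centimetres apart.
d = short_distance_m(-7.313341167341917, 10.65583081724002,
                     -7.313340663909912, 10.655830383300781)
```

For these points the approximation lands within a couple of millimetres of the geographiclib figure quoted above.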
consumer not acknowledging message I have a channel.basic_ack however when I check in the rabbitmq admin ui, it stays unacked rather than acked. Here is my code to ack the messagedef handle_payload(self, channel, method, properties, body): #self.taskhub.server.invoke('SendTaskNotification', body['userId'], body['taskId']) discovery = body channel.basic_ack(method.delivery_tag) print "acked " + str(method.delivery_tag) self.medium.server.invoke('DetectDevice', discovery)To my understanding it should ack the message as soon as channel.basic_ack is called. However this is not happening. | The Pika documentation http://pika.readthedocs.io/en/0.10.0/modules/adapters/blocking.html shows that delivery_tag is a keyword parameter with a default value, so it should be passed by name. Change the ack call to the line below:channel.basic_ack(delivery_tag=method.delivery_tag)
Caffe: Extremely high loss while learning simple linear functions I'm trying to train a neural net to learn the function y = x1 + x2 + x3. The objective is to play around with Caffe in order to learn and understand it better. The data required are synthetically generated in python and written to memory as an lmdb database file.Code for data generation:import numpy as npimport lmdbimport caffeNtrain = 100Ntest = 20K = 3H = 1W = 1Xtrain = np.random.randint(0,1000, size = (Ntrain,K,H,W))Xtest = np.random.randint(0,1000, size = (Ntest,K,H,W))ytrain = Xtrain[:,0,0,0] + Xtrain[:,1,0,0] + Xtrain[:,2,0,0]ytest = Xtest[:,0,0,0] + Xtest[:,1,0,0] + Xtest[:,2,0,0]env = lmdb.open('expt/expt_train')for i in range(Ntrain): datum = caffe.proto.caffe_pb2.Datum() datum.channels = Xtrain.shape[1] datum.height = Xtrain.shape[2] datum.width = Xtrain.shape[3] datum.data = Xtrain[i].tobytes() datum.label = int(ytrain[i]) str_id = '{:08}'.format(i) with env.begin(write=True) as txn: txn.put(str_id.encode('ascii'), datum.SerializeToString())env = lmdb.open('expt/expt_test')for i in range(Ntest): datum = caffe.proto.caffe_pb2.Datum() datum.channels = Xtest.shape[1] datum.height = Xtest.shape[2] datum.width = Xtest.shape[3] datum.data = Xtest[i].tobytes() datum.label = int(ytest[i]) str_id = '{:08}'.format(i) with env.begin(write=True) as txn: txn.put(str_id.encode('ascii'), datum.SerializeToString())Solver.prototext file:net: "expt/expt.prototxt"display: 1max_iter: 200test_iter: 20test_interval: 100base_lr: 0.000001momentum: 0.9# weight_decay: 0.0005lr_policy: "inv"# gamma: 0.5# stepsize: 10# power: 0.75snapshot_prefix: "expt/expt"snapshot_diff: truesolver_mode: CPUsolver_type: SGDdebug_info: trueCaffe model:name: "expt"layer { name: "Expt_Data_Train" type: "Data" top: "data" top: "label" include { phase: TRAIN } data_param { source: "expt/expt_train" backend: LMDB batch_size: 1 }}layer { name: "Expt_Data_Validate" type: "Data" top: "data" top: "label" include { phase: TEST } data_param { 
source: "expt/expt_test" backend: LMDB batch_size: 1 }}layer { name: "IP" type: "InnerProduct" bottom: "data" top: "ip" inner_product_param { num_output: 1 weight_filler { type: 'constant' } bias_filler { type: 'constant' } }}layer { name: "Loss" type: "EuclideanLoss" bottom: "ip" bottom: "label" top: "loss"}The loss on the test data that I'm getting is 233,655. This is shocking as the loss is three orders of magnitude greater than the numbers in the training and test data sets. Also, the function to be learned is a simple linear function. I can't seem to figure out what is wrong in the code. Any suggestions/inputs are much appreciated. | The loss is so large in this case because Caffe only accepts data (i.e. datum.data) in the uint8 format and labels (datum.label) in int32 format. However, for the labels, numpy.int64 format also seems to be working. I think datum.data is accepted only in uint8 format because Caffe was primarily developed for Computer Vision tasks where inputs are images, which have RGB values in [0,255] range. uint8 can capture this using the least amount of memory. I made the following changes to the data generation code (note that the label arrays must be cast elementwise with astype, since Python's int() does not accept a whole array):Xtrain = np.uint8(np.random.randint(0,256, size = (Ntrain,K,H,W)))Xtest = np.uint8(np.random.randint(0,256, size = (Ntest,K,H,W)))ytrain = Xtrain[:,0,0,0].astype(int) + Xtrain[:,1,0,0].astype(int) + Xtrain[:,2,0,0].astype(int)ytest = Xtest[:,0,0,0].astype(int) + Xtest[:,1,0,0].astype(int) + Xtest[:,2,0,0].astype(int)After playing around with the net parameters (learning rate, number of iterations etc.) I'm getting an error of the order of 10^(-6) which I think is pretty good!
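The cast before summing matters for a second reason: uint8 arithmetic wraps modulo 256, so summing the three channels without widening the dtype would silently corrupt the labels. A small illustration:

```python
import numpy as np

# uint8 arithmetic wraps modulo 256.
a = np.uint8([200, 10])
b = np.uint8([100, 10])

wrapped = a + b                       # dtype stays uint8: 200 + 100 -> 44
correct = a.astype(int) + b.astype(int)  # widened first: 200 + 100 -> 300
```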
Discrepancy between my pandas.cut output category and the ones shown in pandas documentation I'm learning pandas.cut to put my data into different bins. I'm running the example code from the pandas documentation. But somehow the category shown in the outputs I generated are different. The first example:Tocut = np.array([1, 7, 5, 6, 4, 9])pd.cut(Tocut, 3)The category output I get is "Categories (3, object): [(0.992, 3.667] < (3.667, 6.333] < (6.333, 9]]" while the documentation shows "Categories (3, interval[float64]):..."The second example:s = pd.Series(np.array([2, 4, 6, 8, 10]), index=['a', 'b', 'c', 'd', 'e'])pd.cut(s, 6)The category output I get is "Categories (6, object):" while the documentation still shows float64. I am just wondering what contributes to this. And is anything in Python not an object?Thanks. | I think this might be a bug, but it has been fixed now. On 0.23.4, it returns float64 as expected.pd.cut(s, 6)a (1.992, 3.333]b (3.333, 4.667]c (4.667, 6.0]d (7.333, 8.667]e (8.667, 10.0]dtype: categoryCategories (6, interval[float64]): [(1.992, 3.333] < (3.333, 4.667] < (4.667, 6.0] < (6.0, 7.333] < (7.333, 8.667] < (8.667, 10.0]]Guessing it was a bug that had to do with the non-numeric index in the second example contributing to that in some way. |
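A quick way to confirm the fix on your own installation (the exact dtype string varies slightly across pandas versions, e.g. `interval[float64]` vs `interval[float64, right]`, so only the prefix is checked here):

```python
import numpy as np
import pandas as pd

# On pandas >= 0.20 the bins are an IntervalIndex, so the categories
# report an interval dtype rather than plain object.
binned = pd.cut(np.array([1, 7, 5, 6, 4, 9]), 3)
dtype_name = str(binned.categories.dtype)
```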
Is it possible to create a query command that takes in a list of variables in python-mysql I am trying to do a multiquery which utilizes executemany in MySQLDb library. After searching around, I found that I'll have to create a command that uses INSERT INTO along with ON DUPLICATE KEY instead of UPDATE in order to use executemanyAll is good so far, but then I run into a problem which I can't set the SET part efficiently. My table has about 20 columns (whether you want to criticize the fatness of the table is up to you. It works for me so far) and I want to form the command string efficiently if possible.Right now I haveupdate_query = """ INSERT INTO `my_table` ({all_columns}) VALUES({vals}) ON DUPLICATE KEY SET <should-have-each-individual-column-set-to-value-here> """.format(all_columns=all_columns, vals=vals)Where all_columns covers all the columns, and vals cover bunch of %s as I'm going to use executemany later.However I have no idea how to form the SET part of string. I thought about using comma-split to separate them into elements in a list, but I'm not sure if I can iterate them.Overall, the goal of this is to only call the db once for update, and that's the only way I can think of right now. If you happen to have a better idea, please let me know as well.EDIT: adding more infoall_columns is something like 'id, id2, num1, num2'vals right now is set to be '%s, %s, %s, %s'and of course there are more columns than just 4 | Assuming that you have a list of tuples for the set piece of your command:listUpdate = [('f1', 'i'), ('f2', '2')]setCommand = ', '.join([' %s = %s' % x for x in listUpdate])all_columns = 'id, id2, num1, num2'vals = '%s, %s, %s, %s'update_query = """ INSERT INTO `my_table` ({all_columns}) VALUES({vals}) ON DUPLICATE KEY SET {set} """.format(all_columns=all_columns, vals=vals, set=setCommand)print(update_query) |
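One detail worth flagging: MySQL's actual clause is ON DUPLICATE KEY UPDATE, not ON DUPLICATE KEY SET, and using VALUES(col) on the right-hand side lets the same parameter tuple serve both the insert and the update, which is what executemany needs. A sketch of building that SET part from the column list (table and column names are the ones from the question):

```python
columns = ['id', 'id2', 'num1', 'num2']

placeholders = ', '.join(['%s'] * len(columns))
# VALUES(col) refers to the value that would have been inserted for
# that column, so no extra parameters are required for the update part.
assignments = ', '.join('{0} = VALUES({0})'.format(c) for c in columns)

update_query = (
    "INSERT INTO `my_table` ({cols}) VALUES ({vals}) "
    "ON DUPLICATE KEY UPDATE {assign}"
).format(cols=', '.join(columns), vals=placeholders, assign=assignments)

# update_query can now be passed to cursor.executemany(update_query, rows)
# where rows is a list of 4-tuples.
```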
how to skip lines in pandas dataframe at the end of the xls I have a dataframe: Energy Supply Energy Supply per Capita % Renewable Country Afghanistan 3.210000e+08 10 78.669280 Albania 1.020000e+08 35 100.000000 British Virgin Islands 2.000000e+06 85 0.000000 ... Aruba 1.200000e+07 120 14.870690 ... Excludes the overseas territories. NaN NaN NaN Data exclude Hong Kong and Macao Special Admini... NaN NaN NaN Data on kerosene-type jet fuel include aviation... NaN NaN NaN For confidentiality reasons, data on coal and c... NaN NaN NaN Data exclude Greenland and the Danish Faroes. NaN NaN NaNI had used df = pd.read_excel(filelink, skiprows=16) to cut unwanted information at the very beginning of the file but how can I get rid of the "noize"-information at the end of df?I had tried to pass a list to skiprows but it messed the results up. | It seems you need parameter skip_footer = 5 in read_excel:skip_footer : int, default 0Rows at the end to skip (0-indexed)Sample:df = pd.read_excel('myfile.xlsx', skip_footer = 5)print (df) Country Energy Supply Energy Supply per Capita \0 Afghanistan 321000000.0 10 1 Albania 102000000.0 35 2 British Virgin Islands 2000000.0 85 3 Aruba 12000000.0 120 % Renewable 0 78.66928 1 100.00000 2 0.00000 3 14.87069 Another solution is remove all rows where all NaN in some columns with dropna:df = pd.read_excel('myfile.xlsx')cols = ['Energy Supply','Energy Supply per Capita','% Renewable']df = df.dropna(subset=cols, how='all')print (df) Country Energy Supply Energy Supply per Capita \0 Afghanistan 321000000.0 10.0 1 Albania 102000000.0 35.0 2 British Virgin Islands 2000000.0 85.0 3 Aruba 12000000.0 120.0 % Renewable 0 78.66928 1 100.00000 2 0.00000 3 14.87069 |
How to concatenate pandas DataFrame with built-in logic? I have two pandas data frame and I would like to produce the output shown in the expected data frame.import pandas as pddf1 = pd.DataFrame({'a':['aaa', 'bbb', 'ccc', 'ddd'], 'b':['eee', 'fff', 'ggg', 'hhh']})df2 = pd.DataFrame({'a':['aaa', 'bbb', 'ccc', 'ddd'], 'b':['eee', 'fff', 'ggg', 'hhh'], 'update': ['', 'X', '', 'Y']})expected = pd.DataFrame({'a': ['aaa', 'bbb', 'ccc', 'ddd'], 'b': ['eee', 'X', 'ggg', 'Y']})I tried to apply some concatenation logic but this is not producing the expected output.df1.set_index('b')df2.set_index('update')out = pd.concat([df1[~df1.index.isin(df2.index)], df2])print(out) a b update0 aaa eee1 bbb fff X2 ccc ggg3 ddd hhh YFrom this output I can produce the expected output but I was wondering if this logic can be built directly inside the concat call?def fx(row): if row['update'] is not '': row['b'] = row['update'] return rowresult = out.apply(lambda x : fx(x),axis=1)result.drop('update', axis=1, inplace=True)print(result) a b0 aaa eee1 bbb X2 ccc ggg3 ddd Y | Use builtin update by replacing '' with nan i.e df1['b'].update(df2['update'].replace('',np.nan)) a b0 aaa eee1 bbb X2 ccc ggg3 ddd YYou can also use np.where i.e out = df1.assign(b=np.where(df2['update'].eq(''), df2['b'], df2['update'])) |
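A runnable version of the accepted approach, using the question's own data, to show the update in one step without the apply/drop round-trip:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': ['aaa', 'bbb', 'ccc', 'ddd'],
                    'b': ['eee', 'fff', 'ggg', 'hhh']})
df2 = pd.DataFrame({'a': ['aaa', 'bbb', 'ccc', 'ddd'],
                    'b': ['eee', 'fff', 'ggg', 'hhh'],
                    'update': ['', 'X', '', 'Y']})

# Series.update skips NaN in the other series, so blank markers are
# first mapped to NaN and every non-blank value overwrites df1['b'].
df1['b'].update(df2['update'].replace('', np.nan))
```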
Use tf.TextLineReader to read to a np.array in TensorFlow I need to read a file in my train module into a np.array (i want to use the array as label_keys in a DNNClassifier).I tried tf.read_file and tf.TextLineReader() but i can´t get them to just output the rows to a np.array.Is it possible?(why not just read a file with open? I´m training in GCS and want to get the file from storage :) | To access a file from GCS using TensorFlow, you can use the Python tf.gfile.GFile API, which acts like a regular Python file object, but allows you to use TensorFlow's filesystem connectors:with tf.gfile.GFile("gs://...") as f: file_contents = f.read() |
Blank python path When I run echo $PYTHONPATH in bash, I receive a blank line then the prompt again. My .bash_profile is this:# Setting PATH for Python 2.7# The orginal version is saved in .bash_profile.pysavePATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:${PATH}"export PATHI'm running OsX 10.10.5What does this blank line mean? | Blank line means the variable PYTHONPATH is not set with any value. Note that PATH and PYTHONPATH are 2 different variables.PATH has a list of directories to find executables when running in bash whereas PYTHONPATH has a list of directories for the python interpreter to search for python modules (similar to classes for CLASSPATH in Java).Hence you must use: PYTHONPATH="/Library/Frameworks/Python.framework/Versions/2.7/bin"export PYTHONPATH |
Returning found char and index in list - Python I have the following code:I am trying to compare each char in the userInputList with the Letters array, if found in the letters array i would like to return it along with its index number; so if a user was to type hello: it would check if 'h' exists in Letters which it does, return the value and also return the index of it which is 7.At the moment my if function checks against the index and not the actual character so it will always return true. Any help would be appreciated.Letters = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o', 'p','q','r','s','t','u','v','w','x','y','z']userInput = input("Please enter what you would like to deycrpt:")userInputList = list(userInput)for i in range(0,len(userInputList)): print(userInputList[i]) if userInputList[i] in Letters: print("true") i+=1Thanks. | This should work:# Python 2 users add the following line:from __future__ import print_functionfor letter in userInputList: print(letter, end=': ') try: print('found at index', Letters.index(letter)) except ValueError: print('not found')You can iterate directly over userInputList without using i.The method index returns the index of the first found entry and raises an IndexError if the value is not in the list. So catch this error and print that the letter is not found. Finally, print(letter, end=': ') suppresses the newline at the end of the print an puts : there, which make the letter and the message from the next print appear on the same line. |
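For the OP's original goal of getting both the character and its index, a dict built once gives O(1) lookups and avoids rescanning the list per character (the helper name is my own):

```python
import string

letters = list(string.ascii_lowercase)
# Map each letter to its position once, instead of calling
# letters.index(ch) (a linear scan) for every input character.
index_of = {ch: i for i, ch in enumerate(letters)}

def find_letters(text):
    """Return (char, index) pairs for each character found in letters."""
    return [(ch, index_of[ch]) for ch in text if ch in index_of]

found = find_letters('hello')  # 'h' is at index 7, as in the question
```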
multiprocessing pool.map() got "TypeError: list indices must be integers, not str" I do a multiprocessing with python's multiprocessing.Pool module, but got TypeError: list indices must be integers, not str Error:Here is my code: def getData(qid): r = requests.get("http://api.xxx.com/api?qid=" + qid) if r.status == 200: DBC.save(json.loads(r.text)) def getAnotherData(qid): r = requests.get("http://api.xxxx.com/anotherapi?qid=" + qid) if r.status == 200: DBC.save(json.loads(r.text)) def getAllData(qid): print qid getData(str(qid)) getAnotherData(str(qid)) if __name__ == "__main__": pool = Pool(processes=200) pool.map(getAllData, range(10000, 700000))After running the code for some time (not instantly), a Exception will be thrown out pool.map(getAllData, range(10000, 700000)) File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map return self.map_async(func, iterable, chunksize).get() File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get raise self._valueTypeError: list indices must be integers, not strWhat could be wrong? Is it a bug of the Pool module? | When a worker task raises an exception, Pool catches it, sends it back to the parent process, and reraises the exception, but this doesn't preserve the original traceback (so you just see where it was reraised in the parent process, which isn't very helpful). At a guess, something in DBC.save expects a value loaded from the JSON to be an int, and it's actually a str.If you want to see the real traceback, import traceback at top level, and change the top level of your worker function to:def getAllData(qid): try: print qid getData(str(qid)) getAnotherData(str(qid)) except: traceback.print_exc() raiseso you can see the real traceback in the worker, not just the neutered, mostly useless traceback in the parent. |
How to give a condition along with cv2.waitKey() in python? I need to capture video and stop video after 10 seconds.But when I give a condition along with cv2.waitKey() the video stops instantly.When I separate the conditions the second condition (elapsed==10) doesn't work.My sample code isimport cv2import timecap = cv2.VideoCapture(0)start_time=time.time()while(True): # Capture frame-by-frame ret, frame = cap.read() # Our operations on the frame come here gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) e_time = time.time() elapsed = e_time - start_time # Display the resulting frame cv2.imshow('frame',gray) if cv2.waitKey(1) or elapsed==10: breakcap.release()cv2.destroyAllWindows()How to stop video after 10 seconds? | Use elapsed >= 10 instead of elapsed == 10. Your loop is not guaranteed to sample the clock at EXACTLY 10 elapsed seconds; if 10.1 or 10.000000000001 seconds have elapsed, the equality is never true and the program never stops, because the condition is never met. The instant stop has a separate cause: cv2.waitKey(1) returns -1 when no key is pressed within the timeout, and -1 is truthy in Python, so `if cv2.waitKey(1)` succeeds on the very first frame. Compare the return value against a key code instead, e.g. if cv2.waitKey(1) & 0xFF == ord('q') or elapsed >= 10: break
Hide Lines from Matplotlib Plot without redrawing them? I have a new issue with matplotlib and 'hiding' lineplots.I have a wxFrame with a matplotlib plot and an cursor to give values. Works perfectly well.In the plot are up to 13 lines and I want to show and hide them using checkboxes, this is working fine, too.This is my code to 'redraw'def Draw(self, visibility = None): self._axes.lines = [] # if no visibility data, draw all if visibility is None: visibility = [] for i in range(len(self._data)): visibility.append(True) else: # check if length match if len(self._data) != len(visibility): raise AttributeError('Visibility list length does not match plot count') # draw the lines you want for i in range(len(self._data)): if visibility[i]: plotName = 'FFD ' + str(i + 1) self._axes.plot(self.timebase, self._data[i], picker=5, label = plotName) #if there are any lines, draw a legend if len(self._axes.lines): self._axes.legend(prop={'size':9}) #update the canvas self._canvas.draw()But this results in the plot colors changing on every change. How can I have the colors stay? Any good ideas are appreciated (bad ones are also appreciated :) )! | Here's a bad one:Try setting alpha=0 to effectively hide a line doing something along the lines of self._axes.lines[mylineindex].set_alpha(0.0). In this way you should only have to redraw that one line. |
PyQt4 Gui that prints loop I'm trying to learn PyQt4 and has made the following Gui for this purpose - it has no other use.The code works almost as expected - the only thing that doesn't is the 'else' clause.import sysimport timefrom PyQt4.QtCore import *from PyQt4.QtGui import *class Form(QDialog): def __init__ (self, parent=None): super(Form, self).__init__(parent) self.startButton = QPushButton('Start') self.stopButton = QPushButton('Stop') self.browser = QTextBrowser() self.myLabel = QLabel() layout = QVBoxLayout() layout.addWidget(self.startButton) layout.addWidget(self.stopButton) layout.addWidget(self.browser) layout.addWidget(self.myLabel) self.setLayout(layout) self.startButton.setFocus() self.startButton.clicked.connect(self.guiLoop) self.stopButton.clicked.connect(self.guiLoop) self.setWindowTitle('Loop Gui') def guiLoop(self): state = False text = self.sender() self.myLabel.setText(text.text()) time.sleep(1) if text.text() == 'Start': state = True else: state = False i = 0 while state: time.sleep(.1) self.browser.append(str(i)) QApplication.processEvents() i += 1 else: self.browser.append('Stop loop') time.sleep(3) sys.exit()app = QApplication(sys.argv)form = Form()form.show()app.exec_()...I'd expect that the program would print 'Stop loop' in the browser widget before exiting, but it doesn't else: self.browser.append('Stop loop') time.sleep(3) sys.exit()I now have 3 questions:Why doesn't it print 'Stop loop'If you imagine that the loop was instead a data stream from a serial connection, how could I print only every 10th value. In the loop that would be 1, 11, 21 ... and so onGeneral comments on my codeThx in advance | Add the following line in your else partQApplication.processEvents()likewhile state: time.sleep(.1) if i % 10 == 1: self.browser.append(str(i)) QApplication.processEvents() i += 1else: self.browser.append('Stop loop') QApplication.processEvents() time.sleep(3) sys.exit()Output is like: 1 11 21 31 etc.. and Stop Loop |
Searching two things I am using re and would like to search a string between two strings. My problem is the string that I would like to search may end with either newline(\n) or another string. So what I want to do is if it is newline or another string it should give me back the string. The reason why I want to do that is some of my documents are created wrong in a way that it does not have new line, so I have to get the text until newline and then check if it has the corresponding string.I have tried this:recipients = re.search('Recipients:(.*)\n', body)reciBody = re.search('(.*)Notes', recipients.group(1).encode("utf-8"))Later on I am trying to split this by using:recipientsList = reciBody.group(1).encode("utf-8").split(',')The problem is I am getting this error if there is no corresponding string: recipientsList = reciBody.group(1).encode("utf-8").split(',')AttributeError: 'NoneType' object has no attribute 'group'What other ways can I use? Or how can I handle this error? | I'm assuming nothing needs to be done if the group isn't found. Simplest is to just skip the error.try: recipientsList = reciBody.group(1).encode("utf-8").split(',')except AttributeError: pass # nothing needs to be doneInstead of pass you may need to set recipientsList to something else
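An alternative to the try/except is a single pattern whose terminator is an alternation, so the lazy group stops at either 'Notes' or the end of the line, whichever comes first. A sketch with made-up sample bodies (the helper name and addresses are my own):

```python
import re

# The lazy group stops at 'Notes', a newline, or end of string --
# so malformed documents that lack the newline still parse.
pattern = re.compile(r'Recipients:(.*?)(?:Notes|\n|$)')

def recipients_of(text):
    m = pattern.search(text)
    return [p.strip() for p in m.group(1).split(',')] if m else []

well_formed = "Recipients: alice@x.com, bob@y.com Notes: internal\n"
malformed = "Recipients: carol@z.com\nNotes: internal\n"
```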
Pandas: equivalent to Excel lookup in Python I have a data frame such that:
A   B
v1  2
v2  4
v3  6
v4  3
v5  5
v6  3
Now I want to look up the column B value for value = v3 in column A. It should give me the output 6. How do I do that in Python? | You need to create a Series with set_index and then select by loc:
s = df.set_index('A')['B']
print(s)
A
v1    2
v2    4
v3    6
v4    3
v5    5
v6    3
Name: B, dtype: int64
print(s.loc['v3'])
# alternative:
# print(s['v3'])
6
Start AVD from jenkins I am try to launch an android emulator from jenkins.I have written a batch file as follows: cd E:\android-sdk\toolsemulator.exe -avd "AVD" -wipe-dataI execute this batch file from jenkins. But it does not launch the emulator.I have also tried launching it from python as follows:bash = "E:\\android-sdk\\tools\\emulator"print "executing: " + bashf_handle = open('test_output_launch.txt','w+')process = subprocess.Popen([bash, '-avd', 'AVD'])But the latter gives an error 'PANIC: Could not open: AVD'.Where as when I run the batch file normally without jenkins, everything works perfectly.I need to launch the AVD, install apk on it, and run some automated tests via jenkins. Please help!! | I think, it should be permission issue. Try running the jenkins client as admin.For Python, change your subprocess call to process = subprocess.Popen(['emulator.exe', '-avd', 'AVD'], cwd=bash) |
Failing to format regex correctly to locate and then parse out paragraph from text document between regex1 & regex2 using python I am trying to scrape the content between the line starting with 2) and the line starting with 3)I have managed to iterate through the document, but I'm drawing a blank as to how to get the script to begin storing the document's contents between line 2) and line 3).Eventually, I want to be able to harvest the variable device names "SWTEST6" & "SWTEST7" from the example, but it's not there yet.PYTHON SCRIPT SNIPPETdef eat_txt(filename): line2 = re.search('2\)*Section 2 .*?.') line3 = re.search('3\)*Section 3 .*?:') txt = Document(filename) pCount = 0 for blob in doc.paragraphs: if line2 in blob[pCount].text: for line in doc.paragraphs: print(doc.paragraphs) pCount=+1 if line3 in doc.paragraphs: breakSOURCE DOCUMENT 2) Section 2 - Here be the SWTEST6 & SWTEST7 reboot sequence. Do much stuff. valid command ! valid command ! valid command ! ! *** Comments go here *** ! valid command ! ! *** Comments go here *** ! ! *** Comments go here *** 3) Section 3 - Do not forget the SWTEST6 & SWTEST7 Restore Sequence and stuff: | I managed to figure out this question and wanted to share the results. It isn't the most elegant, but it works like a charm.The code iterates thru the document, locates the first search string, scrapes 2 hostnames (and converts them to ip addresses) from the first search string, then begins scraping lines of the document (while ignoring lines beginning with !), until it finishes upon locating the second search string. 
def eat_txt(filename):
    line2 = 'INITIAL SEARCH STRING GOES HERE'
    line3 = 'TERMINATING SEARCH STRING GOES HERE'
    doc = Document(filename)
    paragraphCount = 0
    hostnames = []
    ipaddrs = []
    commands = []
    reading_lines = False
    for paragraph in doc.paragraphs:
        if reading_lines:
            if re.search(line3, paragraph.text):
                reading_lines = False
            else:
                if not str.startswith(paragraph.text, '!') and paragraph.text.strip():
                    commands.append(paragraph.text)
        elif re.search(line2, paragraph.text):
            reading_lines = True
            results2 = re.search(line2, paragraph.text)
            words2 = results2.group(0).split(' ')
            hostnames.append(words2[6])
            ipaddrs.append(socket.gethostbyname(words2[6]))
            hostnames.append(words2[8])
            ipaddrs.append(socket.gethostbyname(words2[8]))
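The start-marker/stop-marker scan with the reading_lines flag also works on any list of plain-text lines, which makes it easy to test without python-docx. A sketch with invented section headers and commands:

```python
import re

def scrape_between(lines, start_pat, stop_pat):
    """Collect lines after a start marker until a stop marker,
    skipping comment lines that begin with '!' and blank lines."""
    commands = []
    reading = False
    for line in lines:
        if reading:
            if re.search(stop_pat, line):
                break
            if not line.startswith('!') and line.strip():
                commands.append(line.strip())
        elif re.search(start_pat, line):
            reading = True
    return commands

doc = [
    "1) Section 1 - intro",
    "2) Section 2 - reboot sequence",
    "valid command 1",
    "! comment",
    "valid command 2",
    "3) Section 3 - restore sequence",
]
print(scrape_between(doc, r'^2\) Section 2', r'^3\) Section 3'))
```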
Can't access memory map from child process (Python 3.8) I'm writing a program that uses Python's multiprocessing module to speed up CPU-bound tasks, and I want the child processes I create to access a memory map that initially gets created in the parent process without duplicating it. According to the multiprocessing documentation, child processes no longer inherit file descriptors by default as of Python 3.4, so I've tried using os.set_inheritable() to override that behavior.Here's a quick mockup I made to demonstrate the issue:DATA = r"data.csv"from sys import platformWINDOWS = platform.startswith("win")import osfrom multiprocessing import Processimport mmapfrom typing import Optionaldef child(fd: int, shm_tag: Optional[str]) -> None: if shm_tag: # i.e. if using Windows mm = mmap.mmap(fd, 0, shm_tag, mmap.ACCESS_READ) else: mm = mmap.mmap(fd, 0, mmap.MAP_SHARED, mmap.PROT_READ) mm.close()if __name__ == "__main__": # Some code differs on Windows WINDOWS = platform.startswith("win") # Open file fd = os.open(DATA, os.O_RDONLY | os.O_BINARY if WINDOWS else os.O_RDONLY) os.set_inheritable(fd, True) # Create memory map from file descriptor if WINDOWS: shm_tag = "shm_mmap" mm = mmap.mmap(fd, 0, shm_tag, mmap.ACCESS_READ) else: shm_tag = None mm = mmap.mmap(fd, 0, mmap.MAP_SHARED, mmap.PROT_READ) # Run child process (p := Process(target = child, args = (fd, shm_tag), daemon = True)).start() p.join() p.close() mm.close() os.close(fd)This hasn't been working at all—or at least not on Windows*, where I'm primarily testing. 
I'm receiving an error in the child process that heavily implies that the file descriptor wasn't actually inherited:Process Process-1:Traceback (most recent call last): File "C:\Program Files\Python38\lib\multiprocessing\process.py", line 315, in _bootstrap self.run() File "C:\Program Files\Python38\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs) File "C:\Users\[N.D.]\Documents\test.py", line 12, in child mm = mmap.mmap(fd, 0, shm_tag, mmap.ACCESS_READ)ValueError: cannot mmap an empty fileFurthermore, I'm getting the exact same error regardless of whether I pass True or False to os.set_inheritable(), as if it doesn't actually make a difference after all.What's going on? Am I using the mmap module incorrectly?* Possibly relevant: Windows uses spawn() to create new processes rather than fork(), and throws an exception if you try to memory map an empty file. | Thanks to Eryk Sun's comments, I was able to make a working implementation:DATA = r"data.csv"from sys import platformif platform.startswith("win"): WINDOWS = True from msvcrt import get_osfhandleelse: WINDOWS = Falseimport osfrom multiprocessing import Processimport mmapfrom typing import Optionaldef child(fd_or_size: int, shm_tag: Optional[str]) -> None: if WINDOWS: mm = mmap.mmap(-1, fd_or_size, shm_tag, mmap.ACCESS_READ) else: mm = mmap.mmap(fd_or_size, 0, mmap.MAP_SHARED, mmap.PROT_READ) mm.close()if __name__ == "__main__": # Open file fd = os.open(DATA, os.O_RDONLY | os.O_BINARY if WINDOWS else os.O_RDONLY) # Create memory map from file descriptor if WINDOWS: # Obtain underlying file handle from file descriptor os.set_handle_inheritable(get_osfhandle(fd), True) shm_tag = f"test_mmap_{os.getpid()}" mm = mmap.mmap(fd, 0, shm_tag, mmap.ACCESS_READ) else: os.set_inheritable(fd, True) mm = mmap.mmap(fd, 0, mmap.MAP_SHARED, mmap.PROT_READ) # Run child process (p := Process(target = child, args = (mm.size() if WINDOWS else fd, shm_tag), daemon = True)).start() p.join() 
p.close() mm.close() os.close(fd)Important changes (all on Windows):Use get_osfhandle() to obtain the underlying file handle from the file descriptorGive the memory map a tagname that's process-specificIn the child process, attach to the memory map by giving the already-known size of the mapping. |
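For reference, the single-process part of the fd-to-mapping path looks like this on POSIX (the multiprocessing, inheritance, and Windows tagname details from the answer are left out; the non-empty write matters because mmap refuses to map an empty file on Windows):

```python
import mmap
import os
import tempfile

# Write a small non-empty file to map
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mmap")
os.close(fd)

# Reopen read-only and map the whole file (length 0 = whole file)
fd = os.open(path, os.O_RDONLY)
with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as mm:
    data = mm[:]
print(data)

os.close(fd)
os.remove(path)
```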
Transferring user entered data from an Excel worksheet to a Python Script I'm writing a script and I would like to be able to interact with MS Excel. Once I run the script, I would like to open up an excel worksheet, have the user enter some data, do some basic calcs in excel, and then return the calculated data and some of the user entered data to the script. I'm doing this because I'd like to allow the user to enter all the data at once and not have to answer prompt after prompt and also because I'm new to Python and not even close to being able to write a GUI.Here's an example of what I'm trying to do:Table 1 (range1 in an Excel worksheet): User enters Fraction and xa:aa data and excel calculates the blend. Blend info on xa:aa is returned to my python script is used in my script for further calculations. The data entry table is actually longer (more rows of data entry) but just showing enough of a subset to give a feel for what I'm trying to do: Stream 1 2 3 4 5 BlendFraction 10% 60% 20% 10 100%xa 100 150 175 57 135.0yg 30.7 22 18 12.2 25.6aa 210 425 375 307 352.2Table 2 (range2 in the same Excel worksheet)User enters all data and everything is returned to script for further calculations: Min Max Incr temp 45 60 5 press 7.2 7.8 0.2 cf 1 5 1 Once the data is entered into excel and transferred to the script, I complete the script. How do I go about doing this ? Excel seems like the easiest way to set up table entry data but if there is another way, please let me know. If it is Excel, how do I go about doing this ? | Here's a potential approach using pandas. If the input file doesn't exist, it writes a dummy file (modify this to suit), then opens the excel file, and reads back into a dataframe once the user has closed. 
Replace excel_path with a path to your Excel install (I'm using LibreOffice to here).import osimport subprocessimport pandas as pdinput_file = 'input.xlsx'excel_path = 'soffice'############## setup stuff#############if not os.path.isfile(input_file): input_template = pd.DataFrame(columns=['1','2','3','4']) # any additional code to set up space for user to input values input_template.to_excel(input_file)else:# call excel and open the input filesubprocess.call([excel_path, input_file])# read modified input into DataFrameexcel_input = pd.read_excel(input_file)################# body of script################ |
Peewee SQL query join where none of many match The following SQL finds all posts which don't have any associated tags named 'BadTag'.
select * from post t1
where not exists(
    select 1 from tag t2
    where t1.id == t2.post_id and t2.name == 'BadTag'
);
How can I write this functionality in Peewee ORM? If I write something along the lines of
Post.select().where(
    ~Tag.select()
    .where(Post.id == Tag.post & Tag.name == 'BadTag')
    .exists())
it gets compiled to
SELECT "t1"."id", ... FROM "post" AS t1 WHERE ? [-1]
Something like
Post.select().join(Tag).where(Tag.name != 'BadTag')
doesn't work since a Post can have many Tags. I'm new to SQL/Peewee, so if this is a bad way to go about things I'd welcome pointers. | A plain join-and-filter is inefficient and wrong for posts with several tags. Here is how to do a NOT EXISTS with a subquery:
(Post
 .select()
 .where(~fn.EXISTS(
     Tag.select().where(
         (Tag.post == Post.id) &
         (Tag.name == 'BadTag')))))
You can also do a left join, putting the tag-name condition in the join's ON clause so that untagged posts survive the filter, then keep the posts with zero matching tags:
(Post
 .select(Post, fn.COUNT(Tag.id))
 .join(Tag, JOIN.LEFT_OUTER,
       on=((Tag.post == Post.id) & (Tag.name == 'BadTag')))
 .group_by(Post)
 .having(fn.COUNT(Tag.id) == 0))
Calculating number of adjacent odd numbers I am trying to write a program in Python where the user is asked to enter the number of numbers in a sequence, then the numbers themselves. Finally, the program outputs the number of pairs of adjacent odd numbers. Here's a sample output:
Enter the length of the sequence: 6
Enter number 1: 3
Enter number 2: 4
Enter number 3: 7
Enter number 4: 9
Enter number 5: 3
Enter number 6: 5
The number of pairs of adjacent odd numbers is 3
I have come up with the following code:
length = eval(input("Enter the length of the sequence: "))
for i in range(1, length + 1):
    ask = eval(input("Enter number: " + str(i) + ": "))
    for m in range(0, length + 1, 2):
        ask2 = ask
h = ask % 2
f = ask2 % 2
if h > 0 and f > 0:
    k = (len(str(ask) + str(ask2)))
    print(k)
else:
    pass
Although the output for the prompts is correct, I am unable to count the number of pairs of adjacent odd numbers. Please help me correct my code or build on it; this will be highly appreciated. As you must have noticed, I have been using basic if statements, loops and strings to write the code. It would be great if you could stick to this for my better understanding. Sorry for the long post. Thank you so much | Check whether the current element and the next one are both odd, and sum:
length = int(input("Enter the length of the sequence: "))
nums = [int(input("Enter number: {}: ".format(i))) for i in range(1, length + 1)]
print(sum(ele % 2 and nums[i] % 2 for i, ele in enumerate(nums[:-1], 1)))
enumerate(nums[:-1], 1) starts the index at 1 (and stops before the last element, so nums[i] never runs past the end of the list), so ele % 2 and nums[i] % 2 checks each element against its right-hand neighbour as we iterate over nums.
Use int(input(... when you want to cast to an int; using eval is never a good idea. You should also probably use a while loop and verify the user input with a try/except.
Without using lists:
length = int(input("Enter the length of the sequence: "))
total = 0
# get a starting number
ask = int(input("Enter number: {}".format(1)))
# will keep track of previous number after first iteration
prev = ask
for i in range(2, length + 1):
    ask = int(input("Enter number: {}".format(i)))
    # if current and previous are both odd increase count
    if ask % 2 and prev % 2:
        total += 1
    # update prev
    prev = ask
print(total)
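The counting step can also be isolated from the input loop entirely; zipping the list against itself shifted by one makes the adjacency explicit and is easy to test:

```python
def adjacent_odd_pairs(nums):
    # Count positions where a number and its right-hand
    # neighbour are both odd
    return sum(a % 2 == 1 and b % 2 == 1
               for a, b in zip(nums, nums[1:]))

print(adjacent_odd_pairs([3, 4, 7, 9, 3, 5]))  # 3
```

With the sample sequence 3 4 7 9 3 5, the qualifying pairs are (7, 9), (9, 3) and (3, 5), giving 3.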
Issues Translating Custom Discrete Fourier Transform from MATLAB to Python I'm developing Python software for someone and they specifically requested that I use their DFT function, written in MATLAB, in my program. My translation is just plain not working, tested with sin(2*pi*r).The MATLAB function below:function X=dft(t,x,f)% Compute DFT (Discrete Fourier Transform) at frequencies given% in f, given samples x taken at times t:% X(f) = sum { x(k) * e**(2*pi*j*t(k)*f) }% kshape = size(f);t = t(:); % Format 't' into a column vectorx = x(:); % Format 'x' into a column vectorf = f(:); % Format 'f' into a column vectorW = exp(-2*pi*j * f*t');X = W * x;X = reshape(X,shape);And my Python interpretation:def dft(t, x, f): i = 1j #might not have to set it to a variable but better safe than sorry! w1 = f * t w2 = -2 * math.pi * i W = exp(w1 * w2) newArr = W * x return newArrWhy am I having issues? The MATLAB code works fine but the Python translation outputs a weird increasing sine curve instead of a Fourier transform. I get the feeling Python is handling the calculations slightly differently but I don't know why or how to fix this. | Numpy arrays do element wise multiplication with *.You need np.dot(w1,w2) for matrix multiplication using numpy arrays (not the case for numpy matrices)Make sure you are clear on the distinction between Numpy arrays and matrices. There is a good help page "Numpy for Matlab Users":http://wiki.scipy.org/NumPy_for_Matlab_UsersDoesn't appear to be working at present so here is a temporary link.Also, use t.T to transpose a numpy array called t. |
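For a quick sanity check without NumPy, the same transform can be written with the standard-library cmath module. This is a slow reference implementation for verifying a vectorised version, not a replacement for it:

```python
import cmath
import math

def dft(t, x, f):
    """X(f) = sum_k x[k] * exp(-2*pi*j * f * t[k]) -- the same
    formula as the MATLAB function, without the vectorisation."""
    return [sum(xk * cmath.exp(-2j * cmath.pi * fi * tk)
                for tk, xk in zip(t, x))
            for fi in f]

# Sample sin(2*pi*t) at 8 points over one period; the magnitude
# spectrum should peak at f = 1 and be ~0 at the other bins.
N = 8
t = [k / N for k in range(N)]
x = [math.sin(2 * math.pi * tk) for tk in t]
mags = [abs(X) for X in dft(t, x, [0, 1, 2, 3])]
print(mags)
```

For this signal the f = 1 bin has magnitude N/2 = 4, which is the expected analytic result; comparing these numbers against the NumPy version is a direct way to catch the elementwise-versus-matrix multiplication bug described in the answer.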
Write DataFrame to mysql table using pySpark I am attempting to insert records into a MySql table. The table contains id and name as columns.I am doing like below in a pyspark shell.name = 'tester_1'id = '103' import pandas as pdl = [id,name]df = pd.DataFrame([l])df.write.format('jdbc').options( url='jdbc:mysql://localhost/database_name', driver='com.mysql.jdbc.Driver', dbtable='DestinationTableName', user='your_user_name', password='your_password').mode('append').save()I am getting the below attribute errorAttributeError: 'DataFrame' object has no attribute 'write'What am I doing wrong? What is the correct method to insert records into a MySql table from pySpark | Use Spark DataFrame instead of pandas', as .write is available on Spark Dataframe only So the final code could bedata =['103', 'tester_1']df = sc.parallelize(data).toDF(['id', 'name'])df.write.format('jdbc').options( url='jdbc:mysql://localhost/database_name', driver='com.mysql.jdbc.Driver', dbtable='DestinationTableName', user='your_user_name', password='your_password').mode('append').save() |
Globally getting context in Wagtail site I am working on a Wagtail project consisting of a few semi-static pages (homepage, about, etc.) and a blog. In the homepage, I wanted to list the latest blog entries, which I could do adding the following code to the HomePage model:def blog_posts(self): # Get list of live blog pages that are descendants of this page posts = BlogPost.objects.live().order_by('-date_published')[:4] return postsdef get_context(self, request): context = super(HomePage, self).get_context(request) context['posts'] = self.blog_posts() return contextHowever, I would also like to add the last 3 entries in the footer, which is a common element of all the pages in the site. I'm not sure of what is the best way to do this — surely I could add similar code to all the models, but maybe there's a way to extend the Page class as a whole or somehow add "global" context? What is the best approach to do this? | This sounds like a good case for a custom template tag.A good place for this would be in blog/templatetags/blog_tags.py:import datetimefrom django import templatefrom blog.models import BlogPostregister = template.Library()@register.inclusion_tag('blog/includes/blog_posts.html', takes_context=True)def latest_blog_posts(context): """ Get list of live blog pages that are descendants of this page """ page = context['page'] posts = BlogPost.objects.descendant_of(page).live().public().order_by('-date_published')[:4] return {'posts': posts}You will need to add a partial template for this, at blog/templates/blog/includes/blog_posts.html. And then in each page template that must include this, include at the top:{% load blog_tags %}and in the desired location:{% latest_blog_posts %}I note that your code comment indicates you want descendants of the given page, but your code doesn't do that. I have included this in my example. 
Also, I have used an inclusion tag, so that you do not have to repeat the HTML for the blog listing on each page template that uses this custom template tag. |
xlwings runpython EOL error I have recently installed xlwings on my Mac and am currently trying to write a small programme to update some data(via requests). As a test, I tried to update the cryptocurrency prices via an API and write them into excel.Without using runpython, the code works. However as soon as I run my VBA code,I get this error: File "<string>", line 1import sys, os;sys.path.extend(os.path.normcase(os.path.expandvars('/Users/Dennis/Documents/crypto; ^SyntaxError: EOL while scanning string liberalI have searched numerous threads and forums, but can't seem to find an answer to my problem.For a better understanding,my python code:import requests, jsonfrom datetime import datetimeimport xlwings as xwdef do(): parameter = {'convert' : 'EUR'} #anfrage über API query_ticker = requests.get('https://api.coinmarketcap.com/v1/ticker', params = parameter) #anfragedaten in JSON-format data_ticker = query_ticker.json() wb = xw.Book.caller() ws0 = wb.sheets['holdings'] for entry in data_ticker: # update eth price if entry['symbol'] == 'ETH': ws0.range('B14').value = float(entry['price_eur']) #update btc price if entry['symbol'] == 'BTC': ws0.range('B15').value = float(entry['price_eur']) if entry['symbol'] == 'NEO': ws0.range('B16').value = float(entry['price_eur']) if entry['symbol'] == 'XRP': ws0.range('B17').value = float(entry['price_eur']) now = datetime.now() write_date = '%s.%s.%s' %(now.day, now.month, now.year) write_time = '%s:%s:%s' %(now.hour, now.minute,now.second) ws0.range('B2').value = write_date ws0.range('B3').value = write_time wb.save('holdings.xlsm') #wb.close()this is my vba code:Sub update_holdings() RunPython ("import update_holdings; update_holdings.do()")End Sub | Solved this. I just wanted to post the solution for anyone who might be confronted with the same issue.I went to check my xlwings.conf file, in order to see the setup for "INTERPRETER" and "PYTHONPATH". 
I never did any editing on this; however, it was formatted incorrectly. The correct format is:
"INTERPRETER","pythonw"
"PYTHONPATH",""
My config file was set up this way:
"PYTHONPATH",""
"INTERPRETER","Python"
Also, the path to my Python was set incorrectly by default. Even though my command line works with Anaconda Python 3.6, "pythonw" used the interpreter set in .bash_profile, which referenced the Python 2.7 that comes pre-installed with macOS. Editing the "INTERPRETER" entry in the config file solved this issue. Thanks everyone.
Failed building wheel for I'm trying to install Pillow using
pip install pillow
But every time it does this:
Failed building wheel for Pillow
Running setup.py clean for Pillow
Failed to build Pillow
Command "/data/data/com.termux/files/usr/bin/python -u -c "import setuptools, tokenize;file='/data/data/com.termux/files/usr/tmp/pip-build-rzy91xcz/pillow/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /data/data/com.termux/files/usr/tmp/pip-d_if45hl-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /data/data/com.termux/files/usr/tmp/pip-build-rzy91xcz/pillow/
It also says that:
unable to execute 'aarch64-linux-android-clang': No such file or directory
error: command 'aarch64-linux-android-clang' failed with exit status 1 | I solved this by installing clang:
pkg install clang
Thank you to abarnet, and sorry for the late answer to myself, haha. I'm late because I fixed this a week ago.
Trying to get the next 3 characters after regex string match I have a problem from college that I am trying to solve. I have a log file, from which I want to extract just the HTTP codes. I have included a bit of that log file below:
45.132.51.36 - - [19/Dec/2020:18:00:08 +0100] "POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 188 "-" "Mozilla/5.0(Linux;Android9;LM-K410)AppleWebKit/537.36(KHTML,likeGecko)Chrome/85.0.4183.81MobileSafari/537.36" "-"
45.153.227.31 - - [19/Dec/2020:18:25:17 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/84.0.4147.125Safari/537.36Edg/84.0.522.59" "-"
194.156.95.52 - - [19/Dec/2020:18:27:18 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(Linux;Android10;PCT-L29)AppleWebKit/537.36(KHTML,likeGecko)Chrome/84.0.4147.125MobileSafari/537.36" "-"
45.132.207.221 - - [19/Dec/2020:19:43:45 +0100] "POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 188 "-" "Mozilla/5.0(Linux;Android5.1;HUAWEILYO-L21)AppleWebKit/537.36(KHTML,likeGecko)Chrome/80.0.3987.99MobileSafari/537.36" "-"
45.145.161.6 - - [19/Dec/2020:19:46:33 +0100] "POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 188 "-" "Mozilla/5.0(Linux;Android9;A3)AppleWebKit/537.36(KHTML,likeGecko)Chrome/85.0.4183.81MobileSafari/537.36" "-"
83.227.29.211 - - [19/Dec/2020:19:54:04 +0100] "GET /images/stories/raith/wohnung_1_web.jpg HTTP/1.1" 200 80510 "http://almhuette-raith.at/index.php?option=com_content&view=article&id=49&Itemid=55" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" "-"
87.247.143.30 - - [19/Dec/2020:20:00:43 +0100] "POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 188 "-" "Mozilla/5.0(WindowsPhone10.0;Android6.0.1;Microsoft;Lumia640LTE)AppleWebKit/537.36(KHTML,likeGecko)Chrome/52.0.2743.116MobileSafari/537.36Edge/15.15063" "-"
45.138.4.22 - - [19/Dec/2020:20:25:15 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/85.0.4183.83Safari/537.36/null/null/null" "-"
87.247.143.30 - - [19/Dec/2020:20:44:07 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/46.0.2486.0Safari/537.36Edge/13.10586" "-"
45.153.227.31 - - [19/Dec/2020:21:17:17 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(Linux;Android9;LYA-L29Build/HUAWEILYA-L29;wv)AppleWebKit/537.36(KHTML,likeGecko)Version/4.0Chrome/85.0.4183.81MobileSafari/537.36EdgW/1.0" "-"
45.144.0.98 - - [19/Dec/2020:21:25:42 +0100] "GET /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 9873 "-" "Mozilla/5.0(Linux;Android9;SAMSUNGSM-J330F)AppleWebKit/537.36(KHTML,likeGecko)SamsungBrowser/12.1Chrome/79.0.3945.136MobileSafari/537.36" "-"
45.132.207.221 - - [19/Dec/2020:21:39:00 +0100] "POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" 200 188 "-" "Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/84.0.4147.125Safari/537.36" "-"
My code is below. I thought that by limiting the characters after .* it would work. I also tried adding a $ after the [0-9]{3}.
import re
with open("access.log") as file:
    contents = file.read()
    http_code = re.findall("HTTP/1.1\".* [0-9]{3}", contents)
    print(http_code)
What can I do to extract just the numeric HTTP codes after the HTTP/1.1"? | I'm not sure you need a regex here:
with open("access.log") as file:
    for line in file:
        print(line.split()[8])
# Output:
200
200
200
200
200
200
200
200
200
200
200
200
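If the assignment does require a regex, a capture group returns only the three digits rather than the whole 'HTTP/1.1" ...' match, and the dot in HTTP/1.1 should be escaped so it can't match an arbitrary character. A sketch on a single shortened log line (the user-agent field is truncated here):

```python
import re

line = ('45.132.51.36 - - [19/Dec/2020:18:00:08 +0100] '
        '"POST /index.php?option=com_contact&view=contact&id=1 HTTP/1.1" '
        '200 188 "-" "Mozilla/5.0" "-"')

# Capture just the status code that follows the closing quote
m = re.search(r'HTTP/1\.1"\s+(\d{3})', line)
print(m.group(1))  # 200
```

With re.findall over the whole file, the same pattern yields a list of codes directly, since findall returns only the capture group when the pattern contains exactly one.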
pygame making the sprite go upwards and downwards I am trying to make the sprite move up and down by using the arrow keys but it schemes to only be moving slightly upwards and slightly downwards: there is a speed for the x and y axis and also a position. There are also two functions which are draw and update (which gets the new xpos and the new ypos). here is my codeimport pygameimport randomimport syspygame.init()screen = pygame.display.set_mode((1280, 720))clock = pygame.time.Clock() # A clock to limit the frame rate.pygame.display.set_caption("this game")class Background: picture = pygame.image.load("C:/images/cliff.jpg").convert() picture = pygame.transform.scale(picture, (1280, 720)) def __init__(self, x, y): self.xpos = x self.ypos = y def draw(self): screen.blit(self.picture, (self.xpos, self.ypos))class player_first: picture = pygame.image.load("C:/aliens/ezgif.com-crop.gif") picture = pygame.transform.scale(picture, (200, 200)) def __init__(self, x, y): self.xpos = x self.ypos = y self.speed_x = 0 self.speed_y = 0 self.rect = self.picture.get_rect() def update(self): self.xpos =+ self.speed_x self.ypos =+ self.speed_y def draw(self): #left right #screen.blit(pygame.transform.flip(self.picture, True, False), self.rect) screen.blit(self.picture, (self.xpos, self.ypos))class Bullet_player_1(pygame.sprite.Sprite): picture = pygame.image.load("C:/aliens/giphy.gif").convert_alpha() picture = pygame.transform.scale(picture, (100, 100)) def __init__(self): self.xpos = 360 self.ypos = 360 self.speed_x = 0 super().__init__() self.rect = self.picture.get_rect() def update(self): self.xpos += self.speed_x def draw(self): screen.blit(self.picture, (self.xpos, self.ypos)) #self.screen.blit(pygame.transform.flip(self.picture, False, True), self.rect) def is_collided_with(self, sprite): return self.rect.colliderect(sprite.rect)player_one_bullet_list = pygame.sprite.Group()player_one = player_first(0, 0)cliff = Background(0, 0)while True: for event in pygame.event.get(): 
if event.type == pygame.QUIT: pygame.quit() sys.exit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_w: player_one.speed_y = -5 print("up") elif event.key == pygame.K_s: player_one.speed_y = 5 print("down") elif event.key == pygame.K_SPACE: bullet_player_one = Bullet_player_1 bullet_player_one.ypos = player_one.ypos bullet_player_one.xpos = player_one.xpos bullet_player_one.speed_x = 14 player_one_bullet_list.add(bullet_player_one) elif event.type == pygame.KEYUP: if event.key == pygame.K_d and player_one.speed_x > 0: player_one.speed_x = 0 elif event.key == pygame.K_a and player_one.speed_x < 0: player_one.speed_x = 0 player_one.update() cliff.draw() player_one.draw() player_one_bullet_list.update() for bullet_player_one in player_one_bullet_list: bullet_player_one.draw() pygame.display.flip() clock.tick(60) | You've written self.xpos =+ self.speed_x (which is interpret as self.xpos = +self.speed_x) instead of self.xpos += self.speed_x. So you're not adding the speed to the position, you're overwriting it. |
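The difference between the two spellings is easy to demonstrate outside pygame: =+ parses as a plain assignment of the unary-plus value, so the old position is discarded instead of accumulated.

```python
pos = 100
speed = -5

pos = +speed   # what "=+" really means: assign +speed, discarding pos
print(pos)     # -5

pos = 100
pos += speed   # the intended accumulation
print(pos)     # 95
```

In the update method above, the mistaken form meant the sprite's position was reset to the raw speed each frame, which is why it only appeared to nudge slightly up or down.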
AWS lambda to delete default VPC New to cloud, Can anyone help to correct this codeThis module is to list the regions and delete the complete default vpc via a lambda function.Getting below error while testing this:Syntax error in module 'lambda function': unindent does not match any outer indentation level Please help on this Removed other function like vpc, sc as the block looks very big here in the post just added the igw for understanding..Need assistancedef lambda_handler(event, context): # TODO implement #for looping across the regions regionList=[] region=boto3.client('ec2') regions=region.describe_regions() #print('the total region in aws are : ',len(regions['Regions'])) for r in range(0,len(regions['Regions'])): regionaws=regions['Regions'][r]['RegionName'] regionList.append(regionaws) #print(regionList) #regionsl=['us-east-1'] #sending regions as a parameter to the remove_default_vps function res=remove_default_vpcs(regionList) return { 'status':res }def get_default_vpcs(client): vpc_list = [] vpcs = client.describe_vpcs( Filters=[ { 'Name' : 'isDefault', 'Values' : [ 'true', ], }, ] ) vpcs_str = json.dumps(vpcs) resp = json.loads(vpcs_str) data = json.dumps(resp['Vpcs']) vpcs = json.loads(data) for vpc in vpcs: vpc_list.append(vpc['VpcId']) return vpc_listdef del_igw(ec2, vpcid): """ Detach and delete the internet-gateway """ vpc_resource = ec2.Vpc(vpcid) igws = vpc_resource.internet_gateways.all() if igws: for igw in igws: try: print("Detaching and Removing igw-id: ", igw.id) if (VERBOSE == 1) else "" igw.detach_from_vpc( VpcId=vpcid ) igw.delete( ) except boto3.exceptions.Boto3Error as e: print(e)def remove_default_vpcs(): for region in res: try: client = boto3.client('ec2', region_name = region) ec2 = boto3.resource('ec2', region_name = region) vpcs = get_default_vpcs(client) except boto3.exceptions.Boto3Error as e: print(e) exit(1) else: for vpc in vpcs: print("\n" + "\n" + "REGION:" + region + "\n" + "VPC Id:" + vpc) del_igw(ec2, 
vpc)print(completed) | It looks to me a code indentation issue. Please try with thisdef lambda_handler(event, context): # TODO implement #for looping across the regions regionList=[] region=boto3.client('ec2') regions=region.describe_regions() #print('the total region in aws are : ',len(regions['Regions'])) for r in range(0,len(regions['Regions'])): regionaws=regions['Regions'][r]['RegionName'] regionList.append(regionaws) #print(regionList) #regionsl=['us-east-1'] #sending regions as a parameter to the remove_default_vps function res=remove_default_vpcs(regionList) return { 'status':res }def get_default_vpcs(client): vpc_list = [] vpcs = client.describe_vpcs( Filters=[ { 'Name' : 'isDefault', 'Values' : [ 'true', ], }, ] ) vpcs_str = json.dumps(vpcs) resp = json.loads(vpcs_str) data = json.dumps(resp['Vpcs']) vpcs = json.loads(data) for vpc in vpcs: vpc_list.append(vpc['VpcId']) return vpc_listdef del_igw(ec2, vpcid): """ Detach and delete the internet-gateway """ vpc_resource = ec2.Vpc(vpcid) igws = vpc_resource.internet_gateways.all() if igws: for igw in igws: try: print("Detaching and Removing igw-id: ", igw.id) if (VERBOSE == 1) else "" igw.detach_from_vpc( VpcId=vpcid ) igw.delete( ) except boto3.exceptions.Boto3Error as e: print(e)def remove_default_vpcs(): for region in res: try: client = boto3.client('ec2', region_name = region) ec2 = boto3.resource('ec2', region_name = region) vpcs = get_default_vpcs(client) except boto3.exceptions.Boto3Error as e: print(e) exit(1) else: for vpc in vpcs: print("\n" + "\n" + "REGION:" + region + "\n" + "VPC Id:" + vpc) del_igw(ec2, vpc)print(completed) |
How could I access localstorage under Python requests I found I need to send a session id x-connection-id which is stored by server side Javascript localStorage.setItem("x-connection-id")If and only if I get this id, so that I can keep going the following request.Any idea ?headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36', 'x-connect-id': 'i need it',}req.get('https://api.sample/api/v1//flightavailability?'+urllib.parse.urlencode(params)) | Seems like it's impossibleLocal storage is specific to browser.Local Storage is a way to store persistent data using JavaScript. Itshould be used only with HTML5 compatible web browser.To access Local storage in python, a compatible browser's python API is required. |
Modifying python variables based on config file entries Relative Python newbie, and I'm writing a script that takes as its input a csv file, splits it into its constituent fields line-by-line and spits it out in another format. What I have so far generally works very well.Each incoming csv line has specific fields that are read into variables 'txnname' and 'txnmemo' respectively. Before I write out the line containing these, I need to test them for specific string sequences and modify them as necessary. At the moment I have these tests hard coded into the script as follows if ('string1' in txnname) and ('string2' in txnmemo): txnname = 'string3'if 'string4' in txnname: txnname = 'string5 ' + txnmemo'stringx' is being used here replacing the actual text I'd like to use. What I'd really like to do is to remove the search strings, variable names and modifier strings from the script altogether and place them in a config file that already exists alongside the script and contains various parameters read into the script via ConfigParser. That way I won't have to modify the code every time I want to test a new condition. The 'hacky' way of doing this would seem to be using eval/exec in some combination on the literal Python statements read in from the config file, however just about everybody says that there is nearly always a better alternative to eval/exec. However I can't come up with what this alternative would be despite much research. I've read mention of using dictionaries somehow, but can't get my head round how this would work, especially on an 'If' statement with more than one condition to be tested (as above)Any advice gratefully received. | There are a few options. The simplest way is to create a global config module with your string variables and modifiers defined and import it into this module. 
Basically that will just centralize all these variables into a single location so when you need to modify, you just change the config module.The other option is by using configparser as you noted. In that case, you'd need to create an INI file with a format something like below. There are different sections and keys in each section.example.ini[DEFAULT]variable_to_access = value[INPUT]txnname_mod = string1txnmemo_mod = string2[MODIFY]txnname_mod = string3txnmemo_mod = string4Now you can read the INI file and access variables like a dictionary.Python>>> import configparser>>> config = configparser.ConfigParser()>>> config.sections()[]>>> config.read('example.ini')>>> config['INPUT']['txnname_mod']string1For more info, check out the documentation here. |
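The configparser route avoids eval/exec entirely: the INI file carries only data (substrings to match, replacements to apply) and the script interprets it, so new conditions mean editing only the config. A minimal sketch of that idea; all section and key names below are invented for illustration, and each `[rule:*]` section is tried in order until one matches:

```python
import configparser

# Hypothetical rules file: every [rule:*] section lists substrings that must
# appear in the transaction fields, plus the replacement action to apply.
RULES_INI = """
[rule:transfer]
name_contains = string1
memo_contains = string2
new_name = string3

[rule:payroll]
name_contains = string4
new_name_prefix = string5
"""

def apply_rules(txnname, txnmemo, config):
    """Return the (possibly rewritten) txnname after the first matching rule."""
    for section in config.sections():
        if not section.startswith("rule:"):
            continue
        rule = config[section]
        # every *_contains condition present in the rule must hold
        if "name_contains" in rule and rule["name_contains"] not in txnname:
            continue
        if "memo_contains" in rule and rule["memo_contains"] not in txnmemo:
            continue
        if "new_name" in rule:
            return rule["new_name"]
        if "new_name_prefix" in rule:
            return rule["new_name_prefix"] + " " + txnmemo
    return txnname  # no rule matched: leave the name unchanged

config = configparser.ConfigParser()
config.read_string(RULES_INI)
print(apply_rules("xx string1 yy", "zz string2", config))   # string3
print(apply_rules("string4 payment", "ACME", config))       # string5 ACME
```

In the real script you would call `config.read('rules.ini')` instead of `read_string`, and run `apply_rules` on each csv line before writing it out.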
Tkinter scrollbar not updating to cover expanded canvas I'm having a scrollbar issue with a Tkinter GUI that I'm creating. The GUI contains a class Gen_Box that reproduces a given widget vertically downwards as many times as the widget's add_btn is called. Obviously, this runs off the window frame pretty quickly.I've tried adding a scrollbar to account for this running off the frame. I know these are pretty complicated to add in tkinter and require some hacking around. I referenced this video by Codemy. While I can see the scrollbar to the right of the frame, it doesn't expand to accommodate the added widgets (see screenshots below). I've been tinkering around with this all day but am at a loss.Starting frame:Frame after second Monitor widget is added:Also as a note, I'm using CustomTkinter to stylize the widgets.Current code. Scrollbar is set in the Form.__init__() function. Any help on this issue or other input here is much appreciated.class Form(tk.Frame): def __init__(self, root): self.root = root self.main_frame = tk.Frame(self.root) self.main_frame.grid(sticky='NSEW') self.canvas = tk.Canvas(self.main_frame, height=750, width=750) self.canvas.grid(row=0, column=0, sticky='NSEW') self.canvas.grid_rowconfigure(0, weight=1) self.scrollbar = tk.Scrollbar(self.main_frame, orient=VERTICAL) self.scrollbar.config(command=self.canvas.yview) self.scrollbar.grid(row=0, column=1, sticky='NSE') self.canvas.config(yscrollcommand=self.scrollbar.set) self.canvas.bind("<Configure>", lambda e: self.canvas.configure(scrollregion=self.canvas.bbox(ALL))) self.inner_frame = tk.Frame(self.canvas) self.canvas.create_window((0,0), window=self.inner_frame, anchor='nw') self.load() def load(self): self.right = ctk.CTkFrame(self.inner_frame, bg_color='#09275d', fg_color='#09275d', height=750, width=750) basics_label = ctk.CTkLabel(self.right, text='Setup', text_font=('Helvetica', 24), text_color='#f5d397') basics_label.grid(row=0, column=0, sticky='NW', pady=10) name_frame =
ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d') name_frame.grid(row=1, columnspan=2, sticky=NW,pady=5) name_label = ctk.CTkLabel(name_frame, text='Name', text_font=("MS Sans Serif", 15)) name_label.grid(row=1, column=0, padx=2, pady=2, sticky=NW) name_entry = ctk.CTkEntry(name_frame, width=250, border_width=1, corner_radius=10) name_entry.grid(row=1, column=1, padx=2, pady=2) start_frame = ctk.CTkFrame(self.right, corner_radius=2, fg_color='#E4E7F1', bg_color='#09275d') start_frame.grid(row=2, columnspan=2, sticky=NW,pady=5) start_url_label = ctk.CTkLabel(start_frame, text='Start URL', text_font=("MS Sans Serif", 15)) start_url_label.grid(row=2, column=0, sticky=NW, pady=2, padx=2) start_urls_frame = ctk.CTkFrame(start_frame, fg_color='#E4E7F1', bg_color='#09275d') start_urls_frame.grid(row=2, column=1, pady=2, padx=2) start_urls = Gen_Box(start_urls_frame, Include) include_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d', corner_radius=5) include_frame.grid(row=3, columnspan=2, sticky=NW,pady=5) include_url_label = ctk.CTkLabel(include_frame, text='Include URL', text_font=("Myriad", 15)) include_url_label.grid(row=3, column=0, sticky=NW, padx=2, pady=2) include_url_entry = ctk.CTkFrame(include_frame, fg_color='#E4E7F1', bg_color='#09275d') include_url_entry.grid(row=3, column=1, columnspan=2, padx=2, pady=2) url_patterns = Gen_Box(include_url_entry, Include) depth_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d', corner_radius=5) depth_frame.grid(row=4, columnspan=2, sticky=NW, pady=5) depth_label = ctk.CTkLabel(depth_frame, text='Max Depth', text_font=("Myriad", 15)) depth_label.grid(row=4, column=0, sticky=NW, padx=2, pady=2) depth_level = ctk.IntVar(depth_frame) depth_level.set(1) depth_options = ctk.CTkOptionMenu(master=depth_frame, variable=depth_level, values=[str(i) for i in range(1,11)], fg_color='white', button_color='#6AA6DE', width=25, corner_radius=10) depth_options.grid(row=4, column=1, 
sticky=NW, padx=2, pady=2) profile_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d', corner_radius=5) profile_frame.grid(row=5, columnspan=2, sticky=NW, pady=5) profile_label = ctk.CTkLabel(profile_frame, text='Chrome Profile', text_font=("Myriad", 15)) profile_label.grid(row=5, column=0, sticky=NW, padx=2, pady=2) profile_level = ctk.StringVar(profile_frame) profile_level.set('False') profile_options = ctk.CTkOptionMenu(master=profile_frame, variable=profile_level, values=['True', 'False'], fg_color='white', button_color='#6AA6DE', width=25, corner_radius=10) profile_options.grid(row=5, column=1, sticky=NW, padx=2, pady=2) headless_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d', corner_radius=5) headless_frame.grid(row=6, columnspan=2, sticky=NW, pady=5) headless_label = ctk.CTkLabel(headless_frame, text='Headless', text_font=("Myriad", 15)) headless_label.grid(row=6, column=0, sticky=NW, padx=2, pady=2) headless_level = ctk.StringVar(headless_frame) headless_level.set('True') headless_options = ctk.CTkOptionMenu(master=headless_frame, variable=headless_level, values=['True', 'False'], fg_color='white', button_color='#6AA6DE', width=25, corner_radius=10) headless_options.grid(row=6, column=1, sticky=NW, padx=2, pady=2) delay_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d', corner_radius=5) delay_frame.grid(row=7, columnspan=2, sticky=NW, pady=5) delay_label = ctk.CTkLabel(delay_frame, text='Delay', text_font=("Myriad", 15)) delay_label.grid(row=7, column=0, sticky=NW, padx=2, pady=2) delay_level = ctk.IntVar(delay_frame) delay_level.set(0) delay_options = ctk.CTkOptionMenu(master=delay_frame, variable=delay_level, values=[str(i) for i in range(1,11)], fg_color='white', button_color='#6AA6DE', width=25, corner_radius=10) delay_options.grid(row=7, column=1, sticky=NW, padx=2, pady=2) download_frame = ctk.CTkFrame(self.right, fg_color='#E4E7F1', bg_color='#09275d') download_frame.grid(row=8, 
columnspan=2, sticky=NW,pady=5) download_label = ctk.CTkLabel(download_frame, text='Download Path', text_font=("MS Sans Serif", 15)) download_label.grid(row=8, column=0, padx=2, pady=2, sticky=NW) download_entry = ctk.CTkEntry(download_frame, width=350, border_width=1, corner_radius=10) download_entry.grid(row=8, column=1, padx=2, pady=2) self.monitors_label = ctk.CTkLabel(self.right, text='Monitors', text_font=('Helvetica', 24), text_color='#f5d397') self.monitors_label.grid(sticky='NW', row=9, column=0, pady=10) self.monitor_frame = ctk.CTkFrame(self.right, bg_color='#09275d', fg_color='#09275d') Gen_Box(self.monitor_frame, Monitor) self.monitor_frame.grid(row=10, sticky='NW') self.right.grid(row=0)class Gen_Box: def __init__(self, master, gen_obj, existing=None): self.master = master self.gen_obj = gen_obj self.existing = existing self.load(existing) def load(self, existing=None): if existing: self.obj_rows = [self.gen_obj(key) for key in existing.keys()] else: self.obj_rows = [self.gen_obj(0)] self.frame = ctk.CTkFrame(self.master, fg_color='#09275d') for index, obj in enumerate(self.obj_rows): add = lambda row=index+1 : self.add(row) remove = lambda row=index : self.remove(row) obj.row_no = index if existing: obj.load(self.frame, existing=existing[index]) else: obj.load(self.frame) obj.add_btn.configure(command=add) obj.remove_btn.configure(command=remove) self.frame.grid(sticky='NSEW') self.master.update() def add(self, row): self.obj_rows.insert(row, self.gen_obj(row)) existing_entries = self.save_existing() self.frame.destroy() self.load(existing_entries) def remove(self, row): self.obj_rows.pop(row) existing_entries = self.save_existing() self.frame.destroy() self.load(existing_entries) def save_existing(self): existing_dict = {} for index, object in enumerate(self.obj_rows): try: existing_dict[index] = object.generate_scheme() except AttributeError as e: existing_dict[index] = None return existing_dictclass Monitor: def __init__(self, row_no): self.row_no 
= row_no def generate_scheme(self): return_dict = {} scheme_dict = { 'output': self.output.get, 'includes': self.includes.save_existing, 'selectors': self.selectors.save_existing } for key, value in scheme_dict.items(): return_dict[key] = value() return return_dict def load(self, frame, existing=None): self.monitor_frame = ctk.CTkFrame(frame, fg_color='#E4E7F1', bg_color='#09275d') self.button_frame = ctk.CTkFrame(frame, bg_color='#09275d', fg_color='#09275d') self.button_frame.grid(row=self.row_no, column=1, sticky='NSEW', pady=10, padx=2) self.add_btn = ctk.CTkButton(self.button_frame, text='+', corner_radius=2, width=50, height=50, fg_color='#ecae3e') self.add_btn.grid(row=self.row_no, column=0, padx=10) self.remove_btn = ctk.CTkButton(self.button_frame, text='-', corner_radius=2, width=50, height=50, fg_color='#ecae3e') self.remove_btn.grid(row=self.row_no, column=1, padx=10) self.top_frame = ctk.CTkFrame(self.monitor_frame) self.top_frame.grid(sticky='NSEW', pady=10, padx=10) self.include_frame = ctk.CTkFrame(self.top_frame, fg_color='#E4E7F1', bg_color='#E4E7F1') self.include_frame.grid(row=0, sticky='NSEW') self.include_label = ctk.CTkLabel(self.include_frame, text='Include URLs', text_font=("Myriad", 15)) self.include_label.grid(row=0, sticky='NW') if not existing: self.includes = Gen_Box(self.include_frame, Include) else: self.includes = Gen_Box(self.include_frame, Include, existing=existing['includes']) self.selector_frame = ctk.CTkFrame(self.top_frame, fg_color='#E4E7F1', bg_color='#E4E7F1') self.selector_frame.grid(row=1, sticky='NSEW') self.selector_label = ctk.CTkLabel(self.selector_frame, text='Selectors (Optional)', text_font=("Myriad", 15)) self.selector_label.grid(row=1, sticky='NW') if not existing: self.selectors = Gen_Box(self.selector_frame, Include) else: self.selectors = Gen_Box(self.selector_frame, Include, existing=existing['selectors']) self.output_frame = ctk.CTkFrame(self.monitor_frame, fg_color='#E4E7F1', bg_color='#E4E7F1') 
self.output_frame.grid(row=2, pady=5, padx=2, sticky='NSEW', columnspan=2) self.output_label = ctk.CTkLabel(self.output_frame, text='Output', text_font=("Myriad", 15)) self.output_label.grid(row=2, column=0) self.output = ctk.StringVar(self.output_frame) if existing: self.output.set(existing['output']) else: self.output.set('TXT') self.output_options = ['TXT', 'PDF'] self.output_menu = ctk.CTkOptionMenu(self.output_frame, variable=self.output, values=self.output_options, fg_color='white', button_color='#6AA6DE', width=25, corner_radius=10) self.output_menu.grid(row=2, column=1) self.actions_frame = ctk.CTkFrame(self.monitor_frame, bg_color='#E4E7F1', fg_color='#E4E7F1') self.actions_frame.grid(row=3, pady=5) self.add_actions_btn = ctk.CTkButton(self.actions_frame, text='Actions Editor ▼', text_font=("Myriad", 10), fg_color='#3eecae', corner_radius=10) self.add_actions_btn.grid(sticky='NSEW') self.add_actions_btn.configure(width=self.actions_frame.winfo_width()) self.monitor_frame.grid(row=self.row_no, pady=10, padx=10) frame.update()class Include: def __init__(self, row_no): self.row_no = row_no def generate_scheme(self): return_dict = {} scheme_dict = { 'entry': self.entry.get } for key, value in scheme_dict.items(): return_dict[key] = value() return return_dict def load(self, master, existing=None): entry_frame = ctk.CTkFrame(master, bg_color='#E4E7F1', fg_color='#E4E7F1') self.entry = ctk.CTkEntry(entry_frame, corner_radius=10, border_width=1, width=300, bg_color='#E4E7F1') if existing: self.entry.insert(END, existing['entry']) self.entry.grid(row=self.row_no, column=0, pady=2, ipadx=2, sticky=NSEW) button_frame = ctk.CTkFrame(master, bg_color='#E4E7F1', fg_color='#E4E7F1') self.add_btn = ctk.CTkButton(button_frame, text='+', height=10, width=10, corner_radius=5, text_font=("Helvetica", 12), fg_color='#6AA6DE', bg_color='#E4E7F1', text_color='#E4E7F1') self.add_btn.grid(row=self.row_no, column=1, padx=3, pady=5) self.remove_btn = ctk.CTkButton(button_frame, 
text='-', height=10, width=10, corner_radius=5, text_font=("Helvetica", 12), fg_color='#6AA6DE', bg_color='#E4E7F1', text_color='#E4E7F1') self.remove_btn.grid(row=self.row_no, column=2, padx=3, pady=5) entry_frame.grid(row=self.row_no, column=0, sticky='NSEW') button_frame.grid(row=self.row_no, column=1, sticky='NSEW') root = ctk.CTk()root.configure(bg='#09275d')root.geometry('1000x650')Form(root)root.mainloop() | When a new monitor section is added, it is the inner frame (self.inner_frame) that gets resized, not the canvas (self.canvas). So the <Configure> event should be bound on the inner frame instead of the canvas:class Form(tk.Frame): def __init__(self, root): ... self.canvas.config(yscrollcommand=self.scrollbar.set) ### ----- don't bind on canvas #self.canvas.bind("<Configure>", lambda e: self.canvas.configure(scrollregion=self.canvas.bbox(ALL))) self.inner_frame = tk.Frame(self.canvas) ### ----- bind on inner frame instead self.inner_frame.bind("<Configure>", lambda e: self.canvas.configure(scrollregion=self.canvas.bbox(ALL))) self.canvas.create_window((0,0), window=self.inner_frame, anchor='nw') self.load() ...
Python: an efficient way to slice a list with an index list I wish to know an efficient, code-saving way to slice a list of thousands of elementsexample: b = ["a","b","c","d","e","f","g","h"] index = [1,3,6,7] I wish a result such as:c = ["b","d","g","h"] | The most direct way to do this with lists is to use a list comprehension:c = [b[i] for i in index]But, depending on exactly what your data looks like and what else you need to do with it, you could use numpy arrays - in which case:c = b[index]would do what you want, and would avoid some memory overhead - numpy arrays are stored more efficiently than lists. (Note that basic slicing of a numpy array returns a view, but indexing with an integer array like this is advanced indexing, which returns a copy.)
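For plain lists there is also operator.itemgetter from the standard library, which does the gather in a single call:

```python
from operator import itemgetter

b = ["a", "b", "c", "d", "e", "f", "g", "h"]
index = [1, 3, 6, 7]

# itemgetter(1, 3, 6, 7)(b) returns a tuple of the selected elements
c = list(itemgetter(*index)(b))
print(c)  # ['b', 'd', 'g', 'h']
```

One caveat: with a single index, itemgetter returns the bare element rather than a 1-tuple, so guard that case if index can have length 1.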
How to import files while running? I have the following problem:file1.py has functions and variables which I need for file2.py.With from file1 import myclass1 there is no problem with that.The problem is, I also want to "send" variables from file2.py to file1.py while running file2.pyfrom file1 import myclass1 in file2.py doesn't work, because when I run file2.py an ImportError appears:pydev debugger: startingTraceback (most recent call last): File "****\file1.py", line 13, in <module> from file1 import myclass1 File "****\forfile1.py", line 7, in <module> from file2 import myclass2 File "****\file1.py", line 13, in <module> from file1 import myclass1ImportError: cannot import name s4dat_classSo, how can you import files while running? Or are there other ways to do what I want? Thx | If the question is "how do I import from module1 into module2 when module2 imports from module1", the simple answer is "you can't", and the solution is either merge both modules, or extract the common dependencies into a third module, or pass needed objects (hint: classes and functions are objects too) as function or method params (simplest form of dependency injection).The complete answer is that there are workarounds (like importing from within a function body), but that's fugly and 99.8% of the time (approximately) a sure design smell - you should not have cyclic dependencies, so better to cure the design than resort to fugly workarounds.
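The dependency-injection option can be sketched like this: instead of file1 importing anything back from file2, file2 hands file1 the object it wants it to use, so the import only ever goes one way. The class names below are illustrative, not from the original files:

```python
# file1.py would define the consumer without importing file2:
class MyClass1:
    def __init__(self, collaborator):
        # collaborator is injected by the caller; file1 never imports file2
        self.collaborator = collaborator

    def run(self):
        return self.collaborator.greet()

# file2.py would then do `from file1 import MyClass1` and inject its object:
class MyClass2:
    def greet(self):
        return "hello from file2"

obj = MyClass1(MyClass2())
print(obj.run())  # hello from file2
```

Because file1 no longer names anything from file2, the import cycle (and the ImportError) disappears.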
Generate a tree from a text file using python I have a txt file which has data like this:arpshowshow ipshow ip routeshow ip route staticshow ip default-gatewayshow ip default-gateway staticshow ip interfaceshow partitionnono loggingno logging onno logging overrideI have to print a tree in the following way:arpshow ip route static default-gateway static interface partitionno logging on overrideThanks in advance for the help!!! | The count of words minus 1 indicates the tab depth. Loop through each line, indent by that depth, and print only the last word of the line.
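A sketch of that heuristic, using the input lines from the question (indent one tab per parent level, print only the new last word):

```python
lines = """arp
show
show ip
show ip route
show ip route static
show ip default-gateway
show ip default-gateway static
show ip interface
show partition
no
no logging
no logging on
no logging override""".splitlines()

tree_lines = []
for line in lines:
    words = line.split()
    # depth = number of ancestor words; only the last word is new at this depth
    tree_lines.append("\t" * (len(words) - 1) + words[-1])

print("\n".join(tree_lines))
```

This assumes the file is sorted so each command appears after its prefix, as in the sample; otherwise sort the lines first.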
Django extending user model tutorial isn't working for me I've used this tutorial and done exactly what they do:https://docs.djangoproject.com/ja/1.9/topics/auth/customizing/#extending-the-existing-user-modelmy model.py:from django.db import modelsfrom django.contrib.auth.models import Userfrom datetime import datetimeclass Chess(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) last_activity = models.DateTimeField(default=datetime(1970,1,1,)) is_avaliable = models.BooleanField(default=False,) in_progress = models.BooleanField(default=False,)my admin.py:from django.contrib import adminfrom django.contrib.auth.admin import UserAdmin as BaseUserAdminfrom django.contrib.auth.models import Userfrom chess_tournament.models import Chessclass ChessInline(admin.StackedInline): model = Chess can_delete = False verbose_name_plural = 'Chess'# Define a new User adminclass UserAdmin(BaseUserAdmin): inlines = (ChessInline, )# Re-register UserAdminadmin.site.unregister(User)admin.site.register(User, UserAdmin)in the manage.py shell:from django.contrib.auth.models import Useru = User.objects.get(pk=1)u.username # returns 'admin', it worksu.chess.last_activity # returns the ERROR described belowAttributeError: 'User' object has no attribute 'chess'but in the Django admin panel this field is available and worksPlease help to figure it out coz I already spent 4 hours on it... | Could it be that you didn't register the Chess model?Try adding admin.site.register(Chess, ChessAdmin) at the bottom of admin.py. Of course you might have to create a simple ChessAdmin for display first.
Cannot connect to ssh via python So I just set up a fresh new Raspberry Pi and I want it to communicate with Python using ssh from my computer to my ssh server, the Pi. I first tried to connect using PuTTY and it worked, I could execute all the commands I wanted, then I tried using libraries such as Paramiko and Spur and they didn't work.Spur code:import spurshell = spur.SshShell("192.168.1.114", "pi", "raspberry")result = shell.run("ls")print resultParamiko code:ssh = paramiko.SSHClient()ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(host, username, password)Here's the error code:spur.ssh.ConnectionError: Error creating SSH connectionOriginal error: Server '192.168.1.114' not found in known_hostsThis is the error with spur but it pretty much said the same thing with paramiko.Thanks in advance :) | You need to accept the host key, similarly to what is shown hereimport spurshell = spur.SshShell("192.168.1.114", "pi", "raspberry", missing_host_key=spur.ssh.MissingHostKey.accept)result = shell.run("ls")print resultEDIT: More useful link (spur documentation)
Yahoo! Fantasy API Maximum Count? I am trying to get all the available players for a position with JSON returned by the Yahoo! Fantasy API, using this resource:http://fantasysports.yahooapis.com/fantasy/v2/game/nfl/players;status=A;position=RBIt seems like it always returns a maximum of 25 players with this API. I've tried using the ;count=n filter as well, but if n is anything higher than 25 I still only get 25 players returned. Does anyone know why this is? And how I can get more?Here is my code:from yahoo_oauth import OAuth1oauth = OAuth1(None, None, from_file='oauth.json', base_url='http://fantasysports.yahooapis.com/fantasy/v2/')uri = 'league/nfl.l.91364/players;position=RB;status=A;count=100'if not oauth.token_is_valid(): oauth.refresh_access_token()response = oauth.session.get(uri, params={'format': 'json'}) | I did solve this. What I found was that the maximum "count" is 25, but the "start" parameter is the key to this operation. It seems that the API attaches an index to each of the players (however it is sorted) and the "start" parameter is the index to start at. It might seem odd, but the only way I could find was to get all the players back in batches of 25. So my solution in code was something like the following:from yahoo_oauth import OAuth1oauth = OAuth1(None, None, from_file='oauth.json', base_url='http://fantasysports.yahooapis.com/fantasy/v2/')done = Falsestart = 1while(not done) : uri = 'league/nfl.l.<league>/players;position=RB;status=A;start=%s;count=25' % start if not oauth.token_is_valid(): oauth.refresh_access_token() response = oauth.session.get(uri, params={'format': 'json'}) # parse response, get num of players, do stuff start += 25 if numPlayersInResp < 25: done = True
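The pagination pattern in that answer (page through with start until a short page comes back) can be isolated and tested without the API; fetch_page below is a stand-in for the real oauth.session.get call, serving 60 fake players:

```python
def fetch_page(start, count, _all=list(range(1, 61))):
    """Stand-in for one Yahoo! API request: return at most `count`
    players beginning at 1-based index `start` (60 fake players total)."""
    return _all[start - 1:start - 1 + count]

def fetch_all(page_size=25):
    players, start = [], 1
    while True:
        page = fetch_page(start, page_size)
        players.extend(page)
        if len(page) < page_size:   # short (or empty) page: we are done
            break
        start += page_size
    return players

all_players = fetch_all()
print(len(all_players))  # 60
```

Swapping fetch_page for the real request (and parsing the player list out of the JSON) gives the complete roster in batches of 25.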
How can I iterate through numpy 3d array So I have an array:array([[[27, 27, 28], [27, 14, 28]], [[14, 5, 4], [ 5, 6, 14]]])How can I iterate through it and on each iteration get the [a, b, c] values, I try like that:for v in np.nditer(a): print(v)but it just prints272728271428145456I need:[27 27 28][27 14 28]... | b = a.reshape(-1, 3)for triplet in b: ... |
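For reference, the reshape answer run end to end; the trailing 3 is hard-coded here to match the question, and a.reshape(-1, a.shape[-1]) would generalize it to any innermost width:

```python
import numpy as np

a = np.array([[[27, 27, 28],
               [27, 14, 28]],
              [[14,  5,  4],
               [ 5,  6, 14]]])

# Collapse all leading dimensions so each row is one innermost triplet.
b = a.reshape(-1, 3)
for triplet in b:
    print(triplet)
# [27 27 28]
# [27 14 28]
# [14  5  4]
# [ 5  6 14]
```

reshape returns a view here, so this costs no copy of the data.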
How to Resolve 'Error while installing steem-python' I have a VM with Ubuntu 18.04.1.python3 --version says 3.6.5.I installed pip without any failure (seems like).Then I tried to install steem-python withpip install steembut I get a failure, which looks like:bla bla bla...^~~~~~~~~~~~~~~compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Failed building wheel for scryptRunning setup.py clean for scrypt...^~~~~~~~~~~~~~~ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ----------------------------------------Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-0iqf8q/scrypt/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-1dMD0Y-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-0iqf8q/scrypt/Now I'm out of ideas. Can anybody help me to fix this? How can I install it?My goal is to interact with the steem blockchain. | You will need to install python3-dev on Ubuntu (in addition to unixodbc-dev); with those headers in place the scrypt build should succeed.Please ensure you've installed these:$ sudo apt-get install python3-dev$ sudo apt-get install unixodbc-devFYI: Python 2.x users will need python-dev instead.
How to give command line args to scrapy? I want to give command line args to scrapy and use that sys.argv[] in spider to check which urls have that argument. How can I do like this for spider named urls?$scrapy crawl urls "August 01,2018"? | You can pass arguments to a spider's __init__() by using -a, as specified in the docs: https://doc.scrapy.org/en/latest/topics/spiders.html#spider-argumentsThe default method will make all of the arguments into spider attributes, but you can also create a custom one if you need to do something with them. |
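Concretely, the call would look like `scrapy crawl urls -a target_date="August 01,2018"`, and the spider reads `self.target_date` (the argument name is yours to choose; it is an invented example here). Scrapy's default `__init__` simply copies each `-a` keyword into a spider attribute, roughly like this toy stand-in shows:

```python
class UrlsSpider:
    """Toy stand-in for scrapy.Spider showing how -a arguments arrive:
    each -a name=value from the command line becomes a keyword argument,
    and the default __init__ turns it into an instance attribute."""
    name = "urls"

    def __init__(self, *args, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

# `scrapy crawl urls -a target_date="August 01,2018"` does roughly:
spider = UrlsSpider(target_date="August 01,2018")
print(spider.target_date)  # August 01,2018
```

Inside the real spider you can then compare each crawled URL's date against self.target_date, with no need to touch sys.argv at all.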
using the re and urllib.request module I'm using Python 3.7 and I wanted to write a program that takes the name of a city and returns the weather forecast. I started my code with:import reimport urllib.request#https://www.weather-forecast.com/locations/Tel-Aviv-Yafo/forecasts/latestcity=input("entercity:")url="https://www.weather-forecast.com/locations/" + city +"/forecasts/latest"data=urllib.request.urlopen(url).read()data1=data.decode("utf-8")print(data1)but when I wanted to read my data I got this error: File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 503, in _call_chain result = func(*args) File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found Process finished with exit code 1Can someone help me and tell me what the problem is? Thanks :) | There must be some spelling mistake in the city name you have entered. I tried running the following code import urllib.request city=input("enter city:") url="https://www.weather-forecast.com/locations/" + city +"/forecasts/latest" data=urllib.request.urlopen(url).read() print(data.decode('utf-8'))It works properly when I input enter city:newyorkWhile the same code throws urllib.error.HTTPError: HTTP Error 404: Not Found if the input is enter city:newyolkYou can solve this by using a try-except statementcity=input("enter city:")url="https://www.weather-forecast.com/locations/" + city +"/forecasts/latest"try: data=urllib.request.urlopen(url).read() print(data.decode('utf-8'))except urllib.error.HTTPError: print('The entered city does not exist. Please enter a valid city name')
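That try/except pattern can be packaged into a small helper. The sketch below exercises it with a data: URL so it runs without a network; in practice you would pass the weather-forecast URL built from the city name:

```python
import urllib.request
import urllib.error

def fetch(url):
    """Return the decoded response body, or None on an HTTP error status."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")
    except urllib.error.HTTPError as err:
        # a 404 here usually means the city name is misspelled
        print("Request failed with HTTP %d; check the spelling." % err.code)
        return None

# data: URLs are handled locally by urllib, so this example needs no network.
print(fetch("data:text/plain,hello"))  # hello
```

Catching urllib.error.HTTPError specifically (rather than a bare except) keeps genuine bugs like a typo in your own code from being silently swallowed.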
Receiving error when trying to create list Below is the assignmentDesign a Python script that starts with a list of your own, containing a mix of integers, floating point decimals, and strings. Include this starting list near the beginning of your script. The output of your script should be another list containing only non-string elements in the original list, as follows.I'm struggling with how to remove the string elements from the list. Below is what I have tried, any suggestions? print("This script starts with a given list and outputs another list containing only non-string elements in the original list.") print() list1 = [21,99,-99,"cool","school","pool",41.34,-9.034] print("The inputted list was:",list1) for k in list1: if type(k) == str: list2 = list1 - k print("The output list is:", list2)This script starts with a given list and outputs another list containing only non-string elements in the original list.The inputted list was: [21, 99, -99, 'cool', 'school', 'pool', 41.34, -9.034]---------------------------------------------------------------------------TypeError Traceback (most recent call last)<ipython-input-30-4e9226325877> in <module> 5 for k in list1: 6 if type(k) == str:----> 7 list2 = list1 - k 8 print("The output list is:", list2)TypeError: unsupported operand type(s) for -: 'list' and 'str' | You can't use - to remove an item from a list, and you shouldn't remove items from a list while you are iterating over it. Build a second list instead:print("This script starts with a given list and outputs another list containing only non-string elements in the original list.")print()list1 = [21,99,-99,"cool","school","pool",41.34,-9.034]list2 = []print("The inputted list was:",list1)for k in list1: if not isinstance(k, str): list2.append(k)print("The output list is:", list2)
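The same filter also fits in a single list comprehension, which sidesteps any temptation to mutate the list while looping over it:

```python
list1 = [21, 99, -99, "cool", "school", "pool", 41.34, -9.034]

# keep only the elements that are not strings
list2 = [k for k in list1 if not isinstance(k, str)]
print(list2)  # [21, 99, -99, 41.34, -9.034]
```

isinstance(k, str) is preferred over type(k) == str because it also handles subclasses of str correctly.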
Matplotlib dot plot with two categorical variables I would like to produce a specific type of visualization, consisting of a rather simple dot plot but with a twist: both of the axes are categorical variables (i.e. ordinal or non-numerical values). And this complicates matters instead of making it easier.To illustrate this question, I will be using a small example dataset that is a modification from seaborn.load_dataset("tips") and defined as such:import pandasfrom six import StringIOdf = """total_bill | tip | sex | smoker | day | time | size 16.99 | 1.01 | Male | No | Mon | Dinner | 2 10.34 | 1.66 | Male | No | Sun | Dinner | 3 21.01 | 3.50 | Male | No | Sun | Dinner | 3 23.68 | 3.31 | Male | No | Sun | Dinner | 2 24.59 | 3.61 | Female | No | Sun | Dinner | 4 25.29 | 4.71 | Female | No | Mon | Lunch | 4 8.77 | 2.00 | Female | No | Tue | Lunch | 2 26.88 | 3.12 | Male | No | Wed | Lunch | 4 15.04 | 3.96 | Male | No | Sat | Lunch | 2 14.78 | 3.23 | Male | No | Sun | Lunch | 2"""df = pandas.read_csv(StringIO(df.replace(' ','')), sep="|", header=0)My first approach to produce my graph was to try a call to seaborn as such:import seabornaxes = seaborn.pointplot(x="time", y="sex", data=df)This fails with:ValueError: Neither the `x` nor `y` variable appears to be numeric.So does the equivalent seaborn.stripplot and seaborn.swarmplot calls. It does work however if one of the variables is categorical and the other one is numerical. 
Indeed seaborn.pointplot(x="total_bill", y="sex", data=df) works, but is not what I want.I also attempted a scatterplot like such:axes = seaborn.scatterplot(x="time", y="sex", size="day", data=df, x_jitter=True, y_jitter=True)This produces the following graph which does not contain any jitter and has all the dots overlapping, making it useless:Do you know of any elegant approach or library that could solve my problem ?I started writing something myself, which I will include below, but this implementation is suboptimal and limited by the number of points that can overlap at the same spot (currently it fails if more than 4 points overlap).# Modules #import seaborn, pandas, matplotlibfrom six import StringIO################################################################################def amount_to_offets(amount): """A function that takes an amount of overlapping points (e.g. 3) and returns a list of offsets (jittered) coordinates for each of the points. It follows the logic that two points are displayed side by side: 2 -> * * Three points are organized in a triangle 3 -> * * * Four points are sorted into a square, and so on. 4 -> * * * * """ assert isinstance(amount, int) solutions = { 1: [( 0.0, 0.0)], 2: [(-0.5, 0.0), ( 0.5, 0.0)], 3: [(-0.5, -0.5), ( 0.0, 0.5), ( 0.5, -0.5)], 4: [(-0.5, -0.5), ( 0.5, 0.5), ( 0.5, -0.5), (-0.5, 0.5)], } return solutions[amount]################################################################################class JitterDotplot(object): def __init__(self, data, x_col='time', y_col='sex', z_col='tip'): self.data = data self.x_col = x_col self.y_col = y_col self.z_col = z_col def plot(self, **kwargs): # Load data # self.df = self.data.copy() # Assign numerical values to the categorical data # # So that ['Dinner', 'Lunch'] becomes [0, 1] etc. 
# self.x_values = self.df[self.x_col].unique() self.y_values = self.df[self.y_col].unique() self.x_mapping = dict(zip(self.x_values, range(len(self.x_values)))) self.y_mapping = dict(zip(self.y_values, range(len(self.y_values)))) self.df = self.df.replace({self.x_col: self.x_mapping, self.y_col: self.y_mapping}) # Offset points that are overlapping in the same location # # So that (2.0, 3.0) becomes (2.05, 2.95) for instance # cols = [self.x_col, self.y_col] scaling_factor = 0.05 for values, df_view in self.df.groupby(cols): offsets = amount_to_offets(len(df_view)) offsets = pandas.DataFrame(offsets, index=df_view.index, columns=cols) offsets *= scaling_factor self.df.loc[offsets.index, cols] += offsets # Plot a standard scatter plot # g = seaborn.scatterplot(x=self.x_col, y=self.y_col, size=self.z_col, data=self.df, **kwargs) # Force integer ticks on the x and y axes # locator = matplotlib.ticker.MaxNLocator(integer=True) g.xaxis.set_major_locator(locator) g.yaxis.set_major_locator(locator) g.grid(False) # Expand the axis limits for x and y # margin = 0.4 xmin, xmax, ymin, ymax = g.get_xlim() + g.get_ylim() g.set_xlim(xmin-margin, xmax+margin) g.set_ylim(ymin-margin, ymax+margin) # Replace ticks with the original categorical names # g.set_xticklabels([''] + list(self.x_mapping.keys())) g.set_yticklabels([''] + list(self.y_mapping.keys())) # Return for display in notebooks for instance # return g################################################################################# Graph #graph = JitterDotplot(data=df)axes = graph.plot()axes.figure.savefig('jitter_dotplot.png') | you could first convert time and sex to categorical type and tweak it a little bit:df.sex = pd.Categorical(df.sex)df.time = pd.Categorical(df.time)axes = sns.scatterplot(x=df.time.cat.codes+np.random.uniform(-0.1,0.1, len(df)), y=df.sex.cat.codes+np.random.uniform(-0.1,0.1, len(df)), size=df.tip)Output:With that idea, you can modify the offsets (np.random) in the above code to the respective 
distance. For example:

# grouping
groups = df.groupby(['time', 'sex'])
# compute the number of samples per group
num_samples = groups.tip.transform('size')
# enumerate the samples within a group
sample_ranks = df.groupby(['time']).cumcount() * (2*np.pi) / num_samples
# compute the offsets
x_offsets = np.where(num_samples.eq(1), 0, np.cos(sample_ranks) * 0.03)
y_offsets = np.where(num_samples.eq(1), 0, np.sin(sample_ranks) * 0.03)
# plot
axes = sns.scatterplot(x=df.time.cat.codes + x_offsets,
                       y=df.sex.cat.codes + y_offsets,
                       size=df.tip)

Output:
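The categorical-codes trick can be checked without plotting at all. Below is a minimal, self-contained sketch (the tiny DataFrame is made up for illustration) that only computes the jittered coordinates; the resulting x and y can then be handed to seaborn.scatterplot or plt.scatter:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# toy stand-in for the tips dataset used in the thread
df = pd.DataFrame({
    'time': ['Dinner', 'Lunch', 'Dinner', 'Lunch', 'Dinner'],
    'sex':  ['Male', 'Female', 'Male', 'Female', 'Female'],
    'tip':  [3.0, 2.5, 4.0, 1.0, 5.1],
})
df['time'] = pd.Categorical(df['time'])
df['sex'] = pd.Categorical(df['sex'])

# integer category codes plus a small uniform offset per point
x = df['time'].cat.codes + rng.uniform(-0.1, 0.1, len(df))
y = df['sex'].cat.codes + rng.uniform(-0.1, 0.1, len(df))

# every jittered point stays within 0.1 of its category position
print(((x - df['time'].cat.codes).abs() <= 0.1).all())
```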
find max-min values for one column based on another I have a dataset that looks like this.datetime id2020-01-22 11:57:09.286 UTC 52020-01-22 11:57:02.303 UTC 62020-01-22 11:59:02.303 UTC 5Ids are not unique and give different datetime values. Let's say:duration = max(datetime)-min(datetime).I want to count the ids for what the duration max(datetime)-min(datetime) is less than 2 seconds. So, for example I will output:count = 1because of id 5. Then, I want to create a new dataset which contains only those rows with the min(datetime) value for each of the unique ids. So, the new dataset will contain the first row but not the third. The final data set should not have any duplicate ids.datetime id2020-01-22 11:57:09.286 UTC 52020-01-22 11:57:02.303 UTC 6How can I do any of these?P.S: The dataset I provided might not be a good example since the condition is 2 seconds but here it's in minutes | Do you want this? :df.datetime = pd.to_datetime(df.datetime)c = 0def count(x): global c x = x.sort_values('datetime') if len(x) > 1: diff = (x.iloc[-1]['datetime'] - x.iloc[0]['datetime']) if diff < timedelta(seconds=2): c += 1 return x.head(1)new_df = df.groupby('id').apply(count).reset_index(drop=True)Now, if you print c it'll show the count which is 1 for this case and new_df will hold the final dataframe. |
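The same logic can be tried without the global counter by using vectorized groupby operations; this is a sketch, and the threshold below is 2 minutes rather than 2 seconds, following the question's P.S. about units:

```python
import pandas as pd

df = pd.DataFrame({
    'datetime': pd.to_datetime([
        '2020-01-22 11:57:09.286',
        '2020-01-22 11:57:02.303',
        '2020-01-22 11:59:02.303']),
    'id': [5, 6, 5],
})

grouped = df.groupby('id')['datetime']
span = grouped.agg(lambda s: s.max() - s.min())   # duration per id
sizes = grouped.size()

# ids with more than one row whose duration is under the threshold
count = ((sizes > 1) & (span < pd.Timedelta(minutes=2))).sum()
print(count)  # 1  (id 5)

# one row per id: the row holding each id's earliest datetime
first_rows = df.loc[grouped.idxmin()]
print(first_rows)
```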
Check if one series is subset of another in Pandas I have 2 columns from 2 different dataframes. I want to check if column 1 is a subset of column 2.I was using the following code:set(col1).issubset(set(col2))The issue with this is that if col1 has only integers and col2 has both integers and strings, then this returns false. This happens because elements of col2 are coerced into strings. For example,set([376, 264, 365, 302]) & set(['302', 'water', 'nist1950', '264', '365', '376'])I tried using isin from pandas. But if col1 and col2 are series then this gives a series of Boolean values. I want True or False.How do I solve this? Is there a simpler function that I have missed?Edit 1Adding an example.col10 3651 3762 3023 264Name: subject, dtype: int64col20 nist19501 nist19502 water3 water4 3765 3766 3027 3028 3659 36510 26411 26412 37613 376Name: subject, dtype: objectEdit 2col1 and col2 can have integers, strings, floats etc. I would like to not make any prejudgement about what is in these columns. | You could use isin with all to check whether all of your col1 elements contains in col2. 
For converting to numeric you could use pd.to_numeric:s1 = pd.Series([376, 264, 365, 302])s2 = pd.Series(['302', 'water', 'nist1950', '264', '365', '376'])res = s1.isin(pd.to_numeric(s2, errors='coerce')).all()In [213]: resOut[213]: TrueMore detailed:In [214]: pd.to_numeric(s2, errors='coerce')Out[214]:0 3021 NaN2 NaN3 2644 3655 376dtype: float64In [215]: s1.isin(pd.to_numeric(s2, errors='coerce'))Out[215]:0 True1 True2 True3 Truedtype: boolNote pd.to_numeric works with pandas version >=0.17.0 for previous you cound use convert_objects with convert_numeric=TrueEDITIf you prefer solution with set you could convert your first set to str as well and then compare them with your code:s3 = set(map(str, s1))In [234]: s3Out[234]: {'264', '302', '365', '376'}Then you could use issubset for s2:In [235]: s3.issubset(s2)Out[235]: Trueor for set(s2):In [236]: s3.issubset(set(s2))Out[236]: TrueEDIT2s1 = pd.Series(['376', '264', '365', '302'])s4 = pd.Series(['nist1950', 'nist1950', 'water', 'water', '376', '376', '302', '302', '365', '365', '264', '264', '376', '376'])In [263]: s1.astype(float).isin(pd.to_numeric(s4, errors='coerce')).all()Out[263]: True |
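Another option is to normalize both sides to strings before comparing. This sidesteps to_numeric entirely, but it assumes integer-like values stringify identically on both sides (376 and '376' match, while a float 376.0 would become '376.0' and would not):

```python
import pandas as pd

def is_subset(col1, col2):
    # cast both columns to str so mixed dtypes compare consistently
    return set(col1.astype(str)).issubset(set(col2.astype(str)))

s1 = pd.Series([376, 264, 365, 302])
s2 = pd.Series(['302', 'water', 'nist1950', '264', '365', '376'])
print(is_subset(s1, s2))  # True
```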
Does twisted epollreactor use non-blocking dns lookup? It seems obvious that it would use the twisted names api and not any blocking way to resolve host names.However digging in the source code, I have been unable to find the place where the name resolution occurs. Could someone point me to the relevant source code where the host resolution occurs ( when trying to do a connectTCP, for example).I really need to be sure that connectTCP wont use blocking DNS resolution. | It seems obvious, doesn't it?Unfortunately:Name resolution is not always configured in the obvious way. You think you just have to read /etc/resolv.conf? Even in the specific case of Linux and DNS, you might have to look in an arbitrary number of files looking for name servers.Name resolution is much more complex than just DNS. You have to do mDNS resolution, possibly look up some LDAP computer records, and then you have to honor local configuration dictating the ordering between these such as /etc/nsswitch.conf.Name resolution is not exposed via a standard or useful non-blocking API. Even the glibc-specific getaddrinfo_a exposes its non-blockingness via SIGIO, not just a file descriptor you can watch. Which means that, like POSIX AIO, it's probably just a kernel thread behind your back anyway.For these reasons, among others, Twisted defaults to using a resolver that just calls gethostbyname in a thread.However, if you know that for your application it is appropriate to have DNS-only hostname resolution, and you'd like to use twisted.names rather than your platform resolver - in other words, if scale matters more to you than esoteric name-resolution use-cases - that is supported. You can install a resolver from twisted.names.client onto the reactor, appropriately configured for your application and all future built-in name resolutions will be made with that resolver. |
recursively counting the number of elements in a list that are not v For a list I want to recursively count the number of elements that are not v. My code so far looks like:def number_not(thelist, v):"""Returns: number of elements in thelist that are NOT v.Precondition: thelist is a list of ints v is an int"""total = 0if thelist is []: return totalelif thelist[0] is v: print "is v" total += 0 print total return number_not(thelist[1:],v)elif thelist[0] is not v: print "is not v" total += 1 print total return number_not(thelist[1:],v)return totalIt will print the total for each of the individual numbers, but not the final total. For example, for list = [1,2,2,2,1], it will print:is not v1is v0is v0is v0is not v1But then my code gets a traceback(list index out of range) error because it keeps going. How do I make it so that it recurses only for the length of the list and that it return the proper total, which for the example is 2 | All the code is fine, just the termination condition you have added is not correct,it should be if not thelist:Change your code to check the empty list, if thelist is []: to the above. |
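Beyond the termination condition, the original code also compares with `is`, which is not a reliable way to compare integer values; `==` should be used. A cleaned-up sketch of the same recursion:

```python
def number_not(thelist, v):
    """Recursively count elements of thelist that are not v."""
    if not thelist:                       # empty list: base case
        return 0
    rest = number_not(thelist[1:], v)     # count over the tail
    return rest if thelist[0] == v else 1 + rest

print(number_not([1, 2, 2, 2, 1], 2))  # 2
```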
Monkey patching pandas and matplotlib to remove spines for df.plot() The question:I'm trying to grasp the concept of monkey patching and at the same time make a function to produce the perfect time-series plot. How can I include the following matplotlib functionality in pandas pandas.DataFrame.plot()?ax.spines['top'].set_visible(False)ax.spines['right'].set_visible(False)ax.spines['bottom'].set_visible(False)ax.spines['left'].set_visible(False)complete code at the end of the questionThe details:I think the default settings in df.plot() is pretty neat, especially if you're running a Jupyter Notebook with a dark theme like chesterish from dunovank:And I'd like to use it for as much of my data analysis workflow as possible, but I'd really like to remove the frame (or what's called spines) like this:Arguably, this is a perfect time-series plot. But df.plot() doesn't have a built-in argument for this. The closest thing seems to be grid = False, but that takes away the whole grid in the same run:What I've triedI know I can wrap the spine snippet in a function along with df.plot() so I end up with this:Snippet 1:def plotPerfect(df, spline): ax = df.plot() if not spline: ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) return(ax)plotPerfect(df = df, spline = False)Output 1:But is that the "best" way to do it with regards to flexibilty and readability for future amendments? Or even the fastest with regards to execution time if we're talking hundreds of plots?I know how I can get the df.plot() source, but everything from there leaves me baffled. So how do I include those settings in df.plot? 
And perhaps the wrapped function approach is just as good as monkey patching?Snippet with full code and sample data:To reproduce the example 100%, paste this into a Jupyter Notebook cell with the chesterish theme activated:# importsimport pandas as pdimport numpy as npfrom jupyterthemes import jtplot# Sample datanp.random.seed(123)rows = 50dfx = pd.DataFrame(np.random.randint(90,110,size=(rows, 1)), columns=['Variable Y'])dfy = pd.DataFrame(np.random.randint(25,68,size=(rows, 1)), columns=[' Variable X'])df = pd.concat([dfx,dfy], axis = 1)jtplot.style()# Plot with default settingsdf.plot()# Wrap df.plot() and matplotlib spine in a functiondef plotPerfect(df, spline): ax = df.plot() if not spline: ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) return(ax)# Plot the perfect time-series plotplotPerfect(df = df, spline = False) | This seems like an xyproblem.Monkey patching (The Y)The question asks for monkey patching pandas plot function to add additional features. This can in this case be done by replacing the pandas.plotting._core.plot_frame function with a custom version of it.import pandas as pdimport pandas.plotting._coreorginal = pandas.plotting._core.plot_framedef myplot(*args, **kwargs): spline = kwargs.pop("spline", True) ax = orginal(*args, **kwargs) ax.set_frame_on(spline) ax.grid(not spline) ax.tick_params(left=spline, bottom=spline) return axpandas.plotting._core.plot_frame = myplotThen use it asdf = pd.DataFrame([[0.1, 0.1], [0.9, 0.9]]).set_index(0)df.plot() ## Normal Plotdf.plot(spline=False) ## "Despined" Plot Note that if in jupyter notebook, the cell with the monkey patching cannot be run more than once, else it would end up in recursion.Styling (The X)The above is pretty overkill for changing the style of a plot. One should rather use the style options of matplotlib. 
mystyle = {"axes.spines.left" : False, "axes.spines.right" : False, "axes.spines.bottom" : False, "axes.spines.top" : False, "axes.grid" : True, "xtick.bottom" : False, "ytick.left" : False,}Then to apply this for some plots in the notebook, use the plt.style.context manager,import pandas as pdimport matplotlib.pyplot as pltdf = pd.DataFrame([[0.1, 0.1], [0.9, 0.9]]).set_index(0)df.plot() ## Normal Plotwith plt.style.context(mystyle): df.plot() ## "Despined" Plot Or, if you want to apply this style globally, update the rcParams.plt.rcParams.update(mystyle) |
How to fix positioning of labels in tkinter? So I am trying to have a GUI for my Converter (it is intentionally meant to go up to 255 aka 8bits)And I have got it to work on the button layout as planned. But to make it more user-friendly I wanted to put a 'Convert From' label/text above the buttons. However, as soon as I shifted the buttons down a row it didn't go as planned and after careful examination of it. I have got nowhere.image reference of how it looks without label above the buttons (row = 0)image reference of how it looks with the label above the buttonsas you can see it takes it off screen and out of uniform (row = 1)#Converter GUI#importing necessary libaries import tkinterimport tkinter.ttkimport tkinter.messageboximport sys#defining the types of conversion def bin2den(value): if len(value) != 8: #checks that it has a length of 8 binary digits return "None" return int(value, 2)def bin2hex(value): if len(value) != 8: #checks that it has a length of 8 binary digits return "None" Ox = hex(int(value, 2))[2:].upper() Ox = Ox.zfill(2) return Oxdef bin2bin(value): if len(value) != 8: #checks that it has a length of 8 binary digits print("Invalid input, 8 bits required!") return "None" return value.zfill(8)def den2bin(value): if int(value) > 255 or int(value) < 0: return "None" Ob = bin(int(value))[2:]#removing the ob prefix filled = Ob.zfill(8) #filling it to become an 8 bit binary if worth less than 8 bits return filleddef den2hex(value): if int(value) > 255 or int(value) < 0: return "None" Ox = hex(int(value))[2:].upper() #removing the ox prefix and capitalising the hex output Ox = Ox.zfill(2)#filling the output if not double digits return Oxdef den2den(value): if int(value) > 255 or int(value) < 0: print("Out Of Range") return "None" return value def hex2bin(value): while len(value) != 2 and len(value) > 2: #checking if hex value outside of ff return "None" Ob = bin(int(value, 16))[2:] #removing the ob prefix Ob = Ob.zfill(8)#filling binary to be 8bits if 
value is below 8 bits return Obdef hex2den(value): while len(value) != 2 and len(value) > 2: #checking if hex value outside of ff return "None" return int(value, 16)def hex2hex(value): while len(value) != 2 and len(value) > 2: #checking if hex value outside of ff print("Invalid input, try again") return "None" value = value.upper() #capitaliseing for formality return valuedef readFile(fileName): fileObj = open(fileName, "r")#opens file in read only mode HexArray = fileObj.read().splitlines()#puts file into an array fileObj.close() return HexArray#setting main window class#and defining main attributes of the windowclass MainWindow(): FONT = ("Consolas", 16) TxtMaxLen = 32 def __init__(self): self._window = tkinter.Tk() self._window.title("Converter") #self._window.geometry("800x240") redundant as set below due to positioning self._window["bg"] = "#20A3FF" #background colour self._window.resizable(False, False) #stops the window being resized windowWidth = self._window.winfo_reqwidth() windowHeight = self._window.winfo_reqheight() # Gets both half the screen width/height and window width/height positionRight = int(self._window.winfo_screenwidth()/2 - windowWidth/2)-330 positionDown = int(self._window.winfo_screenheight()/2 - windowHeight/2)-200 self._window.geometry(f"{windowWidth}x{windowHeight}+{positionRight}+{positionDown}") self._window.geometry("800x240")#setting window size label = tkinter.Label(self._window, text="Number: ", font=MainWindow.FONT)#label defined for number input box label.grid(row=0, column=0, padx=10, pady=5)#positioning / dimensions of input box self._txt_input = tkinter.Entry(self._window, width=MainWindow.TxtMaxLen, font=MainWindow.FONT) #defined input box self._txt_input.grid(row=0, column=1, pady=5)#postioning / dimensions of input box self._txt_input.focus() #this is the label that is just runining it #label = tkinter.Label(self._window, text="Convert From ", font=MainWindow.FONT) #label.grid(row=0, column=2, padx=5, pady=5)#positioning 
/ dimensions of input box #following 6 bits of code when row = 0 it works fine but when shifted down a row it goes wrong self._bt_bin = tkinter.Button(self._window, text="Bin", font=MainWindow.FONT, command=self.BinSelection)#button defined for bin self._bt_bin.grid(row=0, column=2, padx=5, pady=5)#postioning / dimensions of button box self._bt_den = tkinter.Button(self._window, text="Den", font=MainWindow.FONT, command=self.DenSelection)#button defined for den self._bt_den.grid(row=0, column=4, padx=5, pady=5)#postioning / dimensions of button box self._bt_hex = tkinter.Button(self._window, text="Hex", font=MainWindow.FONT, command=self.HexSelection)#button defined for bin self._bt_hex.grid(row=0, column=6, padx=5, pady=5)#postioning / dimensions of button box separator = tkinter.ttk.Separator(self._window,orient=tkinter.HORIZONTAL) separator.grid(row=1, column=1, pady=4) #setting the Output boxes and the labels accordingly #binary output box and label label = tkinter.Label(self._window, text="Binary:", font=MainWindow.FONT)#label defined for number box label.grid(row=2, column=0, padx=10, pady=5) self._stringvar_bin = tkinter.StringVar() txt_output = tkinter.Entry(self._window, textvariable=self._stringvar_bin, width=MainWindow.TxtMaxLen, state="readonly", font=MainWindow.FONT)#entry box set to readonly to act as a display box txt_output.grid(row=2, column=1, pady=5) #denary output box and label label = tkinter.Label(self._window, text="Denary:", font=MainWindow.FONT)#label defined for number box label.grid(row=3, column=0, padx=5, pady=5) self._stringvar_den = tkinter.StringVar() txt_output = tkinter.Entry(self._window, textvariable=self._stringvar_den, width=MainWindow.TxtMaxLen, state="readonly", font=MainWindow.FONT) txt_output.grid(row=3, column=1, pady=5) #hexadecimal output box and label label = tkinter.Label(self._window, text="Hexadecimal:", font=MainWindow.FONT)#label defined for number box label.grid(row=4, column=0, padx=5, pady=5) self._stringvar_hex 
= tkinter.StringVar() txt_output = tkinter.Entry(self._window, textvariable=self._stringvar_hex, width=MainWindow.TxtMaxLen, state="readonly", font=MainWindow.FONT) txt_output.grid(row=4, column=1, pady=5) def BinSelection(self): try: Bin = self._txt_input.get().strip().replace(" ", "") BinValue = bin2bin(Bin) DenValue = bin2den(Bin) HexValue = bin2hex(Bin) self._set_values(BinValue, DenValue, HexValue) except Exception as ex: tkinter.messagebox.showerror("Error", "Invalid conversion") tkinter.messagebox.showinfo("Error", "Enter a valid 8 bit binary number or a integer / hexadecimal value") print(ex, file=sys.stderr) def DenSelection(self): try: Den = self._txt_input.get().strip().replace(" ", "") DenValue = den2den(Den) BinValue = den2bin(Den) HexValue = den2hex(Den) self._set_values(BinValue, DenValue, HexValue) except Exception as ex: tkinter.messagebox.showerror("Error", "Invalid conversion") print(ex, file=sys.stderr) def HexSelection(self): try: Hex = self._txt_input.get().strip().replace(" ", "") HexValue = hex2hex(Hex) BinValue = hex2bin(Hex) DenValue = hex2den(Hex) self._set_values(BinValue, DenValue, HexValue) except Exception as ex: tkinter.messagebox.showerror("Error", "Invalid conversion") print(ex, file=sys.stderr) def _set_values(self, BinValue, DenValue, HexValue): if not BinValue.startswith(""): BinValue = "" + BinValue if not HexValue.startswith(""): HexValue = "" + HexValue self._stringvar_bin.set(BinValue) self._stringvar_den.set(DenValue) self._stringvar_hex.set(HexValue) def mainloop(self): self._window.mainloop()if __name__ == "__main__": win = MainWindow() win.mainloop()any insight on how to fix this would be great thanks. and sorry if this whole question was just a silly one. | Answer to my question as the Issue is now resolved. 
Fixed it using columnspan, which lets a widget span several grid columns:

label = tkinter.Label(self._window, text="Convert From ", font=MainWindow.FONT)
label.grid(row=0, column=2, columnspan=3, padx=5, pady=5)

Here is what it looks like with the resolved problem
IP address must be specified when pinging a website with Python So I'm trying to ping a website such as Microsoft or Google and print out the results, but when I run the script it just says: "IP address must be specified.". I've tried looking around to see why this is happening, but can't seem to narrow down a solution. Here's my code:import subprocessprint('Ping www.microsoft.com')print()address = 'www.microsoft.com'subprocess.call(['ping', '-c 3', address])Am I doing something wrong? If so, any help or explanation would be greatly appreciated! | To show the output of the subprocess call you can use check_output method: See this answer for detailsimport subprocessdef ping(): print('Ping www.microsoft.com') print() address = 'www.microsoft.com' print(subprocess.check_output(['ping', '-c', '3', address]).decode())ping()Output:Ping www.microsoft.comPING e13678.dspb.akamaiedge.net (23.53.160.151) 56(84) bytes of data.64 bytes from a23-53-160-151.deploy.static.akamaitechnologies.com (23.53.160.151): icmp_seq=1 ttl=55 time=83.6 ms64 bytes from a23-53-160-151.deploy.static.akamaitechnologies.com (23.53.160.151): icmp_seq=2 ttl=55 time=83.5 ms64 bytes from a23-53-160-151.deploy.static.akamaitechnologies.com (23.53.160.151): icmp_seq=3 ttl=55 time=83.7 ms--- e13678.dspb.akamaiedge.net ping statistics ---3 packets transmitted, 3 received, 0% packet loss, time 2003msrtt min/avg/max/mdev = 83.567/83.648/83.732/0.067 ms |
Programming a run schedule Currently I'm trying to add a seasonal run schedule to a current program that I have. I've come up with the following code bellow that works but I'm trying to do this without having to regularly update the year in my defined dates.from datetime import datetd = date.today()fs = date(2018, 3, 31)fe = date(2018, 10, 18)ws = date(2018, 5, 31)we = date(2019, 3, 18)sps = date(2018, 10, 1)spe = date(2019, 5, 18)sus = date(2018, 11, 1)sue = date(2019, 9, 17)if fs < td < fe: print "FALL"if ws < td < we: print "WINTER"if sps < td < spe: print "SPRING"if sus < td < sue: print "SUMMER"With the current code, if today were 10/5/18, it prints:FALLWINTERSPRING | It looks like the code is independent of the year (or am I missunderstanding your code??), so maybe you could just take the current year as the base.from datetime import datetd = date.today()current_year = td.yearfa_s = date(current_year , 3, 31)fa_e = date(current_year , 10, 18)wi_s = date(current_year , 5, 31)wi_e = date(current_year + 1, 3, 18)sp_s = date(current_year , 10, 1)sp_e = date(current_year + 1, 5, 18)su_s = date(current_year , 11, 1)su_e = date(current_year + 1, 9, 17) |
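One way to avoid hard-coding any year at all is to store only month/day pairs and derive the year from the date being tested, letting windows that wrap past New Year carry into the adjacent year. A sketch of that idea (the season boundaries are copied from the question):

```python
from datetime import date

# (start_month, start_day), (end_month, end_day) per season,
# taken from the question's fixed dates
SEASONS = {
    'FALL':   ((3, 31), (10, 18)),
    'WINTER': ((5, 31), (3, 18)),
    'SPRING': ((10, 1), (5, 18)),
    'SUMMER': ((11, 1), (9, 17)),
}

def active_seasons(td):
    names = []
    for name, ((sm, sd), (em, ed)) in SEASONS.items():
        start, end = date(td.year, sm, sd), date(td.year, em, ed)
        if end < start:                 # window wraps past New Year
            if td < start:              # td falls in last year's window
                start = date(td.year - 1, sm, sd)
            else:
                end = date(td.year + 1, em, ed)
        if start < td < end:
            names.append(name)
    return names

print(active_seasons(date(2018, 10, 5)))  # ['FALL', 'WINTER', 'SPRING']
```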
Empty response after submit a form with (requests) python package I'm trying to submit a form using (requests) module.Here is the form that I want to submit:<form method="POST" enctype="multipart/form-data" action="/cgi-bin/claws72.pl"><input type="hidden" name="email" value="a.nobody@here.ac.uk"><br><br>Select tagset:<input type="radio" name="tagset" value="c5" checked=""> C5<input type="radio" name="tagset" value="c7"> C7<br><br>Select output style:<input type="radio" name="style" value="horiz" checked=""> Horizontal<input type="radio" name="style" value="vert"> Vertical<input type="radio" name="style" value="xml"> Pseudo-XML<br><br><textarea name="text" rows="10" cols="50" wrap="virtual">Type (or paste) your text to be tagged into this box.</textarea><br><input type="SUBMIT" value="Tag text now"><input type="reset" value="Reset form"></form>This is the website that contains this form: http://ucrel.lancs.ac.uk/claws/trial.htmlHere is my code:import requestsdata = {'email' : 'a.nobody@here.ac.uk','tagset':'c7','style' : 'xml','text' : 'TEST' }r = requests.post('http://ucrel.lancs.ac.uk/cgi-bin/claws72.pl', data=data)print(r.text) #0 words tagged (Why? it should tag the (TEST) word)print(r.ok) #TrueI use the same code with my own website form and it works, but can't figure out why it doesn't do so here! Do you think the website itself block such requests?Thanks | If You check request data using Firebug or some similar tool, You'd see that request data is actually in following format:-----------------------------41184676334Content-Disposition: form-data; name="email"a.nobody@here.ac.uk-----------------------------41184676334Content-Disposition: form-data; name="tagset"c7-----------------------------41184676334Content-Disposition: form-data; name="style"xml-----------------------------41184676334Content-Disposition: form-data; name="text"TEST-----------------------------41184676334--Try formatting your data this way, and then try again. 
Also, it would be good idea to pass other request header fields (such as Accept-Encoding, Content-Type...). |
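Since the form declares enctype="multipart/form-data", one way to reproduce that encoding with requests is the files= argument, where a (None, value) tuple sends a plain field rather than a file upload. A sketch that prepares the request without sending it, so the encoding can be inspected:

```python
import requests

data = {
    'email': 'a.nobody@here.ac.uk',
    'tagset': 'c7',
    'style': 'xml',
    'text': 'TEST',
}

# (None, value) marks a plain form field in the multipart body
multipart = {name: (None, value) for name, value in data.items()}

req = requests.Request(
    'POST', 'http://ucrel.lancs.ac.uk/cgi-bin/claws72.pl',
    files=multipart,
).prepare()

print(req.headers['Content-Type'])   # multipart/form-data; boundary=...
# requests.Session().send(req) would actually submit the form
```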
fakename() not populating into gmail fake.name() appears to give a random name, no errors, but I can see in selenium chromedriver that nothing is input. Any idea why this is ? from faker import Fakerfake = Faker('it_IT')for _ in range(1): print(fake.name()) username = driver.find_element_by_css_selector("#emailPass") username.send_keys(fake.name()) time.sleep(2) | Use this for storing the name as a variable rather than using the function.from faker import Fakerfake = Faker('it_IT')for _ in range(1): name = fake.name() print(name) username = driver.find_element_by_css_selector("#emailPass") username.send_keys(name) time.sleep(2) |
MYSQL python list index out of range I'm wring a web scraping program to collect data from truecar.commy database has 3 columnsand when I run the program I get an error which is this : list indext out of rangehere is what I've done so far:import mysql.connectorfrom bs4 import BeautifulSoupimport requestsimport re# take the car's namerequested_car_name = input()# inject the car's name into the URLmy_request = requests.get('https://www.truecar.com/used-cars-for-sale/listings/' + requested_car_name + '/location-holtsville-ny/?sort[]=best_match')my_soup = BeautifulSoup(my_request.text, 'html.parser')# ************ car_model column in database ******************car_model = my_soup.find_all( 'span', attrs={'class': 'vehicle-header-make-model text-truncate'})# we have a list of car modelscar_list = []for item in range(20): # appends car_model to car_list car_list.append(car_model[item].text)car_string = ', '.join('?' * len(car_list))# ************** price column in database *****************************price = my_soup.find_all( 'div', attrs={'data-test': 'vehicleCardPricingBlockPrice'})price_list = []for item in range(20): # appends price to price_list price_list.append(price[item].text)price_string = ', '.join('?' * len(price_list))# ************** distance column in database ***************************distance = my_soup.find_all('div', attrs={'data-test': 'vehicleMileage'})distance_list = []for item in range(20): # appends distance to distance_list distance_list.append(distance[item].text)distance_string = ', '.join('?' 
* len(distance_list))# check the connectionprint('CONNECTING ...')mydb = mysql.connector.connect( host="xxxxx", user="xxxxxx", password="xxxxxx", port='xxxxxx', database='xxxxxx')print('CONNECTED')# checking the connection is donemy_cursor = mydb.cursor(buffered=True)insert_command = 'INSERT INTO car_name (car_model, price, distance) VALUES (%s, %s, %s);' % (car_string, price_string, distance_string)# values = (car_string, price_string, distance_string)my_cursor.execute(insert_command, car_list, price_list, distance_list)mydb.commit()print(my_cursor.rowcount, "Record Inserted")mydb.close()and I have another problem that I can't insert a list into my columns and I have tried many ways but unfortunately I wasn't able to get it workingI think the problem is in this line:IndexError Traceback (most recent call last)<ipython-input-1-4a3930bf0f57> in <module> 23 for item in range(20): 24 # appends car_model to car_list---> 25 car_list.append(car_model[item].text) 26 27 car_string = ', '.join('?' * len(car_list))IndexError: list index out of rangeI don't want it to insert the whole list to 1 row in database . I want the first 20 car's price, model, mileage in truecar.com in my database | Ya you are hard coding the length. Change how you are iterating through your soup elements. 
So:import mysql.connectorfrom bs4 import BeautifulSoupimport requests# take the car's namerequested_car_name = input('Enter car name: ')# inject the car's name into the URLmy_request = requests.get('https://www.truecar.com/used-cars-for-sale/listings/' + requested_car_name + '/location-holtsville-ny/?sort[]=best_match')my_soup = BeautifulSoup(my_request.text, 'html.parser')# ************ car_model column in database ******************car_model = my_soup.find_all( 'span', attrs={'class': 'vehicle-header-make-model text-truncate'})# we have a list of car modelscar_list = []for item in car_model: # appends car_model to car_list car_list.append(item.text)# ************** price column in database *****************************price = my_soup.find_all( 'div', attrs={'data-test': 'vehicleCardPricingBlockPrice'})price_list = []for item in price: # appends price to price_list price_list.append(item.text)# ************** distance column in database ***************************distance = my_soup.find_all('div', attrs={'data-test': 'vehicleMileage'})distance_list = []for item in distance: # appends distance to distance_list distance_list.append(item.text)# check the connectionprint('CONNECTING ...')mydb = mysql.connector.connect( host="xxxxx", user="xxxxxx", password="xxxxxx", port='xxxxxx', database='xxxxxx')print('CONNECTED')# checking the connection is donemy_cursor = mydb.cursor(buffered=True)insert_command = 'INSERT INTO car_name (car_model, price, distance) VALUES (%s, %s, %s)'values = list(zip(car_list, price_list, distance_list))my_cursor.executemany(insert_command, values)mydb.commit()print(my_cursor.rowcount, "Record Inserted")mydb.close()ALTERNATE:there's also the API where you can fetch the dat:import mysql.connectorimport requestsimport math# take the car's namerequested_car_name = input('Enter car name: ')# inject the car's name into the URLurl = 'https://www.truecar.com/abp/api/vehicles/used/listings'payload = {'city': 'holtsville','collapse': 'true','fallback': 
'true','include_incentives': 'true','include_targeted_incentives': 'true','make_slug': requested_car_name,'new_or_used': 'u','per_page': '30','postal_code': '','search_event': 'true','sort[]': 'best_match','sponsored': 'true','state': 'ny','page':'1'}jsonData = requests.get(url, params=payload).json()total = jsonData['total']total_pages = math.ceil(total/30)total_pages_input = input('There are %s pages to iterate.\nEnter the number of pages to go through or type ALL: ' %total_pages)if total_pages_input.upper() == 'ALL': total_pages = total_pageselse: total_pages = int(total_pages_input)values = []for page in range(1,total_pages+1): if page == 1: car_listings = jsonData['listings'] else: payload.update({'page':'%s' %page}) jsonData = requests.get(url, params=payload).json() car_listings = jsonData['listings'] for listing in car_listings: vehicle = listing['vehicle'] ex_color = vehicle['exterior_color'] in_color = vehicle['interior_color'] location = vehicle['location'] price = vehicle['list_price'] make = vehicle['make'] model = vehicle['model'] mileage = vehicle['mileage'] style = vehicle['style'] year = vehicle['year'] engine = vehicle['engine'] accidentCount = vehicle['condition_history']['accidentCount'] ownerCount = vehicle['condition_history']['ownerCount'] isCleanTitle = vehicle['condition_history']['titleInfo']['isCleanTitle'] isFrameDamaged = vehicle['condition_history']['titleInfo']['isFrameDamaged'] isLemon = vehicle['condition_history']['titleInfo']['isLemon'] isSalvage = vehicle['condition_history']['titleInfo']['isSalvage'] isTheftRecovered = vehicle['condition_history']['titleInfo']['isTheftRecovered'] values.append((ex_color, in_color,location,price,make,model,mileage, style,year,engine,accidentCount,ownerCount,isCleanTitle,isFrameDamaged, isLemon, isSalvage,isTheftRecovered)) print('Completed: Page %s of %s' %(page,total_pages)) # check the connectionprint('CONNECTING ...')mydb = mysql.connector.connect( host="xxxxx", user="xxxxxx", 
password="xxxxxx", port='xxxxxx', database='xxxxxx')print('CONNECTED')# checking the connection is donemy_cursor = mydb.cursor(buffered=True)# create_command = ''' create table car_information (exterior_color varchar(255), interior_color varchar(255),location varchar(255),price varchar(255),make varchar(255),model varchar(255),mileage varchar(255),# style varchar(255),year varchar(255),engine varchar(255),accidentCount varchar(255),ownerCount varchar(255),isCleanTitle varchar(255),isFrameDamaged varchar(255),# isLemon varchar(255), isSalvage varchar(255),isTheftRecovered varchar(255))'''# my_cursor.execute(create_command)# print('created')insert_command = '''INSERT INTO car_name (exterior_color, interior_color,location,price,make,model,mileage, style,year,engine,accidentCount,ownerCount,isCleanTitle,isFrameDamaged, isLemon, isSalvage,isTheftRecovered) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'''my_cursor.executemany(insert_command, values)mydb.commit()print(my_cursor.rowcount, "Record Inserted")mydb.close() |
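The key fix in the answer is zipping the scraped lists into row tuples and passing them to executemany. The same pattern can be tried locally against the stdlib sqlite3 module (note sqlite uses ? placeholders where mysql.connector uses %s; the sample rows here are made up):

```python
import sqlite3

# hypothetical scraped values
car_list = ['BMW 3 Series', 'BMW X5']
price_list = ['$18,500', '$27,900']
distance_list = ['42,000 miles', '31,250 miles']

values = list(zip(car_list, price_list, distance_list))

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE car_name (car_model TEXT, price TEXT, distance TEXT)')
# one parameter tuple per row
conn.executemany('INSERT INTO car_name VALUES (?, ?, ?)', values)
conn.commit()
print(conn.execute('SELECT COUNT(*) FROM car_name').fetchone()[0])  # 2
```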
Alternative to using 'in' with numpy.where()

Lets say I have an array, 'foo', with two columns. Column 0 has values 1 to 12 indicating months. Column 1 has the corresponding measurement values. If I wanted to create a mask of measurement values from December, January and February (12, 1, 2) I would suspect that I could:

    numpy.where(foo[:,1] in (12, 1, 2), False, True)

But it would appear that my clever 'in (12, 1, 2)' doesn't work as a conditional for where(). Nor does it appear to work as [12, 1, 2], etc. Is there another clever way to do this? Is there a better way for me to collect all of the (12, 1, 2) measurements into an array? What is the numpy way? (Reshaping the array is out of the question because there is an irregular number of measurements for each month.)

|

I think the reason the 'in (12, 1, 2)' does not work is that the element before the 'in' has to be a single element. But for this, numpy has the function in1d (documentation) to do an 'in' with a numpy array. With your code:

    np.where(np.in1d(foo[:,0], [12, 1, 2]), False, True)

To complete the answer with the comment: in this case using where is redundant, as the output of in1d can be used to index foo directly:

    foo[np.in1d(foo[:,0], [12, 1, 2])]

or, for the complement:

    foo[~np.in1d(foo[:,0], [12, 1, 2])]

Note: in1d is only available from numpy 1.4 or higher.
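As a concrete illustration of the membership-mask approach (the array values here are made up for the example; newer NumPy spells the function np.isin, with np.in1d as the older name):

```python
import numpy as np

# column 0: month number, column 1: measurement (example data)
foo = np.array([[12, 5.0],
                [1,  7.5],
                [6,  3.2],
                [2,  9.9],
                [7,  4.4]])

mask = np.isin(foo[:, 0], [12, 1, 2])   # True for Dec/Jan/Feb rows

print(foo[mask][:, 1].tolist())    # [5.0, 7.5, 9.9] -- winter measurements
print(foo[~mask][:, 1].tolist())   # [3.2, 4.4] -- everything else
```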
Split the string 'abcde' into a list with separate elements

I have a string like 'g fmnc wms bgblr rpylqjyrc gr zw fylb'. I use the .split() function in python and get:

    ['g', 'fmnc', 'wms', 'bgblr', 'rpylqjyrc', 'gr', 'zw', 'fylb']

Now I want to split each of the elements into separate lists like [['g'], [['f'],['m'],['n'],['c']], ...] and so on. My problem is to split the element ['abcbd'] into [['a'],['b'],['c'],['b'],['d']].

|

Try this:

    [list(item) for item in s.split()]

It will give you [['g'], ['f', 'm', 'n', 'c'], ...] which isn't quite what you asked for, but probably what you meant.
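A quick check of both shapes: the flat-character version from the answer, and the singleton-list version the question literally asks for:

```python
s = 'g fmnc wms'

# one list of characters per word, as in the answer:
print([list(item) for item in s.split()])   # [['g'], ['f', 'm', 'n', 'c'], ['w', 'm', 's']]

# literally as requested: each character wrapped in its own list
print([[c] for c in 'abcbd'])               # [['a'], ['b'], ['c'], ['b'], ['d']]
```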
Convert from integer to dictionary with repeating digits

I need to take an integer n and add all its digits into a Python dictionary (hashtable) to later access them with an O(1) complexity.

    n = 941726149
    d = {}
    for i in str(n):
        d[i] = None
    print(d)

The problem is when there are repeating digits, the dictionary overrides them, changing its order. For example, the above code outputs:

    {'9': None, '4': None, '1': None, '7': None, '2': None, '6': None}

But the output I need is:

    {'9': None, '4': None, '1': None, '7': None, '2': None, '6': None, '1': None, '4': None, '9': None}

I know it's impossible to add repeating keys into a hashtable, so I'm wondering if that data structure can be modified to fit this behavior, or use another (maybe custom) structure that supports it.

Edit: Example input and output:

    n = 123455
    d[5] = [4, 5]

    n = 987385
    d[8] = [1, 4] | d[9] = [0]

Thanks in advance.

|

Here's an example that, for each digit present in the number, stores its indexes. As a note, digits not present in the number won't be present in the structure either, so trying to index them (structure[digit_not_existing_in_number]) will yield KeyError (that's why dict.get is required). But that can be easily fixed.

code00.py:

    #!/usr/bin/env python

    import sys


    def o1_access_structure(n):
        ret = {}
        for i, e in enumerate(str(n)):
            ret.setdefault(int(e), []).append(i)
        return ret


    def main(*argv):
        ns = (
            123455,
            987385,
            941726149,
        )
        for n in ns:
            print("\n", n)
            s = o1_access_structure(n)
            for e in range(10):
                print("  ", e, s.get(e))


    if __name__ == "__main__":
        print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
                                                       64 if sys.maxsize > 0x100000000 else 32, sys.platform))
        rc = main(*sys.argv[1:])
        print("\nDone.")
        sys.exit(rc)

Output:

    [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q072713761]> "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Scripts\python.exe" ./code00.py
    Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] 064bit on win32

     123455
       0 None
       1 [0]
       2 [1]
       3 [2]
       4 [3]
       5 [4, 5]
       6 None
       7 None
       8 None
       9 None

     987385
       0 None
       1 None
       2 None
       3 [3]
       4 None
       5 [5]
       6 None
       7 [2]
       8 [1, 4]
       9 [0]

     941726149
       0 None
       1 [2, 6]
       2 [4]
       3 None
       4 [1, 7]
       5 None
       6 [5]
       7 [3]
       8 None
       9 [0, 8]

    Done.

Or, if you don't care about the digit indexes, but only their presence / frequency, you could use [Python.Docs]: class collections.Counter([iterable-or-mapping]):

    >>> from collections import Counter
    >>>
    >>> c = Counter(int(e) for e in str(941726149))
    >>>
    >>> for e in range(10):
    ...     print(e, c[e])
    ...
    0 0
    1 2
    2 1
    3 0
    4 2
    5 0
    6 1
    7 1
    8 0
    9 2
python - reconnect stomp connection when dead

As per the documentation, given that I've instantiated a connection:

    >>> import stomp
    >>> c = stomp.Connection([('127.0.0.1', 62615)])
    >>> c.start()
    >>> c.connect('admin', 'password', wait=True)

How do I monitor it so that it reconnects on c.is_connected == False?

    >>> reconnect_on_dead_connection(c)
    ...
    >>> [1479749503] reconnected dead connection

|

You can wrap your connection and check whether it is connected on every call.

    import stomp


    def reconnect(connection):
        """reconnect here"""


    class ReconnectWrapper(object):
        def __init__(self, connection):
            self.__connection = connection

        def __getattr__(self, item):
            if not self.__connection.is_connected:
                reconnect(self.__connection)
            return getattr(self.__connection, item)


    if __name__ == '__main__':
        c = stomp.Connection([('127.0.0.1', 62615)])
        c.start()
        c.connect('admin', 'password', wait=True)
        magic_connection = ReconnectWrapper(c)

Test:

    from scratch_35 import ReconnectWrapper
    import unittest
    import mock


    class TestReconnection(unittest.TestCase):
        def setUp(self):
            self.connection = mock.MagicMock()
            self.reconnect_patcher = mock.patch("scratch_35.reconnect")
            self.reconnect = self.reconnect_patcher.start()

        def tearDown(self):
            self.reconnect_patcher.stop()

        def test_pass_call_to_wrapped_connection(self):
            connection = ReconnectWrapper(self.connection)
            connection.send("abc")
            self.reconnect.assert_not_called()
            self.connection.send.assert_called_once_with("abc")

        def test_reconnect_when_disconnected(self):
            self.connection.is_connected = False
            connection = ReconnectWrapper(self.connection)
            connection.send("abc")
            self.reconnect.assert_called_once_with(self.connection)
            self.connection.send.assert_called_once_with("abc")


    if __name__ == '__main__':
        unittest.main()

Result:

    ..
    ----------------------------------------------------------------------
    Ran 2 tests in 0.004s

    OK

The key is the magic method __getattr__: it is called every time you try to access an attribute that is not provided by an object directly. More about __getattr__ can be found in the documentation: https://docs.python.org/2/reference/datamodel.html#object.__getattr__
difference between the next function and the next method

When you make a generator by calling a function or method that has the yield keyword in it, you get an object that has a next method. So far as I can tell, there isn't a difference between using this method and using the next builtin function, e.g. my_generator.next() vs next(my_generator). So is there any difference? If not, why are there two ways of calling next on a generator?

|

In Python 2 the internal method for an iterator is next(); in Python 3 it is __next__(). The builtin function next() is aware of this and always calls the right method, making the code compatible with both versions. It also adds a default argument for easier handling of the end of iteration.
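The default argument mentioned in the answer looks like this:

```python
gen = (n * n for n in range(3))

print(next(gen))        # 0
print(next(gen))        # 1
print(next(gen))        # 4
print(next(gen, -1))    # -1: the default is returned instead of raising StopIteration
```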
Is there an equivalent of typedefs for mypy?

Sometimes when coding, I want "special sorts of strings" and "special sorts of integers" for documentation. For example you might have:

    def make_url(name: str) -> URL:

where URL is really a string. In some languages like C you can use a typedef for this, and in python you could do something like:

    URL = str

Is there a correct way to do this? You can do stuff in a very programmatic way, and have:

    class URL(str):
        pass

or even:

    class URL:
        def __init__(self, url):
            self.url = url

But both of these feel excessive, such that for a lot of use cases they aren't really worth the overhead.

|

You can use the NewType helper function to create new types. Here's a small example:

    from typing import NewType

    UserId = NewType('UserId', int)
    some_id = UserId(524313)

    def foo(a: UserId):
        pass

    def bar(a: int):
        pass

    foo(some_id)  # OK
    foo(42)       # error: Argument 1 to "foo" has incompatible type "int"; expected "UserId"
    bar(some_id)  # OK

Pay attention to some points:

The static type checker will treat the new type as if it were a subclass of the original type. This is useful in helping catch logical errors [...]

Note that these checks are enforced only by the static type checker. At runtime, the statement Derived = NewType('Derived', Base) will make Derived a function that immediately returns whatever parameter you pass it. That means the expression Derived(some_value) does not create a new class or introduce any overhead beyond that of a regular function call.
creating a .mat file from python

I have a variable exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]. I would like to create a mat file like the following:

    >> exon : [3*2 double] [2*2 double]

When I use the following Python code to do the same, it shows an error message:

    import scipy.io
    exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]
    scipy.io.savemat('/tmp/out.mat', mdict={'exon': (exon[0], exon[1])})

It would be great if anyone could give a suggestion. Thanks in advance, Vipin T S

|

You seem to want two different arrays linked to the same variable name in Matlab. That is not possible. In MATLAB you can have cell arrays, or structs, which contain other arrays, but you cannot have just a tuple of arrays assigned to a single variable (which is what you have in mdict={'exon': (exon[0], exon[1])}) - there is no concept of a tuple in Matlab.

You will also need to make your objects numpy arrays:

    import numpy as np
    exon = [np.array([[1, 2], [3, 4], [5, 6]]),
            np.array([[7, 8], [9, 10]])]

There is scipy documentation with details of how to save different Matlab types, but assuming you want a cell array:

    obj_arr = np.zeros((2,), dtype=np.object)
    obj_arr[0] = exon[0]
    obj_arr[1] = exon[1]
    scipy.io.savemat('/tmp/out.mat', mdict={'exon': obj_arr})

This will result in the desired cell array in Matlab, or possibly (untested):

    obj_arr = np.array(exon, dtype=np.object)
Can't access AppEngine SDK sites via local ip-address when localhost works just fine, on MacOSX

I can't access AppEngine SDK sites via the local ip-address, while localhost works just fine, on MacOSX using the GoogleAppEngineLauncher. I'm trying to set up a facebook development site (using a dyndns.org hostname pointing at my firewall, which redirects the call to my mac book). It seems like GoogleAppEngineLauncher defaults to localhost and blocks access to the ip-address directly. Is there a way to change that behaviour in GoogleAppEngineLauncher? Is this some kind of limitation built in by Google? It doesn't seem to be an issue of configuration, because there aren't any settings for this. So I'm guessing patching the source will be required?

|

As per the latest documentation, -a won't work anymore. This is possible by passing the --host argument to the dev_appserver.py command:

    dev_appserver.py --host=<your_ip_address> <your_app>

--host= The host address to use for the server. You may need to set this to be able to access the development server from another computer on your network. An address of 0.0.0.0 allows both localhost access and hostname access. Default is localhost.

If you want to access the development server using both localhost and the ip address, use this command:

    dev_appserver.py --host=0.0.0.0 <your_app>
Python turtle tic tac toe

I'm new to python and I programmed a tic tac toe game with an AI that plays against you. Everything is working, but I used textboxes to inform the AI what the player chose. Now I want to upgrade my game so that the player can click on the box he wants to fill instead of typing it in a textbox. My idea was to use onscreenclick(), but I'm having some issues. onscreenclick() returns the coordinates that have been clicked on the canvas, and I want to use a function to determine in which box the player clicked. I got this:

    from turtle import *

    def whichbox(x, y):
        # obviously i got 9 boxes but this is just an example for box 1
        if x < -40 and x > -120:
            if y > 40 and y < 120:
                return 1
            else:
                return 0
        else:
            return 0

    box = onscreenclick(whichbox)
    print(box)

It's obvious that I want box to be 0 or 1 in this case, but instead the value of box is None. Does anyone know how to fix this? It has something to do with the variable box, because if I replace return 1 with print("1") it works. I assume that the variable gets defined too quickly.

The second question I have is whether it's possible to pause the program until the player has clicked on a box, but it's more important to look at the first problem first.

|

Assuming you have named your Screen() in the turtle module, you should then put:

    screen.onscreenclick(whichbox)

instead of:

    onscreenclick(whichbox)

Example:

    from turtle import Turtle, Screen

    turtle = Turtle()
    screen = Screen()

    def ExampleFunction():
        return 7

    screen.onscreenclick(ExampleFunction)

Furthermore, jasonharper is correct when he says the onscreenclick() function is unable to return any value. As such, you can include a print function within your function whichbox() in order to print out a value, like:

    def whichbox(x, y):
        if x < -40 and x > -120:
            if y > 40 and y < 120:
                print(1)
                return 1
            else:
                print(0)
                return 0
        else:
            print(0)
            return 0

Alternatively, if you wanted to keep the print statement outside of whichbox(), you could also do the following:

    screen.onscreenclick(lambda x, y: print(whichbox(x, y)))

which creates a lambda function that passes (x, y) from onscreenclick() to a print statement containing whichbox().
Why do changes to a temporary variable representing a row of a matrix, affect the row of the matrix itself?

I was trying to write code that can do elementary row operations on matrices, and I have run into some issues. I realize that there are libraries with functions that can do these operations; however, I am doing this for my own gratification.

The problem arises with the replacement operation. The intended purpose of this operation is to replace a row by the sum of itself and a multiple of another row. For example, if I have the matrix [[1,2,3],[2,1,3],[3,2,1]], I want to replace the top row (the row [1,2,3]) with the sum of itself and the second row (the row [2,1,3]) multiplied by a factor of 2. I would like the code to give me: [[5,4,9],[2,1,3],[3,2,1]].

When I enter this particular matrix, the answer I get is: [[5, 4, 9], [4, 2, 6], [3, 2, 1]]

My code is as follows:

    def multiply_row(matrix, row_num, factor):
        # Let row_num = 1 indicate the top row
        temp_row = matrix[row_num - 1]
        entries = len(matrix[row_num - 1])
        current_term = 0
        while current_term < entries:
            temp_row[current_term] = temp_row[current_term] * factor
            current_term = current_term + 1
        return temp_row

    def replacement(matrix, rowA, rowB, factor):
        # Replace 1 row by the sum of itself and a multiple of another row
        # rowA is the row being replaced
        # rowB is being multiplied by the factor
        # Let rowA = 1 indicate the top row
        temp_rowB = multiply_row(matrix, rowB, factor)
        entries = len(matrix[rowA - 1])
        current_term = 0
        while current_term < entries:
            matrix[rowA - 1][current_term] = temp_rowB[current_term] + matrix[rowA - 1][current_term]
            current_term = current_term + 1
        return matrix

    m = [[1, 2, 3],
         [2, 1, 3],
         [3, 2, 1]]

    print replacement(m, 1, 2, 2)

Clearly, the problem lies within my "multiply_row" function. I created this function so that I could create a temporary place where I can multiply a row by a factor, without actually affecting the row in the matrix itself. This is not working. I was wondering if someone could explain why this temporary row is actually altering the row in the matrix itself. Also, I realize that I am probably not doing the operation in the most efficient way possible and I would be curious to know what the more efficient way to do it would be (this is only secondary; I would really appreciate an answer to my first question). Thank you for the help.

|

The problem is that temp_row is not a copy of the row in your matrix but a reference to it. Anything you do to temp_row therefore happens to the corresponding row in your matrix, since it is happening to the same object (which happens to be referenced in two different ways). Replace the line in multiply_row() with:

    temp_row = matrix[row_num - 1][:]

to make a copy. You then get:

    [[5, 4, 9], [2, 1, 3], [3, 2, 1]]

as you required.
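The aliasing the answer describes can be seen directly on a small list:

```python
row = [2, 1, 3]
matrix = [[1, 2, 3], row]

alias = matrix[1]        # a second reference to the same list object
alias[0] = alias[0] * 2
print(row)               # [4, 1, 3] -- mutating the alias changed the original row

copy = matrix[1][:]      # a slice makes a new list with the same contents
copy[0] = 999
print(row)               # [4, 1, 3] -- the copy is independent
print(alias is row, copy is row)   # True False
```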
Why do Python regex strings sometimes work without using raw strings?

Python recommends using raw strings when defining regular expressions in the re module. From the Python documentation:

    Regular expressions use the backslash character ('\') to indicate special forms or to allow special characters to be used without invoking their special meaning. This collides with Python's usage of the same character for the same purpose in string literals; for example, to match a literal backslash, one might have to write '\\\\' as the pattern string, because the regular expression must be \\, and each backslash must be expressed as \\ inside a regular Python string literal.

However, in many cases this is not necessary, and you get the same result whether you use a raw string or not:

    $ ipython

    In [1]: import re

    In [2]: m = re.search("\s(\d)\s", "a 3 c")

    In [3]: m.groups()
    Out[3]: ('3',)

    In [4]: m = re.search(r"\s(\d)\s", "a 3 c")

    In [5]: m.groups()
    Out[5]: ('3',)

Yet, in some cases this is not the case:

    In [6]: m = re.search("\s(.)\1\s", "a 33 c")

    In [7]: m.groups()
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-12-84a8d9c174e2> in <module>()
    ----> 1 m.groups()

    AttributeError: 'NoneType' object has no attribute 'groups'

    In [8]: m = re.search(r"\s(.)\1\s", "a 33 c")

    In [9]: m.groups()
    Out[9]: ('3',)

And you must escape the special characters when not using a raw string:

    In [10]: m = re.search("\\s(.)\\1\\s", "a 33 c")

    In [11]: m.groups()
    Out[11]: ('3',)

My question is why do the non-escaped, non-raw regex strings work at all with special characters (as in command [2] above)?

|

The examples above work because \s and \d are not escape sequences in python. According to the docs:

    Unlike Standard C, all unrecognized escape sequences are left in the string unchanged, i.e., the backslash is left in the string.

But it's best to just use raw strings and not worry about what is or isn't a python escape, or worry about changing it later if you change the regex.
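A small check of the rule the docs state (note that on recent Python versions an unrecognized escape like "\s" also emits a SyntaxWarning):

```python
import re

print("\s" == "\\s")   # True: \s is unrecognized, so the backslash survives
print("\1" == "\\1")   # False: "\1" is the single character chr(1)

# A non-raw "\1" therefore never reaches the regex engine as a backreference,
# while an escaped "\\1" (or a raw r"\1") does:
m = re.search("\s(.)\\1\s", "a 33 c")
print(m.groups())      # ('3',)
```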
Tensorflow/Python: Function tuple returning copies of first element

I want to return 7 values from a function and later print those values. But when I print them out, I always get seven copies of the first value. I can't seem to understand what I'm doing wrong, though. What the code is doing is comparing a tensor (array) of n elements, and checking the percentage of elements which are under certain thresholds.

    def accuracy():
        n = 7850
        m = 100.0
        difference = tf.abs(tf.subtract(prediction, (labels_tf)))

        one = tf.reduce_sum(tf.cast(tf.less(difference, [1]), dtype=tf.int32))
        one = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        two = tf.reduce_sum(tf.cast(tf.less(difference, [2]), dtype=tf.int32))
        two = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        three = tf.reduce_sum(tf.cast(tf.less(difference, [3]), dtype=tf.int32))
        three = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        four = tf.reduce_sum(tf.cast(tf.less(difference, [4]), dtype=tf.int32))
        four = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        five = tf.reduce_sum(tf.cast(tf.less(difference, [5]), dtype=tf.int32))
        five = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        ten = tf.reduce_sum(tf.cast(tf.less(difference, [10]), dtype=tf.int32))
        ten = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        fifteen = tf.reduce_sum(tf.cast(tf.less(difference, [15]), dtype=tf.int32))
        fifteen = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)

        return (one, two, three, four, five, ten, fifteen)

    a1, a2, a3, a4, a5, a10, a15 = accuracy()

|

You're passing one into each set of operations on the second line, i.e.:

    fifteen = tf.multiply(tf.divide(tf.cast(one, dtype=tf.float32), n), m)
                                            ^^

Each second line should cast its own variable (two, three, ..., fifteen) instead of one.
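The same per-threshold computation can be sketched without the copy-paste hazard using plain NumPy (this is an illustrative analogue of the intended logic, not the original TensorFlow code):

```python
import numpy as np

def accuracy(difference, thresholds=(1, 2, 3, 4, 5, 10, 15)):
    # percentage of elements strictly under each threshold;
    # each result is computed from its own count, so nothing is reused by mistake
    n = difference.size
    return tuple(100.0 * np.sum(difference < t) / n for t in thresholds)

diff = np.array([0.5, 1.5, 3.5, 6.0])
print(accuracy(diff))   # (25.0, 50.0, 50.0, 75.0, 75.0, 100.0, 100.0)
```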
PyUSB: No backend available

I'm trying to reach out and find what USB devices are tied to my computer. I'm going through the "Programming with PyUSB 1.0" tutorial. I can't get anything I'm using there to work; I keep getting a "ValueError: No backend available" error. What is missing from my code? I'm on a 64 bit Windows 10 machine. I installed the libusb1 package as well as the pylibusb package.

    import usb
    import usb.core
    import usb.util
    import sys

    devices = usb.core.find(find_all=True)
    sys.stdout.write('There are ' + len(devices) + ' in the system\n.')

|

If it's a Windows OS, you should download the installer from here. If it's Linux, did you install libusb-1.0 or openusb as a backend? If not, you should.
Operator != vs <>

Learning python. I have a doubt about the use of the following operators:

!= --> Checks if the values of two operands are equal or not; if the values are not equal, then the condition becomes true.
<> --> Checks if the values of two operands are equal or not; if the values are not equal, then the condition becomes true.

Are the above definitions right (referred from a book)? If yes, when do we use each operator? Please share your experience of the correct situation for each.

|

From the documentation:

    The forms <> and != are equivalent; for consistency with C, != is preferred; where != is mentioned below, <> is also accepted. The <> spelling is considered obsolescent.

(Emphasis mine)

They are equivalent, but you should use !=. The <> operator is not there in Python 3.x at all.
Read label values and store into a variable

I'm trying to read data from a website and have that data stored into a variable. Example, from http://www.example.com/example-info.php:

    Name: Bob
    Address: 1234 Street
    Telephone: 000-000-0000
    Email: Bobs-email@nothing.com

What I would like to do is get the value from the label "Name", which is Bob, and store it into a variable such as "Username", or take "Telephone" and store its value into a variable named "Phone". Can someone point me in the right direction? Perhaps I need to use Mechanize?

|

Are you trying to screen scrape the website? If so, requests or BeautifulSoup are good things to look at.

    import requests

    x = requests.get('http://www.google.com')
    if x.status_code == 200:
        print x.content

x.content in the above scenario will hold the data/html from the page you send the request to. Then, you could use BeautifulSoup or regex to extract out the particular information you need.
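Once the page text is in hand, the label/value extraction the question asks about could be sketched with a regex like this (the sample text, pattern, and variable names are illustrative, not part of the original answer):

```python
import re

# assumed sample of the page text from the question
page_text = """Name: Bob
Address: 1234 Street
Telephone: 000-000-0000
Email: Bobs-email@nothing.com"""

# capture "Label: value" pairs, one per line
fields = dict(re.findall(r'^(\w+):\s*(.+)$', page_text, flags=re.M))

username = fields['Name']       # 'Bob'
phone = fields['Telephone']     # '000-000-0000'
print(username, phone)
```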
How to export Python 3.4 input information to JSON?

I am trying to build an artificial intelligence program that will keep track of information and other stuff, but am having issues with getting it to export the information to a file. I am new to Python and would like some help identifying what could be the issue. Here is the source code:

    #Importations
    import jsonpickle
    import math
    import os
    import sys
    import time
    from random import randrange, uniform

    #Setups
    SAVEGAME_FILENAME = 'aisetup.json'
    game_state = dict()

    #Program AI
    class Computer(object):
        def __init__(self, name, creator):
            self.name = name
            self.creator = creator

    #User Information
    class Human(object):
        def __init__(self, name, birth):
            self.name = name
            self.birth = birth

    #Load Program Save
    def load_game():
        """Load game state from a predefined savegame location and
        return the game state contained in that savegame.
        """
        with open(SAVEGAME_FILENAME, 'r') as savegame:
            state = jsonpickle.decode(savegame.read())
        return state

    #Save Program to JSON
    def save_game():
        """Save the current game state to a savegame in a predefined location."""
        global game_state
        with open(SAVEGAME_FILENAME, 'w') as savegame:
            savegame.write(jsonpickle.encode(game_state))

    #Initialize Program
    def initialize_game():
        """Runs if no AISave is found"""
        UserProg = Human('User', '0/0/0')
        AISystem = Computer('TempAI', 'Austin Hargis')
        state = dict()
        state['UserProg'] = [UserProg]
        state['AISystem'] = [AISystem]
        return state

    #TextPad

    #Main Code - Runs Multiple Times per Being Opened
    def prog_loop():
        global game_state
        name = input(str("What is your name?: "))
        save_game()

    #Main Program
    def main():
        """Main function.

        Check if a savegame exists, and if so, load it. Otherwise
        initialize the game state with defaults. Finally, start the game.
        """
        global game_state
        if not os.path.isfile(SAVEGAME_FILENAME):
            game_state = initialize_game()
        else:
            game_state = load_game()
        prog_loop()

    #Launch Code
    if __name__ == '__main__':
        main()

Every time I run this, it exports the information to a file like this:

    {"UserProg": [{"birth": "0/0/0", "py/object": "__main__.Human", "name": "User"}], "AISystem": [{"py/object": "__main__.Computer", "name": "TempAI", "creator": "Austin Hargis"}]}

I want it to export the name the user enters to the file, but it does not work right.

|

You never do anything with the name the user enters:

    def prog_loop():
        global game_state
        name = input(str("What is your name?: "))
        save_game()

name is just a local variable there. If you wanted to save that as the name for the human, then you need to set it on the UserProg entry in your game state:

    def prog_loop():
        name = input("What is your name?: ")
        # game_state['UserProg'] is a list with one Human instance in it
        game_state['UserProg'][0].name = name
        save_game()

Because you are altering the mutable object contained in game_state rather than assigning to game_state itself, you don't need the global statement there. The str() call was also redundant; the "..." syntax already produces a string, so there is little point in converting it to a string again.
Reversing a Dict

I am currently trying to make a function which reverses a dict's keys and values. I was looking online and came across this:

    def reverse(d):
        return dict([(v, k) for k, v in d.iteritems()])

My problem is that I'm not sure what this means. I understand the idea of a for loop on a single line, but I'm not sure how the (v, k) for k, v leads to the keys and values being reversed. Could someone please offer me a hand? (I did search for this, both online and on Stack Overflow, but couldn't find anything.)

|

for k, v in d.iteritems() yields each key k and value v, so swapping them with (v, k) makes the old value the new key and the old key the new value:

    In [7]: d = {1:10, 2:20}

    In [8]: d.items()
    Out[8]: dict_items([(1, 10), (2, 20)])  # tuples of key and value

    In [1]: d = {1:10, 2:20}

    In [2]: for k, v in d.iteritems():
       ...:     print k, v
       ...:
    1 10  # 1 is the key, 10 is the value
    2 20

    In [3]: new_d = {v: k for k, v in d.iteritems()}  # swap key for value and value for key

    In [4]: new_d
    Out[4]: {10: 1, 20: 2}

Two problems you may encounter are duplicate values, or values that are not hashable so they cannot be used as keys (like lists, sets, etc.):

    In [5]: d = {1:2, 2:2}

    In [6]: new_d = {v: k for k, v in d.iteritems()}

    In [7]: new_d
    Out[7]: {2: 2}  # now only one key and value in the dict

    In [8]: d = {1:2, 2:[2]}

    In [9]: new_d = {v: k for k, v in d.iteritems()}
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-9-46a3901ce850> in <module>()
    ----> 1 new_d = {v: k for k, v in d.iteritems()}

    <ipython-input-9-46a3901ce850> in <dictcomp>((k, v))
    ----> 1 new_d = {v: k for k, v in d.iteritems()}

    TypeError: unhashable type: 'list'

dict([(v, k) for k, v in d.iteritems()]) will have the same output as {v: k for k, v in d.iteritems()}; the main difference is that the former is also compatible with python < 2.7. If you were using python < 2.7, there is no need to use a list; you can just use a generator expression: dict((v, k) for k, v in d.iteritems()).
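When duplicate values are possible, one way to keep every key (a Python 3 sketch; use .iteritems() in Python 2) is to collect the old keys into lists:

```python
d = {1: 2, 2: 2, 3: 5}

reversed_d = {}
for k, v in d.items():
    # each old value becomes a key mapping to the list of old keys
    reversed_d.setdefault(v, []).append(k)

print(reversed_d)   # {2: [1, 2], 5: [3]}
```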
How to merge list to become string without adding any character in python?

I found that I can join them with '-'.join(name), but I don't want to add any character. Let's say I have:

    ['stanje1', '|', 'st6', ',', 'stanje2', '|', '#']

and I want it to become:

    stanje1|st6,stanje2|#

|

Just omit the -:

    ''.join(name)
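With the example list from the question:

```python
name = ['stanje1', '|', 'st6', ',', 'stanje2', '|', '#']
print(''.join(name))   # stanje1|st6,stanje2|#
```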
How to Get The field value dynamically in python?

I have a form view, and I entered a value in a field. How can I retrieve that value and have it assigned to some variable in the .py file for making operations? For example, I have an ActiveFrom field. I entered the value 23-11-2011 into the field in the form view. How can I get that value dynamically in OpenERP?

|

I suspect you want the on_change event. It lets you trigger server-side code when the user changes a field's value. You can then change the value of other fields, or pop up a warning message. Here's an example of how to pop up a warning from the warning module (slightly edited):

    def onchange_partner_id(self, cr, uid, ids, part):
        warning = {}
        title = False
        message = False
        partner = self.pool.get('res.partner').browse(cr, uid, part)
        if partner.sale_warn != 'no-message':
            title = _("Warning for %s") % partner.name
            message = partner.sale_warn_msg
            warning = {
                'title': title,
                'message': message,
            }
        result = super(sale_order, self).onchange_partner_id(cr, uid, ids, part)
        return {'value': result.get('value', {}), 'warning': warning}
Tensorflow: jointly training CNN + LSTM

There are quite a few examples on how to use LSTMs alone in TF, but I couldn't find any good examples on how to train CNN + LSTM jointly. From what I see, it is not quite straightforward how to do such training, and I can think of a few options here:

First, I believe the simplest solution (or the most primitive one) would be to train the CNN independently to learn features and then to train the LSTM on CNN features without updating the CNN part, since one would probably have to extract and save these features in numpy and then feed them to the LSTM in TF. But in that scenario, one would probably have to use a differently labeled dataset for pretraining of the CNN, which eliminates the advantage of end to end training, i.e. learning of features for the final objective targeted by the LSTM (besides the fact that one has to have these additional labels in the first place).

The second option would be to concatenate all time slices in the batch dimension (4-d Tensor), feed it to the CNN, then somehow repack those features into the 5-d Tensor needed for training the LSTM, and then apply a cost function. My main concern is whether it is possible to do such a thing. Also, handling variable length sequences becomes a little bit tricky. For example, in a prediction scenario you would only feed a single frame at a time. Thus, I would be really happy to see some examples if that is the right way of doing joint training. Besides that, this solution looks more like a hack, so if there is a better way to do it, it would be great if someone could share it.

Thank you in advance!

|

For joint training, you can consider using tf.map_fn as described in the documentation https://www.tensorflow.org/api_docs/python/tf/map_fn.

Let's assume that the CNN is built along similar lines as described here: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py

    def joint_inference(sequence):
        inference_fn = lambda image: inference(image)
        logit_sequence = tf.map_fn(inference_fn, sequence, dtype=tf.float32, swap_memory=True)
        lstm_cell = tf.contrib.rnn.LSTMCell(128)
        output_state, intermediate_state = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=logit_sequence)
        projection_function = lambda state: tf.contrib.layers.linear(state,
                                                                     num_outputs=num_classes,
                                                                     activation_fn=tf.nn.sigmoid)
        projection_logits = tf.map_fn(projection_function, output_state)
        return projection_logits

Warning: You might have to look into device placement as described here https://www.tensorflow.org/tutorials/using_gpu if your model is larger than the memory the gpu can allocate.

An alternative would be to flatten the video batch to create an image batch, do a forward pass through the CNN, and reshape the features for the LSTM.
Maintain a streaming microphone input in Python

I'm streaming microphone input from my laptop computer using Python. I'm currently using PyAudio and .wav to create 2 second batches (code below) and then read out the frame representations of the newly created .wav file in a loop. However I really just want the np.ndarray represented by "signal" in the code, that is, the Int16 representation of the .wav file. Is there a way to bypass writing to .wav entirely and make my application appear to be "real-time" instead of micro-batch?

    import pyaudio
    import wave

    # AUDIO INPUT
    FORMAT = pyaudio.paInt16
    CHANNELS = 1
    RATE = 44100
    CHUNK = 1024
    RECORD_SECONDS = 2
    WAVE_OUTPUT_FILENAME = "output.wav"

    audio = pyaudio.PyAudio()

    # start Recording
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)

    while(1):
        print "recording"
        frames = []
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)

        waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        waveFile.setnchannels(CHANNELS)
        waveFile.setsampwidth(audio.get_sample_size(FORMAT))
        waveFile.setframerate(RATE)
        waveFile.writeframes(b''.join(frames))
        waveFile.close()

        spf = wave.open(WAVE_OUTPUT_FILENAME, 'r')

        # Extract Raw Audio from Wav File
        signal = spf.readframes(-1)
        signal = np.fromstring(signal, 'Int16')
        copy = signal.copy()

    # stop Recording
    stream.stop_stream()
    stream.close()
    audio.terminate()

|

Yes, you can give a callback to the stream and do with that audio whatever you would like:

    def callback(input_data, frame_count, time_info, flags):
        ...
        return input_data, pyaudio.paContinue

    stream = audio.open(format=FORMAT,
                        channels=CHANNELS,
                        rate=RATE,
                        input=True,
                        stream_callback=callback,
                        frames_per_buffer=CHUNK)

More here.
How to determine an object's value in Python From the Documentation Every object has an identity, a type and a value.type(obj) returns the type of the objectid(obj) returns the id of the objectis there something that returns its value? What does the value of an object such as a user defined object represent? | To really see your object's values/attributes you should use the special attribute __dict__ (it is an attribute, not a method).Here is a simple example: class My: def __init__(self, x): self.x = x self.pow2_x = x ** 2a = My(10)# print is not helpful as you can see print(a) # output: <__main__.My object at 0x7fa5c842db00>print(a.__dict__.values())# output: dict_values([10, 100])or you can use:print(a.__dict__.items())# output: dict_items([('x', 10), ('pow2_x', 100)])
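As a small footnote to the answer, vars() is the built-in spelling of the same __dict__ lookup; a sketch reusing the answer's class:

```python
class My:
    def __init__(self, x):
        self.x = x
        self.pow2_x = x ** 2

a = My(10)
# vars(obj) returns obj.__dict__ itself, so the two are interchangeable
print(vars(a))                  # {'x': 10, 'pow2_x': 100}
print(vars(a) is a.__dict__)    # True
```

Note that this only works for objects that have a __dict__; instances of classes using __slots__ and many built-in types do not.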
Calling C++ dll from python I have a created dll library in c++ and exported it as c type dll. The library header is this:library.hstruct Surface{ char surfReq[10];};struct GeneralData{ Surface surface; char weight[10];};struct Output{ GeneralData generalData; char message[10];};extern "C" __declspec(dllexport) int __cdecl Calculation(Output &output);library.cppint Calculation(Output &output){ strcpy_s(output.message, 10, "message"); strcpy_s(output.generalData.weight, 10, "weight"); strcpy_s(output.generalData.surface.surfReq, 10, "surfReq"); return 0;}Now I have this Python script:#! python3-32from ctypes import *import sys, os.pathclass StructSurface(Structure): _fields_ = [("surfReq", c_char_p)]class StructGeneralData(Structure): _fields_ = [("surface", StructSurface), ("weight", c_char_p)]class OutData(Structure): _fields_ = [("generalData", StructGeneralData), ("message", c_char_p)]my_path = os.path.abspath(os.path.dirname(__file__))path = os.path.join(my_path, "../../../libs/Python.dll")testDll = cdll.LoadLibrary(path)surfReq = (b''*10)structSurface = StructSurface(surfReq)weight = (b''*10)structGeneralData = StructGeneralData(structSurface, weight)message = (b''*10)outData = OutData(structGeneralData, message) testDll.restyp = c_inttestDll.argtypes = [byref(outData)]testDll.Calculation(outData)print(outData.message)print(outData.generalData.weight)print(outData.generalData.surface.surfReq)When I print the fields from outData I get the same results in all of them:b'surfReq'b'surfReq'b'surfReq'Can you please tell me how to specify the char arrays/fields so I get the correct result. I am only allowed to change the python script.I called this library from C# with no problems. 
| Changed the Python ctypes fields to c_char * 10:class StructSurface(Structure): _fields_ = [("surfReq", c_char * 10)]class StructGeneralData(Structure): _fields_ = [("surface", StructSurface), ("weight", c_char * 10)]class OutData(Structure): _fields_ = [("generalData", StructGeneralData), ("message", c_char * 10)]And changed the argtypes and the actual call to: testDll.argtypes = [POINTER(OutData)]testDll.Calculation(byref(outData))
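Without the DLL at hand, one way to sanity-check the corrected declarations is to confirm that the ctypes layout matches the C structs byte for byte — char arrays have alignment 1, so the sizes should be exactly 10, 20 and 30. A sketch:

```python
import ctypes
from ctypes import Structure, c_char

class StructSurface(Structure):
    _fields_ = [("surfReq", c_char * 10)]

class StructGeneralData(Structure):
    _fields_ = [("surface", StructSurface),
                ("weight", c_char * 10)]

class OutData(Structure):
    _fields_ = [("generalData", StructGeneralData),
                ("message", c_char * 10)]

# sizes must match sizeof(Surface)/sizeof(GeneralData)/sizeof(Output) in C
print(ctypes.sizeof(StructSurface),
      ctypes.sizeof(StructGeneralData),
      ctypes.sizeof(OutData))           # 10 20 30

out = OutData()
out.message = b"message"                # c_char arrays accept bytes directly
print(out.message)                      # b'message'
```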
Stiff ODE-solver I need an ODE-solver for a stiff problem similar to MATLAB ode15s.For my problem I need to check how many steps (calculations) are needed for different initial values and compare this to my own ODE-solver.I tried usingsolver = scipy.integrate.ode(f)solver.set_integrator('vode', method='bdf', order=15, nsteps=3000)solver.set_initial_value(u0, t0)And then integrating with:i = 0while solver.successful() and solver.t<tf: solver.integrate(tf, step=True) i += 1print(i)Where tf is the end of my time interval.The function used is defined as:def func(self, t, u): u1 = u[1] u2 = mu * (1-numpy.dot(u[0], u[0]))*u[1] - u[0] return numpy.array([u1, u2])Which with the initial value u0 = [ 2, 0] is a stiff problem.This means that the number of steps should not depend on my constant mu.But it does. I think the odeint-method can solve this as a stiff problem - but then I have to send in the whole t-vector and therefore need to set the amount of steps that is done and this ruins the point of my assignment.Is there any way to use odeint with adaptive stepsize between t0 and tf?Or can you see anything I miss in the use of the vode-integrator? | I'm seeing something similar; with the 'vode' solver, changing methods between 'adams' and 'bdf' doesn't change the number of steps by very much. (By the way, there is no point in using order=15; the maximum order of the 'bdf' method of the 'vode' solver is 5 (and the maximum order of the 'adams' solver is 12). If you leave the argument out, it should use the maximum by default.)odeint is a wrapper of LSODA. ode also provides a wrapper of LSODA: change 'vode' to 'lsoda'. Unfortunately the 'lsoda' solver ignores the step=True argument of the integrate method.The 'lsoda' solver does much better than 'vode' with method='bdf'.You can get an upper bound on the number of steps that were used by initializing tvals = [], and in func, do tvals.append(t). When the solver completes, set tvals = np.unique(tvals).
The length of tvals tells you the number of time values at which your function was evaluated. This is not exactly what you want, but it does show a huge difference between using the 'lsoda' solver and the 'vode' solver with method 'bdf'. The number of steps used by the 'lsoda' solver is on the same order as you quoted for MATLAB in your comment. (I used mu=10000, tf = 10.)Update: It turns out that, at least for a stiff problem, it makes a huge difference for the 'vode' solver if you provide a function to compute the Jacobian matrix. The script below runs the 'vode' solver with both methods, and it runs the 'lsoda' solver. In each case, it runs the solver with and without the Jacobian function. Here's the output it generates:vode adams jac=None len(tvals) = 517992vode adams jac=jac len(tvals) = 195vode bdf jac=None len(tvals) = 516284vode bdf jac=jac len(tvals) = 55lsoda jac=None len(tvals) = 49lsoda jac=jac len(tvals) = 49The script:from __future__ import print_functionimport numpy as npfrom scipy.integrate import odedef func(t, u, mu): tvals.append(t) u1 = u[1] u2 = mu*(1 - u[0]*u[0])*u[1] - u[0] return np.array([u1, u2])def jac(t, u, mu): j = np.empty((2, 2)) j[0, 0] = 0.0 j[0, 1] = 1.0 j[1, 0] = -mu*2*u[0]*u[1] - 1 j[1, 1] = mu*(1 - u[0]*u[0]) return jmu = 10000.0u0 = [2, 0]t0 = 0.0tf = 10for name, kwargs in [('vode', dict(method='adams')), ('vode', dict(method='bdf')), ('lsoda', {})]: for j in [None, jac]: solver = ode(func, jac=j) solver.set_integrator(name, atol=1e-8, rtol=1e-6, **kwargs) solver.set_f_params(mu) solver.set_jac_params(mu) solver.set_initial_value(u0, t0) tvals = [] i = 0 while solver.successful() and solver.t < tf: solver.integrate(tf, step=True) i += 1 print("%-6s %-8s jac=%-5s " % (name, kwargs.get('method', ''), j.func_name if j else None), end='') tvals = np.unique(tvals) print("len(tvals) =", len(tvals))
Python - List of lists of different lengths I need to create a list which contains two lists.Something likebiglist = [list1,list2]with list1 = [1,2,3]list2 = [4,5,6,7,8]where list1 and list2 have DIFFERENT length and are imported from file.I did it the following way:biglist = []list1 = #...taken from file. I checked this and it poduces a list exactly how I want it to be: [1,2,3]biglist.append(list1)and likewise for list2but the problem is that I get biglist = [array([1,2,3]),array([4,5,6,7,8])]as opposed to biglist = [[1,2,3],[4,5,6,7,8]]and I really don't want the array thing, I prefer to have simple lists.how to get around this? | please try:biglist.append(list(list1))biglist.append(list(list2))or if they are numpy arraysbiglist.append(list1.tolist())biglist.append(list2.tolist()) |
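The same point in a runnable sketch, using the standard-library array type as a stand-in for the numpy arrays read from file:

```python
from array import array

# stand-ins for the rows read from file; list() flattens any iterable of
# scalars, which is why it also works on 1-D numpy arrays
list1 = array('i', [1, 2, 3])
list2 = array('i', [4, 5, 6, 7, 8])

biglist = [list(list1), list(list2)]
print(biglist)    # [[1, 2, 3], [4, 5, 6, 7, 8]]
```

For multi-dimensional numpy arrays, .tolist() is the right call, since list() would only convert the outermost level.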
Gathering statistics on how a program is used I have a few programs that do similar things, but they're written in different languages. I want to somehow monitor the way the programs I write are used - how many times is the code ran? How many times is a particular method/function used? How long did it take to compile? My goal with this is to get a graphical representation so that I can easily compare the programs in pretty graphs - because of this I obviously need my solution to be language independent.What would be the best way to approach this task? Someone hinted me towards Ganglia but I'm not sure that's exactly what I'm looking for - I don't want to monitor clusters, I want to monitor the way different pieces of code is handled. | I'll recommend looking at sentry. It's free and has clients for many languages.Basic usage: import timefrom raven import Clientclient = Client('https://<key>:<secret>@app.getsentry.com/<project>')start_time = time.time()some_func()client.captureMessage("Execution time of some_func %s seconds" % (time.time() - start_time)) |
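If pulling in a hosted service is overkill, the call-count and timing part can be done in-process with a small decorator. A hedged sketch — the track helper and stats dict are illustrative, not part of any library:

```python
import functools
import time

def track(stats):
    """Count calls to a function and accumulate its wall-clock time."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                entry = stats.setdefault(fn.__name__,
                                         {"calls": 0, "seconds": 0.0})
                entry["calls"] += 1
                entry["seconds"] += time.perf_counter() - start
        return wrapper
    return decorator

stats = {}

@track(stats)
def some_func():
    time.sleep(0.01)

for _ in range(3):
    some_func()

print(stats["some_func"]["calls"])    # 3
```

The stats dict can then be shipped to whatever aggregator (sentry, Ganglia, a CSV file) the graphs are built from.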
How to save Matplotlib.pyplot.loglog to file? I am trying to generate the log-log plot of a vector, and save the generated plot to file. This is what I have tried so far:import matplotlib.pyplot as plt... plt.loglog(deg_distribution,'b-',marker='o')plt.savefig('LogLog.png')I am using Jupyter Notebook, in which I get the generated graph as output after statement 2 in the above code, but the saved file is blank. | Notice that pyplot has the concept of the current figure and the current axes. All plotting commands apply to the current axes. So, make sure you plot in the right axes. Here is a MWE (minimal working example).import matplotlib.pyplot as pltfig, ax = plt.subplots()ax.loglog(range(100), 'b-',marker='o')plt.savefig('test.png') # apply to the axes `ax`
Add values to a data frame column at a specific row number I have a data frame df call which looks like: A B CDate 02/02/2007 14.8966 0.289371 0.00998483605/02/2007 14.8719 0.288368 -0.00165947306/02/2007 14.9295 0.279869 0.00386559507/02/2007 15.0035 0.283038 0.00494438608/02/2007 15.0528 0.277092 0.00328051309/02/2007 14.7733 0.28663 -0.01874252312/02/2007 14.6911 0.286458 -0.00557962913/02/2007 14.996 0.275362 0.02054163114/02/2007 15.5731 0.253263 0.037761568I have a for loop in which I am using to manipulate a timeseries. The result is a variable called testValue that I would like to add to df in a new column called 'D' at the row reference endpoint (endpoint is a sequentially increasing integer).for i in range(buildRange,len(pair_data)-calcRange+1): startpoint=i endpoint = startpoint+calcRange Calc_df=df[startpoint:endpoint].copy() Calc_df['C']=Calc_df.iloc[-1, Calc_df.columns.get_loc('A')]-Calc_df['B'] testValue= sum(x for x in Calc_df["C"] if x > 0) df.loc[endpoint, 'D'] = testValue So over the course of the loop column 'D' will build up with testValue values. Is it possible to reference a row in a dataframe and not the index? At the moment using df.loc[endpoint, 'D'] = testValue creates a new column 'D' but adds the data to the bottom of the data-frame and not to the correct row. 
(I think it's because endpoint is an integer and the index is a date, so it can't find the reference and creates a new one at the bottom of the data frame).So for example if endpoint started at 4 the desired output would look like: A B C DDate 02/02/2007 14.8966 0.289371 0.00998483605/02/2007 14.8719 0.288368 -0.00165947306/02/2007 14.9295 0.279869 0.00386559507/02/2007 15.0035 0.283038 0.004944386 1.3653508/02/2007 15.0528 0.277092 0.003280513 0.2782109/02/2007 14.7733 0.28663 -0.018742523 0.2535612/02/2007 14.6911 0.286458 -0.005579629 278043513/02/2007 14.996 0.275362 0.020541631 0.3663514/02/2007 15.5731 0.253263 0.037761568 0.25368 : :31/12/2007 15.9364 0.763263 0.047435768 0.24663(Values in column 4 are just for illustrative purposes and would not be correct if the code were run). | It's hard to verify what you are trying to do. From your code, this is what I guess:df['D'] = (df['A'].shift() # corresponds to `Calc_df.iloc[-1, Calc_df.columns.get_loc('A')]` .sub( df['B']) # subtract df['B'] .mul(df['C'].gt(0)) # only look at `df['C']>0` .rolling(4).sum() # sum the last 4 occurrences )Output: A B C DDate 02/02/2007 14.8966 0.289371 0.009985 NaN05/02/2007 14.8719 0.288368 -0.001659 NaN06/02/2007 14.9295 0.279869 0.003866 NaN07/02/2007 15.0035 0.283038 0.004944 NaN08/02/2007 15.0528 0.277092 0.003281 43.96490109/02/2007 14.7733 0.286630 -0.018743 43.96490112/02/2007 14.6911 0.286458 -0.005580 29.37287013/02/2007 14.9960 0.275362 0.020542 29.14214614/02/2007 15.5731 0.253263 0.037762 29.158475
Django+ HTML: EmptyPage at /HomeFeed/ Error: This page gets no errors. This happens when I log out and try to access the same page. When I am logged in, I am able to access the same exact page. Does it have to do with my views or my template? If you require a part of my template, please let me know :)Thank you!def home_feed_view(request, *args, **kwargs): context = {} blog_posts = BlogPost.objects.all() context['blog_posts'] = blog_posts type_of_tinc = TypeoftincFilter(request.GET, queryset=BlogPost.objects.all()) context['type_of_tinc'] = type_of_tinc paginated_type_of_tinc = Paginator(type_of_tinc.qs, 4) page = request.GET.get('page') tinc_page_obj = paginated_type_of_tinc.get_page(page) context['tinc_page_obj'] = tinc_page_obj blog_post = BlogPost.objects.filter(author=request.user.id).order_by('date_updated') page = request.GET.get('page2') own_account_post = Paginator(blog_post, 2) try: blog_post = own_account_post.page(page) except PageNotAnInteger: blog_post = own_account_post.page(2) except EmptyPage: blog_post = own_account_post.page(blog_post_paginator.num_pages) context['blog_post'] = blog_post return render(request, "HomeFeed/snippets/home.html", context)models.pyclass BlogPost(models.Model): chief_title = models.CharField(max_length=50, null=False, blank=False) author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) body = models.TextField(max_length=5000, null=False, blank=False) slug = models.SlugField(blank=True, unique=True)class Account(AbstractBaseUser): email = models.EmailField(verbose_name="email", max_length=60, unique=True) username = models.CharField(max_length=30, unique=True) date_joined = models.DateTimeField(verbose_name='date joined', auto_now_add=True) last_login = models.DateTimeField(verbose_name='last login', auto_now=True) is_admin = models.BooleanField(default=False) is_active = models.BooleanField(default=True) is_staff =
models.BooleanField(default=False) is_superuser = models.BooleanField(default=False)TracebackDuring handling of the above exception (int() argument must be a string, a bytes-like object or a number, not 'NoneType'), another exception occurred: blog_post = own_account_post.page(page) | I think it's caused by this:blog_post = BlogPost.objects.filter(author=request.user.id)If a user is not logged in, it can't get request.user.id, so either make this page available only to authenticated users or display something else if the user is not authenticated.You can add the @login_required() decorator to your view so that only authenticated users can view it, or:# this is like what SLDem postedif request.user.is_authenticated: blog_posts = ...else: blog_posts = ...
How to create a new column with a null value using Pyspark DataFrame? I'm having issues with using pyspark dataframes. I have a column called eventkey which is a concatenation of the following elements: account_type, counter_type and billable_item_sid. I have a function called apply_event_key_transform in which I want to break up the concatenated eventkey and create new columns for each of the elements.def apply_event_key_transform(data_frame: DataFrame): output_df = data_frame.withColumn("account_type", getAccountTypeUDF(data_frame.eventkey)) \ .withColumn("counter_type", getCounterTypeUDF(data_frame.eventkey)) \ .withColumn("billable_item_sid", getBiSidUDF(data_frame.eventkey)) output_df.drop("eventkey") return output_dfI've created UDF functions to retrieve the account_type, counter_type and billable_item_sid from a given eventkey value. I have a class called EventKey that takes the full eventkey string as a constructor param, and creates an object with data members to access the account_type, counter_type and billable_item_sid.getAccountTypeUDF = udf(lambda x: get_account_type(x))getCounterTypeUDF = udf(lambda x: get_counter_type(x))getBiSidUDF = udf(lambda x: get_billable_item_sid(x))def get_account_type(event_key: str): event_key_obj = EventKey(event_key) return event_key_obj.account_type.namedef get_counter_type(event_key: str): event_key_obj = EventKey(event_key) return event_key_obj.counter_typedef get_billable_item_sid(event_key: str): event_key_obj = EventKey(event_key) return event_key_obj.billable_item_sidThe issue that I'm running into is that a billable_item_sid can be null, but when I attempt to call withColumn with a None, the entire frame drops the column when I attempt to aggregate the data later. 
Is there a way to create a new column with a Null value using withColumn and a UDF?Things I've tried (for testing purposes):.withColumn("billable_item_sid", lit(getBiSidUDF(data_frame.eventkey))).withColumn("billable_item_sid", lit(None).castString())Tried a when/otherwise condition for billable_item_sid for null checking | Found out the issue was caused when writing the DataFrame to JSON.Fixed this by upgrading pyspark to 3.1.1, which has an option called ignoreNullFields=False
splitting a string->list to be checked I've been lurking for a few weeks, and decided to join in order to be more hands-on with my learning of Python.What I'm trying to do is take a single string, containing several web addresses, and come up with a list containing all the addresses with a domain name of 2-4 characters. The hypothetical addresses are not all simple.com types, they may contain multiple periods. Here's a sample string that I wish to convert:urlstring = 'albatross.org,boogaloo.boolean.net,zenoparadox.hercules.gr,takeawalkon.the.wildside,fuzzy.logic.it,bronzeandiron.age,areyou.serious'To get the addresses in a list: list(urlstring.split(',')). But I can't determine how to discern the length of the domain name and delete it or not based on that length. Is it necessary to split each address string into substrings by split('.')? =/I'm pretty sure that this is somehow answered elsewhere, but I couldn't really find something exactly similar. I apologize for the super noobish question, and promise that my questions will improve in quality as I learn. | Assuming you only care about the length of the TLD (the last dot-separated label):[url for url in urlstring.split(',') if 2 <= len(url.split('.')[-1]) <= 4]
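Running that kind of filter over the question's sample string (a sketch; here the last dot-separated label is taken to be the "domain name" of 2-4 characters):

```python
urlstring = ('albatross.org,boogaloo.boolean.net,zenoparadox.hercules.gr,'
             'takeawalkon.the.wildside,fuzzy.logic.it,bronzeandiron.age,'
             'areyou.serious')

# keep only addresses whose last label is 2-4 characters long
matches = [url for url in urlstring.split(',')
           if 2 <= len(url.split('.')[-1]) <= 4]
print(matches)
# ['albatross.org', 'boogaloo.boolean.net', 'zenoparadox.hercules.gr',
#  'fuzzy.logic.it', 'bronzeandiron.age']
```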
most negative value for python I expect the most negative value for Python is -maxint-1. I expect that subtracting 2 will cause an integer overflow.from sys import maxintmaximum_int = maxintminimum_int = -maxint - 2# 2147483647# -2147483649print maximum_intprint minimum_intYet the correct result is displayed, and a value which is more negative than -maxint-1 is shown.May I know why? | Here you can see the result is promoted to a long:>>> from sys import maxint>>> type(-maxint)<type 'int'>>>> type(-maxint-1)<type 'int'>>>> type(-maxint-2)<type 'long'>>>> Note that the usual convention for signed values is to have one more negative number than positive, so in this case -2147483648 is still an int
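Both the question and the answer are Python 2; in Python 3 the int/long split is gone and int is arbitrary-precision, so there is no overflow point at all. A quick sketch:

```python
import sys

# Python 3 removed sys.maxint; sys.maxsize is the closest analogue (the
# largest container index), but going past it just yields a bigger int
below = -sys.maxsize - 2
print(type(below))             # <class 'int'>
print(below < -sys.maxsize)    # True -- no wrap-around, no 'long' type
```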
Django migration not working properly on Heroku I deployed a Django app about 3 months ago and I was able to migrate changes easily in the Heroku bash. Right now I'm trying this:heroku run python manage.py migrateAlso tried this:heroku run python manage.py migrate --no-inputAnd I tried accessing the Heroku bash like this:heroku run bashAnd then running:~ $ python manage.py migrateAll these commands seem to work:https://i.stack.imgur.com/JWW6S.pngBut they don't. When I tried to migrate again, I thought it would show me the typical No migrations to apply. But this is not the case. What should I do to migrate changes? | If you are using SQLite, the migration changes on Heroku are applied and removed immediately. You have to use the Heroku Postgres add-on (check your project overview to see if it's already installed) and add this to your settings.py:if "DATABASE_URL" in os.environ: import dj_database_url DATABASES = {"default": dj_database_url.config()}You can also add this to your Procfile to migrate on deploy:release: python manage.py migrate
Python: How to change data type and override a variable from imported module I have two python programs in same folder named main.py and tk_version.pyI am importing main.py in tk_version.pyI have a variable in main.py which has default value lets say 'xyz'Now in tk_version.py I am taking the value for the variable from user using Tkinter GUI.So what I want is when main.py is executed the constant value must be taken and when tk_version.py is executed the value given by user override the default value.Example:main.pyvar="default"def show(): print(var)if __name__ == '__main__': show()tk_version.pyfrom tkinter import *import mainMain = Tk()var= StringVar()def e1chk(): global var var = e1.get() main.show() return e1=Entry(Main,textvariable=var,width=50)e1.grid(row=0,column=5,sticky=NSEW)b1=Button(Main,text="Save",command=e1chk)b1.grid(row=0,column=8,sticky=NSEW) Main.mainloop()OUTPUT for main.py>>> =============== RESTART: C:/Users/Gupta Niwas/Desktop/main.py ===============default>>>OUTPUT for tk_version.py (I have entered "abc" in Entry box)>>> ============ RESTART: C:/Users/Gupta Niwas/Desktop/tk_version.py ============default>>>I want variable to be StringVar to control the entry box values. | Just add main. before your variable name.tk_version.pyfrom tkinter import *import mainMain = Tk()main.var = StringVar()def e1chk(): main.var = e1.get() main.show() returne1 = Entry(Main, textvariable=main.var, width=50)e1.grid(row=0, column=5, sticky=NSEW)b1 = Button(Main, text="Save", command=e1chk)b1.grid(row=0, column=8, sticky=NSEW)Main.mainloop()Output:abc>>> |
no module named requests I will first state I have searched for this problem, and found the exact same problem here ( ImportError: No module named 'requests' ) but that hasn't helped me.I am using macports on osx (mountain lion). I have successfully installed and run a few scripts without any issues. from the macports page, I have installed requests via the method it detailed and as far as I can tell, it has installed successfully:daves-mbp:~ Dave$ port search requestsarpwatch @2.1a15 (net) Monitor ARP & RARP requestshttp_ping @29jun2005 (net, www) Sends HTTP requests every few seconds and times how long they takehttping @2.0 (net, www) Ping-like tool for http-requestspy-requests @1.2.3 (python, devel) Python HTTP for Humans.py26-requests @1.2.3 (python, devel) Python HTTP for Humans.py27-requests @1.2.3 (python, devel) Python HTTP for Humans.py31-requests @1.2.3 (python, devel) Python HTTP for Humans.py32-requests @1.2.3 (python, devel) Python HTTP for Humans.py33-requests @1.2.3 (python, devel) Python HTTP for Humans.webredirect @0.3 (www) small webserver which redirects all requestsFound 10 ports.I have python 2.7, so I installed it via:daves-mbp:~ Dave$ sudo port install py27-requestsPassword:---> Computing dependencies for py27-requests---> Fetching archive for py27-requests---> Attempting to fetch py27-requests-1.2.3_0.darwin_12.noarch.tbz2 from http://jog.id.packages.macports.org/macports/packages/py27-requests---> Attempting to fetch py27-requests-1.2.3_0.darwin_12.noarch.tbz2.rmd160 from http://jog.id.packages.macports.org/macports/packages/py27-requests---> Installing py27-requests @1.2.3_0---> Activating py27-requests @1.2.3_0---> Cleaning py27-requests---> Updating database of binaries: 100.0%---> Scanning binaries for linking errors: 100.0%---> No broken files found.daves-mbp:~ Dave$ I think that looks good. Using macports is there something else I have to do before using it? 
I thought the python setup.py install (in the aforementioned post) may have solved my problem, however, when I search for requests in my filesystem, the only reference is burried in a path (that macports says is a store for user installed modules. And besides, there is no setup.py within that or it's parent directory. I have restarted my terminal window (that fixed another problem earlier), but it made no difference here.Any help is appreciatededit:which python reports /opt/local/bin/python the first lines of the python interpreter DID report: daves-mbp:~ Dave$ python Python 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwinbut now I have done something and it's responding with new errors:daves-mbp:~ Dave$ pythonTraceback (most recent call last): File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 548, in <module> main() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 530, in main known_paths = addusersitepackages(known_paths) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 266, in addusersitepackages user_site = getusersitepackages() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 241, in getusersitepackages user_base = getuserbase() # this will also set USER_BASE File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 231, in getuserbase USER_BASE = get_config_var('userbase') File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 516, in get_config_var return get_config_vars().get(name) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 449, in get_config_vars import re File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py", line 105, in <module> import 
sre_compile File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_compile.py", line 14, in <module> import sre_parse File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_parse.py", line 17, in <module> from sre_constants import * File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_constants.py", line 18, in <module> from _sre import MAXREPEATImportError: cannot import name MAXREPEAT | In trying to sort this out, I have broken Python, and eventually I got it going again. I think initially I had not run one of the port select --set... commands. Once I realised this might be the case, I did so, but that produced the errors at the top. MAXREPEAT, a circular reference perhaps? No idea. I have read here (macports didn't place python_select in /opt/local/bin) and here (How do I uninstall python from OSX Leopard so that I can use the MacPorts version?) about the --set command not working and to try sudo port select python python26 (I used python27) instead. I checked the PATH and python didn't appear, so I updated that as well. I got my Python interpreter back and, lo and behold, import requests now works. I think at the end of it all, there were two errors: I used --set instead of the newer command, and my PATH wasn't set. Edit: Actually, after more debugging, I found the error was on the first line of my script; I had defined which python to use (which was the default Apple one, which doesn't include the module). Once I updated the shebang line, it worked.
How can I use the ValueError function correctly? This is my code:from time import sleepdef Kontostand_Berechnen(): if float(Kontostand_Nachfragen) >= float(Preis_Nachfragen): sleep(1) print("") print("Du hast genug Geld!") print("") sleep(1) else: sleep(1) print("") print("Du hast nicht genug Geld!") print("") sleep(1)sleep(1)print("")Kontostand_Nachfragen = input("Wie viel Geld hast du?: ")sleep(2)print("")Preis_Nachfragen = input("Wie viel kostet das Produkt?: ")if ValueError: print("") sleep(1) print("Bitte nur Zahlen eingeben! Kein Text!") print("") sleep(1)else: Kontostand_Berechnen()I wrote this code just for practice and for fun. The printed text is German, but I don't think that matters. I would like an error message to appear if the user enters text instead of a number. But that doesn't work as hoped. With this code, I ALWAYS get the error message, even when I enter numbers.(I'm sure the code is not very understandable. I also only started using Python recently.)
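The same try/except pattern, wrapped in a reusable function; the read parameter is a hypothetical hook (not in the original code) that defaults to input but lets the loop be exercised without a console:

```python
def ask_float(prompt, read=input):
    """Keep asking until the reply parses as a number."""
    while True:
        try:
            return float(read(prompt))
        except ValueError:
            print("Bitte nur Zahlen eingeben! Kein Text!")

# simulate a user who first types text, then a valid number
answers = iter(["abc", "12.5"])
value = ask_float("Wie viel kostet das Produkt?: ",
                  read=lambda _: next(answers))
print(value)    # 12.5
```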
Encoding binary file chunk by chunk fails The Zip file I need to encode seems to be too heavy and the method below gives me an error:with open("/tmp/pdf/pdffiles.zip", "rb") as f: binary_file = f.read() encoded = base64.b64encode(binary_file)self.download_zip = encodedso I tried to chunk it, but the file I finally download is damaged. Can anyone take a look at the following code and give me a hint, please:zipfile = open("/tmp/pdf/pdffiles.zip", "rb")encoded = Falsewhile True: chunk = zipfile.read(8192) if not chunk: break if encoded: encoded += base64.b64encode(chunk) else: encoded = base64.b64encode(chunk)zipfile.close()self.download_zip = encoded | When chunking base64, it's important that your chunk sizes are multiples of 3: base64 maps each 3-byte group to 4 output characters, and a chunk whose length is not a multiple of 3 gets padded, so the padded pieces won't concatenate into valid base64. 8192 is not a multiple of 3; a number like 8208 (which is) should work.
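A sketch of the chunked encoder with the multiple-of-3 rule enforced, plus a round-trip check that the concatenated output decodes back to the original bytes:

```python
import base64
import io

def b64encode_stream(fileobj, chunk_size=8208):
    # 8208 is a multiple of 3: each 3-byte group encodes to 4 characters
    # with no padding, so independently encoded chunks concatenate cleanly
    assert chunk_size % 3 == 0
    parts = []
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        parts.append(base64.b64encode(chunk))
    return b"".join(parts)

data = bytes(range(256)) * 100            # 25,600 bytes of stand-in data
encoded = b64encode_stream(io.BytesIO(data))
print(base64.b64decode(encoded) == data)  # True
```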
No module in python I am getting this error. What should I do about it?File "alexis.py", line 17, in <module> import wikipediaModuleNotFoundError: No module named 'wikipedia' | If you are using the VS Code terminal, try:py -m pip install wikipedia
I'm using http.server with python3 and I want to store a request as a variable I have this codehttpd = HTTPServer(('127.0.0.1', 8000),SimpleHTTPRequestHandler)httpd.handle_request()httpd.handle_request() serves one request and then kills the server like intended. I want to capture this request as a variable so I can parse it later on.Something likeRequest_Variable = httpd.handle_request()*This code above doesn't work. But I'm looking for something similarThanks | You could extend the BaseHTTPRequestHandler and implement your own do_GET (resp. do_POST) method which is called when the server receives a GET (resp. POST) request.Check out the documentation to see what instance variables a BaseHTTPRequestHandler object you can use. The variables path, headers, rfile and wfile may be of your interest.from http.server import BaseHTTPRequestHandler, HTTPServerclass MyRequestHandler(BaseHTTPRequestHandler): def do_GET(self): print(self.path) def do_POST(self): content_length = int(self.headers.get('Content-Length')) print(self.rfile.read(content_length))httpd = HTTPServer(('127.0.0.1', 8000), MyRequestHandler)httpd.handle_request()# make your GET/POST request |
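A runnable sketch of that pattern: the handler stashes what you need into an outer variable, handle_request still serves exactly one request, and binding to port 0 lets the OS pick a free port for the demo:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []    # the "variable" the request ends up in

class CapturingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        captured.append({"path": self.path, "headers": dict(self.headers)})
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):    # keep the console quiet
        pass

httpd = HTTPServer(("127.0.0.1", 0), CapturingHandler)
port = httpd.server_address[1]

# serve exactly one request in the background, then fire one at it
t = threading.Thread(target=httpd.handle_request)
t.start()
urllib.request.urlopen("http://127.0.0.1:%d/hello?x=1" % port).read()
t.join()
httpd.server_close()

print(captured[0]["path"])    # /hello?x=1
```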
How to log to different log files using a for loop in Python I am writing a Python script which establishes an SSH connection using paramiko, receives the responses of different commands executed on different NEs, and writes the logs for each NE to a different log file. I am using the code below, in which I have defined the logger in the main function and write logs in another function within the same class. It works fine when writing a single log file. Please let me know how to write a different log file for each NE.CODE:def main(self): global logger with open(self.hostfile, 'r') as ip: ip_list = ip.read().splitlines() for host in ip_list: filename = "connection_debug-{0}.log".format(host) print('filename is:', filename) logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w') logger = logging.getLogger() logger.setLevel(logging.DEBUG) def send_to_ne(self, command, prompt): channel.send('%s \n' % command) while not channel.recv_ready(): time.sleep(2) #self.get_channel_ready() global response response = " " while not response.endswith(prompt): received_result = channel.recv(9999) logger.debug(received_result.decode()) #self.logging_func(received_result, host) received_result = str(received_result) | Here is the answer to the above question:def debug_file(self, debugfile): global logger logger = logging.getLogger(debugfile) logger.setLevel(logging.DEBUG) log_format = logging.Formatter('%(asctime)s %(message)s') debug = logging.FileHandler(debugfile, mode='w') debug.setFormatter(log_format) logger.addHandler(debug)debugfile = "connection_debug-{0}.log".format(host)self.debug_file(debugfile)
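A self-contained sketch of the per-NE idea: one named logger and one FileHandler per host (the helper name is illustrative, not from the original script), writing into a temporary directory so the result can be inspected:

```python
import logging
import os
import tempfile

def make_host_logger(host, directory):
    logger = logging.getLogger("connection.%s" % host)   # unique name per NE
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:                              # avoid duplicates on reuse
        handler = logging.FileHandler(
            os.path.join(directory, "connection_debug-%s.log" % host), mode="w")
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
    return logger

tmp = tempfile.mkdtemp()
for host in ("10.0.0.1", "10.0.0.2"):
    make_host_logger(host, tmp).debug("connected to %s", host)
logging.shutdown()    # flush and close the file handlers

print(sorted(os.listdir(tmp)))
# ['connection_debug-10.0.0.1.log', 'connection_debug-10.0.0.2.log']
```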
Code optimization by reducing the number of if-else statements I am writing code for a library where a thousand different color codes are stored, and according to the normalized value passed in, a color will be selected. Here is the code for reference, for just ten colors to be returned:colour_coding=[]i=0step=0while i<1000: temp=(step,0,0) colour_coding.append(temp) step+=0.001 i+=1def color_code(value): if value>=0 and value<=0.1: return color_code[0] elif value>0.1 and value<=0.2: return color_code[1] elif value>0.2 and value<=0.3: return color_code[2] elif value>0.3 and value<=0.4: return color_code[3] elif value>0.4 and value<=0.5: return color_code[4] elif value>0.5 and value<=0.6: return color_code[5] elif value>0.6 and value<=0.7: return color_code[6] elif value>0.7 and value<=0.8: return color_code[7] elif value>0.8 and value<=0.9: return color_code[8] else: return color_code[9]Now I want to have more precision for the colors, and for that I would have to write around a thousand if-else branches, which would be a tedious and repetitive job. Is there any way I can optimize this code? | Not a solution, but this might give you an idea: import math def color_code(value): return color_code[math.ceil(value * 10) -1]This code should be able to handle all of the conditions in your code above. You'll need to add a condition to handle the index > 9 scenario.For a thousand iterations, you just need to find the right math function to calculate the right index based on the range.Couple of caveats:I'm not a python dev, so my syntax may be off.Your if conditions probably don't need the greater-than checks. The else if will guarantee that the first matched condition will automatically exclude all other conditions.i.e. you should be able to get away with code like this:def color_code(value): if value<=0.1: return color_code[0] elif value<=0.2: return color_code[1]etc.
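The answer's idea, completed with the clamping it mentions so the edge cases value == 0 and value == 1 land in the first and last buckets; the small palette here is a stand-in for the question's colour_coding list:

```python
import math

# ten palette entries, as in the question; scale range(10) up for a thousand
color_coding = [(i / 10, 0, 0) for i in range(10)]

def color_code(value):
    # a value in (k/10, (k+1)/10] maps to index k via ceil(value*10) - 1;
    # min/max clamp the two edge cases plain ceil() gets wrong
    index = min(max(math.ceil(value * len(color_coding)) - 1, 0),
                len(color_coding) - 1)
    return color_coding[index]

print(color_code(0.0))     # (0.0, 0, 0)  -- first branch of the original chain
print(color_code(0.45))    # (0.4, 0, 0)  -- the 0.4 < value <= 0.5 branch
print(color_code(1.0))     # (0.9, 0, 0)  -- the final else branch
```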