questions | answers |
|---|---|
Django trying to add up values in the django template Hi Guys I am trying to figure this out but not having any luck.So I am showing my events in the homepage which shows how many seats are available, once the user has made a booking I would like to minus that from the amount showing on the homepage.But I am already stuck at adding all the values up for that event in the booking model to minus from that amount.So this is what I havemodel for eventsclass Events(models.Model): ACTIVE = (('d', "Deactivated"), ('e', "Expired"), ('a', "Active"), ('b', "Drafts"),) ALCOHOL = (('0','No bring own alcohol'),('1','There will be complimentary wine pairing')) user = models.ForeignKey(User, on_delete=models.CASCADE) title = models.CharField(max_length=50, blank=True, default='') date = models.DateField() time = models.TimeField() price = models.CharField(max_length=240, blank=True, default='') seats = models.IntegerField() created_date = models.DateTimeField(auto_now_add=True) modified_date = models.DateTimeField(auto_now=True)model for bookingsclass Bookings(models.Model): OPTIONS_STATUS = (('y', "Yes"), ('n', "No"), ('p', "Pending"),) user = models.ForeignKey(User, on_delete=models.CASCADE) event = models.ForeignKey(Events, on_delete=models.CASCADE) eventdate = models.DateField() event_amount = models.CharField(max_length=50, blank=True, default='') guests = models.IntegerField() bookingstatus = models.CharField(max_length=50, default='p', blank=True, choices=OPTIONS_STATUS) created_date = models.DateTimeField(auto_now_add=True) modified_date = models.DateTimeField(auto_now=True)my homepage how I get my data into a loop form the viewtoday = datetime.now().strftime('%Y-%m-%d')events_list_data = Events.objects.filter(active='a').filter(Q(date__gte=today)|Q(date=today)).order_by('date')How I am trying to show this in my template{% for event_list in events_list_data %}SHOW WHAT EVER DATA I AM SHOWING NOT NEEDED FOR HELP ON {% for bookingguests in event_list.bookings_set.all %} {{ 
bookingguests.guests }} {% endfor %} Seats Left{% endif %} | Generally, purpose of templates is not to implement logic. All the logic should go into your views. I would recommend you to do that in your views and either store it in a dict or a list and send it to front-end. Once the user made a booking, if you want to modify the value on the HTML without reloading, you may need to use jQuery/javascript. Otherwise, if you are fine with reloading the page by rendering it again with calculations from the backend. By using jQuery:$("#balance-id").html(logic to get the balance)By Calculating in views:from django.db.models import Sumuser_balance = user.balance - events_data.aggregate(Sum('price'))return render('path/to/template', {'events_list': events_list_object, 'user_balance':user_balance})In the template:{{user_balance}} seats leftLet me know in case of any questions.Note: If you want to write some logic into your templates, use template tags. It can help you with whatever it can with its limited functionality. |
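The seats-left arithmetic this answer describes reduces to the event's `seats` minus the sum of booked `guests`; a plain-Python sketch of that logic (the dicts below are hypothetical stand-ins for `Events` and `Bookings` rows, outside the ORM):

```python
# Hypothetical stand-ins for Events/Bookings instances from the question.
events = [{"title": "Wine night", "seats": 20}]
bookings = [
    {"event": "Wine night", "guests": 4},
    {"event": "Wine night", "guests": 3},
]

def seats_left(event, bookings):
    # Mirrors aggregating Sum('guests') per event in the view.
    booked = sum(b["guests"] for b in bookings if b["event"] == event["title"])
    return event["seats"] - booked

print(seats_left(events[0], bookings))  # 13
```

The computed value would then be passed to the template context, as the answer recommends, instead of summing inside the template.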
In an MVC model, where is the position of http handlers? I'm developing Python Tornado apps in MVC. I have a folder for models, which contains all the classes that access the database, and another for controllers, which contains classes that do control and more logical work. The problem is that I don't know exactly where to put my HTTP handlers. Should I put them in the view folder or the controller folder? If not, what should I put in that folder? | You can take an example from Django, which uses a model named MTV (Model-Template-View) - you can read more about this here or in the FAQ. In this model, request handlers are placed inside the view section, which can be thought of as the controllers of standard MVC (and the templates of MTV are the views of MVC - it's a bit confusing). |
Sorting dictionary inside a dictionary python Im trying to sort this dictionary from highest number to lowest number. However I tried to sort the dictionary, but every time the error of : TypeError: string indices must be integerskeeps coming up. This is what I coded aurl_params = {}dayum = requests.get(starturl, params = aurl_params)liste = dayum.json()newlist = sorted(liste, key=lambda k: k['data'[0].get('media_count'),reverse=True)print newlistHow should I sort this so that, the "media_count" cound go highest to lowest..{u'data': [{u'media_count': 103, u'name': u'h\xe9llo'}, {u'media_count': 12507183, u'name': u'hello'}, {u'media_count': 867, u'name': u'hell\xf4'}, {u'media_count': 588, u'name': u'hell\xf3'}, {u'media_count': 321, u'name': u'he\u013alo'}, {u'media_count': 236, u'name': u'hell\xf8'}, {u'media_count': 6009, u'name': u'hell\xf6'}, {u'media_count': 405, u'name': u'hello\U0001f61c'}, {u'media_count': 405, u'name': u'hello\U0001f30e'}, {u'media_count': 5717, u'name': u'hello\u270c'}, {u'media_count': 47420, u'name': u'hellosun'}, {u'media_count': 590676, u'name': u'helloworld'}, {u'media_count': 94422, u'name': u'hellomay'}, {u'media_count': 87159, u'name': u'helloicp'}, {u'media_count': 344138, u'name': u'helloweekend'}, {u'media_count': 341243, u'name': u'hellospring'}, {u'media_count': 538183, u'name': u'helloween'}, {u'media_count': 235522, u'name': u'hello_france'}, {u'media_count': 375091, u'name': u'hellosummer'}, {u'media_count': 319766, u'name': u'hellobc'}, {u'media_count': 455104, u'name': u'hello2016'}, {u'media_count': 43682, u'name': u'hellogoodbye'}, {u'media_count': 166595, u'name': u'hellofresh'}, {u'media_count': 135937, u'name': u'hellothere'}, {u'media_count': 42887, u'name': u'hellopeople'}, {u'media_count': 62131, u'name': u'helloinstagram'}, {u'media_count': 347414, u'name': u'hello2015'}, {u'media_count': 331175, u'name': u'hellodecember'}, {u'media_count': 49119, u'name': u'hellovenus'}, {u'media_count': 41032, u'name': 
u'hellonwheels'}, {u'media_count': 64925, u'name': u'hello2013'}, {u'media_count': 69764, u'name': u'helloproject'}, {u'media_count': 70193, u'name': u'hello_bluey'}, {u'media_count': 64549, u'name': u'hellosunday'}, {u'media_count': 42035, u'name': u'hellonearth'}, {u'media_count': 56714, u'name': u'helloladies'}, {u'media_count': 198943, u'name': u'helloseptember'}, {u'media_count': 67861, u'name': u'helloapril'}, {u'media_count': 31560, u'name': u'hellotuesday'}], u'meta': {u'code': 200}} | Use the following approach to get the needed result:newlist = sorted(liste['data'], key=lambda o: o['media_count'], reverse=True)print(newlist)The sequence that need to be sorted is liste['data'] |
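The accepted fix, runnable against a trimmed copy of the question's response dict:

```python
liste = {u'data': [
    {u'media_count': 103, u'name': u'h\xe9llo'},
    {u'media_count': 12507183, u'name': u'hello'},
    {u'media_count': 867, u'name': u'hell\xf4'},
    {u'media_count': 588, u'name': u'hell\xf3'},
], u'meta': {u'code': 200}}

# Sort the inner list, not the outer dict: liste['data'] is the sequence.
newlist = sorted(liste['data'], key=lambda o: o['media_count'], reverse=True)
print([o['media_count'] for o in newlist])  # [12507183, 867, 588, 103]
```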
Kivy kv file is not working I have the same issue like described in this theme kv incorrect. When I use Builder and load the kv file I have normal working app. But when I try to use autoload kv file I have only black screen. Could someone explain me why? Thanks for any help.My code. main.pyimport kivykivy.require('1.9.1') # replace with your current kivy version !from kivy.app import Appfrom kivy.lang import Builderfrom kivy.uix.screenmanager import ScreenManager, Screen, FadeTransitionclass MainScreen(Screen): passclass AnotherScreen(Screen): passclass ScreenManagement(ScreenManager): passclass Test(App): def build(self): return ScreenManagement()if __name__ == "__main__": Test().run()kv file. test.kv#:kivy 1.9.1#: import FadeTransition kivy.uix.screenmanager.FadeTransitionScreenManagement: transition: FadeTransition() MainScreen: AnotherScreen:<MainScreen>: name: "main" Button: on_release: app.root.current = "other" text: "Next Screen" font_size: 50<AnotherScreen>: name: "other" Button: on_release: app.root.current = "main" text: "Prev Screen" font_size: 50 | In your kv file, you define ScreenManagement to be the root element with its associated screens. But in build, you return a newly created ScreenManagement object, which will not have any children defined.Solution:Define build as def build(self): passor change the definition of ScreenManagement in the kv file to<ScreenManagement>: transition: FadeTransition() MainScreen: AnotherScreen:so this will apply to all new ScreenManagement objects. |
Why does pandas dataframe indexing change axis depending on index type? when you index into a pandas dataframe using a list of ints, it returns columns.e.g. df[[0, 1, 2]] returns the first three columns.why does indexing with a boolean vector return a list of rows?e.g. df[[True, False, True]] returns the first and third rows. (and errors out if there aren't 3 rows.)why? Shouldn't it return the first and third columns?Thanks! | Because if use:df[[True, False, True]]it is called boolean indexing by mask:[True, False, True]Sample:df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6], 'C':[7,8,9]})print (df) A B C0 1 4 71 2 5 82 3 6 9print (df[[True, False, True]]) A B C0 1 4 72 3 6 9Boolean mask is same as:print (df.B != 5)0 True1 False2 TrueName: B, dtype: boolprint (df[df.B != 5]) A B C0 1 4 72 3 6 9 |
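A minimal demonstration of the answer's point, using the same frame as the answer:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})

# A boolean list the same length as the index acts as a mask: it selects rows.
masked = df[[True, False, True]]
print(masked.index.tolist())  # [0, 2]

# Identical to masking with a column comparison producing the same truth values.
assert masked.equals(df[df.B != 5])
```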
Odoo: OSError: [Errno 2] No such file or directory Trying to re-install Odoo. I did the following steps: deleted the previous Odoo dir; deleted the previous Postgres users and databases, except the user I was using and the databases that user created; tried the regular user and database creation in Postgres. But once I try to install a module, I get the following error: OSError: [Errno 2] No such file or directory: 'MY OLDER DIRECTORY'. How can I change the path to my new directory? | Try to find the Odoo configuration file. This file contains the directory names where Odoo looks for modules. In this case, your old Odoo settings were left behind after the deletion. Delete or modify this file according to your new Odoo installation. Where to look: it may be a different directory on different platforms (Linux, Windows) or different OSes (CentOS, Ubuntu). |
How to reorder a dataframe based on a list? pandas I have a df and I want to reorder it based on the list as shown, using Python:df=pd.DataFrame({'Country':["AU","DE","UR","US","GB","SG","KR","JP","CN"],'Stage #': [3,2,6,6,3,2,5,1,1],'Amount':[4530,7668,5975,3568,2349,6776,3046,1111,4852]})list=["US","CN","GB","AU","JP","KR","UR","DE","SG"]How can I do that? Any thoughts? Thanks! | Use pd.Categoricallist_ = ["US","CN","GB","AU","JP","KR","UR","DE","SG"]df['Country'] = pd.Categorical(df.Country, categories = list_, ordered = True)df.sort_values(by='Country')Also, do not name your variable list, because that would shadow the built-in list |
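The answer's `pd.Categorical` approach, runnable on a trimmed version of the frame:

```python
import pandas as pd

df = pd.DataFrame({'Country': ["AU", "DE", "UR", "US"],
                   'Amount': [4530, 7668, 5975, 3568]})
order = ["US", "AU", "UR", "DE"]

# Categorical ordering makes sort_values follow the custom list.
df['Country'] = pd.Categorical(df['Country'], categories=order, ordered=True)
result = df.sort_values(by='Country')
print(result['Country'].tolist())  # ['US', 'AU', 'UR', 'DE']
```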
Replacing cells in a column, but not header, in a csv file with python I've been looking for a few hours now and not found what I'm looking for...I'm looking to make a program that takes an already compiled .csv file with some information missing and asking the user what they would like to add and then placing this in the csv but not effecting the header line. Then saving the file with the additions.It would look like (input):data 1, data 2, data 32,,44,,66,,33,,2program asks "what would you like in data 2 column?answer: 5(output):data 1, data 2, data 32,5,44,5,66,5,33,5,2All help highly appreciated. | We open the input file and the output file with a python context manager.get the user input using input() (python 3) or raw_input() (python 2) functionsgrab the 1st row in the file and write it out without changing anything and write that outLoop through the rest of the file splitting the columns out and replacing column 2 with the user's inputwith open('in.csv', 'r') as infile, open('out.csv', 'w') as outfile: middle_col = input('What would you like in data 2 column>: ') outfile.write(infile.readline()) # write out the 1st line for line in infile: cols = line.strip().split(',') cols[1] = middle_col outfile.write(','.join(cols) + '\n') |
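The same loop, made self-contained with `io.StringIO` standing in for the files and a constant standing in for the interactive `input()` call:

```python
import io

infile = io.StringIO("data 1,data 2,data 3\n2,,4\n4,,6\n")
outfile = io.StringIO()
middle_col = "5"  # stands in for input('What would you like in data 2 column>: ')

outfile.write(infile.readline())  # header line passes through unchanged
for line in infile:
    cols = line.strip().split(',')
    cols[1] = middle_col  # replace only the data 2 column
    outfile.write(','.join(cols) + '\n')

print(outfile.getvalue())
```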
How to dynamically visualize dataset on web? I am developing a website where I have around 800 data sets. I want to visualize my data using bar charts and pie charts, but I don't want to hard code this for every data set. What technology can I use to dynamically read the data from a JSON/CSV/XML file and render the graph? (btw I'm going to use a Python based backend (either Django or Flask)) | JS libraries like d3.js or Highcharts can help solve your problem. You can easily send the data from the server to the front end, where these libraries can gracefully plot the data. |
Is it possible to use query parameters on the Django Admin Site I work with multi-tenancy and am passing the schema name through a query parameter. My middleware takes care of the parameter and sets the correct schema. It works very well on my API requests (direct posts and gets), but now I need to access the admin page for each of my schemas by specifying the schema name on a URL query parameter.Here's the problem:When I access http://localhost:8000/admin/?schema_name=myschemamy middleware catches the parameter, but the admin site redirects me to a login page with this URL:http://localhost:8000/admin/login/?next=/admin/%3Fschema_name%3DmyschemaAfter this redirection it seems that it goes through my middleware again, this time without the ?schema_name=myschema parameter, causing my middleware to set the schema to public everytime Django redirects a URL.Is there a way to make the Django Admin site aware of this parameter even when some redirection changes the URL? (or maybe even a suggestion of a different approach I could use to make the Admin Site tenant aware).Thanks in advance. | Was just working on the same problem.Problem is that django admin catches any uknown(unregistered via some filter on admin view) query params and if any found raises exception which redirectsSolution is to call something like that inside middleware:def extract_client_id_from_admin_api_url(request): request.GET._mutable = True client_id = request.GET.pop('client_id', None) request.GET._mutable = False return client_id[0] if client_id else NoneSo that your query parameter is uknown to django admin site(removed before request reaches admin views). Please note, that this middleware should be on top of middleware list in settings.This is dirty, and hacky. Though i didn't find better solution yet. |
How do global variables work in recursion? count=0global countdef fact(n): count+=1 if n==1:return 1 else:return(n*fact(n-1))print(fact(5))When the variable count is declared as global, is the variable count accessible in all recursive frames?The above code doesn't work, however the below code works. Can someone explain why?count=0def fact(n): global count count+=1 if n==1:return 1 else:return(n*fact(n-1))print(fact(5))Why is it necessary to specify global inside the function, when the entire point of global variables is to use them inside functions? | count += 1 is a local assignment that shadows the global count. It doesn't matter that there is a global variable available to increment. You have to declare the global in order for the assignment to affect the global.Using the global keyword outside the function doesn't do anything; it has to be used in a functionto mark a name that would otherwise be local to the function scope as a global instead. |
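The working version in full, with the declaration where it has to be:

```python
count = 0

def fact(n):
    global count  # the declaration must sit inside the function that assigns
    count += 1
    if n == 1:
        return 1
    return n * fact(n - 1)

result = fact(5)
print(result, count)  # 120 5 -- every recursive frame incremented the same global
```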
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'message' used in key specification without a key length") I am trying to create a model using django 1.11 and mysql(latest) as my backend using mysqlclient. I have searched various blogs and docs but was still not able to find my solution.This is my code Posts.models.pyPlease forgive the indentation error here if any.class Post(models.Model): user=models.ForeignKey(User,related_name='posts', on_delete=models.CASCADE) created_at=models.DateTimeField(auto_now=True) message=models.TextField() group=models.ForeignKey(Group,related_name='posts', null=True,blank=True,on_delete=models.CASCADE) def __str__(self): return self.message def save(self,*args,**kwargs): super().save(*args,**kwargs) def get_absolute_url(self): return reverse('posts:single',kwargs{'username':self.user.username,'pk':self.pk}) class Meta: ordering=['-created_at'] unique_together=['user','message'] | Set length to the text field. It did work for me. Likemodels.TextField(max_length=1000) |
How to drop the columns in pandas with multiple condtions I am new to python and pandasOn the below data frame ,I need to the drop the columns which are totally "None" , with "blanks and None", but not the columns with values and NoneOn the above table, I want Column A and C to be dropped because they are totally "None" or "blank and None", but Column B has some valid data at least in 3 cells, it should not be disturbedhow to give this condition in df.drop (pandas) | You can test missing values NaN and None like Nonetype by DataFrame.isna, then possible strings by DataFrame.isin, chain by | for bitwise OR and pass to DataFrame.loc with invert mask for test if all values are Trues per columns (default axis=0) by DataFrame.all:m = df.isna() | df.isin(['', 'None', 'none'])df = df.loc[:, ~m.all()]Or like comment, only in output are replaced values:df = df.replace(['', 'None', 'none'],np.nan).dropna(axis=1, how='all') |
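The answer's mask approach on a small hypothetical frame (the column names and values are illustrative, matching the question's description):

```python
import pandas as pd

df = pd.DataFrame({
    'A': [None, None, None],       # entirely None -> drop
    'B': ['x', None, 'y'],         # has valid data -> keep
    'C': ['', 'None', None],       # only blanks and "None" strings -> drop
})

# True where the cell is NaN/None or one of the placeholder strings.
m = df.isna() | df.isin(['', 'None', 'none'])
cleaned = df.loc[:, ~m.all()]
print(cleaned.columns.tolist())  # ['B']
```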
How to access prediction from another python module? I have a file file_calling_class.py that needs to access the prediction value from another python module in file_with_class.py. However, I do not know how to access the prediction. The function alone works fine if it is the only script but if I want to pass the budget value from file_calling_class.py to file_with_class.py by using self I do not know how to access the prediction result of calculate_sales(self).This is my file_calling_class.py import file_with_class budget = 100 sales = file_with_class.CalcSales(budget=budget).__str__() print('Your sales are: ' +sales)This is my file_with_class.py import pickle import pandas as pdclass CalcSales():def __init__(self, budget: int): self.budget = budget self.sales = 0 self.prediction = 0def calculate_sales(self): #budget = request.args.get('budget') print(self.budget) budget = [int(self.budget)] df = pd.DataFrame(budget, columns=['Marketing Budget']) model = pickle.load(open('simple_linear_regression.pkl', 'rb')) prediction = model.predict(df) self.prediction = int(prediction[0]) # return(self.prediction)def __str__ (self): return (str(self.prediction))Output Your sales are: 0which is just the value with which I initialized self.prediction | You are not invoking the .calculate_sales() method in your call. Try changing sales = file_with_class.CalcSales(budget=budget).__str__() in file_calling_class.py to:sales = file_with_class.CalcSales(budget=budget).calculate_sales().__str__() |
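Note the chained call only works if `calculate_sales` returns something; a minimal sketch with the pickled model swapped for plain arithmetic (the `budget * 2` rule is a stand-in, not the real regression) and `return self` added so the chain resolves:

```python
class CalcSales:
    def __init__(self, budget):
        self.budget = budget
        self.prediction = 0

    def calculate_sales(self):
        # Stand-in for model.predict(); the real code loads a pickled regression.
        self.prediction = int(self.budget * 2)
        return self  # returning self makes the chained .__str__() call possible

    def __str__(self):
        return str(self.prediction)

sales = CalcSales(budget=100).calculate_sales().__str__()
print('Your sales are: ' + sales)  # Your sales are: 200
```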
Keras `steps=None` error even when using Sequence class I am trying to do some custom training with Keras with Tensorflow backend. I am using the fit_generator() to supply data. My generator is a derived class of keras.utils.Sequence. gen = PitsSequence( PITS_PATH,nP=nP, nN=nN, n_samples=n_samples, initial_epoch=initial_epoch, image_nrows=image_nrows, image_ncols=image_ncols, image_nchnl=image_nchnl)gen_validation = PitsSequence(PITS_VAL_PATH, nP=nP, nN=nN, n_samples=n_samples, image_nrows=image_nrows, image_ncols=image_ncols, image_nchnl=image_nchnl )history = t_model.fit_generator( generator = gen, epochs=2200, verbose=1, initial_epoch=initial_epoch, validation_data = gen_validation , callbacks=[tb,saver_cb,reduce_lr], use_multiprocessing=True, workers=0, )However, I get the following error as I run this. Epoch 1/2200m_int_logr= ./models.keras/tmp/12/13 [==========================>...] - ETA: 1s - loss: 1.7347 - allpair_count_goodfit: 0.0000e+00 - positive_set_deviation: 0.0039Traceback (most recent call last): File "noveou_train_netvlad_v3.py", line 260, in <module> use_multiprocessing=True, workers=0, File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1415, in fit_generator initial_epoch=initial_epoch) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_generator.py", line 230, in fit_generator workers=0) File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1469, in evaluate_generator verbose=verbose) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_generator.py", line 298, in evaluate_generator raise ValueError('`steps=None` is only valid for a generator'ValueError: `steps=None` is only valid for a generator based on the 
`keras.utils.Sequence` class. Please specify `steps` or use the `keras.utils.Sequence` class.How can this be fixed? Keras version: 2.2.2 Tensorflow version: 1.11.0Here is the implementation of PitSequence class; It involves 2 external functions self.pr = PittsburgRenderer( PTS_BASE ) self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=nP, nN=nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False )AND self.D = do_typical_data_aug( self.D )Here,class PitsSequence(keras.utils.Sequence): """ This class depends on CustomNets.dataload_ for loading data. """ def __init__(self, PTS_BASE, nP, nN, n_samples=500, initial_epoch=0, image_nrows=240, image_ncols=320, image_nchnl=1 ): # assert( type(n_samples) == type(()) ) self.n_samples_pitts = int(n_samples) self.epoch = initial_epoch self.batch_size = 4 self.refresh_data_after_n_epochs = 20 self.nP = nP self.nN = nN # self.n_samples = n_samples print tcolor.OKGREEN, '-------------PitsSequence Config--------------', tcolor.ENDC print 'n_samples : ', self.n_samples_pitts print 'batch_size : ', self.batch_size print 'refresh_data_after_n_epochs : ', self.refresh_data_after_n_epochs print 'image_nrows: ', image_nrows, '\timage_ncols: ', image_ncols, '\timage_nchnl: ', image_nchnl print '# positive samples (nP) = ', self.nP print '# negative samples (nP) = ', self.nN print tcolor.OKGREEN, '----------------------------------------------', tcolor.ENDC self.resize = (image_ncols, image_nrows) if image_nchnl == 3: self.return_gray = False else : self.return_gray = True # PTS_BASE = '/Bulk_Data/data_Akihiko_Torii/Pitssburg/' self.pr = PittsburgRenderer( PTS_BASE ) self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=nP, nN=nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False ) print 'len(D)=', len(self.D), '\tD[0].shape=', self.D[0].shape self.y = np.zeros( len(self.D) ) self.steps = int(np.ceil(len(self.D) / float(self.batch_size))) def __len__(self): return 
int(np.ceil(len(self.D) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.D[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array( batch_x ), np.array( batch_y ) # return np.array( batch_x )*1./255. - 0.5, np.array( batch_y ) #TODO: Can return another number (sample_weight) for the sample. Which can be judge say by GMS matcher. If we see higher matches amongst +ve set ==> we have good positive samples, def on_epoch_end(self): N = self.refresh_data_after_n_epochs if self.epoch % N == 0 and self.epoch > 0 : print '[on_epoch_end] done %d epochs, so load new data\t' %(N), int_logr.dir() # Sample Data # self.D = dataload_( n_tokyoTimeMachine=self.n_samples_tokyo, n_Pitssburg=self.n_samples_pitts, nP=nP, nN=nN ) self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=self.nP, nN=self.nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False ) print 'len(D)=', len(self.D), '\tD[0].shape=', self.D[0].shape # if self.epoch > 400: if self.epoch > 400 and self.n_samples_pitts<0: # Data Augmentation after 400 epochs. Only do for Tokyo which are used for training. ie. dont augment Pitssburg. self.D = do_typical_data_aug( self.D ) print 'dataload_ returned len(self.D)=', len(self.D), 'self.D[0].shape=', self.D[0].shape self.y = np.zeros( len(self.D) ) # modify data self.epoch += 1 | I think your problem lies in the combination of use_multiprocessing=True and workers=0. If you look at the documentation you can read about their settings. Hope that helps. |
How to save a randomly generated image with Python? I'm trying to solve a CAPTCHA on a website.< img src="/kcap.php?PHPSESSID=iahvgmjcb93a0k7fqrf43sq9sk" >The HTML code contains a link to a PHP-generated image. But if I try to follow this link, the PHP script generates a new image and the CAPTCHA solution fails.I need to save this image as a JPG. The image should not change. | The image changing is being done at the server side; if you need a copy of this image, you will need to save it at the point of page loading.Looking at the data, these images are JPEGs. This will download the image from that link:import urllib.requestlocal_filename, headers = urllib.request.urlretrieve("https://www.list-org.com/kcap.php?PHPSESSID=iahvgmjcb93a0k7fqrf43sq9sk")print(local_filename) |
How can I make python tts.sapi speak asynchronously? Here is the text to speech code I use in my voicebot program :import tts.sapivoice = tts.sapi.Sapi()def say(text): voice.say(text)It works great but the thing is I want to be able to interrupt the function if needed.I mean being able to execute other commands while it speaks (such as saying "stop speaking").As the say() function is just one command, I can't manage to make it work. However I could do that when I did my voicebot in C# with a method called speakAsync(). Is there such a method in the tts.sapi library? Or using Sapi win32com? Thank you | Using the tts.sapi wrapper, you'll need to set up an event loop and event interests (so that SAPI will call you back). Instead, you might want to look at the pyttsx package. It appears to support async speaking. |
Do I need to load the weights of another class I use in my NN class? I have a model that needs to implement self-attention and this is how I wrote my code:class SelfAttention(nn.Module): def __init__(self, args): self.multihead_attn = torch.nn.MultiheadAttention(args) def foward(self, x): return self.multihead_attn.forward(x, x, x) class ActualModel(nn.Module): def __init__(self): self.inp_layer = nn.Linear(arg1, arg2) self.self_attention = SelfAttention(some_args) self.out_layer = nn.Linear(arg2, 1) def forward(self, x): x = self.inp_layer(x) x = self.self_attention(x) x = self.out_layer(x) return xAfter loading a checkpoint of ActualModel, in ActualModel.__init__ during continuing-training or during prediction time should I load a saved model checkpoint of class SelfAttention?If I create an instance of class SelfAttention, would the trained weights corresponding to SelfAttention.multihead_attn be loaded if I do torch.load(actual_model.pth) or would be they be reinitialized?In other words, is this necessary?class ActualModel(nn.Module): def __init__(self): self.inp_layer = nn.Linear(arg1, arg2) self.self_attention = SelfAttention(some_args) self.out_layer = nn.Linear(arg2, 1) def pred_or_continue_train(self): self.self_attention = torch.load('self_attention.pth')actual_model = torch.load('actual_model.pth')actual_model.pred_or_continue_training()actual_model.eval() | In other words, is this necessary?In short, No.The SelfAttention class will be automatically loaded if it has been registered as a nn.module, nn.Parameters, or manually registered buffers.A quick example:import torchimport torch.nn as nnclass SelfAttention(nn.Module): def __init__(self, fin, n_h): super(SelfAttention, self).__init__() self.multihead_attn = torch.nn.MultiheadAttention(fin, n_h) def foward(self, x): return self.multihead_attn.forward(x, x, x) class ActualModel(nn.Module): def __init__(self): super(ActualModel, self).__init__() self.inp_layer = nn.Linear(10, 20) self.self_attention = 
SelfAttention(20, 1) self.out_layer = nn.Linear(20, 1) def forward(self, x): x = self.inp_layer(x) x = self.self_attention(x) x = self.out_layer(x) return xm = ActualModel()for k, v in m.named_parameters(): print(k)You will get as follows, where self_attention is successfully registered.inp_layer.weightinp_layer.biasself_attention.multihead_attn.in_proj_weightself_attention.multihead_attn.in_proj_biasself_attention.multihead_attn.out_proj.weightself_attention.multihead_attn.out_proj.biasout_layer.weightout_layer.bias |
What is bias node in googlenet and how to remove it? I am new to deep learning, i want to build a model that can identify similar images, i am reading classification is a Strong Baseline for Deep Metric Learning research paper. and here is they used the phrase: "remove the bias term inthe last linear layer". i have no idea what is bias term is and how to remove it from googlenet or other pretrained models. if someone help me out with this it would be great! :) | To compute the layer n outputs, a linear neural network computes a linear combination of the layer n-1 output for each layer n output, adds a scalar constant value to each layer n output (the bias term), and then applies an activation function. In pytorch, one could disable the bias in a linear layer using:layer = torch.nn.Linear(n_in_features, n_out_features, bias=False)To overwrite the existing structure of, say, the googlenet included in torchvision.models defined here, you can simply override the last fully connected layer after initialization:from torchvision.models import googlenetnum_classes = 1000 # or whatever you need it to bemodel = googlenet(num_classes)model.fc = torch.nn.Linear(1000,num_classes,bias = False) |
Pandas GroupBy columns to get 'mode' Dataset is as below and I want to aggregate by 'Name' and 'Weeks' to get their mode.I tried 2 ways but neither worked:import pandas as pdfrom io import StringIOcsvfile = StringIO("""Name Weeks SalesAmelia 202106 57Amelia 202105 61Amelia 202106 59Amelia 202103 49Amelia 202105 87Amelia 202104 95Elijah 202106 49Elijah 202105 40Elijah 202106 57Elijah 202103 97Elijah 202105 67Elijah 202104 89James 202106 66James 202105 92James 202106 57James 202103 82James 202105 53James 202104 71""")df = pd.read_csv(csvfile, sep = '\t', engine='python')df['Weeks'] = df['Weeks'].astype(str)# tried both but neither worked:mode = df.groupby(['Name', 'Weeks']).agg({'Sales': ['mode']})# or mode = df.groupby(df['Name', 'Weeks'])[['Sales']].mode()What's the right way? Thank you. | TRY:from statistics import modemode = df.groupby(['Name', 'Weeks'])['Sales'].apply(mode)Name Weeks Amelia 202103 49 202104 95 202105 61 202106 57Elijah 202103 97 202104 89 202105 40 202106 49James 202103 82 202104 71 202105 92 202106 66 |
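One caveat: `statistics.mode` raises `StatisticsError` on Python < 3.8 when a group is multimodal; on 3.8+ it returns the first value encountered. A trimmed, runnable version of the answer:

```python
from statistics import mode
import pandas as pd

df = pd.DataFrame({'Name': ['Amelia', 'Amelia', 'Amelia'],
                   'Weeks': ['202106', '202106', '202105'],
                   'Sales': [57, 57, 61]})

# apply() hands each group's Sales values to statistics.mode.
result = df.groupby(['Name', 'Weeks'])['Sales'].apply(mode)
print(result.loc[('Amelia', '202106')])  # 57
```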
Generating multiple strings by replacing wildcards So i have the following strings:"xxxxxxx#FUS#xxxxxxxx#ACS#xxxxx""xxxxx#3#xxxxxx#FUS#xxxxx"And i want to generate the following strings from this pattern (i'll use the second example):Considering #FUS# will represent 2."xxxxx0xxxxxx0xxxxx" "xxxxx0xxxxxx1xxxxx" "xxxxx0xxxxxx2xxxxx""xxxxx1xxxxxx0xxxxx" "xxxxx1xxxxxx1xxxxx" "xxxxx1xxxxxx2xxxxx""xxxxx2xxxxxx0xxxxx" "xxxxx2xxxxxx1xxxxx" "xxxxx2xxxxxx2xxxxx""xxxxx3xxxxxx0xxxxx" "xxxxx3xxxxxx1xxxxx" "xxxxx3xxxxxx2xxxxx"Basically if i'm given a string as above, i want to generate multiple strings by replacing the wildcards that can be #FUS#, #WHATEVER# or with a number #20# and generating multiple strings with the ranges that those wildcards represent.I've managed to get a regex to find the wildcards.wildcardRegex = f"(#FUS#|#WHATEVER#|#([0-9]|[1-9][0-9]|[1-9][0-9][0-9])#)"Which finds correctly the target wildcards.For 1 wildcard present, it's easy.re.sub()For more it gets complicated. Or maybe it was a long day...But i think my algorithm logic is failing hard because i'm failing to write some code that will basically generate the signals. I think i need some kind of recursive function that will be called for each number of wildcards present (up to maybe 4 can be present (xxxxx#2#xxx#2#xx#FUS#xx#2#x)).I need a list of resulting signals.Is there any easy way to do this that I'm completely missing?Thanks. 
| import restringV1 = "xxx#FUS#xxxxi#3#xxx#5#xx"stringV2 = "XXXXXXXXXX#FUS#XXXXXXXXXX#3#xxxxxx#5#xxxx"regex = "(#FUS#|#DSP#|#([0-9]|[1-9][0-9]|[1-9][0-9][0-9])#)"WILDCARD_FUS = "#FUS#"RANGE_FUS = 3def getSignalsFromWildcards(app, can): sigList = list() if WILDCARD_FUS in app: for i in range(RANGE_FUS): outAppSig = app.replace(WILDCARD_FUS, str(i), 1) outCanSig = can.replace(WILDCARD_FUS, str(i), 1) if "#" in outAppSig: newSigList = getSignalsFromWildcards(outAppSig, outCanSig) sigList += newSigList else: sigList.append((outAppSig, outCanSig)) elif len(re.findall("(#([0-9]|[1-9][0-9]|[1-9][0-9][0-9])#)", stringV1)) > 0: wildcard = re.search("(#([0-9]|[1-9][0-9]|[1-9][0-9][0-9])#)", app).group() tarRange = int(wildcard.strip("#")) for i in range(tarRange): outAppSig = app.replace(wildcard, str(i), 1) outCanSig = can.replace(wildcard, str(i), 1) if "#" in outAppSig: newSigList = getSignalsFromWildcards(outAppSig, outCanSig) sigList += newSigList else: sigList.append((outAppSig, outCanSig)) return sigListif "#" in stringV1: resultList = getSignalsFromWildcards(stringV1, stringV2)for item in resultList: print(item)results in('xxx0xxxxi0xxxxx', 'XXXXXXXXXX0XXXXXXXXXX0xxxxxxxxxx')('xxx0xxxxi1xxxxx', 'XXXXXXXXXX0XXXXXXXXXX1xxxxxxxxxx')('xxx0xxxxi2xxxxx', 'XXXXXXXXXX0XXXXXXXXXX2xxxxxxxxxx')('xxx1xxxxi0xxxxx', 'XXXXXXXXXX1XXXXXXXXXX0xxxxxxxxxx')('xxx1xxxxi1xxxxx', 'XXXXXXXXXX1XXXXXXXXXX1xxxxxxxxxx')('xxx1xxxxi2xxxxx', 'XXXXXXXXXX1XXXXXXXXXX2xxxxxxxxxx')('xxx2xxxxi0xxxxx', 'XXXXXXXXXX2XXXXXXXXXX0xxxxxxxxxx')('xxx2xxxxi1xxxxx', 'XXXXXXXXXX2XXXXXXXXXX1xxxxxxxxxx')('xxx2xxxxi2xxxxx', 'XXXXXXXXXX2XXXXXXXXXX2xxxxxxxxxx')long day after-all... |
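An alternative sketch using `itertools.product` instead of recursion: collect every wildcard, map each one to its range, then take the cartesian product. The `RANGES` table and the token grammar (`#NAME#` or `#N#`, where `#N#` means `range(N)`) are assumptions based on the question's examples.

```python
import itertools
import re

RANGES = {"#FUS#": 3}  # named wildcard -> range size (assumed from the question)

def wildcard_range(token):
    # "#N#" expands to range(N); named wildcards come from the RANGES table.
    inner = token.strip("#")
    return range(int(inner)) if inner.isdigit() else range(RANGES[token])

def expand(template):
    tokens = re.findall(r"#(?:[A-Z]+|\d{1,3})#", template)
    results = []
    for combo in itertools.product(*(wildcard_range(t) for t in tokens)):
        out = template
        for token, value in zip(tokens, combo):
            out = out.replace(token, str(value), 1)  # replace one wildcard at a time
        results.append(out)
    return results

print(expand("xx#FUS#xx#2#x"))
# ['xx0xx0x', 'xx0xx1x', 'xx1xx0x', 'xx1xx1x', 'xx2xx0x', 'xx2xx1x']
```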
Django FileField file does not open correctly In my Django app I create a model with a field of type FileField to store some documents:...device_file = models.FileField(upload_to='uploads/')...In my settings.py I have:STATIC_URL = 'mqtt_site/static/'MEDIA_ROOT='mqtt_site/static/media/'Well, when I save data in my Django admin Add form everything works fine and my documents are saved in the right place, but when I click on the link the path is not correct:I get 127.0.0.1:8000/uploads/Weekly_Report-2022-01-14_19-07.pdfwhich is wrong, instead of the correct127.0.0.1:8000/mqtt_site/static/media/uploads/Weekly_Report-2022-01-14_19-07.pdfWhy doesn't Django add the /mqtt_site/static/media path as a prefix, as defined in my settings.py file?So many thanks in advanceManuel | Media and static shouldn't share the same folder. The link Django builds for a FileField comes from MEDIA_URL (which you haven't set), not from MEDIA_ROOT, so define a separate MEDIA_URL and serve it from your urls.py.
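A minimal sketch of the usual settings split (the project layout and folder names here are illustrative placeholders, not taken from the question):

```python
# settings.py -- sketch; MEDIA_* kept separate from STATIC_*
STATIC_URL = '/static/'
MEDIA_URL = '/media/'             # URL prefix used to build FileField links
MEDIA_ROOT = BASE_DIR / 'media'   # filesystem location where uploads land

# urls.py -- serve uploaded media during development only
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your url patterns ...
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```

With this in place, a file saved through upload_to='uploads/' lands under MEDIA_ROOT/uploads/ and is linked as /media/uploads/&lt;filename&gt;.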
How to find identical rows of two arrays with different size? I have two arrays with different sizea = np.array([[5, 0], [2, 4], [0, 1], [3, 4], [1, 5], [5, 6], [7, 9]])b = np.array([[0, 3], [5, 6], [2, 5], [2, 4]])I needc = np.array([False, True, False, False, False, True, False])i.e. array 'b' have rows [5, 6] and [2, 4] in array 'a'. Currently, I do this bylogical = np.zeros(a.shape[0]).astype(bool)for i in range(b.shape[0]): logical += np.all(a == b[i], axis=1)Is there any numpy code for doing this? | Let's try broadcasting:(a[None,:] == b[:,None]).all(-1).any(0)Output:array([False, True, False, False, False, True, False]) |
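The broadcasting one-liner can be unpacked a little: a[None, :] has shape (1, 7, 2) and b[:, None] has shape (4, 1, 2), so the comparison broadcasts to (4, 7, 2) and checks every row of b against every row of a. A self-contained check:

```python
import numpy as np

a = np.array([[5, 0], [2, 4], [0, 1], [3, 4], [1, 5], [5, 6], [7, 9]])
b = np.array([[0, 3], [5, 6], [2, 5], [2, 4]])

# (4, 1, 2) == (1, 7, 2) broadcasts to (4, 7, 2); all(-1) collapses the
# column axis (both columns must match), any(0) collapses the rows of b
# (a match with any row of b is enough).
c = (a[None, :] == b[:, None]).all(-1).any(0)
print(c)  # [False  True False False False  True False]
```

Note that the intermediate array has len(a) * len(b) * 2 elements, so for very large inputs a loop as in the question may actually use less memory.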
How to best print dictionaries created from user defined instance variables? I am trying to organize my cows into a dictionary, access their values, and print them to the console.Each instance of a cow is assigned to an index in list cow.I am attempting to create a dictionary as follows:for i in cows: cowDict[i.getName] = (i.getWeight, i.getAge)I'd like to be able to access the values of my dictionary using my cows names, i.e:c["maggie"]however, my code produces a key error.If I print the entire dictionary, I get something to this effect:"{<'bound method cow.getName of <'maggie, 3, 1>>: (<'bound method cow.getWeight of <'maggie, 3, 1>>, <'bound method cow.getAge of <'maggie, 3, 1>>), etc...}"I can replace .getName with the instance variable and get the desired result, however, I've been advised away from that approach. What is best practice to create a dictionary using instance variables of type cow?Code:class cow(object): """ A class that defines the instance objects name, weight and age associated with cows """ def __init__(self, name, weight, age): self.name = name self.weight = weight self.age = age def getName(self): return self.name def getWeight(self): return self.weight def getAge(self): return self.age def __repr__(self): result = '<' + self.name + ', ' + str(self.weight) + ', ' + str(self.age) + '>' return resultdef buildCows(): """ this function returns a dictionary of cows """ names = ['maggie', 'mooderton', 'miles', 'mickey', 'steve', 'margret', 'steph'] weights = [3,6,10,5,3,8,12] ages = [1,2,3,4,5,6,7] cows = [] cowDict = {} for i in range(len(names)): #creates a list cow, where each index of list cow is an instance #of the cow class cows.append(cow(names[i],weights[i],ages[i])) for i in cows: #creates a dictionary from the cow list with the cow name as the key #and the weight and age as values stored in a tuple cowDict[i.getName] = (i.getWeight, i.getAge) #returns a dictionary return cowDictc = buildCows() | getName is a function so try for i in 
cows: cowDict[i.getName()] = (i.getWeight(), i.getAge()) |
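A stripped-down, self-contained version of that fix: without the parentheses you store the bound method object itself, which is exactly why the printed dict was full of &lt;bound method ...&gt; entries.

```python
class Cow(object):
    def __init__(self, name, weight, age):
        self.name = name
        self.weight = weight
        self.age = age

    def getName(self):
        return self.name

    def getWeight(self):
        return self.weight

    def getAge(self):
        return self.age


cows = [Cow('maggie', 3, 1), Cow('mooderton', 6, 2)]

# Calling the accessors stores their return values, not the methods.
cow_dict = {c.getName(): (c.getWeight(), c.getAge()) for c in cows}
print(cow_dict['maggie'])  # (3, 1)
```

Since the attributes are public anyway, plain c.name / c.weight / c.age (or @property) is the more idiomatic Python; explicit getters are mostly a carry-over from other languages.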
Generate Test data using TfIdfVectorizer I have separated my data into train and test parts. My data table has a 'text' column. Consider that I have ten other columns representing numerical features. I have used TfidfVectorizer and the training data to generate the term matrix and combined that with the numerical features to create the training dataframe. tfidf_vectorizer=TfidfVectorizer(use_idf=True, max_features=5000, max_df=0.95)tfidf_vectorizer_train = tfidf_vectorizer.fit_transform(X_train['text'].values)df1_tfidf_train = pd.DataFrame(tfidf_vectorizer_train.toarray(), columns=tfidf_vectorizer.get_feature_names())df2_train = df_main_ques.iloc[train_index][traffic_metrics]#to collect numerical featuresdf_combined_train = pd.concat([df1_tfidf_train, df2_train], axis=1)To calculate the tf-idf scores for the test part, I need to reuse the training data set. I am not sure how to generate the test data part. Related posts: [1]Append tfidf to pandas dataframe: discusses only creating the training dataset part[2]How does TfidfVectorizer compute scores on test data: discusses the test data part, but it is not clear how to generate the test dataframe that contains both terms and numerical features. | You can reuse the already-trained vectorizer on the test set through its transform() method (fit_transform on train, transform on test, so the training vocabulary and IDF weights are applied):tfidf_vectorizer_test = tfidf_vectorizer.transform(X_test['text'].values) The resulting matrix can then be wrapped in a DataFrame and concatenated with the numerical test features exactly as you did for the training part.
open all the text files in a folder i have this code that takes in a text folder and takes the 25th element in the first line of the file and place it in the 7th. However, this code opens only one text file and writes it to another but what i want that the code reads all the files in the folder and writes them in the same path.index= 1with open("3230c237cnc274c.txt", "r") as f: file = f.readlines()line = file[index].split(';')target = line[24]blank = line[6]line[6] = targetline[24] = ""file[index] = ';'.join(line)with open("aaaaaaaaaaaaaaaa.txt", 'w') as f: for line in file: f.write(line) | I like to use the glob module for things like this. See if this helps:import globall_text_files = glob.glob("*.txt")for text_file in all_text_files: with open(text_file, "r") as f: lines = f.readlines() # do something with the lines...The syntax "*.txt" indicates all files ending with the .txt extension. This then returns a list of all those filenames. If your files are in a folder somewhere, you can also do "folder/*.txt", and there's a few other nice tricks with glob |
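Combining the two snippets: a sketch that applies the same 25th-to-7th field swap to every .txt file in a folder and writes each file back to its own path (the folder name is an illustrative placeholder):

```python
import glob

index = 1  # line to rewrite, as in the original snippet

for path in glob.glob("folder/*.txt"):
    with open(path, "r") as f:
        lines = f.readlines()
    fields = lines[index].split(';')
    fields[6] = fields[24]   # move the 25th element into the 7th slot
    fields[24] = ""
    lines[index] = ';'.join(fields)
    with open(path, "w") as f:  # overwrite the file in place
        f.writelines(lines)
```

If you would rather keep the originals, build an output path (e.g. with os.path.join and a different folder) instead of reopening path in write mode.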
Openpyxl yields TypeError on saving file, why? since the last package update the following code does not run any more. (This is an example, i have a couple of scripts that unfortunately require this functionality) The following code snippet is the simplest example i can imagine which worked before.Current specs:Win10 64bit,Python 3.7.5 64bit,IPython 7.10.2,conda 4.8.0,openpyxl 3.0.2def write_to_default_excel(): wb = Workbook() wb.save('sample.xlsx')if __name__ == "__main__": write_to_default_excel()yields:TypeError: got invalid input value of type <class 'xml.etree.ElementTree.Element'>, expected string or ElementI tried downgrading the openpyxl to 3.0.0 or 2.6.4., however conda cannot resolve the resulting dependency conflicts.Any ideas why this happens out of the sudden? What am i missing/overlooking?Can you recommend any alternative packages? | I got the same problem. It seems to be a bug of new version openpyxl package. You need to roll back to older version to get it work or you can switch to xlsxwriter & xlrd packages.Check these posts for more information:The function to_excel of pandas generate an unexpected TypeErroropenpyxl can't save to a file |
Gaussian NB vs LDA in scikit learn From my understanding, if we only have one feature, then Gaussian NB (naive Bayes classification) and LDA (Linear Discriminant Analysis) should give the same result.But I didn't succeed with scikit-learn.First I generate some toy datafrom sklearn.datasets import make_blobsX, y = make_blobs(n_samples=20, centers=2, n_features=1, random_state=0)Then I create an NB model with a Gaussian distributionfrom sklearn.naive_bayes import GaussianNBgnb = GaussianNB()gnb.fit(X, y)Then an LDA modelfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysislda = LinearDiscriminantAnalysis()lda.fit(X, y)Now it is possible to plot the results.plt.scatter(X,y)X_test = np.linspace(-1, 8, 300).reshape(-1,1)plt.plot(X_test, gnb.predict_proba(X_test.reshape(-1,1))[:,1],color="black")plt.plot(X_test, lda.predict_proba(X_test.reshape(-1,1))[:,1],color="green")plt.ylabel('y')plt.xlabel('X')plt.axhline(.5, color='.5')plt.show()But the two probability curves in the resulting plot differ. Maybe I misunderstood these algorithms. Could you explain why they differ? 
| The main high-level difference between GNB, LDA and QDA when there are two classes C1 and C2 is as follows:GNB : assumes covariance of X under classes C1 and C2 are different, but the off-diagonal elements are 0.QDA : assumes covariance of X under classes C1 and C2 are different, and the off-diagonal elements are not equal to 0.LDA : assumes covariance of X under classes C1 and C2 are same, and the off-diagonal elements are not equal to 0.All three assume that the means of X under C1 and C2 are different.When X is only 1 variable, GNB should default to be the same as QDA as both assume that the variance of X under the two classes are different, while LDA would be generally different as it assumes variance of X under the two classes are the same (unless by coincidence the variances of the two classes are equal).In your example, the variance of X when y = 1 and y = 0 are 1.045 and 0.950 respectively (which is what GNB will assume for two classes), while the variance of X is 1.678 (which is what LDA will assume for both the classes). Hence, their solutions will be slightly different.Unfortunately, though, even if I replace LDA with Sklearn's QDA implementation, I get different curves. I suspect it could be due to implementation differences in Sklearn (e.g. rounding off issues, or perhaps n vs n-1 in calculation of variances, etc.). |
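The variance point in the last paragraph can be checked without scikit-learn at all: GNB estimates one variance per class, while LDA's shared-covariance assumption corresponds to a single pooled variance. A sketch with synthetic 1-D data (the means and spreads are made up, not taken from make_blobs):

```python
import random
import statistics

random.seed(0)
# Two 1-D classes with deliberately different spreads.
x0 = [random.gauss(0.0, 1.0) for _ in range(200)]
x1 = [random.gauss(4.0, 1.5) for _ in range(200)]

# What GNB fits: one variance per class.
var_c0 = statistics.pvariance(x0)
var_c1 = statistics.pvariance(x1)

# What LDA's equal-covariance assumption corresponds to: a single
# pooled within-class variance shared by both classes.
pooled = (len(x0) * var_c0 + len(x1) * var_c1) / (len(x0) + len(x1))

print(var_c0, var_c1, pooled)
```

Only when the two per-class variances happen to be (nearly) equal do the GNB and LDA posterior curves coincide, which is why the plots in the question disagree.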
Python script to copy some messages from slack which were posted in some time range I want to write a python script using some slack API's which will be able to copy some messages which were pasted between say 10AM to 11AM in a channel A & then paste the same messages in a different channel B.I know that it's easy to write a message in slack via a python script, but is it also possible to pull some messages out of slack ? | One way would be to use Requests (or any http client you like) with this endpoint: https://api.slack.com/methods/channels.historyThis returns a list of messages for a given channel that you can filter with the oldest and latest arguments. |
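The 10AM-11AM window maps onto the endpoint's oldest and latest parameters, which are Unix timestamps. A sketch of assembling the request (the token and channel id are placeholders, and the network call itself is left commented so the snippet stays self-contained):

```python
from datetime import datetime


def history_params(channel, start, end, token):
    """Build the query parameters for Slack's channels.history endpoint."""
    return {
        'token': token,
        'channel': channel,
        'oldest': start.timestamp(),   # messages at/after the window start
        'latest': end.timestamp(),     # messages at/before the window end
    }


params = history_params(
    'C0123456789',                     # hypothetical channel A id
    datetime(2017, 1, 2, 10, 0),       # 10 AM
    datetime(2017, 1, 2, 11, 0),       # 11 AM
    'xoxp-your-token',
)

# import requests
# resp = requests.get('https://slack.com/api/channels.history', params=params)
# messages = resp.json()['messages']
# Reposting each message text to channel B is then a POST to chat.postMessage.
```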
Client side to connect to sybase IQ using Python3 I am using Ubuntu and I want to connect to a sybase IQ server (remote) from my client machine ,I tried installing/using sqlanydb according to sybase documentation, but i don't see any parameter in sqlanydb.connect() related to IP of the sybase server. I think this routine imagines that sybase db is on localhost, am I right?Do i need to install the sybase on client side as well to be able toconnect to that remote sybase db? or just the sqlanydb is enough? How can I make this driver to connect to a remote server? | You do need to install the client software. The python driver is basically a python interface to the dbcapi client library, so you can't use it without the client software installed on the machine.For connecting to a remote server, you can use the HOST parameter. The connect() function takes as arguments any valid connection parameter, so a connection string like uid=steve;pwd=secretpassword;host=myserverhost:4567;dbn=mydatabase would translate to:sqlanydb.connect( uid = 'steve', pwd = 'secretpassword', host = 'myserverhost:4567', dbn = 'mydatabase' )Connection parameters are documented here. If HOST is not used, the client attempts a shared memory connection. Shared memory is faster than TCP but obviously only works if the client and server are on the same machine. |
What does the "scale" parameter in scipy.stats.t.std() stand for? My goal is, to find the standard deviation of a dataset with a supposed t-distribution to calculate the survival function given a quantile.As the documentation of scipy.stats is very counter intuitive to me, I tried several things and ended up with the implementation below. (Note: the numerated variables only demonstrate, that there are different results. My goal is to end up with only one result each!)import scipydf, loc, scale = scipy.stats.t.fit(data, fdf=len(data)-1)std1 = scipy.stats.t.std(df=df, loc=loc, scale=scale)std2 = scipy.stats.t.std(df=df, loc=loc)res1 = scipy.stats.sf(some_x, df, loc, scale)res2 = scipy.stats.sf(some_x, df, loc, std1)res3 = scipy.stats.sf(some_x, df, loc, std2)I encountered that, loc equals the stats.t.mean() function, when given the values from the fit-function. But scale does not equal stats.t.std(). Hence the std1 and std2 are different and not equal to scale.I can only find sources for the normal distribution, where it's stated that scale equals std.How should I use the functions above appropriatly?Any help or suggestions for editing the question would be much appreciated :)Code on and stay healthy! | Student's T distribution is not supposed to be shifted or scaled, it's used as a standard distribution with mean=0, usually to test the difference between two means of normally distributed populations https://en.wikipedia.org/wiki/Student%27s_t-distribution.Given a sample with n observation and the Student's T distribution with v=n-1 degrees of freedom, the standard deviation is sqrt(v / (v-2)).You can check in scipy that this is truen = 11v = n - 1dist = sps.t(df=v)# standard deviation# from scipy distributionprint(dist.std()) # will return 1.118033988749895# standard deviation# from theoryprint(np.sqrt(v / (v - 2))) # will return 1.118033988749895 |
Python 'delete' class ids for graph theory program in tkinter I am writing a program, that is able to create vertices and edges with 'onclick'. In my menu I have an option 'New' that should clean the canvas in order to start anew. I am creating vertices with create_oval and as far as I understood every object gets a class id 1,2,3,... if I press now the button for new I would like them to be reset/deleted otherwise my idea how to program this isn't working. Can someone help me?I wrote in spyder and defined a function def initCanvas(self): self.canvas.delete(tk.ALL)it is clearing the canvas, but not the ids, what is missing/what do I have to change? | The tkinter canvas does not re-use ids. If you create an item with an id of 1 and then delete it, the next item will have an id of 2. This is one of the reasons why the canvas has performance problems if you repeatedly create and delete many objects. |
Perform method of action chains does not work I have a case where I need to drag and drop an element using Selenium webdriver and Python.I tried using the ActionChains class of the Selenium, the code somewhat looks like this:from selenium import webdriverfrom selenium.webdriver.common.action_chains import ActionChainssource = ("//span[text()='user1']", Selector.XPATH)target = ("//span[text()='user2']", Selector.XPATH)acs = ActionChains(webdriver_api)change = acs.drag_and_drop(source, target)change.perform()The error I am getting is:File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/action_chains.py", line 201, in <lambda>self._driver.execute(Command.MOVE_TO, {'element': to_element.id}))AttributeError: 'module' object has no attribute 'execute' | The source and target need to be WebElement instances:source = webdriver_api.find_element_by_xpath("//span[text()='user1']")target = webdriver_api.find_element_by_xpath("//span[text()='user2']")acs = ActionChains(webdriver_api)change = acs.drag_and_drop(source, target)change.perform() |
Fill in values between given indices of 2d numpy array Given a numpy array,a = np.zeros((10,10))[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]And for a set of indices, e.g.:start = [0,1,2,3,4,4,3,2,1,0]end = [9,8,7,6,5,5,6,7,8,9]how do you get the "select" all the values/range between the start and end index and get the following:result = [[1, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 0, 0, 0, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 0, 0, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 0, 0, 0, 0, 0, 0, 1, 1], [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]]My goal is to 'select' the all the values between the each given indices of the columns.I know that using apply_along_axis can do the trick, but is there a better or more elegant solution?Any inputs are welcomed!! | You can use broadcasting -r = np.arange(10)[:,None]out = ((start <= r) & (r <= end)).astype(int)This would create an array of shape (10,len(start). 
Thus, if you need to actually fill some already initialized array filled_arr, do -m,n = out.shapefilled_arr[:m,:n] = outSample run -In [325]: start = [0,1,2,3,4,4,3,2,1,0] ...: end = [9,8,7,6,5,5,6,7,8,9] ...: In [326]: r = np.arange(10)[:,None]In [327]: ((start <= r) & (r <= end)).astype(int)Out[327]: array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 0, 0, 0, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 0, 0, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 0, 0, 0, 0, 0, 0, 1, 1], [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])If you meant to use this as a mask with 1s as the True ones, skip the conversion to int. Thus, (start <= r) & (r <= end) would be the mask. |
uploading files from python to GCS I'm able to list the buckets of GCS from Python boto.Able to copy files to GCS using gsutil command.Able to download files from GCS to compute instance using python API.I have followed steps from below document. https://cloud.google.com/storage/docs/xml-api/gspythonlibraryGetting below error in uploading files from instance to GCS.GSResponseError: 403 ForbiddenAccessDeniedAccess denied.Provided scope(s) are not authorized | That generally happens when you did not include the storage scopes in the access scopes when you set up the vm. Unfortunately you cannot change them after you start the vm, you will need to recreate it.https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam |
Gstreamer adding dynamic demuxer element chains We have multiple cameras that send muxed RTP and RTCP to the same port of a video processor. In this example I just use raw video frames to make it simple, later it will be H.264 that I hope to decode on the GPU.With gst-launch I get it to work:gst-launch-1.0 rtpbin name=rtpbin funnel name=frtp videotestsrc pattern=ball is-live=true ! "video/x-raw,framerate=10/1" ! rtpvrawpay ssrc=10 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! frtp.sink_0 rtpbin.send_rtcp_src_0 ! frtp.sink_1 frtp.src ! udpsink host=127.0.0.1 port=5000gst-launch-1.0 -v rtpssrcdemux name=rtpdemux udpsrc name=udpsrc port=5000 ! rtpdemux.sink rtpdemux.src_10 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)10, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96" ! rtpvrawdepay ! videoconvert ! autovideosinkLikewise if I in the python code use the same string and parse it with Gst.parse_launch it works, in the case I have set the ssrc at the sender since that is part of the pad name on the demuxer.But when I try to build the chain in python dynamically it fails. Any suggestions on how to solve this? Here is my test code:import giimport timegi.require_version('Gst', '1.0')from gi.repository import Gstclass Video(): def __init__(self, port=5000): Gst.init(None) self.port = port # UDP video mux stream (:5000) self.launch_pipline = [ 'rtpssrcdemux name=rtpdemux', f'udpsrc name=udpsrc port={port}', '! rtpdemux.sink', # 'rtpdemux.src_10' # '! application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)RAW,sampling=(string)RGBA,depth=(string)10,width=(string)320,height=(string)240,colorimetry=(string)BT601-5,payload=(int)96', # '! rtpvrawdepay name=depay ! videoconvert', # f'! 
appsink name=appsink{port} emit-signals=true sync=false max-buffers=0 drop=true' ] self.start_gst(self.launch_pipline) def start_gst(self, config): command = ' '.join(config) print(command) self.video_pipe = Gst.parse_launch(command) self.video_pipe.set_state(Gst.State.PLAYING) self.demuxer = self.video_pipe.get_by_name('rtpdemux') self.demuxer.connect("pad-added", self._demuxer_new_pad) bus = self.video_pipe.get_bus() bus.add_signal_watch() bus.connect("message::error", self._on_error) bus.connect("message::eos", self._on_eos) self.app_sink = {} #------ # app_sink = self.video_pipe.get_by_name(f'appsink{self.port}') # app_sink.connect('new-sample', self._new_frame) def _on_error(self, _, message): err, debug = message.parse_error() print("Error: %s" % err, debug) def _on_eos(self, _, message): print("EOF!") def _demuxer_new_pad(self, demuxer, pad): name = pad.get_name() print(f"---------\n{demuxer}\n{pad}\n{name}\n-------") is_rtcp = name.startswith("rtcp") sink = Gst.ElementFactory.make("appsink", f"appsink_{name}") sink.set_property("emit-signals", True) sink.set_property("sync", False) sink.set_property("max-buffers", 0) sink.set_property("drop", True) if not is_rtcp: sink.connect('new-sample', self._new_frame) self.video_pipe.add(sink) self.app_sink[name] = sink if is_rtcp: chain_pad = sink.get_static_pad("sink") else: # caps = Gst.caps_from_string("application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)RAW,sampling=(string)RGBA,depth=(string)10,width=(string)320,height=(string)240,colorimetry=(string)BT601-5,payload=(int)96") depay = Gst.ElementFactory.make("rtpvrawdepay") # depay_pad = depay.get_static_pad("sink") # depay_pad.set_caps(caps) # pad.set_caps(caps) convert = Gst.ElementFactory.make("videoconvert") self.video_pipe.add(depay) self.video_pipe.add(convert) chain_pad = depay.get_static_pad("sink") depay.link(convert) convert.link(sink) pad.link(chain_pad) def _new_frame(self, sink): print("in video_udp callback") sample = 
sink.emit('pull-sample') caps = sample.get_caps() name = sink.get_name() print(f"caps: {caps}, name: {name}") # height = caps.get_structure(0).get_value('height') # width = caps.get_structure(0).get_value('width') # got_buf = sample.get_buffer() is not None # print(f"{height}x{width} {got_buf}") return Gst.FlowReturn.OKif __name__ == '__main__': # Create the video object # Add port= if is necessary to use a different one video = Video(port=5000) time.sleep(1000) | So it turns out I missed that any new elements added to a pipeline needs to be set to playing. I had miss-read the documentation that indicated that all elements of a pipeline are in the same state. Here are the changes that make it work: def _demuxer_new_pad(self, demuxer, pad): name = pad.get_name() print(f"---------\n{demuxer}\n{pad}\n{name}\n-------") is_rtcp = name.startswith("rtcp") sink = Gst.ElementFactory.make("appsink", f"appsink_{name}") sink.set_property("emit-signals", True) sink.set_property("sync", False) sink.set_property("max-buffers", 0) sink.set_property("drop", True) if not is_rtcp: sink.connect('new-sample', self._new_frame) self.video_pipe.add(sink) self.app_sink[name] = sink if is_rtcp: chain_pad = sink.get_static_pad("sink") pad.link(chain_pad) sink.set_state(Gst.State.PLAYING) else: caps = Gst.caps_from_string("application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)RAW,sampling=(string)RGBA,depth=(string)10,width=(string)320,height=(string)240,colorimetry=(string)BT601-5,payload=(int)96") depay = Gst.ElementFactory.make("rtpvrawdepay") pad.set_caps(caps) # gives warning, but ok convert = Gst.ElementFactory.make("videoconvert") self.video_pipe.add(depay) self.video_pipe.add(convert) chain_pad = depay.get_static_pad("sink") depay.link(convert) convert.link(sink) pad.link(chain_pad) depay.set_state(Gst.State.PLAYING) convert.set_state(Gst.State.PLAYING) sink.set_state(Gst.State.PLAYING) |
how to mark the x axis more than 8 points in pyplot polar I want to mark my plot as 24 hours and i need to mark every hour.I tried the following code, but the plot only divide into 8 and only mark upto 7.theta = np.arange(0, 360 + 360 / 144, 360 / 144) * np.pi / 180fig1 = plt.figure()ax1 = fig1.add_axes([0.1, 0.1, 0.8, 0.8], polar=True)ax1.set_ylim(0, 4000)ax1.set_yticks(device_dict[name])ax1.set_xticklabels(range(24))ax1.plot(theta, inter_data)how can i make it mark all 24 | You can set everything between 0 and 2*np.pi like : fig1 = plt.figure()ax1 = fig1.add_axes([0.1, 0.1, 0.8, 0.8], polar=True) ax1.set_xlim((0,2*np.pi))tick_array=np.arange(0,2*np.pi+2*np.pi/24,2*np.pi/24)label_array=np.arange(1,25)ax1.set_xticks(tick_array)ax1.set_xticklabels(label_array)Is that what you meant ? |
Pandas - Merge rows on column A, taking first values from each column B, C etc I have a dataframe, with recordings of statistics in multiple columns.I have a list of the column names: stat_columns = ['Height', 'Speed'].I want to combine the data to get one row per id.The data comes sorted with the newest records on the top. I want the most recent data, so I must use the first value of each column, by id.My dataframe looks like this:Index id Height Speed0 100007 8.31 100007 54 2 100007 8.63 100007 52 4 100035 39 5 100014 44 6 100035 5.6And I want it to look like this:Index id Height Speed0 100007 54 8.31 100014 44 2 100035 39 5.6I have tried a simple groupby myself:df_stats = df_path.groupby(['id'], as_index=False).first()But this seems to only give me a row with the first statistic found. | For me your solution working, maybe is necessary replace empty values to NaNs:df_stats = df_path.replace('',np.nan).groupby('id', as_index=False).first()print (df_stats) id Index Height Speed0 100007 0 54.0 8.31 100014 5 44.0 NaN2 100035 4 39.0 5.6 |
Eclipse can't find library for compiled executable In Eclipse 4.10.0 I'm working on a Python script that calls a C++/CUDA executable (that I wrote and compiled myself too with Nsight) at one point via subprocess.call(). This causes an error message: error while loading shared libraries: libcufft.so.10.0: cannot open shared object file: No such file or directoryI had the same problem when running the file in an Ubuntu terminal until I updated ~/.bashrc with: export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH, but how do I apply that in Eclipse? I tried adding both /usr/local/cuda/lib64/libcufft.so.10.0 and /usr/local/cuda-10.0/lib64/libcufft.so.10.0 to the Eclipse project under Project properties->Resource->Linked Resources->Path Variables, but the error persists. | I found the answer here: In the Python project's run configuration, go to the Environment tab and add the path variable (in my case LD_LIBRARY_PATH) with the value of the directory of the library (in my case /usr/local/cuda/lib64). |
Pycharm debug watch: Can I show directly an image Similar to the functionality available in Visual Studio I'd like to have a look at some of the variables in my code as images using PyCharm Community Edition 2016.1.4. In my case it's a mask that is applied to an image later on, that I want to visually check during debug when hitting a breakpoint.So far I have tried adding the mask itself to the watch window, but then I only see the numerical values of the array. Then I tried adding an expression to the watch window:cv2.imshow('mask', mask)However, this then freezes the other windows when hitting a breakpoint and does not display the variable, so I added a second expression just afterwards:cv2.imshow('mask', mask); cv2.waitKey(30)This does the trick as it allows to actually display the content of the variable when hitting the breakpoint in a separate window called 'mask'. Unfortunately it still causes a freeze of the window showing the variable. Has someone an idea how to get around this issue? | I had similar problems, so I've just created OpenCV Image Viewer Plugin, which works as you expect. You can install it to any JetBrains IDE, which support Python (directly or via plugin).https://plugins.jetbrains.com/plugin/14371-opencv-image-viewer |
How to call an API in Flask Restplus? I'm trying to figure out how I can call an API using Flask-RESTPlus (normally I'd just use an API key, as was always possible - let's say the easiest example is weather). I know how I can do it in Flask, but have no idea how to do it in RESTPlus. There is tons of documentation, but mostly about working with a local database. I don't understand how I can call a real weather API inside RESTPlus. If you can explain it to me or provide an example, I'd appreciate that very much. Also I have 2 files - one for the endpoints, a 2nd for the calls - and I can't understand how I can invoke the call or connect those two files. Thanks in advance! | The Flask-RESTPlus package deals with creating and exposing your own APIs. If you need to access external APIs from within your application, you are supposed to use the Requests package - a resource's method is ordinary Python code, so you can call requests.get() from inside it just like anywhere else.
boolean indexing to store a column value as a variable in python Let's say I have a CSV file which reads Student_Name GradeMary 75John 65Stella 90I'd like to store Stella's grade as a variable. My current code looks like:import pandas as pdstudent_grades = pd.read_csv('.../Term2grades.csv')x = student_grades.loc[student_grades['Student_Name'] == "Stella", ['Grade']]print(x)The output of this code is: Grade2 90However, I only want to get 90 so that I can use it later (if x > 85 etc.)Thanks for the help. | Access the underlying numpy array and take its first element (assuming you have a single element):student_grades.loc[student_grades['Student_Name'] == "Stella", 'Grade'].values[0]Out: 90You can also use iat or iloc on the returning Series:student_grades.loc[student_grades['Student_Name'] == "Stella", 'Grade'].iloc[0]Out: 90 |
how to add matrices as values in a dictionary? I have a dictionary whose values are matrices and whose keys are the most frequent words in the train file. I have a test file; I have to check whether the words in each of its lines are in the dictionary, get their values (which are matrices), add the matrices together and then divide them by the number of words. The answer should be one matrix. I tried "sum(val)" but it doesn't add them together. How can I do it? (The file contains a Persian sentence, a tab and then an English word). The dictionary is built as below:keys = [p[0] for p in freq.most_common(4)] array = numpy.array([[wordVector[0,:]] , [wordVector[1,:]], [wordVector[2,:]], [wordVector[3,:]]])dic = dict(zip(keys, zip(array)))#print (dic)# test partwith open ("test2.txt", encoding = "utf-8") as f2: for line in f2: line = line.split("\t") lin = line[0].split() for i in lin: for key, val in dic.items(): if i == key: print ((sum(val))/ | Because the dict is built with zip(array), each val is a 1-tuple containing a single numpy array, so sum(val) just returns that one array. Build the dict with dict(zip(keys, array)) instead, collect the matched arrays for a line into a list, and then sum that list: on equally-shaped numpy arrays the built-in sum() adds them element-wise, so sum(collected) / len(collected) gives the averaged matrix.
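Concretely, on a list of equally-shaped numpy arrays the built-in sum() adds them element-wise, so the averaged matrix is one line:

```python
import numpy as np

# Two matched word matrices (toy values).
vals = [np.array([[1., 2.], [3., 4.]]),
        np.array([[5., 6.], [7., 8.]])]

avg = sum(vals) / len(vals)   # element-wise sum, then divide by the count
# equivalently: np.mean(vals, axis=0)
print(avg)
```

Here avg is [[3., 4.], [5., 6.]] — one matrix, as the question asks for.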
Django Rest creating Nested-Objects (ManyToMany) I looked for an answer to this question specifically for Django Rest, but I haven't found one anywhere, although I think a lot of people have this issue. I'm trying to create an object with multiple nested relationships but something is keeping this from happening. Here are my models for reference:class UserProfile(models.Model): user = models.OneToOneField(User, unique=True, null=True) tmp_password = models.CharField(max_length=32) photo = models.ImageField(upload_to='media/', blank=True, null=True) likes = models.IntegerField(blank=True, null=True) dislikes = models.IntegerField(blank=True, null=True) def __unicode__(self): return unicode(self.user.username)class Item(models.Model):"""Item Object Class""" id = models.AutoField(primary_key=True) name = models.CharField(max_length=125, blank=True) price = models.FloatField(default=0, blank=True) rating = models.IntegerField(default=0, blank=True) description = models.TextField(max_length=300, blank=True) photo = models.ImageField(upload_to="media/", blank=True) barcode = models.CharField(max_length=20, blank=True) photo_url = models.URLField(max_length=200, blank=True) item_url = models.URLField(max_length=200, blank=True) def __unicode__(self): return unicode(self.name)class Favorite(models.Model): user = models.OneToOneField(User, null=True) items = models.ManyToManyField(Item) def __unicode__(self): return unicode(self.user.username) def admin_names(self): return '\n'.join([a.name for a in self.items.all()])And here are my serializers:class ItemSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = Item fields = ('id', 'name', 'price', 'description', 'rating', 'photo', 'barcode', 'photo_url','item_url' )class FavoriteSerializer(serializers.ModelSerializer): class Meta: model = Favorite exclude = ('id', 'user')class UserProfileSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = UserProfile fields = ('likes', 'dislikes', 'photo', 
'tmp_password')class UserSerializer(serializers.HyperlinkedModelSerializer): userprofile = UserProfileSerializer() favorite = FavoriteSerializer() class Meta: model = User fields = ( 'id', 'username', 'url', 'email', 'is_staff', 'password', 'userprofile', 'favorite' ) def create(self, validated_data): profile_data = validated_data.pop('userprofile') favorites_data = validated_data.pop('favorite') user = User.objects.create_user(**validated_data) user_profile = UserProfile.objects.create(user=user, **profile_data) favorite = Favorite(user=user) favorite.save() print favorite.items for item in favorites_data: favorite.items.add(item) print favorite.items return userWhat I am having trouble with is the create() method on UserSerializer. What's happening is I can't .add() the data from favorites_data to the favorite object. I get an error saying invalid literal for int() with base 10: 'items'. I guess this makes sense, but if I try this instead of using the for loop:favorite.items.add(**favorites_data)I just get an error saying add() got an unexpected keyword argument 'items'. Finally, If I try this: favorite.items.add(favorites_data)I just get this error: unhashable type: 'OrderedDict'What am I doing wrong in this approach? Obviously, favorites_data exist, but I'm not inserting it properly. Thanks for any help! | I think favorite.items.add expects you to pass in a single instance of an Item, so you should replace this:for item in favorites_data: favorite.items.add(item)With this:for key in favorites_data: for item in favorites_data[key]: favorite.items.add(item) |
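The root cause is worth spelling out: iterating over a dict yields its keys, and after validation favorites_data is an OrderedDict whose 'items' key holds the list of item payloads, so the loop handed the string 'items' to add(). A plain-Python sketch of the shape problem (the item payloads here are made-up for illustration; no Django needed to see it):

```python
# Hypothetical shape of validated_data['favorite'] after serializer
# validation: a dict whose 'items' key holds the list of item payloads.
favorites_data = {'items': [{'id': 1, 'name': 'cola'}, {'id': 2, 'name': 'chips'}]}

# Iterating a dict yields its KEYS, not the nested items:
keys = [k for k in favorites_data]  # ['items'] -- the string that broke add()

# Iterating the value under each key reaches the actual item dicts:
items = [item for key in favorites_data for item in favorites_data[key]]
names = [item['name'] for item in items]  # ['cola', 'chips']
```

This is exactly why the nested loop in the answer works: the outer loop walks the keys, the inner loop walks the list stored under each key.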
Python: is using decorator to change method arguments a bad thing? I implemented a decorator to change a class method's arguments in this way:def some_decorator(class_method): def wrapper(self, *args, **kargs): if self._current_view = self.WEAPON: items = self._weapons elif self._current_view = self.OTHER: items = self._others for item_id, item in items.iteritems(): class_method(self, item, *args, **kargs) items_to_remove = [] for item_id, each_item in items.iteritems: if each_item.out_dated(): item_to_remove.append(item_id) for item_id in items_to_remove: del items[item_id] return wrapperclass SomeClass(object): @some_decorator def update_status(self, item, status): item.update_status(status) @some_decorator def refresh(self, item): item.refresh()The main purpose of the some_decorator is automatically call method on every item of SomeClass then do some cleaning. Since there may be many methods I need to call on the items, I don't want to repeatedly write the for loop and the clean_items code.Without the decorator, the SomeClass will be like:class SomeClass(object): def update_status(self, status): if self._current_view = self.WEAPON: items = self._weapons elif self._current_view = self.OTHER: items = self._others for item_id, item in items.iteritems(): item.update_status(status) items_to_remove = [] for item_id, each_item in items.iteritems: if each_item.out_dated(): item_to_remove.append(item_id) for item_id in items_to_remove: del items[item_id] @some_decorator def refresh(self): if self._current_view = self.WEAPON: items = self._weapons elif self._current_view = self.OTHER: items = self._others for item_id, item in items.iteritems(): item.refresh() items_to_remove = [] for item_id, each_item in items.iteritems: if each_item.out_dated(): item_to_remove.append(item_id) for item_id in items_to_remove: del items[item_id]When I actually the methods, I will do:a = SomeClass()a.update_status(1)a.refresh()Here is the problem, the parameters I pass to update_status is 
different from the arguments in the declaration of update_status; the item argument is missing because it is passed automatically by some_decorator. I wonder if this is a bad thing, since it may cause confusion when other programmers see it. If it's indeed a bad pattern, is there another pattern that can do the same thing without causing confusion? | Yeah, I think in this case, more explicit is better. Why not leave the decorators off and just use for-loops in the methods themselves:def update_status(self, status): for item in self.items: item.update_status(status)def refresh(self): for item in self.items: item.refresh()
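If the clean-up logic still needs to be shared, a private helper method keeps the public signatures honest without any decorator magic. A runnable sketch of that pattern; the Item class here is a hypothetical stand-in for the question's weapon/other items:

```python
class Item:
    """Stand-in item: tracks a status and whether it is outdated."""
    def __init__(self, outdated=False):
        self.status = None
        self.outdated = outdated

    def update_status(self, status):
        self.status = status

    def out_dated(self):
        return self.outdated


class SomeClass:
    def __init__(self, items):
        self._items = dict(enumerate(items))

    def _for_each_item(self, action):
        """Apply action to every item, then drop the outdated ones."""
        for item in self._items.values():
            action(item)
        self._items = {k: v for k, v in self._items.items()
                       if not v.out_dated()}

    def update_status(self, status):
        # Signature matches the call site: no hidden 'item' argument.
        self._for_each_item(lambda item: item.update_status(status))


obj = SomeClass([Item(), Item(outdated=True)])
obj.update_status(1)
```

Callers see update_status(status), exactly the arguments they pass, and the shared loop-and-prune logic lives in one place.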
How to export queryset in Django 1.7 to xls file? I using Django 1.7.1 with Python 3.4. I would like to export search results to Excel file. I have this function in view.pydef car_list(request): page = request.GET.get('page') search = request.GET.get('search') if search is None: cars= Car.objects.filter(plate__isnull = False ).order_by('-created_date') else: cars= Car.objects.filter(plate__contains = search ).order_by('-created_date') paginator = Paginator(cars, 100) # Show 100 contacts per page try: cars= paginator.page(page) except PageNotAnInteger: #if page is not an integer, deliver first page cars= paginator.page(1) except EmptyPage: #if page is out of the range, deliver last page cars= paginator.page(paginator.num_pages) if request.REQUEST.get('excel'): # excel button clicked return download_workbook(request, cars)return render_to_response('app/list.html', {'cars': cars}, context_instance=RequestContext(request))from .utils import queryset_to_workbookdef download_workbook(request, cars): queryset = cars columns = ( 'plante_number', 'make', 'model', 'year') workbook = queryset_to_workbook(queryset, columns) response = HttpResponse(mimetype='application/vnd.ms-excel') response['Content-Disposition'] = 'attachment; filename="export.xls"' workbook.save(response) return responseand to be honest I don't know what to do in template to export it.i have this button in my template <input type="submit" name="excel" value="Export to Excel" />and when I use it i get:TypeError at /__init__() got an unexpected keyword argument 'mimetype'Request Method: GETRequest URL: http://127.0.0.1:8000/?search=&excel=Export+to+ExcelDjango Version: 1.7.1Exception Type: TypeErrorException Value: __init__() got an unexpected keyword argument 'mimetype'Exception Location: C:\Python34\lib\site-packages\django\http\response.py in __init__, line 318Python Executable: C:\Python34\python.exePython Version: 3.4.2How can I fix this error? 
Please give me some advice. Thanks | Passing mimetype to HttpResponse was deprecated and has been removed in Django 1.7. You have to use content_type instead, e.g. HttpResponse(content_type='application/vnd.ms-excel').
Git - Should Pipfile.lock be committed to version control? When two developers are working on a project with different operating systems, the Pipfile.lock is different (especially the part inside host-environment-markers). For PHP, most people recommend committing the composer.lock file. Do we have to do the same for Python? | Short answer - yes! The lock file tells pipenv exactly which version of each dependency needs to be installed, so you get consistency across all machines. // update: same question on GitHub
Pandas: Adding column via arithmetic on all matching indexes Edit: Added a row with no matched index to demonstrate expected behaviorI have the following two DataFrames:requests: requestsasn pop country1 1 us 100 br 50 2 br 200 3 hk 150 4 uk 1002 1 us 300...traffic: total capacityasn pop1 1 53 1000 2 15 1000 3 103 100002 1 254 10000...I wish to add a new column to the requests DataFrame with a value equal to traffic["total"] / traffic["capacity"], aligned on the two matching indexes.I tried the following:>>>requests["network"] = traffic["total"] / traffic["capacity"]>>>requests requests networkasn pop country1 1 us 100 NaN br 50 NaN 2 br 200 NaN 3 hk 150 NaN 4 uk 100 NaN2 1 us 300 NaN...When all three indexes are available, this has worked for me before. However in this instance I only have two indexes, so it seems to fail.Expected Output>>>requests requests networkasn pop country1 1 us 100 0.053 br 50 0.053 2 br 200 0.015 3 hk 150 0.0103 4 uk 100 NaN2 1 us 300 0.0254... | There is problem your MultiIndex not matched, so get NaNs. solution is add reindex.requests['network'] = traffic["total"].div(traffic["capacity"]) .reindex(requests.index, method='ffill')print (requests) requests networkasn pop country 1 1 us 100 0.0530 br 50 0.0530 2 br 200 0.0150 3 hk 150 0.01032 1 us 300 0.0254Old solution with reset_index + set_index:requests = requests.reset_index(level=2)requests['network'] = traffic["total"].div(traffic["capacity"])requests = requests.set_index('country', append=True)print (requests) requests networkasn pop country 1 1 us 100 0.0530 br 50 0.0530 2 br 200 0.0150 3 hk 150 0.01032 1 us 300 0.0254 |
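The reindex with method='ffill' shown in the answer depends on index ordering. An alternative sketch (index level and column names taken from the question, numbers made up to match its sample) aligns by dropping the extra country level before the lookup, which naturally leaves NaN where no (asn, pop) pair exists in traffic:

```python
import pandas as pd

requests = pd.DataFrame(
    {"requests": [100, 50, 200, 150, 100, 300]},
    index=pd.MultiIndex.from_tuples(
        [(1, 1, "us"), (1, 1, "br"), (1, 2, "br"),
         (1, 3, "hk"), (1, 4, "uk"), (2, 1, "us")],
        names=["asn", "pop", "country"]),
)
traffic = pd.DataFrame(
    {"total": [53, 15, 103, 254], "capacity": [1000, 1000, 10000, 10000]},
    index=pd.MultiIndex.from_tuples(
        [(1, 1), (1, 2), (1, 3), (2, 1)], names=["asn", "pop"]),
)

ratio = traffic["total"] / traffic["capacity"]
# Drop the level traffic does not have, look the ratio up per row,
# and assign positionally; unmatched (asn, pop) pairs come back as NaN.
requests["network"] = ratio.reindex(requests.index.droplevel("country")).values
```

The (1, 4, 'uk') row has no traffic entry, so it gets NaN, matching the expected output in the question.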
Error importing mnist dataset from tensorflow and ssl certificate error anaconda I have no idea what the problem is. I am trying to import the mnist data set from the tensorflow examples and I am finding it very difficult to proceed.So far: I saw the SSL certifications error, so I tried the following:1. pip remove certified and pip install certified2. read a lot to fix the SSL error, all in vain.I think the issue is with the importing the mnist library as It says:from tensorflow.examples.tutorials.mnist import input_datamnist = input_data.read_data_sets("tmp/data/", one_hot=True)WARNING:tensorflow:From <ipython-input-3-7da058911bcf>:1: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.Instructions for updating:Please use alternatives such as official/mnist/dataset.py from tensorflow/models.WARNING:tensorflow:From /home/prashanth/Downloads/[/media/sf_H_DRIVE/UBUNTU/Anaconda]/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.Instructions for updating:Please write your own downloading logic.WARNING:tensorflow:From /home/prashanth/Downloads/[/media/sf_H_DRIVE/UBUNTU/Anaconda]/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:219: retry.<locals>.wrap.<locals>.wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.Instructions for updating:Please use urllib or similar directly.Config of the python I am using:I am using anaconda and I have pip installed tensorflow. Sorry if the question isn't framed properly. This is my first question on stack, any links to solution is also enough. Please Help me! 
:) Thank you Errors | I had the same issue and resolved it by running /Applications/Python 3.6/Install Certificates.command... just double-click that :-) FYI: Python 3.6 on MacOSX ships with no certificates at all (see the release notes), so it cannot verify the SSL certificate from GitHub's servers when trying to download housing.tgz. The solution is to: 1. read /Applications/Python 3.6/ReadMe.rtf 2. the ReadMe will have you run /Applications/Python 3.6/Install Certificates.command, which installs the certificates. Thanks to ageron@github: https://github.com/ageron/handson-ml/issues/46
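If installing proper certificates is not an option (e.g. a locked-down box), a commonly used but insecure workaround is to disable verification process-wide before the download runs; this affects every urllib-based HTTPS request in the process, so treat it as a last resort:

```python
import ssl

# INSECURE workaround: disable certificate verification for urllib-based
# downloads (the old input_data.read_data_sets helper uses urllib internally).
# Prefer installing real certificates (Install Certificates.command / certifi).
ssl._create_default_https_context = ssl._create_unverified_context
```

After this line, the mnist download should proceed without the SSL verification error, at the cost of not authenticating the server.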
How to use "incorrect" JSON in python3 I have a JSON file in the following format - Note the characters after 1 and 2 (etc) represent strings written without double quotes{ "Apparel": { "XX": { "1": YY, "2": ZZ }, "TT": { "1":TTT, "2":TTT, "3": TTT, "4": TTT }, "XXX": { "1":XXX, "2":XXX }, "RRR": { "1":RRR, "2":RRR }, "AAA": { "1":AAA, "2":AAA, "3":AAA }, }....And so on.Now I know that the file is not correctly formatted (the file is being kept this way because of design or something idk) and using it with the standard json module in Python3 will give a decode error but I've been told to use the file as it is. Which means any problems, I'll have to sort in my code. I need to pick the values after 1 from every heading, then values from 2 from every heading and so on.Currently I'm using this code to read the file - import jsonwith open("brand_config.json") as json_file: json_data = json.load(json_file) test = (json_data["apparel"]["biba"])print (test)This code gives this error -Traceback (most recent call last): File "reader.py", line 4, in <module> json_data = json.load(json_file) File "/usr/lib/python3.5/json/__init__.py", line 268, in load parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File "/usr/lib/python3.5/json/__init__.py", line 319, in loads return _default_decoder.decode(s) File "/usr/lib/python3.5/json/decoder.py", line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from Nonejson.decoder.JSONDecodeError: Expecting value: line 4 column 9 (char 36)How do I read the required values without changing anything in the JSON file. 
| I understand from the question that the values of your JSON are not surrounded by quotation marks.I wrote the following script that parses that specific file from the question:#!/usr/bin/env python3from json import dumps# Reads THAT SPECIFIC MALFORMATTED JSON, SHOULD NOT BE USEDdef parse_json(filename): j = {} with open(filename, 'r') as json_file: lines = [line.strip() for line in json_file.readlines()] level = 0 keys = [] for line in lines: # increase a level if '{' in line: level += 1 # append proper key if ':' in line: keys.append(line.split(':')[0].replace('"', '').strip()) if level == 2: j[keys[0]] = {} elif level == 3: j[keys[0]][keys[1]] = {} # decrease a level, remove key elif '}' in line: keys = keys[:-1] level -= 1 # add value else: if level == 3 and line: k, v = line.split(':') k = k.replace('"', '').strip() v = v.strip()[:-1] j[keys[0]][keys[1]][k] = v return jbrand_config = parse_json('brand_config.json')print(dumps(brand_config, indent=4, sort_keys=True))Which creates a python dictionary:{ "Apparel": { "AAA": { "1": "AAA", "2": "AAA", "3": "AA" }, "RRR": { "1": "RRR", "2": "RR" }, "TT": { "1": "TTT", "2": "TTT", "3": "TTT", "4": "TT" }, "XX": { "1": "YY", "2": "Z" }, "XXX": { "1": "XXX", "2": "XX" } }}Given what you provided in the question.EDIT: explanation asked for in the commentskeys is a list used to store the keys that are currently being used in the json. For example, { "Apparel": {}} will mean keys=["Apparel"], and { "Apparel": {"AAA": XXX }} will mean keys=["Apparel", "AAA"].The function processes the text file one line at a timeCreate an empty dictionary (j).Whenever {, level is increased by 1. If : was present in the line, split it and use the first string as a dictionary key after removing the quotation marks. 
Create a new dictionary associated with that key. If no { is present but : is, split the line and use the left value as the key and the right value as the value. If } is present, decrease level by 1 and remove the last key. The final line just pretty-prints the result. |
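An alternative to a hand-rolled parser is a regex pre-pass that quotes the bare values, after which the standard json module can load the file as-is. This assumes the unquoted values are simple identifiers (no commas, braces, or spaces inside them), which matches the sample in the question:

```python
import json
import re

raw = '{ "Apparel": { "XX": { "1": YY, "2": ZZ }, "TT": { "1": TTT, "2": TTT } } }'

# Wrap any bare identifier that appears as a value (after a colon,
# before a comma or closing brace) in double quotes.
fixed = re.sub(r':\s*([A-Za-z_][A-Za-z0-9_]*)\s*([,}])', r': "\1"\2', raw)
data = json.loads(fixed)
```

Quoted keys like "Apparel" are untouched because they are never immediately preceded by a colon-plus-bare-identifier pattern, and already-quoted values fail the first-character match.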
polyfit refining: setting polynomial to be always possitive I am trying to fit a polynomial to my data, e.g.import scipy as spx = [1,6,9,17,23,28]y = [6.1, 7.52324, 5.71, 5.86105, 6.3, 5.2]and say I know the degree of polynomial (e.g.: 3), then I just use scipy.polyfit method to get the polynomial of a given degree:+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++fittedModelFunction = sp.polyfit(x, y, 3)func = sp.poly1d(fittedModelFunction) ++++++++++++++++++++++++++++++QUESTIONS: ++++++++++++++++++++++++++++++1) How can I tell in addition that the resulting function func must be always positive (i.e. f(x) >= 0 for any x)? 2) How can I further define a constraint (e.g. number of (local) min and max points, etc.) in order to get a better fitting?Is there smth like this:http://mail.scipy.org/pipermail/scipy-user/2007-July/013138.htmlbut more accurate? | Always PositveI haven't been able to find a scipy reference that determines if a function is positive-definite, but an indirect way would be to find the all the roots - Scipy Roots - of the function and inspect the limits near those roots. There are a few cases to consider:No roots at allPick any x and evaluate the function. Since the function does not cross the x-axis because of a lack of roots, any positive result will indicate the function is positive!Finite number of rootsThis is probably the most likely case. You would have to inspect the limits before and after each root - Scipy Limits. You would have to specify your own minimum acceptable delta for the limit however. I haven't seen a 2-sided limit method provided by Scipy, but it looks simple enough to make your own.from sympy import limit// f: function, v: variable to limit, p: point, d: delta// returns two limit valuesdef twoSidedLimit(f, v, p, d): return limit(f, v, p-d), limit(f, v, p+d)Infinite rootsI don't think that polyfit would generate an oscillating function, but this is something to consider. 
I don't know how to handle this with the method I have already offered... Um, hope it does not happen? Constraints: The only built-in support for constraints seems to be in SciPy's optimize library. A crude way to enforce constraints on polyfit output would be to take the function from polyfit, evaluate it over a vector of x values, and check which values violate the constraint. If you use filter, map, or lambda it may be slow with large vectors, since Python's filter makes a copy of the list/vector being filtered. I can't really help beyond that.
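For the concrete fit in the question, a cheap sanity check is to sample the fitted polynomial densely over the interval you care about. Note the caveats: this only verifies positivity on the sampled range, not for all x, and any odd-degree polynomial is guaranteed to go negative somewhere outside a bounded interval:

```python
import numpy as np

x = [1, 6, 9, 17, 23, 28]
y = [6.1, 7.52324, 5.71, 5.86105, 6.3, 5.2]
p = np.poly1d(np.polyfit(x, y, 3))

def positive_on(poly, lo, hi, n=10000):
    """Sample-based check that poly stays strictly positive on [lo, hi]."""
    xs = np.linspace(lo, hi, n)
    return bool(np.all(poly(xs) > 0))
```

For a genuine proof on an interval you would instead locate the real roots of the polynomial (np.poly1d exposes them as p.r) and confirm none fall inside the interval while one sample inside it is positive.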
modifying part of a list in place using list comprehensions in python I have a list that looks like test = ['A','B','C','D D','E E','F F']I would like test to become the following (that is, the spaces removed)test = ['A', 'B', 'C', 'DD', 'EE', 'FF']I used a list comprehension in Python to achieve this:>>> [re.sub(' ','',i) for i in test]['A', 'B', 'C', 'DD', 'EE', 'FF']My question is - what if I explicitly DO NOT want re.sub(' ','',i) to run on the first three elements of my list? I only want the re.sub function to run on 'DD','EE', and 'FF'.Is this way efficient? I understand a list comprehension takes up memory because Python makes a copy.test2[3:] = [re.sub(' ','',i) for i in test[3:]]Or should I just loop through the values of test that I want to modify like this:for i in range(3,len(test)): print i test[i] = re.sub(' ','',test[i]) | First of all, it sounds like you're optimizing prematurely.Secondly, you can express your requirements with a single list comprehension:In [5]: test = ['A','B','C','D D','E E','F F']In [6]: [t if i < 3 else re.sub(' ', '', t) for (i, t) in enumerate(test)]Out[6]: ['A', 'B', 'C', 'DD', 'EE', 'FF']Finally, my advice would be to focus on correctness first, then on readability. Once you've achieved those, profile the code to see where the bottlenecks are, and only then optimize for performance. |
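Slice assignment also does exactly what the question describes: rebuild only the tail and splice it back in place, so the temporary list holds just the last three elements. Since the pattern is a literal space, str.replace is enough and re is not needed:

```python
test = ['A', 'B', 'C', 'D D', 'E E', 'F F']

# Rebuild only elements 3..end and assign them back into the same slots;
# the temporary list comprehension copies only len(test) - 3 elements.
test[3:] = [s.replace(' ', '') for s in test[3:]]
```

The first three elements are never touched, and the original list object is modified in place rather than replaced.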
How to transform these output into a matrix format I wrote a code that displays a 4x4 tkinter entry widget. So when I input the values in each entry boxes and after pressing the "Matrix Form" button to print the output, it prints like this in the shell:12345678910111213141516What I would like to achieve is to print like this format:[[1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]]Below is my code:from tkinter import *import numpy as npimport tkinter.fontimport tkinter.ttk as ttkfourbyfour = Tk()fourbyfour.wm_geometry("420x400+0+0")fourbyfour.wm_title("4X4 Matrix Calc")fourbyfour.focus_set()fourbyfour.grab_set()myFont = tkinter.font.Font(family = 'Helvetica' , size = 12, weight = 'bold')def getmatrix(): for row in rows: for col in row: m = col.get() print(m)rows = []for i in range(4): cols = [] for j in range(4): e = Entry(fourbyfour,width=10,font=myFont,bd=5) e.grid(row=i, column=j, sticky=NSEW) cols.append(e) rows.append(cols)Calculate_2 = Button(fourbyfour, text = "Matrix Form", font = myFont, bd=4, command = getmatrix, bg = 'light blue', height = 2 ,width = 8)Calculate_2.grid(row=5, column=2) | You can use the command np.reshape, for example, for your case np.reshape(YOUR_ARRAY, (4, 4))would get you the desired output |
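Note that np.reshape only helps once the entry values are collected into a flat list first; printing each col.get() inside the loop is what produces the one-value-per-line output. A sketch of the grouping step with plain lists (the tkinter widgets are replaced by ready-made strings here, since Entry.get() returns strings):

```python
# Stand-ins for the 16 Entry.get() results, in row-major order.
flat = [str(n) for n in range(1, 17)]

# Group the flat list into rows of four, mirroring the 4x4 grid.
matrix = [flat[i:i + 4] for i in range(0, len(flat), 4)]
```

Inside getmatrix the same idea is: append every col.get() to a list, build the rows-of-four list, and print that once after both loops finish.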
Best way to parse sections of json in python3 to separate items in list First off, I'm having trouble Googling this question since I don't know all of the terminology (so really, giving me the proper terms to use in my Google search would be just as useful in this question).I have some JSON that I need to parse in python, put each JSON string in a list after its been parsed(List, not array for Python correct?) and then I am going to go through that list to push the JSON content back to my source. So as of now, I can parse out a section of JSON that I want, but I am not sure how to then get down to just printing the section between brackets. For example, I want to get each section (brackets) in this block of code to be in a separate JSON line:{"components": [ { "self": "MY URL", "id": "ID", "name": "NAME", "description": "THIS IS DESC", "isAssigneeTypeValid": false }, { "self": "MY URL 2", "id": "ID", "name": "name", "isAssigneeTypeValid": false }, { "self": "URL 3", "id": "ID", "name": "NAME 3", "description": "DESC", "isAssigneeTypeValid": false }]}There is a lot more JSON in my file, but using this, I can get it down to just returning the text above.datas = json.loads(data)print(datas['components'])So my question is how would I just print one block? Or access the first 'self' section? 
| Here's how you can iterate over that data, converting each dict in the "components" list back into JSON strings:import jsondata = '''{ "components": [ { "self": "MY URL", "id": "ID", "name": "NAME", "description": "THIS IS DESC", "isAssigneeTypeValid": false }, { "self": "MY URL 2", "id": "ID", "name": "name", "isAssigneeTypeValid": false }, { "self": "URL 3", "id": "ID", "name": "NAME 3", "description": "DESC", "isAssigneeTypeValid": false } ]}'''datas = json.loads(data)for d in datas['components']: print(json.dumps(d))output{"self": "MY URL", "description": "THIS IS DESC", "id": "ID", "isAssigneeTypeValid": false, "name": "NAME"}{"self": "MY URL 2", "id": "ID", "isAssigneeTypeValid": false, "name": "name"}{"self": "URL 3", "description": "DESC", "id": "ID", "isAssigneeTypeValid": false, "name": "NAME 3"} |
Python deep nesting factory functions Working through "Learning Python" came across factory function. This textbook example works:def maker(N): def action(X): return X ** N return action>>> maker(2)<function action at 0x7f9087f008c0>>>> o = maker(2)>>> o(3)8>>> maker(2)<function action at 0x7f9087f00230>>>> maker(2)(3)8However when going deeper another level I have no idea how to call it:>>> def superfunc(X):... def func(Y):... def subfunc(Z):... return X + Y + Z... return func... >>> superfunc()Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: superfunc() takes exactly 1 argument (0 given)>>> superfunc(1)<function func at 0x7f9087f09500>>>> superfunc(1)(2)>>> superfunc(1)(2)(3)Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: 'NoneType' object is not callable>>> superfunc(1)(2)>>>Why doesn't superfunc(1)(2)(3) work while maker(2)(3) does? While this kind of nesting certainly doesn't look like a good, usable code to me, Python still accepts it as valid, so I'm curious as to how this can be called. | You get a TypeError because function func doesn't return anything (thus its return is NoneType). It should return subfunc:>>> def superfunc(X):... def func(Y):... def subfunc(Z):... return X + Y + Z... return subfunc... return func... |
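For reference, the fully corrected three-level version, with every level returning the next function so that each call peels off one argument:

```python
def superfunc(X):
    def func(Y):
        def subfunc(Z):
            return X + Y + Z
        return subfunc  # without this, func falls off the end and returns None
    return func

result = superfunc(1)(2)(3)  # each call supplies one argument
```

superfunc(1) returns func (with X bound to 1), superfunc(1)(2) returns subfunc (with Y bound to 2), and only the third call actually computes the sum. This is the same currying idea as maker, just one level deeper.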
Sublime Text 2 plugin won't show up in Command Platte I started writing a Plugin for Sublime Text 2.I created a new folder in "Packages/RailsQuick"And Created 2 files:RailsQuick.pyimport sublime, sublime_pluginclass GeneratorsCommand(sublime_plugin.WindowCommand): def run(self): self.window.show_quick_panel(["test"], None)RailsQuick.sublime-commands[ { "caption": "RailsQuick: Generators", "command": "rails_quick_generators" }]The problem is that i cant find RailsQuick: Generators in the Command Platte (CTRL + SHIFT + P)Console logs after saving both files:Writing file /home/danpe/.config/sublime-text-2/Packages/RailsQuick/RailsQuick.py with encoding UTF-8Reloading plugin /home/danpe/.config/sublime-text-2/Packages/RailsQuick/RailsQuick.pyWriting file /home/danpe/.config/sublime-text-2/Packages/RailsQuick/RailsQuick.sublime-commands with encoding UTF-8What am i doing wrong ? | My lucky guess:Your class name is wrong. GeneratorsCommand should match the one defined in RailsQuick.sublime-commands (rails_quick_generators). Sublime Text 2 needs to have 1:1 mapping between these names, otherwise it cannot know which plug-in belongs to which shortcut.Example:https://github.com/witsch/SublimePythonTidy |
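The naming convention Sublime Text applies: a class named RailsQuickGeneratorsCommand is exposed as the command rails_quick_generators (drop the Command suffix, snake_case the rest). A small sketch of that name mapping, purely to illustrate the convention (this is not Sublime's actual internal code):

```python
import re

def command_name(class_name):
    """CamelCaseCommand -> camel_case, mirroring Sublime's naming rule."""
    stem = class_name[:-len("Command")] if class_name.endswith("Command") else class_name
    # Insert an underscore before each interior capital, then lowercase.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', stem).lower()
```

So to get the command rails_quick_generators, the plugin class must be renamed from GeneratorsCommand to RailsQuickGeneratorsCommand.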
Copying a key/value from one dictionary into another I have a dict with main data (roughly) as such: {'UID': 'A12B4', 'name': 'John', 'email': 'hi@example.com'} and I have another dict like: {'UID': 'A12B4', 'other_thing': 'cats'} I'm unclear how to "join" the two dicts to then add "other_thing" to the main dict. What I need is: {'UID': 'A12B4', 'name': 'John', 'email': 'hi@example.com', 'other_thing': 'cats'} I'm pretty new to comprehensions like this, but my gut says there has to be a straightforward way. | You want to use the dict.update method:d1 = {'UID': 'A12B4', 'name': 'John', 'email': 'hi@example.com'}d2 = {'UID': 'A12B4', 'other_thing': 'cats'}d1.update(d2)Outputs:{'email': 'hi@example.com', 'other_thing': 'cats', 'UID': 'A12B4', 'name': 'John'}From the Docs: Update the dictionary with the key/value pairs from other, overwriting existing keys. Return None.
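Note that update mutates d1 in place. If the originals should stay untouched, build a merged copy instead; dict(d1, **d2) works on Python 2.7 and later, and values from d2 win on shared keys:

```python
d1 = {'UID': 'A12B4', 'name': 'John', 'email': 'hi@example.com'}
d2 = {'UID': 'A12B4', 'other_thing': 'cats'}

# Merged copy: d1 and d2 stay untouched, d2's values win on shared keys.
merged = dict(d1, **d2)
```

(On Python 3.5+ the literal form {**d1, **d2} does the same thing.)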
python virtualenv.el no longer works in emacs after updating python-mode I upgraded from python-mode.el-6.1.2 to python-mode.el-6.1.3 and my M-x virtualenv-activate venvname no longer activates the virtual environment in my emacs *Python* buffer. This same keystroke used to load the virtualenv. My process for updating python-mode was only...$ wget https://launchpad.net/python-mode/trunk/6.1.3/+download/python-mode.el-6.1.3.tar.gz$ tar -zxvf python-mode.el-6.1.3.tar.gz $ emacs init.elAnd then changing; python-mode(setq py-install-directory "~/.emacs.d/python-mode.el-6.1.2")(add-to-list 'load-path py-install-directory)(require 'python-mode)to the correct folder of:; python-mode(setq py-install-directory "~/.emacs.d/python-mode.el-6.1.3")(add-to-list 'load-path py-install-directory)(require 'python-mode)Then reloading with M-x load-fileThis is the only change I made that I can attribute the sudden change in behavior to. Anyone have similar experiences or pointers for what might be going wrong? | I haven't maintainedmy virtualenv package in along time since I use docker and LXC for a better virtual environmentfor my development purposes that provides stronger isolation,first-class network interfaces, and support for non-python stacks.If you still want to work with virtualenv there are at least 3 newer,actively maintained packages available onMELPA that are superior to my old onethat have taken its place:pyvenvvirtualenvwrapperpython-environment |
Comparing three arrays I would like to ask, why this is returning 'True' (or what is the code doing when it is written like this): def isItATriple(first,second,third):if first[0] == second[0] == third[0] or first[0] != second[0] != third[0]: if first[1] == second[1] == third[1] or first[1] !=second[1] != third[1]: if first[2] == second[2] == third[2] or first[2] != second[2] != third[2]: if (first[3] == second[3] == third[3]) or (first[3] != second[3] != third[3]): return True else: return False else: return False else: return Falseelse: return Falseprint(isItATriple([0,0,0,0],[0,0,0,1],[0,0,0,0])) | Let analyze:first if: if first[0] == second[0] == third[0] or \ first[0] != second[0] != third[0]:The first (before or) is True - because at 0 index all lists have 0;If so - the condition after or is not checked (because python is lazy) True or Anything gives True.The second if:if first[1] == second[1] == third[1] or \ first[1] !=second[1] != third[1]:Exactly same as above - 1 element of each list is equal - so it's True here.The third if:if first[2] == second[2] == third[2] \ or first[2] != second[2] != third[2]:The same. Generally: True.The fourth if:if first[3] == second[3] == third[3] or \ first[3] != second[3] != third[3]:And here - the first condition (before or) is False and the second is True. 
So this is why your method returns True.The second condition is evaluated to:0 != 1 != 0In other words this mean:0 != 1 and 1 != 0And finally:True # because 0 is different than 1;It is a common case when you use operators like this:1 < x < 10This mean:1 < x and x < 10But to be honest - this code is pretty ugly :)Let me show you how can you do this more nicely.def myIsATriple(first, second, third): return first == second == thirdList comparison works pretty well in python :) so you do not need to do it manually, examples:myIsATriple([0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]) # FalsemyIsATriple([0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]) # TruemyIsATriple([0, 'a', 0, 0], [0, 'a', 0, 0], [0, 'b', 0, 0]) # FalsemyIsATriple([0, 'a', 1, 0], [0, 'a', 1, 0], [0, 'a', 1, 0]) # TruemyIsATriple([0, {'a': 2}, 1, 0], [0, {'a': 2}, 1, 0], [0, {'a': 3}, 1, 0]) # FalsemyIsATriple([0, {'a': 2}, 1, 0], [0, {'a': 2}, 1, 0], [0, {'a': 2}, 1, 0]) # TrueHappy coding! |
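The subtle part is that a != b != c only checks the adjacent pairs, so it can be True even when the outer two values are equal. That is exactly why the question's code accepted [0,0,0,0], [0,0,0,1], [0,0,0,0] as a "triple":

```python
# Chained comparison expands pairwise: (0 != 1) and (1 != 0)
all_pairwise = (0 != 1 != 0)  # True, despite the first and last being equal

# "all three distinct" needs the outer pair checked explicitly:
truly_distinct = (0 != 1 != 0) and (0 != 0)  # False
```

If the intent was "all equal or all different" per position, the outer comparison has to be written out, or better, compare sets of the three values per position.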
Have researched the RE module without finding a solution Using python 2.7.5 and the following string. I am trying to sum and with the following code. Can someone steer me in the right direction? Thanks<msg><src>CC128-v0.15</src><dsb>01068</dsb><time>09:19:01</time><tmprF>68.9</tmprF><sensor>0</sensor><id>00077</id><type>1</type><ch1><watts>00226</watts></ch1><ch2><watts>00189</watts></ch2></msg>try: watts_ex = re.compile('<watts>([0-9]+)</watts>') temp_ex = re.compile('<tmprF>([\ ]?[0-9\.]+)</tmprF>') time_ex = re.compile('<time>([0-9\.\:]+)</time>') watts = str(int(watts_ex.findall(data)[0])) temp = temp_ex.findall(data)[0].strip() time = time_ex.findall(data)[0]except: sys.stderr.write("Could not get details from device") sys.exit()# Replace format stringformat = format.replace("{{watts}}", watts)format = format.replace("{{time}}", time)format = format.replace("{{temp}}", temp)print formatif __name__ == "__main__": main()""" output is 09:19:01:, 226 watts, 68.9F (watts should = 415 watts """ | This looks like an XML-like language, I'd strongly recommend using the XML libraries instead of regexes to parse it. The problem in your code is this part:watts = str(int(watts_ex.findall(data)[0]))You're just using result 0 from the findall(), I think you want something like this:watts = str(sum(int(w) for w in watts_ex.findall(data))) |
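To see the fix in isolation: findall already returns every channel's reading, so summing over all matches instead of taking element [0] yields 415. A minimal runnable version against a trimmed copy of the sample message:

```python
import re

data = ('<msg><src>CC128-v0.15</src><time>09:19:01</time>'
        '<tmprF>68.9</tmprF>'
        '<ch1><watts>00226</watts></ch1>'
        '<ch2><watts>00189</watts></ch2></msg>')

# Sum every <watts> value rather than indexing only the first match.
watts = sum(int(w) for w in re.findall(r'<watts>([0-9]+)</watts>', data))
```

int() handles the leading zeros, and the same pattern extends to any number of channels.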
Can I edit XML loaded by xml.dom.minidom.parse? As I saw, when we runfrom xml.dom.minidom import parsemyXML = parse('anything.xml')in a Python script, it loads the contents of "anything.xml", until you leave the script or Ctrl+D your Python session.Is it possible to add attribute values to this loaded version of the XML in Python? | The parse method returns you an instance of xml.dom.minidom.Document, on which you can invoke the plethora of methods listed in the documentation of xml.dom. Here's a small example:import xml.dom.minidomd = xml.dom.minidom.parseString('<head>hello</head>')d.getElementsByTagName('head')[0].setAttribute('joe', '2')print d.toxml()This adds a joe="2" attribute to the head tag:<?xml version="1.0" ?><head joe="2">hello</head> |
Get Plain text from a QLabel with Rich text I have a QLabel that contains rich text.I want to extract just the actual (visible) 'text' from the QLabel, and none of the code for formatting.I essentially need a function similiar to the '.toPlainText' method of other Qt Widgets.I can not simply call .text() and string manipulate away the html tags as suggested in this thread Get plain text from QString with HTML tags, since the returned QString contains all the <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> nonsense.How do I extract the plain text?(I'm open to any method, even if indirect. eg; Pre-existing functions that convert html to plain text)Thanks! Specs:python 2.7.2PyQt4Windows 7 | Use a QTextDocument to do the conversion:doc = QtGui.QTextDocument()doc.setHtml(label.text())text = doc.toPlainText() |
Migrations across databases with inconsistent database backends - Input? I am migrating some data from one database to another; it is production data that has accidentally ended up in a testing database. It is a typical relational database centered around a single User table.

Things to consider:
- Duplicate rows between production and testing may exist in almost any table.
- Any column in any User-related table in testing may miss content from production, or contain updated information due to User re-registration.
- All tables contain created and updated columns.

I have been connecting to the database via a SOAP layer because it was the "easy" way. I do however have administration access to the machines running these databases. Do you have any methods, any advice, any pointers to aid me in this goal? Perhaps something along the lines of Content Migration - Best Practices (PDF), anything, really. | 1. Backup all the data first. It never hurts to say this!
2. Establish a reasonable sample size, i.e. how many records you are willing to look at in detail, partly based on your time/money and the value of corrected, accurate data.
3. Create a list, say in a spreadsheet, of those records.
4. If you can, identify (externally) which ones are real, maybe using email address or other fields to compare with other data.
5. Look for patterns. Is there any individual field (id, date, user_id, etc.) that looks as if it will help you know which records are good? Look for value patterns, low/high ranges, duplicated 'sample' data (same value for a column in many records), dates without times, records with orphaned foreign IDs; there are a surprising number of things you can check!
6. Determine your final tolerance - are you looking for 100%? Or would 99.94% fixed be ok (well, acceptable then!) to the users?
7. Look at those duplicates you mentioned. For those records, can you apply any rule such as 'older record' or 'newer record' or low ID number to at least eliminate them?

I hope this helps!
Python error with debugging I am very new with Python and I have just received this message while trying to use the Visual Studio plugin for Python:

try:
    import boinc  # getting the exception here
    _BOINC_ENABLED = True
except:
    _BOINC_ENABLED = False

and this is the error message that I get:

exceptions.ImportError occurred
Message: No module named boinc

The other lines that import files are here:

from util import *
from util import raiseNotDefined
import time, os
import traceback

(I haven't written them; they were given in the pacman project.) I am trying to use Python for the pacman project that was given to me as an assignment and I am having trouble running the project and debugging it (I didn't write any code yet). Thanks in advance for your kind help. | The problem was that my project was not at the root of my hard drive and it was inside a folder named in Hebrew. The path of the folder containing the project must be in English for it to work.
How to scrape a webpage which has login if we have the credentials using python scrapy? Just want to know how to send a request along with the login credentials to a login page to fetch the data. | It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session related data or authentication tokens (for login pages). When scraping, you'll want these fields to be automatically pre-populated and only override a couple of them, such as the user name and password. You can use the FormRequest.from_response() method for this job. Here's an example spider which uses it:

import scrapy

def authentication_failed(response):
    # TODO: Check the contents of the response and return True if it failed
    # or False if it succeeded.
    pass

class LoginSpider(scrapy.Spider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login
        )

    def after_login(self, response):
        if authentication_failed(response):
            self.logger.error("Login failed")
            return
        # continue scraping with authenticated session...
rate_limit not working celery i have a simple structure:projcelery.pytasks.pyrun_tasks.pycelery.py:from __future__ import absolute_import, unicode_literalsfrom celery import Celeryapp = Celery('proj', broker='amqp://', backend='amqp://', include=['proj.tasks'])app.conf.update( result_expires=3600, task_annotations={ 'proj.tasks.add': {'rate_limit': '2/m'} })tasks.py:from __future__ import absolute_import, unicode_literalsfrom .celery import app@app.taskdef add(x, y): return x + yrun_tasks.py: from proj.tasks import * res = add.delay(4,4) a = res.get() print(a)According to the parameter {'rate_limit': '2/m'} I can run the add task only 2 times a minute. But I can run it as many times as I want. What's wrong? | From Celery Docs: Note that this is a per worker instance rate limit, and not a global rate limit. To enforce a global rate limit (e.g., for an API with a maximum number of requests per second), you must restrict to a given queue. |
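Following the note the answer quotes, one hedged workaround (queue and worker names here are assumptions, not from the question) is to route the rate-limited task to its own queue and run exactly one worker for that queue, so the per-worker limit effectively becomes a global one:

```python
# In celery.py: send proj.tasks.add to a dedicated queue.
app.conf.task_routes = {'proj.tasks.add': {'queue': 'rate_limited'}}

# Then start a single worker that consumes only that queue:
#   celery -A proj worker -Q rate_limited --concurrency=1
# With one consuming worker, the 2/m per-worker limit is also the global one.
```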
Python Error:Commands out of sync; you can't run this command now Here is the scenario, I am facing the error

Error:Commands out of sync; you can't run this command now

I need to pass a string to MySQL which is a mixture of double and single quotes. But when MySQL parses the string it can't process the parameters because Python converts "" to ''. For example

temp = "users= JSON_ARRAY_APPEND(users, '$', 'user1'), users= JSON_ARRAY_APPEND(users, '$', 'user2')"

converted to

temp = 'users= JSON_ARRAY_APPEND(users, \'$\', \'user1\'), users= JSON_ARRAY_APPEND(users, \'$\', \'user2\')'

sql = "Insert into User(internal_id, users) values(16, IFNULL(users->'$',JSON_ARRAY())); Update User SET " + temp + " where internal_id = 16;"

How to handle this scenario? Thanks | Based on searching past Stack Overflow questions related to the error:

Commands out of sync; you can't run this command now

This problem is caused by executing multiple SQL statements in the same call to execute(). You can't do that unless you pass the argument multi=True. This is the same solution as the one reported in Commands out of sync you can't run this command now. But there is no reason to use multi-statements. It is easier and clearer to write your code to execute one SQL statement at a time. The former Engineering Manager for MySQL once told me, "there is no reason for MySQL to support multi-statements." They just make writing code more difficult, and they don't give you any advantage to performance or anything else.
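A hedged sketch of the one-statement-at-a-time approach the answer recommends, reusing the table and JSON_ARRAY_APPEND calls from the question; the connection object and the %s placeholder style (mysql.connector-like) are assumptions. Passing values as parameters also sidesteps the quoting problem, since the driver does the escaping:

```python
# Each (sql, params) pair is executed in its own cursor.execute() call,
# so the connection never sees a multi-statement string.
statements = [
    ("INSERT INTO User (internal_id, users) VALUES (%s, JSON_ARRAY())",
     (16,)),
    ("UPDATE User SET users = JSON_ARRAY_APPEND(users, '$', %s) "
     "WHERE internal_id = %s", ('user1', 16)),
    ("UPDATE User SET users = JSON_ARRAY_APPEND(users, '$', %s) "
     "WHERE internal_id = %s", ('user2', 16)),
]

def run_all(conn):
    cur = conn.cursor()
    for sql, params in statements:
        cur.execute(sql, params)  # one statement per call: no sync error
    conn.commit()
```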
Python if-elif-else expressions returning a value and scope resolution

Question 1
In rust, I can write code like this:

let foo = if ... {
    1
} else if ... {
    2
} else {
    3
};

Here foo is assigned the return value of that if-elseif-else expression. Is something similar possible in Python?

Question 2
Is the outer variable "foo" updated in this python code?

foo = "hello"
if cond1:
    foo = "world"
else:
    pass
# if cond1 is true, what is the value of foo now? "hello" or "world"

 | it seems the closest Py version is a ternary op

# scenario 2
foo = "hello"
foo = "world" if True else foo
print(foo)  # prints 'world'
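To mirror the Rust if/else-if/else chain directly, conditional expressions can be nested; this sketch (the conditions are made up for illustration) evaluates left to right and yields a single value:

```python
def sign_label(n):
    # Equivalent to: if n < 0 {...} else if n == 0 {...} else {...}
    return ("negative" if n < 0
            else "zero" if n == 0
            else "positive")

print(sign_label(-3), sign_label(0), sign_label(7))
# negative zero positive
```

As for question 2: an if block does not introduce a new scope in Python, so foo is rebound to "world" whenever cond1 is true.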
exception handling in python for a beginner

def flatten(nstd_list):
    for item in nstd_list:
        try:
            yield from flatten(item)
        except TypeError:
            yield item

I am a beginner in Python; can you please explain how this works (step by step)? | You can think of yield as a return that can happen many times; the code extracts every single element from a nested list. flatten(item) tries to iterate over item recursively; if item is not iterable (an int, say), iterating over it raises TypeError, which is caught, and the item itself is yielded. For example, with nstd_list = [[1], 2]: the first item [1] is iterable, so the recursive call descends into it and yields 1; the second item 2 is not iterable, so the TypeError branch yields 2.
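Running the generator on a deeper example makes the step-by-step behaviour concrete (the sample list is illustrative):

```python
def flatten(nstd_list):
    for item in nstd_list:
        try:
            yield from flatten(item)   # succeeds while item is iterable
        except TypeError:
            yield item                 # item is a leaf: emit it as-is

print(list(flatten([[1], 2, [3, [4, 5]]])))  # [1, 2, 3, 4, 5]
```

Note that a str element would recurse forever here, since iterating a one-character string yields strings again; the trick is only safe for nested lists of non-iterable leaves such as numbers.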
GAE timeout when query DNS I'm trying to use a script to check if an email exists or not. For that I'm using DNS queries. This is the call that fails:from dns import resolvermx_data = resolver.query(hostname, 'MX', source='')It works if I execute the script standalone with python but it fails when it runs in appengine locally or remotely. The stacktrace:File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1535, in __call__ rv = self.handle_exception(request, response, e)File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1529, in __call__ rv = self.router.dispatch(request, response)File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher return route.handler_adapter(request, response)File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1102, in __call__ return handler.dispatch()File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 572, in dispatch return self.handle_exception(e, self.app.debug)File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 570, in dispatch return method(*args, **kwargs)File "/Users/user/dev/gaeapp/request.py", line 5827, in get check2 = email_checker.validate_email(email)File "/Users/user/dev/gaeapp/tools/email_checker.py", line 89, in validate_email mx_data = resolver.query(hostname, 'MX')File 
"/Users/user/dev/gaeapp/dns/resolver.py", line 974, in query raise_on_no_answer, source_port)File "/Users/user/dev/gaeapp/dns/resolver.py", line 894, in query timeout = self._compute_timeout(start)File "/Users/user/dev/gaeapp/dns/resolver.py", line 734, in _compute_timeout raise TimeoutI'm having a similar problem to the question DNS query using Google App Engine socket but I've tried to call query with the parameter source='' with no success.I'm using dnspython 1.11.1UPDATE: It works after manually setting the DNS resolvers:r = resolver.Resolver()r.nameservers = ['8.8.8.8', '8.8.4.4']mx_data = r.query(hostname, 'MX') | As Tim said, you need to set the resolver explicitly.Example code:import dns.resolverresolver = dns.resolver.Resolver()resolver.nameservers = ['8.8.8.8']mx_data = resolver.query(hostname, 'MX')Note that 8.8.8.8 is googles dns server, but could be any other.Also note that you do not need to set source='' |
Multi-threaded websocket server on Python Please help me to improve this code:import base64import hashlibimport threadingimport socketclass WebSocketServer: def __init__(self, host, port, limit, **kwargs): """ Initialize websocket server. :param host: Host name as IP address or text definition. :param port: Port number, which server will listen. :param limit: Limit of connections in queue. :param kwargs: A dict of key/value pairs. It MAY contains:<br> <b>onconnect</b> - function, called after client connected. <b>handshake</b> - string, containing the handshake pattern. <b>magic</b> - string, containing "magic" key, required for "handshake". :type host: str :type port: int :type limit: int :type kwargs: dict """ self.host = host self.port = port self.limit = limit self.running = False self.clients = [] self.args = kwargs def start(self): """ Start websocket server. """ self.root = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.root.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.root.bind((self.host, self.port)) self.root.listen(self.limit) self.running = True while self.running: client, address = self.root.accept() if not self.running: break self.handshake(client) self.clients.append((client, address)) onconnect = self.args.get("onconnect") if callable(onconnect): onconnect(self, client, address) threading.Thread(target=self.loop, args=(client, address)).start() self.root.close() def stop(self): """ Stop websocket server. 
""" self.running = False def handshake(self, client): handshake = 'HTTP/1.1 101 Switching Protocols\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Accept: %s\r\n\r\n' handshake = self.args.get('handshake', handshake) magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" magic = self.args.get('magic', magic) header = str(client.recv(1000)) try: res = header.index("Sec-WebSocket-Key") except ValueError: return False key = header[res + 19: res + 19 + 24] key += magic key = hashlib.sha1(key.encode()) key = base64.b64encode(key.digest()) client.send(bytes((handshake % str(key,'utf-8')), 'utf-8')) return True def loop(self, client, address): """ :type client: socket """ while True: message = '' m = client.recv(1) while m != '': message += m m = client.recv(1) fin, text = self.decodeFrame(message) if not fin: onmessage = self.args.get('onmessage') if callable(onmessage): onmessage(self, client, text) else: self.clients.remove((client, address)) ondisconnect = self.args.get('ondisconnect') if callable(ondisconnect): ondisconnect(self, client, address) client.close() break def decodeFrame(self, data): if (len(data) == 0) or (data is None): return True, None fin = not(data[0] & 1) if fin: return fin, None masked = not(data[1] & 1) plen = data[1] - (128 if masked else 0) mask_start = 2 if plen == 126: mask_start = 4 plen = int.from_bytes(data[2:4], byteorder='sys.byteorder') elif plen == 127: mask_start = 10 plen = int.from_bytes(data[2:10], byteorder='sys.byteorder') mask = data[mask_start:mask_start+4] data = data[mask_start+4:mask_start+4+plen] decoded = [] i = 0 while i < len(data): decoded.append(data[i] ^ mask[i%4]) i+=1 text = str(bytearray(decoded), "utf-8") return fin, text def sendto(self, client, data, **kwargs): """ Send <b>data</b> to <b>client</b>. <b>data</b> can be of type <i>str</i>, <i>bytes</i>, <i>bytearray</i>, <i>int</i>. :param client: Client socket for data exchange. :param data: Data, which will be sent to the client via <i>socket</i>. 
        :type client: socket
        :type data: str|bytes|bytearray|int|float
        """
        if type(data) == bytes or type(data) == bytearray:
            frame = data
        elif type(data) == str:
            frame = bytes(data, kwargs.get('encoding', 'utf-8'))
        elif type(data) == int or type(data) == float:
            frame = bytes(str(data), kwargs.get('encoding', 'utf-8'))
        else:
            return None
        framelen = len(frame)
        head = bytes([0x81])
        if framelen < 126:
            head += bytes(int.to_bytes(framelen, 1, 'big'))
        elif 126 <= framelen < 0x10000:
            head += bytes(126)
            head += bytes(int.to_bytes(framelen, 2, 'big'))
        else:
            head += bytes(127)
            head += bytes(int.to_bytes(framelen, 8, 'big'))
        client.send(head + frame)

It works fine. I want the server to use all the processor cores for improved performance, and this code is not effective with a high number of connections. How can I implement a multi-threaded solution for this case? Sorry for my bad English. | In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. So your code won't work. You can use processes instead of threads (not on Windows*), twisted or asyncore if you want to support more than one client at the same time. If your choice is multiprocessing, try this:

client.py:

import socket

def main():
    s = socket.socket()
    s.connect(("localhost", 5555))
    while True:
        data = raw_input("> ")
        s.send(data)
        if data == "quit":
            break
    s.close()

if __name__ == "__main__":
    main()

server.py:

from multiprocessing import Process
from os import getpid
import socket

def receive(conn):
    print "(%d) connected." % getpid()
    while True:
        data = conn.recv(1024)
        if data:
            if data == "quit":
                break
            else:
                print "(%s) data" % getpid()

def main():
    s = socket.socket()
    s.bind(("localhost", 5555))
    s.listen(1)
    while True:
        conn, address = s.accept()
        print "%s:%d connected." % address
        Process(target=receive, args=(conn,)).start()
    s.close()

if __name__ == "__main__":
    main()

*On Windows this code will throw an error when pickling the socket:

File "C:\Python27\lib\pickle.py", line 880, in load_eof
    raise EOFError
How to include rpm dependency in setup.py I'm new to Python but I want to create an rpm package by using setuptools and the bdist_rpm option. The problem I've encountered is how to include dependencies on other rpms (C/C++ binaries and libraries). | You need to add the dependencies to the Requires section, see distutils documentation.
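A hedged sketch of what that looks like in practice: the bdist_rpm command reads its options from a [bdist_rpm] section in setup.cfg, and its requires option becomes the Requires: tag of the generated spec file (the package names below are placeholders):

```ini
[bdist_rpm]
; Each name lands in the generated spec file as a Requires: line.
requires = somelib >= 1.2
           other-lib
```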
Matplotlib - animate PIL images in Jupyter How can I create an animation in Jupyter using PIL images?I'm creating drawings with PIL. Here is the code for one frame (other frames are generated by just increasing theta)import matplotlib.pyplot as pltimport mathfrom PIL import Image, ImageDrawwidth, height = 800,800theta = math.pi / 3image = Image.new('RGBA', (width, height))draw = ImageDraw.Draw(image)# draw sunsun_radius = 80center_x = width/2center_y = width/2draw.ellipse( ( center_x - sun_radius/2, center_y - sun_radius/2, center_x + sun_radius/2, center_y + sun_radius/2 ), fill = 'yellow', outline ='orange')# draw planetplanet_radius = 20orbit_radius = 300planet_offset_x = center_x + math.cos(theta) * orbit_radiusplanet_offset_y = center_y + math.sin(theta) * orbit_radiusdraw.ellipse( ( planet_offset_x - planet_radius/2, planet_offset_y - planet_radius/2, planet_offset_x + planet_radius/2, planet_offset_y + planet_radius/2 ), fill = 'blue', outline ='blue')plt.imshow(image)This the frame that the above code generatesI already have a solution, I'm posting this because it took me a while to get working and I think it will be useful to others | I would advocate for doing the whole animation in matplotlib directly, since that is more memory efficient (no need to create store 100 images) and gives a better graphics quality (because pixels would not need to resampled).import numpy as npimport matplotlib.pyplot as pltfrom matplotlib import animationfrom matplotlib.patches import Circlefrom IPython.display import HTMLfig, ax = plt.subplots()width, height = 800,800planet_radius = 20orbit_radius = 300sun_radius = 80center_x = width/2center_y = width/2ax.axis([0,width,0,height])ax.set_aspect("equal")sun = Circle((center_x,center_y), radius=sun_radius, facecolor="yellow", edgecolor="orange")ax.add_patch(sun)def get_planet_offset(theta): x = center_x + np.cos(theta) * orbit_radius y = center_y + np.sin(theta) * orbit_radius return x,yplanet = Circle(get_planet_offset(0), 
radius=planet_radius, color="blue")ax.add_patch(planet)def update(theta): planet.center = get_planet_offset(theta)ani = animation.FuncAnimation(fig, update, frames=np.linspace(0, 2 * np.pi, 100), interval=50, repeat_delay=1000)HTML(ani.to_html5_video()) |
PyGTK-2.24.0 Installation cannot find NumPy I am trying to build the PyGTK source from version 2.24.0 with a local (prefix=$HOME/.local) installation of python 3.5.2. Running the configure script produces:$: ./configure --prefix=$HOME/.local....configure: WARNING: Could not find a valid numpy installation, disabling.....The following modules will be built:atkpangopangocairogtk with 2.18 APIgtk.gladegtk.unixprintNumpy support: noLooking in config.log:....configure:12393: checking for /home/me/.local/bin/python3.5 versionconfigure:12400: result: 3.5configure:12412: checking for /home/me/.local/bin/python3.5 platformconfigure:12419: result: linuxconfigure:12426: checking for /home/me/.local/bin/python3.5 script directoryconfigure:12455: result: ${prefix}/lib/python3.5/site-packagesconfigure:12464: checking for /home/me/.local/bin/python3.5 extension module directoryconfigure:12493: result: ${exec_prefix}/lib/python3.5/site-packages.... ac_cv_env_PKG_CONFIG_PATH_value=/home/me/.local/lib/pkgconfig:/home/me/.local/bin/libwx/pkgconfig:/usr/lib/pkconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig....ac_cv_env_PYGOBJECT_LIBS_value=-L/home/me/.local/lib/python3.5/site-packages/gi....am_cv_python_platform=linuxam_cv_python_pyexecdir='${exec_prefix}/lib/python3.5/site-packages'am_cv_python_pythondir='${prefix}/lib/python3.5/site-packages'am_cv_python_version=3.5....PYTHON='/home/me/.local/bin/python3.5'PYTHON_EXEC_PREFIX='${exec_prefix}'PYTHON_INCLUDES='-I/home/me/.local/include/python3.5m -I/home/csmall02/.local/include/python3.5m'PYTHON_PLATFORM='linux'PYTHON_PREFIX='${prefix}'PYTHON_VERSION='3.5'....pyexecdir='${exec_prefix}/lib/python3.5/site-packages'pythondir='${prefix}/lib/python3.5/site-packages'Why can't this configure find the NumPy packages? 
My lib/python3.5 directory looks like:.local`--lib `--python3.5 `--site-packages |-- numpy | |-- compat |-- ma | |-- core |-- matrixlib | |-- distutils |-- polynomial | |-- doc |-- __pycache__ | |-- f2py |-- random | |-- fft |-- testing | |-- lib `-- tests | `-- linalg |-- numpy-1.11.1.dist-info `-- numpy-1.11.1-py3.5-linux-x86_64.egg |-- EGG-INFO `-- numpy |-- compat |-- ma |-- core |-- matrixlib |-- distutils |-- polynomial |-- doc |-- __pycache__ |-- f2py |-- random |-- fft |-- testing |-- lib `-- tests `-- linalgThe reason for the two numpy directories is I installed one using pip install numpy and the other I installed from source in the course of trying to fix this problem.Also, I have no problem using import numpy and such in interactive python, so I know it's "there".Does anyone know how to pass the location of NumPy directly?Any other advice would also be appreciated.Thanks! | I'm afraid you have some mix-up.Here is what I did :sudo apt-get dist-upgradesudo apt-get install python3 sudo apt-get install python3-numpy sudo apt-get install python3-matplotlibsudo apt-get install python3-scipysudo apt-get install python3-pyfitsOne can also use pip3 to install those libs, but using pip will install them for python 2.7...Also, pygtk for python3 seems not to be available, read the answer to this questionHope this clears things up so that you can solve it. |
how to fill missing time slots in python? I'm trying to fill the missing slots in the CSV file which has date and time as a string.My input from a csv file is:A B C56 2017-10-26 22:15:00 892 2017-10-27 00:30:00 5420 2017-10-28 05:00:00 6424 2017-10-29 06:00:00 291 2017-11-01 22:45:00 7862 2017-11-02 15:30:00 9991 2017-11-02 22:45:00 34Output should beA B C0 2017-10-26 00:00:00 891 2017-10-26 00:15:00 89.....56 2017-10-26 22:15:00 89......96 2017-10-26 23:45:00 890 2017-10-27 00:00:00 541 2017-10-27 00:15:00 542 2017-10-27 00:30:00 54...20 2017-10-28 05:00:00 6421 2017-10-28 05:15:00 64....24 2017-10-29 06:00:00 2.91 2017-11-01 22:45:00 78.62 2017-11-02 15:30:00 99.91 2017-11-02 22:45:00 34The output range is 15 min time slots for days between 2017-10-26 -> 2017-11-02 and each day have 96 slots.And the same as above. | Using resample to get 15-min intervalsand bfill to fill missing values in B:df = df.set_index(pd.to_datetime(df.pop('B')))df.loc[df.index.min().normalize()] = Nonedf = df.resample('15min').max().bfill()df['A'] = 4*df.index.hour + df.index.minute//15print(df)Output: A CB 2017-10-26 00:00:00 0 89.02017-10-26 00:15:00 1 89.02017-10-26 00:30:00 2 89.0... .. ...2017-11-02 22:15:00 89 34.02017-11-02 22:30:00 90 34.02017-11-02 22:45:00 91 34.0 |
How to export data stored in GG Bigquery into GZ file. I used this code to export data into a csv file and it works:project_id = 'project_id'client = bigquery.Client()dataset_id = 'dataset_id'bucket_name = 'bucket_name'table_id = 'table_id'destination_uri = 'gs://{}/{}'.format(bucket_name, 'file.csv')dataset_ref = client.dataset(dataset_id, project=project_id)table_ref = dataset_ref.table(table_id)extract_job = client.extract_table( table_ref, destination_uri) extract_job.result() But I prefer a GZ file because of my table up to 700M. Could anyone help me export data into a GZ file? | You need to add a jobConfig like in:job_config = bigquery.job.ExtractJobConfig()job_config.compression = 'GZIP'Complete code:from google.cloud import bigqueryclient = bigquery.Client()project_id = 'fh-bigquery'dataset_id = 'public_dump'table_id = 'afinn_en_165'bucket_name = 'your_bucket'destination_uri = 'gs://{}/{}'.format(bucket_name, 'file.csv.gz')dataset_ref = client.dataset(dataset_id, project=project_id)table_ref = dataset_ref.table(table_id)job_config = bigquery.job.ExtractJobConfig()job_config.compression = 'GZIP'extract_job = client.extract_table( table_ref, destination_uri, job_config = job_config) extract_job.result() |
send a file from a server to another server use rest framework I have a server that generates a file, and I want to send that file to another server when the file is ready. So the server that receives the file should always listen. I have used Django REST framework; does anybody have a link to help me? | Server A (server receiving the file from Server B)

models.py

class TestModel(models.Model):
    # Other fields you are interested in saving
    file_data = models.FileField()

serializers.py

class TestModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = TestModel
        fields = '__all__'

views.py

class TestModelViewSet(viewsets.ModelViewSet):
    queryset = TestModel.objects.all()
    parser_classes = (MultiPartParser, FormParser,)
    serializer_class = TestModelSerializer

urls.py

router = routers.DefaultRouter()
router.register(r'test', views.TestModelViewSet)

urlpatterns = [
    url(r'^api/', include(router.urls))
]

Server B (sends the file to Server A at 192.168.5.5)

file_generator.py

with open('file.txt', 'rb') as f:
    r = requests.post('http://192.168.5.5/api/test', files={'file.txt': f})
Fading out a signal in numpy What is the most idiomatic way to produce a cumulative sum which "fades" out as it moves along. Let me explain with an example.>>> np.array([1,0,-1,0,0]).cumsum()array([1, 1, 0, 0, 0], dtype=int32)But I would like to provide a factor <1 and produce something like:>>> np.array([1,0,-1,0,0]).cumsum_with_factor(0.5)array([1.0, 0.5, -0.75, -0.375, -0.1875], dtype=float64)It's a big plus if it's fast! | Your result can be obtained by linear convolution:signal = np.array([1,0,-1,0,0])kernel = 0.5**np.arange(5)np.convolve(signal, kernel, mode='full')# array([ 1. , 0.5 , -0.75 , -0.375 , -0.1875, -0.125 , -0.0625, 0. , 0. ])If performance is a consideration use scipy.signal.fftconvolve which is a faster implementation of the same logic. |
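If convolution feels indirect, the same fade can be written as a first-order recurrence, y[n] = x[n] + factor * y[n-1]; scipy.signal.lfilter([1], [1, -factor], x) computes exactly this, and a plain-Python sketch of the recurrence (shown here without the SciPy dependency) is:

```python
def fading_cumsum(xs, factor):
    # y[n] = x[n] + factor * y[n-1]: a cumsum whose past decays by `factor`
    total = 0.0
    out = []
    for x in xs:
        total = x + factor * total
        out.append(total)
    return out

print(fading_cumsum([1, 0, -1, 0, 0], 0.5))
# [1.0, 0.5, -0.75, -0.375, -0.1875]
```

Unlike the convolution, this runs in O(n) regardless of how long the fade tail is, at the cost of a Python-level loop.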
Error when opening Jupyter Notebook from terminal on Mac I get the following error when trying to open a Jupyter Notebook (using the command jupyter notebook) from the terminal on mac. Traceback (most recent call last): File "/Applications/anaconda3/bin/jupyter-notebook", line 11, in <module> sys.exit(main()) File "/anaconda3/lib/python3.6/site-packages/jupyter_core/application.py", line 266, in launch_instance return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs) File "/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 657, in launch_instance app.initialize(argv) File "<decorator-gen-7>", line 2, in initialize File "/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error return method(app, *args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/notebook/notebookapp.py", line 1531, in initialize super(NotebookApp, self).initialize(argv) File "<decorator-gen-6>", line 2, in initialize File "/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error return method(app, *args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/jupyter_core/application.py", line 242, in initialize self.migrate_config() File "/anaconda3/lib/python3.6/site-packages/jupyter_core/application.py", line 168, in migrate_config migrate() File "/anaconda3/lib/python3.6/site-packages/jupyter_core/migrate.py", line 247, in migrate with open(os.path.join(env['jupyter_config'], 'migrated'), 'w') as f:PermissionError: [Errno 13] Permission denied: '/Users/Mridula/.jupyter/migrated'I have tried to uninstall and re-install. I still face the same error. I have tried to clear the bash profile, to no avail. Any help would be highly welcome and appreciated. Best, Mridula | I'm summarizing our conversations here as it helped you to resolve the problem.There could be many possible options to address this issue. 
However, the very first approach to tackle this problem is to resolve permission issues. The last line of your error message is PermissionError: [Errno 13] Permission denied: '/Users/Mridula/.jupyter/migrated' which means you don't have the right permission to access the .jupyter directory. Change file permissionsudo chmod -R 755 /Users/Mridula/.jupyter/If this doesn't help then do the following stepsuninstall Anaconda remove .jupyter directory from your user install a fresh copy of AnacondaHope it helps! |
Windows Python packages not compatible with Amazon Linux? I'm creating a layer for my lambda function by installing the dependencies locally, zipping the folder, and uploading it to S3. To ensure the packages are compatible with Lambda runtimes, I'm installing the packages like this (per the docs)

pip install \
    --platform manylinux2014_x86_64 \
    --target=my-lambda-function \
    --implementation cp \
    --python 3.8 \
    --only-binary=:all: --upgrade \
    packagename

This works for all but one package, pyzbar, which, per its docs:

The zbar DLLs are included with the Windows Python wheels. On other operating systems, you will need to install the zbar shared library.
Linux: sudo apt-get install libzbar0
Mac OS X: brew install zbar

pyzbar works locally, but I can't install the shared library on Windows, so I'm getting

ImportError: Unable to find zbar shared library

when I try to run the lambda. What's the best way to solve this? | The easiest way to do that is to download the library from PyPI and then unzip the file. Then put the files in a folder called "python", zip that folder, and upload it as a layer. It should work.
Create Excel Hyperlinks in Python I am using win32com to modify an Excel spreadsheet (Both read and edit at the same time) I know there are other modules out there that can do one or the other but for the application I am doing I need it read and processed at the same time.The final step is to create some hyperlinks off of a path name. Here is an Example of what I have so far:import win32com.clientexcel = r'I:\Custom_Scripts\Personal\Hyperlinks\HyperlinkTest.xlsx'xlApp = win32com.client.Dispatch("Excel.Application")workbook = xlApp.Workbooks.Open(excel)worksheet = workbook.Worksheets("Sheet1")for xlRow in xrange(1, 10, 1): a = worksheet.Range("A%s"%(xlRow)).Value if a == None: break print aworkbook.Close()I found some code for reading Hyperlinks using win32com:sheet.Range("A8").Hyperlinks.Item(1).Addressbut not how to set hyperlinksCan someone assist me? | Borrowing heavily from this question, as I couldn't find anything on SO to link to as a duplicate...This code will create a Hyperlink in cells A1:A9import win32com.clientexcel = r'I:\Custom_Scripts\Personal\Hyperlinks\HyperlinkTest.xlsx'xlApp = win32com.client.Dispatch("Excel.Application")workbook = xlApp.Workbooks.Open(excel)worksheet = workbook.Worksheets("Sheet1") for xlRow in xrange(1, 10, 1): worksheet.Hyperlinks.Add(Anchor = worksheet.Range('A{}'.format(xlRow)), Address="http://www.microsoft.com", ScreenTip="Microsoft Web Site", TextToDisplay="Microsoft")workbook.Save()workbook.Close()And here is a link to the Microsoft Documentation for the Hyperlinks.Add() method. |
How to remove elements from a list I have two lists

first = ['-6.50', '-7.00', '-6.00', '-7.50', '-5.50', '-4.50', '-4.00', '-5.00']
second = ['-7.50', '-4.50', '-4.00']

I want to shorten first by every element that occurs in the second list.

for i in first:
    for j in second:
        if i == j:
            first.remove(i)

Don't know why this did not remove the -4.00

['-6.50', '-7.00', '-6.00', '-5.50', '-4.00', '-5.00']

Any help appreciated :) | Removing items from a list while iterating over it shifts the remaining elements left, so the iterator skips the element that comes right after each removal; '-4.00' directly follows the removed '-4.50', so it was skipped. Build a new list instead:

>>> first = ['-6.50', '-7.00', '-6.00', '-7.50', '-5.50', '-4.50', '-4.00', '-5.00']
>>> second = ['-7.50', '-4.50', '-4.00']
>>> set_second = set(second)  # the set is for fast O(1) amortized lookup
>>> [x for x in first if x not in set_second]
['-6.50', '-7.00', '-6.00', '-5.50', '-5.00']
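If first has to be modified in place (say, other code holds a reference to the same list object), slice assignment combines the comprehension with in-place mutation; a small sketch:

```python
first = ['-6.50', '-7.00', '-6.00', '-7.50', '-5.50', '-4.50', '-4.00', '-5.00']
second = {'-7.50', '-4.50', '-4.00'}

# Slice assignment replaces the contents of the existing list object.
first[:] = [x for x in first if x not in second]
print(first)  # ['-6.50', '-7.00', '-6.00', '-5.50', '-5.00']
```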
python - pyodbc setting to increase packet size in stored procedure params I'm using Python 2.7 with pyodbc==3.0.7 for connecting to SQL Server. Everything is OK, but when I call a stored procedure with a string parameter that has 480 characters, it returns the error below:

Code:
cursor.execute("{CALL SP_NAME(?)}", (param))

Error:
The driver did not supply an error!

But when I call the SP with a smaller character count, it works. So, how can I increase the transfer packet size in pyodbc?

Note: This problem does not exist on Windows, but it appears on Unix. | The maximum length the pyodbc module transfers per packet is 255 characters on Unix-based systems. I checked some attributes like packet size in the connection string (see http://www.connectionstrings.com/sql-server/), but they had no effect. If you need to increase that, you need to change the unixODBC-devel package on Unix-based systems, or decrease your packet size in each SP call.
Properly creating a mouse event with Kivy I am trying to create a program that will control the mouse on my Kivy application. What is the proper way to create a provider and send it the locations I want to move and click at? | Take a look at the recorder module; it can both record events and also replay them. Here is a small example (change RECORD to False to watch the replay after recording):

import kivy
from kivy.uix.button import Button
from kivy.app import App
from kivy.input.recorder import Recorder

rec = Recorder(filename='myrecorder.kvi',
               record_attrs=['is_touch', 'sx', 'sy', 'angle', 'pressure'],
               record_profile_mask=['pos', 'angle', 'pressure'])

def funky(b):
    print("Hello!!!")
    if RECORD:
        rec.record = False
    else:
        rec.play = False
    exit(0)

class MyApp(App):
    def build(self):
        if RECORD:
            rec.record = True
        else:
            rec.play = True
        return Button(text="hello", on_release=funky)

if __name__ == '__main__':
    RECORD = True  # False for replay
    MyApp().run()

Now you can see the file myrecorder.kvi:

#RECORDER1.0
(1.1087048053741455, 'begin', 1, {'profile': ['pos'], 'sx': 0.65875, 'is_touch': True, 'sy': 0.51})
(1.1346497535705566, 'update', 1, {'profile': ['pos'], 'sx': 0.66, 'is_touch': True, 'sy': 0.51})
(1.1994667053222656, 'end', 1, {'profile': ['pos'], 'sx': 0.66, 'is_touch': True, 'sy': 0.51})

You can use the Recorder class in many other ways, see the docs: https://kivy.org/docs/api-kivy.input.recorder.html

You can wrap the recorder in a function to make a small helper:

# not tested
def click(x, y):
    with open("clicker.kvi", 'w') as f:
        f.write("""\
#RECORDER1.0
(0.1087048053741455, 'begin', 1, {{'profile': ['pos'], 'sx': {x}, 'is_touch': True, 'sy': {y}}})
(0.1346497535705566, 'update', 1, {{'profile': ['pos'], 'sx': {x}, 'is_touch': True, 'sy': {y}}})
(0.1994667053222656, 'end', 1, {{'profile': ['pos'], 'sx': {x}, 'is_touch': True, 'sy': {y}}})
""".format(x=x, y=y))
    rec = Recorder(filename='clicker.kvi',
                   record_attrs=['is_touch', 'sx', 'sy', 'angle', 'pressure'],
                   record_profile_mask=['pos', 'angle', 'pressure'])
    rec.play = True  # should call rec.play = False somewhere?
Dynamically add legend for arrows in matplotlib

import matplotlib.pyplot as plt

def plot_arrow(arrow_type):
    if arrow_type == 'good_arrow':
        arrow_color = 'g'
    ar = plt.arrow(x, y, dx, dy, label=arrow_type, fc=arrow_color)
    plt.legend([ar,], [arrow_type,])

The above callback function is used to draw arrows in a plot. I need to add a legend to them. These arrows are drawn dynamically. The above code will have only one legend. I tried solutions like this but those do not work for matplotlib.pyplot.arrow. Edit: Problem: I have two colored arrows but only one legend. Plot example: | Why not just use quiver() in your case? See here.

import matplotlib.pyplot as plt
import numpy as np

plt.figure()

def rand(n):
    return 2.0 * np.random.rand(n) - 1.0

def plt_arrows(n, c, l, xpos, ypos):
    X, Y = rand(n), rand(n)
    U, V = 4. * rand(n), 4. * rand(n)
    Q = plt.quiver(X, Y, U, V, angles="xy", scale=30, label=l, color=c)
    qk = plt.quiverkey(Q, xpos, ypos, 2, label=l, labelpos='N', labelcolor=c)

plt_arrows(20, "g", "good", 1.05, 0.9)
plt_arrows(20, "r", "bad", 1.05, 0.8)
plt.show()
Py2App Can't find standard modules I've created an app using py2app, which works fine, but if I zip/unzip it, the newly unzipped version can't access standard Python modules like traceback or os. The man page for zip claims that it preserves resource forks, and I've seen other applications packaged this way (I need to be able to put this in a .zip file). How do I fix this? | This is caused by building a semi-standalone version that contains symlinks to the natively installed files; as you say, the links are lost when zipping/unzipping unless the "-y" option is used. An alternative solution is to build a standalone version instead, which puts (public-domain) files inside the application and so survives zipping/unzipping etc. better. It also means the app is more resilient to changes in the underlying OS. The downside is that it is bigger, of course, and is more complicated to get set up. To build a standalone version, you need to install the python.org version of Python, which can be repackaged. An explanation of how to do this is here, but read the comments as there have been some changes since the blog post was written.
Import Error while running pytest in virtualenv I am trying to run my pytest (bdd) test cases in a virtualenv. I have created a requirements.txt file (using pip freeze) in the root folder, as below.

apipkg==1.5
atomicwrites==1.3.0
attrs==19.1.0
behave==1.2.6
certifi==2019.6.16
chardet==3.0.4
chromedriver==2.24.1
contextlib2==0.6.0.post1
coverage==4.5.4
docker==4.2.0
execnet==1.7.1
extras==1.0.0
Faker==4.1.1
fixtures==3.0.0
fuzzywuzzy==0.17.0
glob2==0.7
html-testRunner==1.2
html2text==2020.1.16
HTMLParser==0.0.2
idna==2.8
imaplib2==2.45.0
importlib-metadata==0.23
Jinja2==2.9.5
lettuce==0.2.23
lettuce-webdriver==0.3.5
linecache2==1.0.0
Mako==1.1.0
MarkupSafe==1.1.1
mock==3.0.5
more-itertools==7.0.0
packaging==19.2
parse==1.11.1
parse-type==0.4.2
path==15.0.0
path.py==12.5.0
pbr==5.4.2
pi==0.1.2
pipenv==2018.11.26
pluggy==0.13.0
py==1.8.0
pyparsing==2.4.2
pyperclip==1.7.0
PyQt5==5.13.0
PyQt5-sip==4.19.18
PyQtWebEngine==5.13.0
pytest==5.1.2
pytest-bdd==3.2.1
pytest-docker-fixtures==1.3.6
pytest-fixture-config==1.7.0
pytest-forked==1.1.3
pytest-html==2.0.0
pytest-metadata==1.8.0
pytest-ordering==0.6
pytest-shutil==1.7.0
pytest-splinter==2.0.1
pytest-virtualenv==1.7.0
pytest-xdist==1.31.0
pytest-yield==1.0.0
python-dateutil==2.8.1
python-mimeparse==1.6.0
python-subunit==1.3.0
PyYAML==5.3.1
QScintilla==2.11.2
requests==2.22.0
responses==0.10.9
selenium==3.141.0
six==1.12.0
splinter==0.11.0
sure==1.4.11
termcolor==1.1.0
testtools==2.3.0
text-unidecode==1.3
traceback2==1.4.0
unittest2==1.1.0
urllib3==1.24.1
virtualenv==16.7.2
virtualenv-clone==0.5.3
wcwidth==0.1.7
websocket-client==0.57.0
zipp==0.6.0

I have created the virtualenv, activated it, and installed the dependencies using the commands below.

virtualenv test
source test/bin/activate
pip install -r requirements.txt

However, when I try to run the test cases, I get the errors below.

Traceback (most recent call last): File "/test/bin/pytest", line 8, in <module> sys.exit(main()) File "/test/lib/python3.7/site-packages/_pytest/config/__init__.py", line 59, in main config =
_prepareconfig(args, plugins) File "/test/lib/python3.7/site-packages/_pytest/config/__init__.py", line 209, in _prepareconfig pluginmanager=pluginmanager, args=args File "/test/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/test/lib/python3.7/site-packages/pluggy/manager.py", line 92, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/test/lib/python3.7/site-packages/pluggy/manager.py", line 86, in <lambda> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False, File "/test/lib/python3.7/site-packages/pluggy/callers.py", line 203, in _multicall gen.send(outcome) File "/test/lib/python3.7/site-packages/_pytest/helpconfig.py", line 89, in pytest_cmdline_parse config = outcome.get_result() File "/test/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/test/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/test/lib/python3.7/site-packages/_pytest/config/__init__.py", line 720, in pytest_cmdline_parse self.parse(args) File "/test/lib/python3.7/site-packages/_pytest/config/__init__.py", line 928, in parse self._preparse(args, addopts=addopts) File "/test/lib/python3.7/site-packages/_pytest/config/__init__.py", line 874, in _preparse self.pluginmanager.load_setuptools_entrypoints("pytest11") File "/test/lib/python3.7/site-packages/pluggy/manager.py", line 297, in load_setuptools_entrypoints plugin = ep.load() File "/test/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 92, in load module = import_module(match.group('module')) File "/test/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen 
importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "/test/lib/python3.7/site-packages/_pytest/assertion/rewrite.py", line 140, in exec_module exec(co, module.__dict__) File "/test/lib/python3.7/site-packages/pytest_yield/plugin.py", line 11, in <module> from _pytest.python import Generator
ImportError: cannot import name 'Generator' from '_pytest.python' (/test/lib/python3.7/site-packages/_pytest/python.py)

Can you help me understand what I am missing here? I tried different versions of pytest, without luck. However, the test cases run fine on my local machine without any issues. | There's an open issue with pytest-yield that prevents it from working with the latest pytest versions (5.1 and up): #6. This means that you have to either downgrade to an older version of pytest:

$ pip install "pytest<5"

Or uninstall pytest-yield until the above issue is resolved:

$ pip uninstall -y pytest-yield
Place Python list in Excel column Say I have two lists in Python:

A_List = ["jim", "go"]
B_List = ["kkj", "nmh", 123]

How can I get those lists into a .csv Excel file where column A holds A_List and column B holds B_List — A1 contains jim, A2 contains go, B2 contains nmh, and so on? In other words, each list's items need to go into one column in Excel, with each list item in a separate cell. | Use zip to create the rows. Since your lists are unequal in length, use itertools.izip_longest to create blank entries for the end of the shorter list. Use csv.writer's writerows to write the CSV data to a file.

import csv
import itertools

A_List = ["jim", "go"]
B_List = ["kkj", "nmh", 123]

with open("result.csv", "w") as output_file:
    writer = csv.writer(output_file)
    writer.writerows(itertools.izip_longest(A_List, B_List))

Result:

jim,kkj
go,nmh
,123
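For completeness, on Python 3 izip_longest was renamed itertools.zip_longest, and the file should be opened with newline='' per the csv docs; the same approach then looks like:

```python
import csv
import itertools

A_List = ["jim", "go"]
B_List = ["kkj", "nmh", 123]

with open("result.csv", "w", newline="") as output_file:
    writer = csv.writer(output_file)
    # fillvalue="" keeps the shorter column's trailing cells empty
    writer.writerows(itertools.zip_longest(A_List, B_List, fillvalue=""))

with open("result.csv") as f:
    print(f.read())
```

The printed file contents are the same three rows as above, with an empty A3 cell next to 123.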
passing variables in python through functions unicode error Code:

def power(base, exponent):
    result = base**exponent
    print "%d to the power of %d is %d." % (base, exponent, result)

n = raw_input("Enter a number whose power you wish to calculate:")
p = raw_input("Enter the power:")
power(n, p)

Some unicode error comes up while executing; please help. | raw_input returns a string. You have to convert it to an integer, because your function computes the power of a number:

def power(base, exponent):
    result = base**exponent
    print "%d to the power of %d is %d." % (base, exponent, result)

n = int(raw_input("Enter a number whose power you wish to calculate:"))
p = int(raw_input("Enter the power:"))
power(n, p)
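For readers on Python 3, a version of the same fix looks like this (input replaces raw_input, print is a function; returning the result is an addition here so the value can be reused):

```python
def power(base, exponent):
    result = base ** exponent
    print("%d to the power of %d is %d." % (base, exponent, result))
    return result

# input() also returns a string, so the int() conversion is still needed:
# n = int(input("Enter a number whose power you wish to calculate:"))
# p = int(input("Enter the power:"))
power(2, 10)  # prints "2 to the power of 10 is 1024."
```

The underlying lesson is identical on both versions: user input arrives as a string and must be converted before arithmetic.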
Python: AttributeError: 'str' object has no attribute 'font' I am using the following code to write to an excel file. Please correct me.I am parsing an HTML page in this context. My aim is to find the table elements and write it into columns.for row in table.findAll('tr', { "class" : "product-row" }):col = row.findAll('td')i=1Image = col[0].a.img['src']Name = col[1].a.textWidth = col[3].textrecord = (Image,Name,Width)book = xlwt.Workbook(encoding="utf-8")sheet1 = book.add_sheet("Sheet 1")sheet1.write(i, Image, Name, Width)book.save("trial.xls")It shows the following error:---------------------------------------------------------------------------AttributeError Traceback (most recent call last)<ipython-input-179-1d146cb8794d> in <module>() 14 sheet1 = book.add_sheet("Sheet 1") 15 ---> 16 sheet1.write(i, Image, Name, Width) 17 18 book.save("trial.xls")C:\Users\Santosh\Anaconda3\lib\site-packages\xlwt\Worksheet.py in write(self, r, c, label, style) 1086 :class:`~xlwt.Style.XFStyle` object. 1087 """-> 1088 self.row(r).write(c, label, style) 1089 1090 def write_rich_text(self, r, c, rich_text_list, style=Style.default_style):C:\Users\Santosh\Anaconda3\lib\site-packages\xlwt\Row.py in write(self, col, label, style) 233 234 def write(self, col, label, style=Style.default_style):--> 235 self.__adjust_height(style) 236 self.__adjust_bound_col_idx(col) 237 style_index = self.__parent_wb.add_style(style)C:\Users\Santosh\Anaconda3\lib\site-packages\xlwt\Row.py in __adjust_height(self, style) 63 64 def __adjust_height(self, style):---> 65 twips = style.font.height 66 points = float(twips)/20.0 67 # Cell height in pixels can be calcuted by following approx. formula:AttributeError: 'str' object has no attribute 'font | You are not using .write() method correctly. Provide row and column indexes followed by the data you want to write into a cell:record = (Image, Name, Width)for col_index, item in enumerate(record): sheet1.write(i, col_index, item) |
Python Pexpect and Check Point Gaia Expert Mode I administer a few Check Point Firewalls at work that run on the Gaia operating system. Gaia is a hardened, purpose-built Linux OS using the 2.6 kernel. I am a novice at Python and I need to write a script that will enter "expert mode" from the clish shell. Entering expert mode is similar to invoking su as it gives you root privileges in the BASH shell. Clish is a Cisco like custom shell made to ease OS configuration changes. I saw a similar discussion at pexpect and ssh: how to format a string of commands after su - root -c, but people responding recommended sudo. This is not an option for me as sudo is not supported by the OS and if you were to install it, clish would not recognize the command. The goal of my script would be to SSH to the device, login, invoke expert mode, then run grep admin /etc/passwd and date. Again, sudo is not an option. | clish does not support SSH. But you can change the shell of your user to /bin/bash instead of /etc/clish.shset user <myuser> shell /bin/bashsave config |
How can I accelerate the array assignment in python? I am trying to do an array assignment in Python, but it is very slow. Is there any way to accelerate it?

simi_matrix_img = np.zeros((len(annot), len(annot)), dtype='float16')
for i in range(len(annot)):
    for j in range(i + 1):
        score = 0
        times = 0
        if i != j:
            x_idx = [p1 for (p1, q1) in enumerate(annot[i]) if np.abs(q1 - 1) < 1e-5]
            y_idx = [p2 for (p2, q2) in enumerate(annot[j]) if np.abs(q2 - 1) < 1e-5]
            for idx in itertools.product(x_idx, y_idx):
                score += simi_matrix_word[idx]
                times += 1
            simi_matrix_img[i, j] = score/times
        else:
            simi_matrix_img[i, j] = 1.0

"annot" is a numpy array. Is there any way to accelerate it? | (1) You could use generators instead of list comprehensions where possible. For example:

x_idx = (p1 for (p1, q1) in enumerate(annot[i]) if np.abs(q1 - 1) < 1e-5)
y_idx = (p2 for (p2, q2) in enumerate(annot[j]) if np.abs(q2 - 1) < 1e-5)

With this, you iterate only once over those items (in for idx in itertools.product(x_idx, y_idx)), as opposed to twice (once for constructing the list, then again in said for loop). (2) What Python are you using? If it's older than Python 3, I have a hunch that a significant part of the problem is your use of range(), which can be expensive with really large ranges (as I'm assuming you have here). In Python 2.7, range() actually constructs lists (not so in Python 3), which can be an expensive operation. Try achieving the same result using a simple while loop. For example, instead of for i in range(len(annot)), do:

i = 0
while i < len(annot):
    ... do stuff with i ...
    i += 1

(3) Why call len(annot) so many times? It doesn't seem like you're mutating annot. Although len(annot) is a fast O(1) operation, you could store the length in a variable, e.g., annot_len = len(annot), and then just reference that. It wouldn't scrape much off though, I'm afraid.
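To make the answer's suggestions concrete, here is a minimal, NumPy-free sketch of the inner per-pair computation with the length cached once; annot, the 0/1 flags, and the lookup table are toy stand-ins for the asker's data, not their real arrays:

```python
import itertools

# toy stand-in for the asker's annot array: rows of 0/1 flags
annot = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
annot_len = len(annot)  # suggestion (3): compute the length once

def mean_pair_score(row_i, row_j, score_lookup):
    # indices where the flag is (approximately) 1
    x_idx = [p for p, q in enumerate(row_i) if abs(q - 1) < 1e-5]
    y_idx = [p for p, q in enumerate(row_j) if abs(q - 1) < 1e-5]
    pairs = list(itertools.product(x_idx, y_idx))
    total = sum(score_lookup[a][b] for a, b in pairs)  # generator, no temp list
    return total / len(pairs)

# identity-like lookup table playing the role of simi_matrix_word
lookup = [[float(a == b) for b in range(3)] for a in range(3)]
print(mean_pair_score(annot[0], annot[1], lookup))  # → 0.25
```

This restructuring only trims constant factors; for real speed on large NumPy arrays, vectorizing the pair accumulation would be the next step.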
Is there a generic way to create an URL template in Django? I'm looking for an easy way to compose URLs in the frontend with JavaScript. Let's take the URL patterns from the Django tutorial as an example:

urlpatterns = [
    # ex: /polls/
    url(r'^$', views.index, name='index'),
    # ex: /polls/5/
    url(r'^(?P<question_id>[0-9]+)/$', views.detail, name='detail'),
    # ex: /polls/5/results/
    url(r'^(?P<question_id>[0-9]+)/results/$', views.results, name='results'),
    # ex: /polls/5/vote/
    url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
]

I'd like to create a template from a URL name reference (I'm using MustacheJS in the example, but I'm not fixed on any particular syntax):

assert get_url_template('detail') == "/polls/{{ question_id }}/"
assert get_url_template('results') == "/polls/{{ question_id }}/results/"

In the frontend, I would simply supply question_id and get a complete URL. Of course, this has to work for most if not all URL patterns. Is there any easy way to do this or do I have to hack something together based on Django's urlresolver? | I've managed to hack together something simple which does what I wanted; maybe it'll be useful for someone. It's a template tag which replaces named groups with {{ name }} sequences and takes the URL name as a single parameter.

import re

from django import template
from django.core.urlresolvers import get_resolver, get_urlconf

register = template.Library()

@register.simple_tag
def data_url(urlname):
    urlconf = get_urlconf()
    resolver = get_resolver(urlconf)
    if urlname not in resolver.reverse_dict:
        return ""
    url = resolver.reverse_dict[urlname][0][-1][0]
    url = re.sub(r'%\((.*?)\)s', '{{ \g<1> }}', url)
    return "/%s" % url
IndexError when applying setblocking(0) for a Blender3D Python script I'm currently running a script with Blender3D I ported from Python 2+ to Python 3+ with the help of someone from Stackoverflow. The script creates communication between a OMRON PLC (Programmable logic computer) and Blender/Python3+. The script uses TCP communication to write and read the PLC's memory. After the port the script ran properly in Blender the only problem it creates a massive amount of lag and forces Blender to run at 6 fps. Python sockets are blocking by default. What that means is that when you call socket.read() the function will not return until your data has been read or there is an error on the socket. i.e. The socket "blocks" execution until the operation has completed. Because your code gets blocked in the recv() call, your game freezes." LINKIn the link there is said that if you add setblocking(0) or setblocking(False) or settimeout(0) (All the same thing), you will cancel the delay that is caused by the TCP communication.In order to make it work I had to wrap my recv() call in a try, except in order to avoid a socket error: BlockingIOError: [WinError 10035] A non-blocking socket operation could not be completed immediately. I've done it like this: def _recieve(self): try: pr = self.sock.recv(8) length = binstr2int( pr[4:8]) r = pr + self.sock.recv( length) #print (' Recv:' + repr(r)) return r except socket.error as err: # Any error but "Would block" should cause the socket to close if err.errno != errno.EWOULDBLOCK: self.sock.close() self.sock = None returnHowever this didn't solve my problem, this creates more problems. The script will run fine but at 6fps when I have setblocking(1). 
But when I turn it on it gives me a Indexerror:File "D:\...", line 126, in disassembled asm[ b'ICF'] = binstr2int( self.rawTcpFrame[16])IndexError: index out of range'This is the area where the error occurs, if I comment out the line it will just say the error occurs in the next self.rawTcpFrame[17] for example: def disassembled(self): asm = { b"header" : binstr2int( self.rawTcpFrame[ 0: 4] ), b'length' : binstr2int( self.rawTcpFrame[ 4: 8] ), b'command' : binstr2int( self.rawTcpFrame[ 8:12] ), b'errCode' : binstr2int( self.rawTcpFrame[12:16] ), } if( asm[b'command'] == 2) : asm[ b'ICF'] = binstr2int( self.rawTcpFrame[16]) asm[ b'RSV'] = binstr2int( self.rawTcpFrame[17]) asm[ b'GCT'] = binstr2int( self.rawTcpFrame[18]) asm[ b'DNA'] = binstr2int( self.rawTcpFrame[19]) asm[ b'DA1'] = binstr2int( self.rawTcpFrame[20]) asm[ b'DA2'] = binstr2int( self.rawTcpFrame[21]) asm[ b'SNA'] = binstr2int( self.rawTcpFrame[22]) asm[ b'SA1'] = binstr2int( self.rawTcpFrame[23]) asm[ b'SA2'] = binstr2int( self.rawTcpFrame[24]) asm[ b'SID'] = binstr2int( self.rawTcpFrame[25]) asm[ b'MRC'] = binstr2int( self.rawTcpFrame[26]) asm[ b'SRC'] = binstr2int( self.rawTcpFrame[27]) if self.fromRaw : #decode from response asm[ b'MRES'] = binstr2int( self.rawTcpFrame[28]) asm[ b'SRES'] = binstr2int( self.rawTcpFrame[29]) asm[b'response'] = self.rawTcpFrame[30:] else : asm[b'cmd'] = self.rawTcpFrame[28:] return asmThe entire script can be found here. | Your message is not long enough (sequence is shorter than 17). You should test length of frame, or use zip and slice to be sure, you don't try to call index that does not exist:tags = (b'ICF', b'RSV', b'GCT', b'DNA', b'DA1', b'DA2', b'SNA', b'SA1', b'SA2', b'SID', b'MRC', b'SRC')for tag, data in zip(tags, self.rawTcpFrame[16:]): asm[tag] = binstr2int(data)or you can wrap everything in try: ... except IndexError: and handle too short frame there. |
Python - must we have an __init__.py in every step of a directory in Windows? Let's say that, on Windows with Python 2.7, we want to either run a command from the GUI by typing it and hitting Enter, or right-click on the Python file, choose Edit with IDLE, and when IDLE pops up press F5 inside IDLE. Now assume that we have a simplistic file directory like C:\A\B\C\D and inside folder D we want to run example.py. We want to do it in one of two ways: by opening it with a right-click, choosing Edit with IDLE, and pressing F5; or by picking up every line of code and typing it into the Python GUI. Let us say that we type the command:

from C.D import example

Please bear in mind that C and D are folders and inside folder D we have example.py. We also set up the environment variable C:\A\B\C in PYTHONPATH so that Python can see folder D as a module and inside of it operate with the example.py submodule. Is it necessary to have an __init__.py in every folder before folder D? If folder A or folder B does NOT have an __init__.py, can Python (remember we have set the environment variable) get to folder C? Is it a must that in every step of the way, i.e. in every folder in the directory, we have an __init__.py? | You only need to add __init__.py to directories that must be treated as packages. Any folder outside the PYTHONPATH search path is never a package and doesn't need to have an __init__.py. You don't need __init__.py in the top-level directory listed on PYTHONPATH either, because that's not a package. C is not a package here; it is a folder on the module search path. You cannot import from C; only D is a package. For your setup, if C:\A\B\C is on the module search path, then only items in C can be imported.
C:\A\B\C\module.py is importable as module, and C:\A\B\C\D\ is importable as import D if there is an __init__.py file in that directory. Python uses the search path to find top-level names you import; if you try to import foo, the search path is tried one entry at a time, each a location on your hard disk. C:\A\B\C is one such location: Python will look inside that location, but the directory name of the location itself is never part of the search space. See the Python tutorial on packages for more information.
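A quick way to see this rule in action without touching PYTHONPATH is to build a throwaway directory tree and put its root on sys.path: only the folder with an __init__.py becomes a package, while the root itself (playing the role of C:\A\B\C) needs none. The directory and module names here are illustrative:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()        # plays the role of C:\A\B\C (a search-path entry)
pkg = os.path.join(root, "D")    # plays the role of the D package
os.makedirs(pkg)

# D needs __init__.py to be a package; the root directory does not
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "example.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path.insert(0, root)         # equivalent to adding root to PYTHONPATH
example = importlib.import_module("D.example")
print(example.ANSWER)            # → 42
```

Deleting D's __init__.py makes the same import fail on Python 2.7 (on Python 3.3+ it would still work as a namespace package, a wrinkle that does not apply to the asker's version).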
Parallel asynchronous IO in Python's coroutines Simple example: I need to make two unrelated HTTP requests in parallel. What's the simplest way to do that? I expect it to look like this:

async def do_the_job():
    with aiohttp.ClientSession() as session:
        coro_1 = session.get('http://httpbin.org/get')
        coro_2 = session.get('http://httpbin.org/ip')
        return combine_responses(await coro_1, await coro_2)

In other words, I want to initiate IO operations and wait for their results so they effectively run in parallel. This can be achieved with asyncio.gather:

async def do_the_job():
    with aiohttp.ClientSession() as session:
        coro_1 = session.get('http://example.com/get')
        coro_2 = session.get('http://example.org/tp')
        return combine_responses(*(await asyncio.gather(coro_1, coro_2)))

Next, I want to have some complex dependency structure. I want to start operations when I have all their prerequisites and get results when I need them. Here asyncio.ensure_future helps, which makes a separate task from a coroutine that the event loop manages separately:

async def do_the_job():
    with aiohttp.ClientSession() as session:
        fut_1 = asyncio.ensure_future(session.get('http://httpbin.org/ip'))
        coro_2 = session.get('http://httpbin.org/get')
        coro_3 = session.post('http://httpbin.org/post', data=(await coro_2))
        coro_3_result = await coro_3
        return combine_responses(await fut_1, coro_3_result)

Is it true that, to achieve parallel non-blocking IO with coroutines in my logic flow, I have to use either asyncio.ensure_future or asyncio.gather (which actually uses asyncio.ensure_future)? Is there a less "verbose" way? Is it true that developers normally have to think about which coroutines should become separate tasks and use the aforementioned functions to gain optimal performance? Is there a point in using coroutines without multiple tasks in the event loop? How "heavy" are event loop tasks in real life? Surely they're "lighter" than OS threads or processes. To what extent should I strive for the minimal possible number of such tasks? | I need to make two unrelated HTTP requests in parallel. What's the simplest way to do that?

import asyncio
import aiohttp

async def request(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()

async def main():
    results = await asyncio.gather(
        request('http://httpbin.org/delay/1'),
        request('http://httpbin.org/delay/1'),
    )
    print(len(results))

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
    loop.run_until_complete(loop.shutdown_asyncgens())
finally:
    loop.close()

Yes, you may achieve concurrency with asyncio.gather or by creating a task with asyncio.ensure_future.

Next, I want to have some complex dependency structure? I want to start operations when I have all prerequisites for them and get results when I need the results.

While the code you provided will do the job, it would be nicer to split the concurrent flows into different coroutines and again use asyncio.gather:

import asyncio
import aiohttp

async def request(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()

async def get_ip():
    return await request('http://httpbin.org/ip')

async def post_from_get():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://httpbin.org/get') as resp:
            get_res = await resp.text()
        async with session.post('http://httpbin.org/post', data=get_res) as resp:
            return await resp.text()

async def main():
    results = await asyncio.gather(
        get_ip(),
        post_from_get(),
    )
    print(len(results))

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
    loop.run_until_complete(loop.shutdown_asyncgens())
finally:
    loop.close()

Is it true that normally developers have to think what coroutines should become separate tasks and use aforementioned functions to gain optimal performance?

Since you use asyncio you probably want to run some jobs concurrently to gain performance, right? asyncio.gather is a way to say: "run these jobs concurrently to get their results faster". In case you don't have to think about which jobs should be run concurrently to gain performance, you may be OK with plain sync code.

Is there a point in using coroutines without multiple tasks in event loop?

In your code you don't have to create tasks manually if you don't want to: both snippets in this answer don't use asyncio.ensure_future. But internally asyncio uses tasks constantly (for example, as you noted, asyncio.gather uses tasks itself).

How "heavy" are event loop tasks in real life? Surely, they're "lighter" than OS threads or processes. To what extent should I strive for minimal possible number of such tasks?

The main bottleneck in an async program is (almost always) the network: you shouldn't worry about the number of asyncio coroutines/tasks at all.
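To back up the "tasks are light" point with something runnable and dependency-free (asyncio.sleep(0) stands in for a tiny network wait): scheduling ten thousand coroutines through asyncio.gather completes almost instantly, which would be unthinkable with ten thousand OS threads:

```python
import asyncio

async def work(i):
    await asyncio.sleep(0)  # yield to the event loop once, like a tiny I/O wait
    return i

async def main():
    # gather wraps each coroutine in a Task; 10,000 of them is cheap
    results = await asyncio.gather(*(work(i) for i in range(10000)))
    return len(results)

print(asyncio.run(main()))  # → 10000
```

Each Task is just a small Python object plus an entry in the loop's ready queue, so the practical advice stands: optimize for clarity of the dependency structure, not for task count.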
How to parse specific strings from an attribute inside xml using etree and xpath I have an XML with two of the same tags and same attribute but different value.<testsuite> <testcase> <GenericItem html="Name: Epsilon&lt;br/&gt;ID: ID-032&lt;br/&gt;Owner: Infinitie &lt;a href=&quot;mailto: infinitie@company.com &quot;&gt;infinitie@company.com&lt;/a&gt;&lt;br/&gt;Revision ID: ataaaa"> Algorithm: </GenericItem> <GenericItem html="No numerical differences between 'Model AB' and 'Baseline A' found.&lt;/font&gt;"> Results for 'Epsilon Model-01': </GenericItem> </testcase> <testcase> <GenericItem html="Name: ZeroG&lt;br/&gt;ID: ID-033&lt;br/&gt;Owner: Lite &lt;a href=&quot;mailto: lite@company.com &quot;&gt;lite@company.com&lt;/a&gt;&lt;br/&gt;Revision ID: ataaab"> Algorithm: </GenericItem> <GenericItem html="No numerical differences between 'Model A' and 'Baseline B' found.&lt;/font&gt;&lt;br/&gt;&lt;font color=&quot;green&quot;&gt;No performance difficulties found.&lt;/font&gt;&lt;br/&gt;&lt;font color=&quot;red&quot;&gt;Target memory footprint more than 1.4 of baseline.&lt;/font&gt;&lt;br/&gt;Applied tolerance from model configuration: 2e-13&lt;br/&gt;&lt;font color=&quot;green&quot;&gt;No numerical differences between 'Baseline B' and 'Target - Model A' found.&lt;/font&gt;"> Results for 'Lite Model': </GenericItem> </testcase></testSuite>The data that I want to get are :Results for '#### Model'Numerical Differences (if not listed then print ("NULL")Performance Difficulties (if not listed then print ("NULL")Memory footprint (if not listed then print ("NULL")Tolerance Model applied (if not listed then print ("NULL")The other numerical differences (if not listed then print ("NULL")The code for now are :from lxml import etree as EThtml_parser = ET.HTMLParser()tree = ET.parse('NewestReport.xml')test = tree.findall('testcase')root = tree.getroot()count = 0count1 = 0i = 0for person in tree.xpath('./testcase'): a = person.xpath('//text()[contains(.,"Results")]')[i] 
print(a.strip()) b = person.xpath('.//@html/text()[contains(., "tolerance")]')[i] if b: print(b.strip()) else: print("None") i = i + 1I manage to get the Results for '#### Model' but for the Tolerance Model applied (if not listed then print ("NULL") it outputs the error: Traceback (most recent call last):IndexError: list index out of rangeI think it is because the no 'tolerance' is found on the first . But I can't seem to fix the error. Can someone help enlighten me on this problem ? | The main problem is that you forgot . to create relative path to - and then you don't need [i] but always [0]a = person.xpath('.//text()[contains(.,"Results")]')[0]And in second part you should search in @html instead of text()And you should also use [0] but after checking ifb = person.xpath('.//@html[contains(., "tolerance")]') if b: print(b[0].strip().split('<br/>'))else: print("None")Minimal working example with data directly in codetext = '''<testsuite> <testcase> <GenericItem html="Name: Epsilon&lt;br/&gt;ID: ID-032&lt;br/&gt;Owner: Infinitie &lt;a href=&quot;mailto: infinitie@company.com &quot;&gt;infinitie@company.com&lt;/a&gt;&lt;br/&gt;Revision ID: ataaaa"> Algorithm: </GenericItem> <GenericItem html="No numerical differences between 'Model AB' and 'Baseline A' found.&lt;/font&gt;"> Results for 'Epsilon Model-01': </GenericItem> </testcase> <testcase> <GenericItem html="Name: ZeroG&lt;br/&gt;ID: ID-033&lt;br/&gt;Owner: Lite &lt;a href=&quot;mailto: lite@company.com &quot;&gt;lite@company.com&lt;/a&gt;&lt;br/&gt;Revision ID: ataaab"> Algorithm: </GenericItem> <GenericItem html="No numerical differences between 'Model A' and 'Baseline B' found.&lt;/font&gt;&lt;br/&gt;&lt;font color=&quot;green&quot;&gt;No performance difficulties found.&lt;/font&gt;&lt;br/&gt;&lt;font color=&quot;red&quot;&gt;Target memory footprint more than 1.4 of baseline.&lt;/font&gt;&lt;br/&gt;Applied tolerance from model configuration: 2e-13&lt;br/&gt;&lt;font color=&quot;green&quot;&gt;No numerical 
differences between 'Baseline B' and 'Target - Model A' found.&lt;/font&gt;"> Results for 'Lite Model': </GenericItem> </testcase></testsuite>'''from lxml import etree as EThtml_parser = ET.HTMLParser()#tree = ET.parse('NewestReport.xml')tree = ET.fromstring(text)for person in tree.xpath('./testcase'): a = person.xpath('.//text()[contains(.,"Results")]')[0] print(a.strip()) print('---') b = person.xpath('.//@html[contains(., "tolerance")]') if b: print(b[0].strip().split('<br/>')) else: print("None") print('----------------')Result:Results for 'Epsilon Model-01':---None----------------Results for 'Lite Model':---No numerical differences between 'Model A' and 'Baseline B' found.</font><font color="green">No performance difficulties found.</font><font color="red">Target memory footprint more than 1.4 of baseline.</font>Applied tolerance from model configuration: 2e-13<font color="green">No numerical differences between 'Baseline B' and 'Target - Model A' found.</font>----------------For getting Numerical Differences, Performance Difficulties,etc. I would use normal string functions or regex. |
Count the number of nodes with the same name in a tree

I'm trying to count the number of nodes with the same name in a tree, but I'm having difficulty. This is what I've tried:

    musics = {'genre': 'music', 'children': [
        {'genre': 'Pop', 'children': [
            {'genre': 'Eurobeat', 'children': []},
            {'genre': 'Austropop', 'children': []},
            {'genre': 'hard rock', 'children': []}]},
        {'genre': 'Latin', 'children': [
            {'genre': 'Eurobeat', 'children': [
                {'genre': 'Chicha', 'children': [
                    {'genre': 'Eurobeat', 'children': []}]}]},
            {'genre': 'Bachata', 'children': []},
            {'genre': 'Criolla', 'children': []}]}]}

    class MUsicNode(object):
        def __init__(self, genre):
            self.genre = genre
            self.children = []

        def add(self, x):
            self.children.append(x)

        def count_name(self, genre):
            name_count = 0
            for node in self.children:
                if node.genre == genre:
                    print "same genre"
                    print "word = ", genre
                    name_count += 1
                node.count_name(genre)
            return name_count

    def create_tree(musics):
        for key, value in musics.items():
            if key == 'genre':
                node = value
                var = MUsicNode(node)
            if key == 'children':
                kid = value
                for n in kid:
                    var.add(create_tree(n))
        return var

    Tree = create_tree(musics)
    print Tree.count_name('Eurobeat'), "<---COUNT FOR NAME 'Eurobeat' "

"Eurobeat" must be 3, but my output is:

    same genre
    word = Eurobeat
    same genre
    word = Eurobeat
    same genre
    word = Eurobeat
    0 <---COUNT FOR NAME 'Eurobeat'

|

Change name_count to be a function parameter, so you can give it a default value of 0 and pass in your current count to the recursive call:

    def count_name(self, genre, name_count=0):
        for node in self.children:
            if node.genre == genre:
                print("same genre")      # you probably know this, but take this
                print("word = ", genre)  # and this line out to only print `3`
                name_count += 1
            name_count = node.count_name(genre, name_count)
        return name_count

This will yield:

    >>> musics = {'genre': 'music', 'children': [{'genre': 'Pop', 'children': [{'genre': 'Eurobeat', 'children': []}, {'genre': 'Austropop', 'children': []}, {'genre': 'hard rock', 'children': []}]}, {'genre': 'Latin', 'children': [{'genre': 'Eurobeat', 'children': [{'genre': 'Chicha', 'children': [{'genre': 'Eurobeat', 'children': []}]}]}, {'genre': 'Bachata', 'children': []}, {'genre': 'Criolla', 'children': []}]}]}
    >>> tree = create_tree(musics)
    >>> tree.count_name('Eurobeat')
    same genre
    word = Eurobeat
    same genre
    word = Eurobeat
    same genre
    word = Eurobeat
    3
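An alternative to threading a counter parameter through the recursion is to let each call return the count for its own subtree and sum the results. A self-contained sketch of that idea (the small Node class here stands in for MUsicNode so the example runs on its own):

```python
class Node:
    def __init__(self, genre, children=None):
        self.genre = genre
        self.children = children or []

def count_name(node, genre):
    # Each child contributes 1 if it matches, plus whatever its own
    # subtree contributes; True/False sum as 1/0 in Python.
    return sum((child.genre == genre) + count_name(child, genre)
               for child in node.children)

tree = Node('music', [
    Node('Pop', [Node('Eurobeat'), Node('Austropop')]),
    Node('Latin', [Node('Eurobeat', [Node('Chicha', [Node('Eurobeat')])])]),
])
print(count_name(tree, 'Eurobeat'))  # 3
```

Because nothing mutable is passed down, there is no counter to accidentally discard, which is exactly the bug in the original code.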
Lost Robot program

This is an ICPC online round question. I checked the sample input and my own imaginary inputs. This is the link to the question. Here is my code, in Python:

    for _ in range(int(input())):
        x1, y1, x2, y2 = map(int, input().split())
        if (x2-x1 == y2-y1) or ((x2-x1 != 0) and (y2-y1 != 0)):
            print('sad')
        elif (x2-x1 == 0 and y2-y1 > 0):
            print('up')
        elif (x2-x1 == 0 and y2-y1 < 0):
            print('down')
        elif (y2 - y1 == 0 and x2 - x1 > 0):
            print('right')
        elif (y2 - y1 == 0 and x2 - x1 < 0):
            print('left')

Can anyone suggest inputs for which this code will not work? If the code is correct for all inputs, any suggested improvement is welcome.

|

Your code already does not parse the input correctly. Have a look:

    python /tmp/test.py
    1
    0 0 0 1
    Traceback (most recent call last):
      File "/tmp/test.py", line 3, in <module>
        x1, y1, x2, y2 = map(int, input().split())
      File "<string>", line 1
        0 0 0 1
              ^

Besides that, it seems to be logically correct. However, your coding style is improvable. Have a look at my version:

    for _ in range(int(input())):
        x1, y1, x2, y2 = map(int, input().split())
        if x1 != x2 and y1 != y2:
            print('sad')
        elif x1 == x2 and y1 < y2:
            print('up')
        elif x1 == x2 and y1 > y2:
            print('down')
        elif y1 == y2 and x1 < x2:
            print('right')
        elif y1 == y2 and x1 > x2:
            print('left')

The only thing I changed was to improve the conditions. Furthermore, I removed the redundant condition at the bottom.

As a former participant of NWERC and ACM-ICPC, I can only suggest keeping an eye on readable code. It is not as important as in production code bases, but it helps a lot when you have to debug printed code, as happens at real competitions.
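To make the direction logic testable on its own, the branch structure from the answer can be factored into a function. This is a sketch, not part of either post; it assumes (as the branches in both versions implicitly do) that the start and target points are distinct:

```python
def direction(x1, y1, x2, y2):
    # The robot moves along exactly one axis; any other target is "sad".
    # Assumes (x1, y1) != (x2, y2).
    if x1 != x2 and y1 != y2:
        return 'sad'
    if x1 == x2:
        return 'up' if y2 > y1 else 'down'
    return 'right' if x2 > x1 else 'left'

print(direction(0, 0, 0, 1))   # up
print(direction(0, 0, -2, 0))  # left
print(direction(1, 1, 2, 3))   # sad
```

Separating the decision from the I/O also sidesteps the Python 2 input() parsing problem the answer demonstrates, since parsing can then be tested independently.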
Get child of box in a dialog in python Gtk3

I'm trying to get a date value from a Calendar in Python Gtk3. The Calendar is inside a dialog. I have the following code:

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    class MyTest(Gtk.Window):
        def __init__(self):
            Gtk.Window.__init__(self, title="Titulo")
            self.connect("delete_event", Gtk.main_quit)
            self.set_border_width(6)
            button = Gtk.Button("Open Dialog")
            button.connect("clicked", self.on_button_clicked)
            self.add(button)

        def on_button_clicked(self, widget):
            dialog = DialogExample(self)
            response = dialog.run()
            if response == Gtk.ResponseType.OK:
                print("OK")
                a = dialog.box.cal
                date = a.get_date()
                print(date)
            elif response == Gtk.ResponseType.CANCEL:
                print("Cancel")
            dialog.destroy()

    class DialogExample(Gtk.Dialog):
        def __init__(self, parent):
            Gtk.Dialog.__init__(self, "My Dialog", parent, 0,
                                (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL,
                                 Gtk.STOCK_OK, Gtk.ResponseType.OK))
            self.set_default_size(150, 100)
            box = self.get_content_area()
            box.set_border_width(6)
            cal = Gtk.Calendar()
            box.add(self.cal)
            self.show_all()

    window = MyTest()
    window.show_all()
    Gtk.main()

I can't seem to get the date value from the calendar. It gives me "Box object has no attribute box". I also tried using get_child with the same result. The box may in the future have two children, a Calendar and another widget. How do I get the date from the Calendar?

|

This can be solved by properly using self:

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    class MyTest(Gtk.Window):
        def __init__(self):
            Gtk.Window.__init__(self, title="Titulo")
            self.connect("delete_event", Gtk.main_quit)
            self.set_border_width(6)
            button = Gtk.Button("Open Dialog")
            button.connect("clicked", self.on_button_clicked)
            self.add(button)

        def on_button_clicked(self, widget):
            dialog = DialogExample(self)
            response = dialog.run()
            if response == Gtk.ResponseType.OK:
                print("OK")
                a = dialog.cal
                date = a.get_date()
                print(date)
            elif response == Gtk.ResponseType.CANCEL:
                print("Cancel")
            dialog.destroy()

    class DialogExample(Gtk.Dialog):
        def __init__(self, parent):
            Gtk.Dialog.__init__(self, "My Dialog", parent, 0,
                                (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL,
                                 Gtk.STOCK_OK, Gtk.ResponseType.OK))
            self.set_default_size(150, 100)
            self.box = self.get_content_area()
            self.box.set_border_width(6)
            self.cal = Gtk.Calendar()
            self.box.add(self.cal)
            self.show_all()

    window = MyTest()
    window.show_all()
    Gtk.main()