Finding out if values in a dataframe increase in the tens place

I'm trying to figure out whether the values in my dataframe are increasing in the tens/hundreds place. For example, I created a dataframe with a few values, duplicated the column, and shifted it, and now I'm able to compare them. But how do I determine whether the tens place is increasing, as opposed to the value increasing only slightly (for example, by 0.02 points)?

```python
import pandas as pd
import numpy as np

data = {'value': ['9', '10', '19', '22', '31']}
df = pd.DataFrame(data)
df['value_copy'] = df['value'].shift(1)
df['Increase'] = np.where(df['value'] < df['value_copy'], 1, 0)
```

The output in this case should be: `[nan, 1, 0, 1, 1]`
IIUC, divide by 10, take the floor, then compare successive values (`diff(1)`) to see if the difference is exactly 1:

```python
np.floor(df['value'].astype(float).div(10)).diff(1).eq(1).astype(int)
```

If you want a jump to at least the next tens (or more), use `ge` (≥):

```python
np.floor(df['value'].astype(float).div(10)).diff(1).ge(1).astype(int)
```

output:

```
0    0
1    1
2    0
3    1
4    1
Name: value, dtype: int64
```

NB. if you insist on the NaN:

```python
s = np.floor(df['value'].astype(float).div(10)).diff(1)
s.eq(1).astype(int).mask(s.isna())
```

output:

```
0    NaN
1    1.0
2    0.0
3    1.0
4    1.0
Name: value, dtype: float64
```
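Put together as a runnable sketch with the question's own data (assuming pandas/numpy are installed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': ['9', '10', '19', '22', '31']})

# floor(value / 10) gives the tens digit; the diff of successive tens
# digits equals 1 exactly when the value crossed into the next tens place
tens = np.floor(df['value'].astype(float).div(10))
increase = tens.diff(1).eq(1).astype(int)
print(increase.tolist())  # [0, 1, 0, 1, 1]
```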
Python: access structure field through its name in a string

In Scapy, I want to compare a number of header fields between any two packets `a` and `b`. The list of fields is predefined, say:

```python
fieldsToCompare = ['tos', 'id', 'len', 'proto']  # IP header
```

Normally I would do it individually:

```python
if a[IP].tos == b[IP].tos:
    ... do stuff ...
```

Is there any way to access those packet fields from a list of strings containing their names? Like:

```python
for field in fieldsToCompare:
    if a[IP].field == b[IP].field:
        ... do stuff ...
```
You can use `getattr()`. These lines are equivalent:

```python
getattr(x, 'foobar')
x.foobar
```

`setattr()` is its counterpart.
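A self-contained sketch of the loop from the question, with a hypothetical `Header` class standing in for the Scapy packet objects:

```python
class Header:
    # Hypothetical stand-in for a Scapy IP header
    def __init__(self, tos, id, proto):
        self.tos, self.id, self.proto = tos, id, proto

a = Header(tos=0, id=1, proto=6)
b = Header(tos=0, id=2, proto=6)

fieldsToCompare = ['tos', 'id', 'proto']
# getattr(obj, name) looks up the attribute by its string name
matching = [f for f in fieldsToCompare
            if getattr(a, f) == getattr(b, f)]
print(matching)  # ['tos', 'proto']
```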
GAE launcher Python installation can't import module six

I just recently launched my local Google App Engine sandbox Python application after not touching it in a while. The following import is giving me problems, but this didn't happen before:

```python
from googleapiclient.discovery import build
```

This results in the import error:

```
ImportError: No module named six
```

I'm not sure what changed with the GAE launcher, but it seems the `six` module is no longer included. I checked on my system and this module is installed globally.
The problem was introduced by some updated libraries included with the Google App Engine launcher. To avoid the import error, you'll need to include the `six` module in your project. It can be found here
How to deal with MinMaxScaler when there is only a single input?

I was working with `MinMaxScaler`. The following input:

```python
data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]]
```

would be transformed into the range (0, 1) as follows:

```
[[0.   0.  ]
 [0.25 0.25]
 [0.5  0.5 ]
 [1.   1.  ]]
```

Now how do I transform a single input? Passing a single input such as:

```python
data = [[1, 18]]
```

will transform as follows:

```
[[0., 0.]]
```

The reason being, there is no min and no max, so everything scales to 0. But what should I do in cases where all I have is a single input and want to pass it to a machine learning model (trained after scaling down both inputs and outputs)? All the outputs will be the same irrespective of the dissimilarity in the data before scaling. This definitely is not right. What can I do here? Is there some way I can represent a single input as something other than a series of 0s?

Here is what I did during the training process:

```python
X_scaler = MinMaxScaler(feature_range=(0, 1))
Y_scaler = MinMaxScaler(feature_range=(0, 1))

# Scale both training inputs and outputs
X_scaled_training = X_scaler.fit_transform(X_training)
Y_scaled_training = Y_scaler.fit_transform(Y_training)

X_scaled_testing = X_scaler.transform(X_testing)
Y_scaled_testing = Y_scaler.transform(Y_testing)
```

During the training process I inverted the value from the model as follows to get the final output score:

```python
Y_predicted = Y_scaler.inverse_transform(Y_predicted_scaled)
```
There are a few plausible interpretations of what you're asking.

If you're interested in using a single variable for your training set, scaling everything to 0 is a reasonable interpretation. Without some sort of domain knowledge or other source of information, the best guess you have is simply the average value (or the median or whatnot, depending on your error function).

If you're interested in using a single variable for your test set, the problem is that you should be using the same min/max scaling that you did for your training data. Since the parameters for that min/max scaling are already pre-determined, your new data will almost certainly not be scaled to 0 (and if it is, that's still the correct choice).

The reason you would want to use the same min/max scaler for your test data (or production data or whatnot) is that for your model to generalize, it needs to be operating on the same kind of data as what you trained it on. One way to think about a machine learning model is that it replicates an observed probability distribution. If you use a different min/max scaler (or any other step in your pre-processing) on your test data, then you will be using information about one probability distribution to attempt to predict what is likely a completely different distribution. That usually doesn't work well.
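A minimal pure-Python sketch of this point, standing in for sklearn's `MinMaxScaler` so the numbers are easy to check by hand: the scaling parameters come from the training set once, and any later single sample reuses them rather than fitting on itself.

```python
# Training data from the question
train = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]]

# Per-column min and max learned from the training data (the "fit" step)
mins = [min(col) for col in zip(*train)]
maxs = [max(col) for col in zip(*train)]

def transform(row):
    # Scale with the *training* parameters, not the row's own min/max
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(row, mins, maxs)]

print(transform([1, 18]))  # [1.0, 1.0] -- not [0.0, 0.0]
print(transform([0, 10]))  # [0.5, 0.5]
```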
Index a Python dictionary by value

Possible Duplicate: Inverse dictionary lookup - Python

Is there a built-in way to index a dictionary by value in Python? E.g. something like:

```python
dict = {'fruit': 'apple', 'colour': 'blue', 'meat': 'beef'}
print key where dict[key] == 'apple'
```

or:

```python
dict = {'fruit': ['apple', 'banana'], 'colour': 'blue'}
print key where 'apple' in dict[key]
```

or do I have to loop over it manually?
You could use a list comprehension:

```python
my_dict = {'fruit': 'apple', 'colour': 'blue', 'meat': 'beef'}
print [key for key, value in my_dict.items() if value == 'apple']
```

The code above is doing almost exactly what you said you want: `print key where dict[key] == 'apple'`. The list comprehension goes through all the key, value pairs given by your dictionary's `items` method and makes a new list of all the keys where the value is `'apple'`.

As Niklas pointed out, this does not work when your values could potentially be lists. You have to be careful about just using `in` in that case, since `'apple' in 'pineapple' == True`. So sticking with a list comprehension approach requires some type checking. You could use a helper function like:

```python
def equals_or_in(target, value):
    """Returns True if the target string equals the value string or,
    if the value is not a string, is contained in the value.
    """
    if isinstance(value, str):
        return target == value
    else:
        return target in value
```

Then the list comprehension below would work:

```python
my_dict = {'fruit': ['apple', 'banana'], 'colour': 'blue'}
print [key for key, value in my_dict.items() if equals_or_in('apple', value)]
```
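For reference, the same approach in Python 3 syntax, as a runnable sketch:

```python
def equals_or_in(target, value):
    # equality for plain strings, membership for containers;
    # the isinstance check avoids 'apple' in 'pineapple' surprises
    if isinstance(value, str):
        return target == value
    return target in value

d = {'fruit': ['apple', 'banana'], 'colour': 'blue'}
keys = [k for k, v in d.items() if equals_or_in('apple', v)]
print(keys)  # ['fruit']
```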
How do I slice a line from a text file in Python?

Currently I am working on a Python password manager. It's not too complex, just a simple command-line interface. I have made one file in which the passwords and usernames are stored in the following format:

```
servicenameusername-usernameinput
servicenamepassword-generatedpassword
```

For example:

```
GOOGLEusername-myusername
GOOGLEpassword-generated password
```

The problem is fetching the password and username from a separate program. I need a way to slice a line in the txt file, e.g.:

```
GOOGLEpassword-password
GOOGLEusername-username
```

I want the program to take the service name as input and then return what follows it. I need to slice it in such a way that it takes the first character of the password and prints until the line ends. Thanks in advance.
Some `string.split()` magic will resolve this. I added some logic to deal with usernames or passwords containing the `-` character.

password.py:

```python
from pprint import pprint

def process_two_lines(line1: str, line2: str) -> dict:
    # determine which of the 2 variables is the password
    # and which is the username
    if 'password' in line1.split('-')[0]:
        passwordline = line1
        userline = line2
    else:
        passwordline = line2
        userline = line1
    # the join is in case the password or username contains a "-" character
    password = '-'.join(passwordline.split('-')[1:]).strip('\n')
    username = '-'.join(userline.split('-')[1:]).strip('\n')
    service = userline.split('username')[0]
    return {
        'username': username,
        'password': password,
        'service': service
    }

def get_login_info(filename: str) -> dict:
    # read file
    with open(filename) as infile:
        filecontents = infile.readlines()
    result = []
    # go through lines by pair
    for index, line1 in enumerate(filecontents[::2]):
        result.append(process_two_lines(line1, filecontents[(index*2)+1]))
    return result

logininfo = get_login_info('test1.txt')
pprint(logininfo)
print('----------\n\n')
for index, line in enumerate(logininfo):
    print(f'{index:2}: {line["service"]}')
service = int(input('Please select the service for which you want the username/password:\n'))
print(f'Username:\n{logininfo[service]["username"]}\nPassword:\n{logininfo[service]["password"]}\n')
```

test1.txt:

```
TESTACCOUNTusername-test-user
TESTACCOUNTpassword-0R/bL----d?>[G
GOOGLEusername-google
GOOGLEpassword-V*biw:Y<%6k`?JI)r}tC
STACKOVERFLOWusername-testing
STACKOVERFLOWpassword-5AmT-S)My;>3lh"
```

output:

```
[{'password': '0R/bL----d?>[G',
  'service': 'TESTACCOUNT',
  'username': 'test-user'},
 {'password': 'V*biw:Y<%6k`?JI)r}tC',
  'service': 'GOOGLE',
  'username': 'google'},
 {'password': '5AmT-S)My;>3lh"',
  'service': 'STACKOVERFLOW',
  'username': 'testing'}]
----------

 0: TESTACCOUNT
 1: GOOGLE
 2: STACKOVERFLOW
Please select the service for which you want the username/password:
2
Username:
testing
Password:
5AmT-S)My;>3lh"
```

Original answer, password.py:

```python
from pprint import pprint

def get_login_info(filename: str) -> dict:
    with open(filename) as infile:
        filecontents = infile.readlines()
    for line in filecontents:
        if 'password' in line.split('-')[0]:
            # split needed to not get a false positive if username == password
            password = line.split('-')[1].strip('\n')  # part after the - separator, newline removed
            service = line.split('password')[0]  # gets the service part
        elif 'username' in line.split('-')[0]:
            # split needed to not get a false positive if password == username
            username = line.split('-')[1].strip('\n')
    result = {
        'username': username,
        'password': password,
        'service': service
    }
    return result

pprint(get_login_info('test0.txt'))
pprint(get_login_info('test1.txt'))
```

output:

```
{'password': 'generatedpassword',
 'service': 'servicename',
 'username': 'usernameinput'}
{'password': 'generated password',
 'service': 'GOOGLE',
 'username': 'myusername'}
```
Create scatterplot in pandas using row as index and row as data

First off, let me say that I just began using the pandas module a few days ago, so apologies if there is a simple solution to this that I was unaware of. I am trying to make a scatterplot in pandas using a specific row as the index (for the x-axis) and a specific row as the data to be plotted, executed across all columns. Example:

```
df:
   col1  col2  col3
0  0     0     -1
1  0.88  1     8.12
2  1     -1    1
3  0     0     0
4  0.3   1     3.4
```

So the idea is to create a scatterplot using row 4 as the index (x-axis) and row 1 as the data (y-axis), where every column represents a point. Any ideas?
```python
In [3]: df.T.plot(kind='scatter', x=4, y=1)
```

The `.T` transposes the matrix, so row 4 is now the column you use as the x-axis.
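To see why this works without invoking the plot itself, here is a sketch of the transposition step on the question's data (assuming pandas is installed): after `.T`, the original row labels become columns, so rows 4 and 1 are addressable as the x and y columns.

```python
import pandas as pd

df = pd.DataFrame({'col1': [0, 0.88, 1, 0, 0.3],
                   'col2': [0, 1, -1, 0, 1],
                   'col3': [-1, 8.12, 1, 0, 3.4]})

t = df.T
print(list(t.columns))  # [0, 1, 2, 3, 4] -- the old row labels
print(t[4].tolist())    # [0.3, 1.0, 3.4] -- old row 4, one value per original column
```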
What are the differences with the code below?

I'm writing a few pieces of code and I've got some questions.

```python
def fanction(tata):
    for i in range(0, 4):
        tata + i
        print tata

tata = 0
fanction(tata)
```

The results are:

```
0
0
0
0
```

Second piece of code:

```python
def fbnction(tbtb):
    for i in range(0, 4):
        print tbtb + i

tbtb = 0
fbnction(tbtb)
```

The results are:

```
0
1
2
3
```

Third piece of code:

```python
def fcnction(tctc):
    for i in range(0, 4):
        print tctc, tctc + i

tctc = 0
fcnction(tctc)
```

The results are:

```
0 0
0 1
0 2
0 3
```

Can someone explain why the results differ? I thought they would all give the same results.
`tata + i` does not change `tata`; it just returns the added value. If you want `tata` to be changed, you need to assign the result back:

```python
tata = tata + i
```

or

```python
tata += i
```

```python
>>> tata = 5
>>> i = 2
>>> tata + i
7
>>> tata  # not changed
5
>>> tata = tata + i
>>> tata  # changed
7
```
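Applied to the first function from the question, the fix looks like this (rewritten to return the value rather than print, so the result is easy to check):

```python
def fanction(tata):
    for i in range(4):
        tata += i      # assign the result back each iteration
    return tata

print(fanction(0))  # 0 + 0 + 1 + 2 + 3 = 6
```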
Basics of string-based protocol security

I wasn't sure how to phrase this question, so apologies in advance if it's a duplicate of something else. I wanted to sanity-check how I've secured my Twisted-based application. I think I've done a good job at it, but it's been over a decade since I've written anything that uses raw or managed sockets.

Authentication transaction: the client connects and immediately a challenge response is sent back with a 16-character hex string. The client side takes username and password, the password is converted to sha1(salt + sha1(password)), and the credentials are sent back to the server as {username, password}. On the server side, authentication does the standard lookup pattern (if the user exists and has a password equal to the input, then grant). If the connection between user and client is lost, the protocol class marks itself as dirty and disconnects itself from the user object. Any time after this point, to get access to the user object again, the client would have to repeat the authentication process with a new salt.

Am I missing something? Is there a better/more secure approach for a character-stream-based protocol?
The protocol you described addresses one attack: the replay attack. However, you are very vulnerable to MITM attacks. The TCP connection won't drop when the attacker moves in on the protocol. Furthermore, anything transferred over this system can be sniffed. If you are on the wireless at a cafe, everyone in the area will be able to sniff everything that is transmitted and then MITM the authenticated session. Another point is that sha1() is proven to be insecure; you should use sha256 for anything security-related. NEVER REINVENT THE WHEEL, especially when it comes to security.

Use SSL! Everyone uses SSL and it has a LONG history of proven security, and that is something you can't build yourself. SSL not only solves man-in-the-middle attacks, but you can also use certificates instead of passwords to authenticate both client and server, which makes you immune to brute force. The sun will burn out before an attacker can brute-force a 2048-bit RSA certificate. Furthermore, you don't have to worry about an eavesdropper sniffing the transmission.

Keep in mind that OpenSSL is FREE, generating certificates is FREE, and the signing of certificates is FREE. The only reason you would want a certificate signed is if you want to implement a PKI, which is probably not necessary. The client can have the server's public key hard-coded to verify the connection, and the server can have a database of client public keys. This system would be self-contained and not require OCSP, a CRL, or any other part of a public key infrastructure.
Preserving argument default values while method chaining

If I have to wrap an existing method, let us say `wrapee()`, with a new method, say `wrapper()`, and `wrapee()` provides default values for some arguments, how do I preserve its semantics without introducing unnecessary dependencies and maintenance? The goal is to be able to use `wrapper()` in place of `wrapee()` without having to change the client code. E.g., if `wrapee()` is defined as:

```python
def wrapee(param1, param2="Some Value"):
    # Do something
```

Then one way to define `wrapper()` is:

```python
def wrapper(param1, param2="Some Value"):
    # Do something
    wrapee(param1, param2)
    # Do something else.
```

However, `wrapper()` has to make assumptions about the default value for `param2`, which I don't like. If I had control over `wrapee()`, I would define it like this:

```python
def wrapee(param1, param2=None):
    param2 = param2 or "Some Value"
    # Do something
```

Then `wrapper()` would change to:

```python
def wrapper(param1, param2=None):
    # Do something
    wrapee(param1, param2)
    # Do something else.
```

If I don't have control over how `wrapee()` is defined, how best to define `wrapper()`? One option that comes to mind is to create a dict of the non-None arguments and pass it as dictionary arguments, but it seems unnecessarily tedious.

Update: the solution is to use both the list and dictionary arguments like this:

```python
def wrapper(param1, *args, **argv):
    # Do something
    wrapee(param1, *args, **argv)
    # Do something else.
```

All of the following calls are then valid:

```python
wrapper('test1')
wrapper('test1', 'test2')
wrapper('test1', param2='test2')
wrapper(param2='test2', param1='test1')
```
Check out argument lists in the Python docs.

```python
>>> def wrapper(param1, *stuff, **kargs):
...     print(param1)
...     print(stuff)
...     print(kargs)
...
>>> wrapper(3, 4, 5, foo=2)
3
(4, 5)
{'foo': 2}
```

Then to pass the args along:

```python
wrapee(param1, *stuff, **kargs)
```

The `*stuff` is a variable number of non-named arguments, and the `**kargs` is a variable number of named arguments.
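A runnable sketch of the forwarding pattern, showing that `wrapee`'s own default still applies when the wrapper's caller omits the argument:

```python
def wrapee(param1, param2="Some Value"):
    return (param1, param2)

def wrapper(param1, *stuff, **kargs):
    # forward everything unchanged; the wrapper never needs to
    # know (or duplicate) wrapee's default for param2
    return wrapee(param1, *stuff, **kargs)

print(wrapper('a'))              # ('a', 'Some Value')
print(wrapper('a', param2='b'))  # ('a', 'b')
```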
Python Selenium: send CSS with !important

I would like to change the CSS of an element as follows. It works fine when:

```python
browser.execute_script(
    "arguments[0].style.display = 'block';",
    browser.find_element_by_xpath("//div[@role='main']/div/div/div["+str(d)+"]/div["+str(r)+"]/div/div[2]"))
```

but when I try to add the `!important`, the style is not updated:

```python
browser.execute_script(
    "arguments[0].style.display = 'block!important';",
    browser.find_element_by_xpath("//div[@role='main']/div/div/div["+str(d)+"]/div["+str(r)+"]/div/div[2]"))
```
The statement `element.style.display = 'block'` only works for setting valid property values. Since `'block !important'` is not a recognized value, it will not be applied; `!important` is part of the declaration, not the value itself.

You can use `.setProperty()` instead, which lets you pass the priority separately from the value. Use `'important'` as a string and don't add the exclamation point:

```python
browser.execute_script(
    "arguments[0].style.setProperty('display', 'block', 'important');",
    browser.find_element_by_xpath("//div")
)
```
Extrapolating data from a curve using Python

I am trying to extrapolate future data points from a data set that contains one continuous value per day for almost 600 days. I am currently fitting a 1st-order function to the data using `numpy.polyfit` and `numpy.poly1d`. In the graph below you can see the curve (blue) and the 1st-order function (green). The x-axis is days since the beginning. I am looking for an effective way to model this curve in Python in order to extrapolate future data points as accurately as possible. A linear regression isn't accurate enough, and I'm unaware of any methods of nonlinear regression that can work in this instance.

This solution isn't accurate enough:

```python
x = dfnew["days_since"]
y = dfnew["nonbrand"]
z = numpy.polyfit(x, y, 1)
f = numpy.poly1d(z)

x_new = future_days
y_new = f(x_new)

plt.plot(x, y, '.', x_new, y_new, '-')
```

EDIT: I have now tried `curve_fit` using a logarithmic function, as the curve and data behaviour seem to conform to it:

```python
def func(x, a, b):
    return a*numpy.log(x) + b

x = dfnew["days_since"]
y = dfnew["nonbrand"]
popt, pcov = curve_fit(func, x, y)

plt.plot(future_days, func(future_days, *popt), '-')
```

However, when I plot it, my y-values are way off.
The very general rule of thumb is that if your fitting function is not fitting your actual data well enough, then either:

- You are using the function wrong — e.g. you are using 1st-order polynomials. So if you are convinced that it is a polynomial, try higher-order polynomials.
- You are using the wrong function. It is always worth taking a look at your data curve and at what you know about the process that is generating the data, to come up with some speculation/theories/guesses about what sort of model might fit better. Might your process be a logarithmic one, a saturating one, etc.? Try them!

Finally, if you are not getting a consistent long-term trend, you might be able to justify using cubic splines.
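One concrete note on the logarithmic model from the question: since a*log(x) + b is linear in log(x), you can fit it with a plain 1st-order `numpy.polyfit` on log(x), no scipy needed. A sketch on synthetic data (the underlying y = 2*log(x) + 3 is assumed here purely for illustration):

```python
import numpy as np

# Synthetic daily data following y = 2*log(x) + 3
x = np.arange(1, 600, dtype=float)
y = 2.0 * np.log(x) + 3.0

# Fit a straight line in log(x) space to recover a and b
a, b = np.polyfit(np.log(x), y, 1)
print(a, b)
```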
Select a random row from a table using Python

Below is my table (I use MySQL for the database queries):

[Structure of the table]

I want to print questions randomly, taking the questions from the table. How can I do that using Python?
```python
from random import randint
num = randint(1, 5)
```

Then the db query:

```sql
SELECT question FROM your_table WHERE ques_id = num;
```

Alternatively:

```sql
SELECT question FROM your_table LIMIT num-1, 1;
```

`num` would be a random number between 1 and 5; replace `num` in the query and it only returns 1 row. Be aware that `LIMIT` starts from index 0, therefore the first argument should be `num-1` rather than `num`; the second argument is always 1 because you only want to get one row per query.
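A self-contained sketch of the same idea using the stdlib `sqlite3` module in place of MySQL (the table and column names here are made up for illustration), with the random number passed as a query parameter:

```python
import sqlite3
from random import randint

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (ques_id INTEGER PRIMARY KEY, question TEXT)")
rows = [(i, f"question {i}") for i in range(1, 6)]
conn.executemany("INSERT INTO questions VALUES (?, ?)", rows)

# pick a random id, then fetch exactly that row
num = randint(1, 5)
(question,) = conn.execute(
    "SELECT question FROM questions WHERE ques_id = ?", (num,)
).fetchone()
print(question)
```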
Django: set model field choices from another table in models.py

Is it possible to set the choices of a field from another table? For example:

```python
class Initial_Exam(models.Model):
    Question_category = models.CharField(max_length=20, choices=Job.Job_Position)

class Job(models.Model):
    Job_Position = models.CharField(max_length=30, null=True)
```

Something like that.
To close this: as commented above, instead of twisting my implementation, setting the foreign key on `Initial_Exam` and using `__unicode__` on `Job` did the job. It should look like this:

```python
class Job(models.Model):
    Job_Position = models.CharField(max_length=30, null=True)

    def __unicode__(self):
        return self.Job_Position
```

That displays the `Job_Position` values as choices in the admin panel. I thank the community, really appreciate it.
Splitter window display issue

I am writing a program which has a `TreeCtrl` on the left and a `RichTextCtrl` on the right. Following is the code for the splitter, panel, and other elements. The problem is that on Windows, the bottom of the tree control and text control is hidden. The status bar covers the bottom of the splitter, but even after removing the status bar I cannot see the bottom of the tree control (it hides up to 6 rows).

```python
self.panel = wx.Panel(self, wx.ID_ANY)
self.splitter = wx.SplitterWindow(
    self.panel, -1, size=wx.DisplaySize(), style=wx.SP_LIVE_UPDATE)
self.splitter.SetMinimumPaneSize(5)
self.datatree = wx.TreeCtrl(self.splitter, 1, style=wx.TR_HIDE_ROOT|wx.TR_ROW_LINES)
self.display = wx.richtext.RichTextCtrl(
    self.splitter, 1, style=wx.VSCROLL|wx.HSCROLL|wx.WANTS_CHARS)
self.display.SetFont(self.displayfont)
self.handler = wx.richtext.RichTextXMLHandler()
self.splitter.SplitVertically(self.datatree, self.display)
self.logger = self.CreateStatusBar()
```
I think the issue here may be that you've explicitly told the `SplitterWindow` to take up the entire display size. Try omitting the `size` argument to the constructor, or adjust it down some, to see if that has any effect.

If omitting the size parameter does not help, I'd suggest creating panels with sizers that contain your tree and your rich text control, then splitting those panels vertically within the splitter window.
What is a NameError and how can I fix it?

I have defined a function to return the indices of certain occurrences in a list. However, when I try to run my doctests, it returns a `NameError`, but exits with exit code 0, meaning there's no problem with it.

```python
def build_placements(shoes):
    """Return a dictionary where each key is a company, and each value is a list
    of placements by people wearing shoes made by that company.

    >>> result = build_placements(['Saucony', 'Asics', 'Asics', 'NB', 'Saucony', 'Nike', 'Asics', 'Adidas', 'Saucony', 'Asics'])
    >>> result == {'Saucony': [1, 5, 9], 'Asics': [2, 3, 7, 10], 'NB': [4], 'Nike': [6], 'Adidas': [8]}
    True
    """
    empty_dict = {}
    for item in shoes:
        indices = [i for i, x in enumerate(shoes) if x == item]
        for value in item:
            value += 1
        empty_dict[item] = indices
    return empty_dict

if __name__ == '__main__':
    import doctest
    doctest.testmod()
```

After running this, I get this error message:

```
NameError: name 'result' is not defined
```

I don't understand what part of my code is causing this.
Your error is here:

```python
for value in item:
    value += 1
```

`item` is an element of the `shoes` list, and all elements of `shoes` are strings. You cannot add an integer to a string, so `value += 1` raises an error, and because of this the whole function fails. You never get a return value; hence the `NameError` for `result`.

At first glance, you don't need those two lines at all: you never use `value` afterwards. Try removing them.

EDIT: If the intent behind those lines was to increase each index value by 1, as I suspect from the docstring (thanks @ekhumoro for making me realize this), you can just edit the list comprehension to achieve the intended goal:

```python
indices = [i+1 for i, x in enumerate(shoes) if x == item]
```
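For completeness, here is one possible restructuring of the whole function as a single pass (a sketch, not the only way): each 1-based position is appended to its brand's list as we go, which avoids re-scanning the list for every item.

```python
def build_placements(shoes):
    placements = {}
    for position, brand in enumerate(shoes, start=1):
        # setdefault creates the list the first time a brand appears
        placements.setdefault(brand, []).append(position)
    return placements

result = build_placements(['Saucony', 'Asics', 'Asics', 'NB', 'Saucony',
                           'Nike', 'Asics', 'Adidas', 'Saucony', 'Asics'])
print(result)
# {'Saucony': [1, 5, 9], 'Asics': [2, 3, 7, 10], 'NB': [4], 'Nike': [6], 'Adidas': [8]}
```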
Django: object is not iterable when trying to instantiate an object

I am getting `'Players' object is not iterable` when I try to save some form data, and I don't understand why.

Here is my `RequestedPartners` model:

```python
class RequestedPartners(models.Model):
    first_nm = models.CharField('Requested Partner First Name', max_length=100)
    last_nm = models.CharField('Requested Partner Last Name', max_length=100)
    player = models.ManyToManyField(Players)
```

Here is my form:

```python
class RequestedPartnersForm(forms.ModelForm):
    class Meta:
        model = RequestedPartners
        fields = ['first_nm', 'last_nm']

    def clean_first_nm(self):
        return self.cleaned_data['first_nm'].upper()

    def clean_last_nm(self):
        return self.cleaned_data['last_nm'].upper()
```

Here is the `Players` model:

```python
class Players(models.Model):
    first_nm = models.CharField('First Name', max_length=100)
    last_nm = models.CharField('Last Name', max_length=100)
    email = models.EmailField('Email Address (optional)', max_length=200, null=True)
```

Here is my view method where I'm getting the `'Players' object is not iterable` error:

```python
def post(self, request):
    bound_form = UsersForm(request.POST)
    lineItemsForm = LineItemsForm(request.POST)
    RequestedPartnersFormSet = formset_factory(RequestedPartnersForm)
    formset = RequestedPartnersFormSet(request.POST, request.FILES)
    if bound_form.is_valid() and lineItemsForm.is_valid() and formset.is_valid():
        bound_form.save()
        players = Players()
        players.first_nm = bound_form.cleaned_data['first_nm']
        players.last_nm = bound_form.cleaned_data['last_nm']
        players.email = bound_form.cleaned_data['email']
        players.save()
        for form in formset.cleaned_data:
            rp1 = RequestedPartners()
            rp1.last_nm = form['last_nm']
            rp1.first_nm = form['first_nm']
            rp1.player = players  # Error is being thrown on this line
            rp1.save()
        return redirect(reverse('begin_registration'))
```

What am I doing wrong that is causing this error?
`RequestedPartners.player` is a `ManyToManyField`. As per the documentation, many-to-many fields have a special API when you need to assign values to them. First you need to save the `RequestedPartners` object (so that it has a primary key), then add the players:

```python
rp1 = RequestedPartners()
rp1.last_nm = form['last_nm']
rp1.first_nm = form['first_nm']
rp1.save()
rp1.player.add(players)
rp1.save()
```
Skype4Py MessageStatus not firing consistently

I'm trying to make a basic Skype bot using Skype4Py and have encountered a rather serious error. I am working on 64-bit Windows 7 with 32-bit Python 2.7.8 installed, along with the latest version of Skype4Py.

My main requirement is that the bot watches 5 different Skype chats: four individual chats with four users, and one common chat in which all four users participate. To that end, I have written two different functions handling individual responses and the group chat:

```python
class SkypeBot(object):
    def __init__(self):
        self.skype = Skype4Py.Skype(Events=self)
        self.skype.Attach()
        self.active_chat = find_conference_chat()

    def MessageStatus(self, msg, status):
        if status == Skype4Py.cmsReceived:
            if msg.Chat.Name == self.active_chat.Name:
                msg.Chat.SendMessage(respond_to_group(msg))
            else:
                msg.Chat.SendMessage(respond_to_individual(msg))

bot = SkypeBot()
```

The above code (there's much more to it, but this is the core) is supposed to answer each message that any user sends either privately or in the group chat.

However, there's a problem. Usually this code works just fine: the bot responds to each individual user as well as the group chat. Then, every once in a while (once every ~10 chats), the bot stops responding to individual messages. The `MessageStatus` function simply does not fire, which made me think there may be some other event I need to catch. So I added one general event catcher to the bot:

```python
    def Notify(self, notification):
        print "NOTIFICATION:"
        print notification
        print "=========================="
```

The only purpose of this code was to see if I was missing any event. So I waited a bit, and when the bot did not respond, I checked the printout of the function.

Usually the bot receives several notifications when a message arrives: there's the "chat message received" notification, the chat activity timestamp notification, and some others. The "chat message received" notification is the one that eventually triggers the `MessageStatus` event.

In the case when the bot did not respond, only one notification came through: `CHAT **** ACTIVITY_TIMESTAMP ******`. There was no notification that a chat message was received, so there was no message to respond to. When I manually clicked on my Skype client and focused the window on the received message, the `MessageStatus` event finally fired and the bot responded, but that was way too late.

My question has several parts:

- Is my general code correct? Should my code work OK if Skype4Py worked flawlessly?
- Did anyone else encounter this error where a certain event did not fire?
- If you encountered a similar error, have you solved it? If not, did you at least discover how to consistently reproduce it? I can't even debug it because it appears suddenly and out of nowhere...
Unfortunately, this is probably a bug in the Skype API. This help post indicates that support for the API is being revoked:

> Important: As communicated in this blog post, due to technology improvements we are making to the Skype experience, some features of the API will stop working with Skype for desktop. For example, delivery of chat messages using the API will cease to work. However, we will be extending support for two of the most widely used features – call recording and compatibility with hardware devices – until we determine alternative options or retire the current solution.
AttributeError: type object X has no attribute Y

So I'm a Django noob, although I'm quite familiar with Python syntax. I keep getting this error every time I try to go to my `dashboard/home/` URL:

```
AttributeError at /dashboard/home/
type object 'Member' has no attribute 'dept1'
```

I have created a custom user model as given below:

```python
from django.db import models  # importing database library from Django
from django.contrib.auth.models import User

class Member(models.Model):  # table for members' info
    DEPARTMENTS = (
        ('Quiz', 'Quizzing'),
        ('Design', 'Design'),
        ('Elec', 'Electronics'),
        ('Prog', 'Programming'),
    )
    CLASSES = (  # tuples to store choices for each field
        (9, '9'),  # (actual value to be stored, human-readable value)
        (10, '10'),
        (11, '11'),
        (12, '12'),
    )
    DESIGNATIONS = (
        ('Mem', 'Member'),
        ('ExecMem', 'Executive Member'),
        ('VicePres', 'Vice President'),
        ('Pres', 'President'),
    )

    # to inherit the properties of the base User class in Django,
    # like first_name, last_name, password, username, etc.
    user = models.OneToOneField(User)
    schoolClass = models.IntegerField('Class', choices=CLASSES)
    desig = models.CharField('Designation', max_length=20, choices=DESIGNATIONS)
    dept1 = models.CharField('Department 1', max_length=20, choices=DEPARTMENTS)
    dept2 = models.CharField('Department 2', max_length=20, choices=DEPARTMENTS)
    #proPic = models.ImageField('Profile Picture', upload_to='profile_pics')

    def __unicode__(self):
        return self.user.username
```

Here is my `home` function from views.py:

```python
@login_required(login_url='/dashboard/login/')
def home(request):
    noOfPosts = 10
    post_list1 = DepInfo.objects.filter(dept=Member.dept1)[:noOfPosts]
    post_list2 = DepInfo.objects.filter(dept=Member.dept2)[:noOfPosts]
    context = {'post_list1': post_list1, 'post_list2': post_list2}
    return render(request, 'dashboard/home.html', context)
```

And here is the required part of my `dashboard/home.html` template:

```html
<h2> Welcome back, {{ Member.first_name }} </h2>
{% if Member.desig == 'Mem' %}
<h2> Member </h2>
{% elif Member.desig == 'ExecMem' %}
<h2> {{ Member.dept1 }} Executive </h2>
{% else %}
<h2> {{ Member.desig }} - MINET </h2>
{% endif %}
<h2> Departments: </h2>
<h3> {{ Member.dept1 }} </h3>
<h3> {{ Member.dept2 }} </h3>
```

Here is my admin.py as well:

```python
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.admin import User
from dashboard.models import Member, DepInfo

class MemberInline(admin.StackedInline):
    model = Member
    can_delete = False
    verbose_name_plural = 'member'

class UserAdmin(UserAdmin):
    inlines = (MemberInline, )

admin.site.unregister(User)
admin.site.register(User, UserAdmin)
#admin.site.register(Member)
admin.site.register(DepInfo)
```

After I run `python manage.py shell` and call `print foo.dept1` in the shell, it runs successfully, but it refuses to work properly here. Could you please tell me what's wrong with my code?
You're trying to access the dept1 attribute of the Member class, but you ought to be getting the attribute from an instance of the Member class.So, your view function should look more like this:current_member = Member.objects.get(user = request.user)post_list1 = DepInfo.objects.filter(dept = current_member.dept1)[:noOfPosts]post_list2 = DepInfo.objects.filter(dept = current_member.dept2)[:noOfPosts]
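For clarity, the error itself is plain-Python behavior, not Django-specific: dept1 exists only on instances, never on the Member class object. A minimal illustration (a stand-in class, not the actual Django model):

```python
# Stand-in for the Django model: dept1 is set per instance,
# so the class object itself never has a dept1 attribute.
class Member:
    def __init__(self, dept1):
        self.dept1 = dept1

m = Member("Quiz")
print(m.dept1)                    # Quiz
print(hasattr(Member, "dept1"))   # False -> Member.dept1 raises AttributeError
```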
organizing numbers in numpy I have some numbers in a list which I want to organize with numpy. Here's my code:lst=['99.56','99.76','99.84','100.00','100.00','100.00','100.00','100.00','100.00','99.80','99.43']lst2=[] for i in np.arange(95.0,100.0,0.1): x=0 for j in lst: if float(i)+0.1>= float(j) > float(i): x=x+1 lst2.append(x)lst2=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0] I didn't understand why the last item is 0. There are 6 '100.00'. How come it's not 6? Thanks
The ndarray generated by numpy.arange does generally not include the end value:In [15]: np.arange(99.0, 100., 0.1)Out[15]: array([ 99. , 99.1, 99.2, 99.3, 99.4, 99.5, 99.6, 99.7, 99.8, 99.9])Note that there is a built-in method numpy.histogram to do this for you;In [13]: np.histogram(lst, list(np.arange(95.0,100.2,0.1)))Out[13]: (array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 2, 0, 6, 0]), array([ 95. , 95.1, 95.2, 95.3, 95.4, 95.5, 95.6, 95.7, 95.8, 95.9, 96. , 96.1, 96.2, 96.3, 96.4, 96.5, 96.6, 96.7, 96.8, 96.9, 97. , 97.1, 97.2, 97.3, 97.4, 97.5, 97.6, 97.7, 97.8, 97.9, 98. , 98.1, 98.2, 98.3, 98.4, 98.5, 98.6, 98.7, 98.8, 98.9, 99. , 99.1, 99.2, 99.3, 99.4, 99.5, 99.6, 99.7, 99.8, 99.9, 100. , 100.1, 100.2]))
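A side note on building the bin edges themselves (an assumption about the intent, since the loop is constructing bins): with a float step, np.arange can also drift slightly due to accumulated rounding, so np.linspace, which takes a count instead of a step, is usually safer for bin edges:

```python
import numpy as np

# np.arange excludes the stop value and accumulates floating-point error;
# np.linspace takes an explicit count and includes both endpoints exactly.
edges = np.linspace(95.0, 100.1, 52)  # 52 edges -> 51 bins of width ~0.1
print(edges[0], edges[-1])            # 95.0 100.1
```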
how to convert a python dict object to a java equivalent object? I need to convert some Python code into equivalent Java code. Python makes life very easy for developers by providing lots of shortcut functionality. But now I need to migrate the same to Java. I was wondering what the equivalent of dict objects in Java would be. I have tried using HashMap but life is hell. For starters consider this,# Nodes is a dictionary -> Key : (Name, Strength)for node, (name, strength) in nodes.items(): nodes[node] = (name, new_strength)So how to go about converting this into Java?For starters I used HashMap object so,Map<Integer, List> nodesMap = new HashMap<Integer,List>();/* For iterating over the map */Iterator updateNodeStrengthIterator = nodesMap.entrySet().iterator(); while(updateNodeStrengthIterator.hasNext()){ } My problem is in getting the List part which contains Name & Strength & then updating the Strength part. Is there any feasible way to do this? Should I consider some different data structure? Please help.
It's probably easiest to just create a class for the (Name, Strength) tuple:class NameStrength { public String name; public String strength;}Add getters, setters and a constructor if appropriate.Then you can use the new class in your map:Map<Integer, NameStrength> nodesMap = new HashMap<Integer, NameStrength>();In Java 5 and up, you can iterate like this:for (NameStrength nameStrength : nodesMap.values()) {}or like this:for (Entry<Integer, NameStrength> entry : nodesMap.entrySet()) {}
Writing a function that calculates the average value of 5 parameters from user input I need to build a code that will give the average sum of 5 user input parameters. I have to add all 5 parameters, and then divide the addition by 5. I also have to make sure the function uses the return command to return the average as the value for the function. Since I am a beginner I can't use advanced code. Been stuck for days.I have tried converting the string into an intI have tried converting a list to a string to an intI have tried making multiple variables.from statistics import meana = input("Enter 1st number:")b = input("Enter 2nd number:")c = input("Enter 3rd number:")d = input("Enter 4th number:")e = input("Enter 5th number:")inputs = ['a', 'b', 'c', 'd', 'e']input1 = int(inputs)mean(inputs)import matha = input("Enter 1st number:")b = input("Enter 2nd number:")c = input("Enter 3rd number:")d = input("Enter 4th number:")e = input("Enter 5th number:")inputs = ['a', 'b', 'c', 'd', 'e']avg_mean = "5"totals = int(inputs) / avg_meanI expect to get all 5 user input numbers and divide it by 5 to get the average sum.
A pythonic implementationCreate input directly into a list and repeat with rangeThere's no need to create a separate object for each inputThere's no need to then load those 5 objects into another objectConvert input to int immediately with int(input())statistics.mean works on the entire list[x for x in range(5)] is a list comprehensionf'Input {x} values' is an f-StringThe function accepts a parameter for how many inputs. The default is 5, but that isn't used if you pass some other number when the function is called.from statistics import meandef mean_of_inputs(number_of_inputs: int=5) -> float: return mean([int(input(f'Please input {x + 1} of {number_of_inputs} numbers: ')) for x in range(number_of_inputs)])# Use the functionmean_of_inputs(6)Please input 1 of 6 numbers: 10Please input 2 of 6 numbers: 20Please input 3 of 6 numbers: 30Please input 4 of 6 numbers: 40Please input 5 of 6 numbers: 50Please input 6 of 6 numbers: 6035Alternative 1:No list comprehensionNo imported modulesStores all the inputs in numbers and uses the built-in function, sumdef mean_of_inputs(number_of_inputs: int=5) -> float: numbers = list() for x in range(number_of_inputs): numbers.append(int(input(f'Please input {x + 1} of {number_of_inputs} numbers: '))) return sum(numbers) / number_of_inputsCall the function:mean_of_inputs()Please input 1 of 5 numbers: 2Please input 2 of 5 numbers: 4Please input 3 of 5 numbers: 6Please input 4 of 5 numbers: 8Please input 5 of 5 numbers: 106.0Alternative 2:Don't store the inputs, just add them to a running totaldef mean_of_inputs(number_of_inputs: int=5) -> float: sum_of_inputs = 0 for x in range(number_of_inputs): sum_of_inputs += int(input(f'Please input {x + 1} of {number_of_inputs} numbers: ')) return sum_of_inputs / number_of_inputs
How to get Information about sharpness of Image with Fourier Transformation? i am rookie with Matplotlib, Python, FFT.My Task is to get information about sharpness of a Image with FFT, but how do i get this done? What i have done so far:#getImage:imgArray2 = Camera.GetImage()imgArray2 = cv2.flip(imgArray2, 0)grayImage = Image.fromarray(imgArray2).convert('L')#Fast Fourier Transformation:f = np.fft.fft2(grayImage)#Shift zero frequency to Centerfshift = np.fft.fftshift(f)#Shows Result of FFT:#plt.imshow(np.abs(np.log10(fshift)), cmap='gray')#Try to Plot the result (this code is an example which i tried to modify):N = 600T = 1.0 / 800.0xf = np.linspace(0.0, 1.0 / (2.0 + T), N / 2)plt.plot(xf, 2.0 / N * np.abs(fshift[:N // 2]))plt.title('Fourier Transformation')plt.show()EDIT:Based on the answer of roadrunner66. My new Code:imgArray2 = Camera.GetImage()imgArray2 = cv2.flip(imgArray2, 0)grayImage = Image.fromarray(imgArray2).convert('L')f = np.fft.fft2(grayImage)fshift = np.fft.fftshift(f)magnitude_spectrum = 20 * np.log(np.abs(fshift))x = np.linspace(0, 1, 1024)y = np.linspace(0, 1, 768)X, Y = np.meshgrid(x, y)highpass = 1 - np.exp(- ((X - 0.5) ** 2 + (Y - 0.5) ** 2) * 5)print(np.shape(highpass))f2 = fshift * highpassz3 = np.absolute(np.fft.ifft2(f2))plt.subplot(337)plt.imshow(z3)plt.title('only high frequency content survived')plt.colorbar()plt.subplot(338)plt.imshow(highpass)plt.title('highpass, suppresses \n low frequencies')plt.colorbar()plt.subplot(339)plt.imshow(np.log10(np.abs(fshift * highpass)), cmap='gray')plt.title('FFT*highpass')plt.colorbar()plt.show()Can someone verify if i correctly ported the Code. Must i multiply magnitude and hishpass OR fshift and highpass?Now if i have two pictures which are same, but one is blurry and the other one is sharp. 
Here are the results (links, because I cannot upload pictures directly):https://share-your-photo.com/e69b1128bchttps://share-your-photo.com/1ef71afa07Also a new question: how can I compare two pictures with each other to say which one is sharper without looking at them? I mean, how can I program something like that? Is it possible to compare two arrays and say which one has overall bigger values (do overall bigger values mean sharper)?Currently I am doing something like this:sharpest = 0sharpestFocus = 0# Cam has a Focus Range from 0 to 1000while i < 1000:i = i + 25#Set Focus Value to Camera...a = np.sum(np.log10(np.abs(fshift * highpass)) / np.log10(np.abs(fshift * highpass)).size)if sharpest < a: sharpest = a sharpestFocus = i...This seems to work but it is very slow, because I loop and make 40 FFTs. Is there a faster way to do that?Sorry if this question is stupid, but I am a noob :-)
As the comments pointed out, you are looking for high frequencies (frequencies away from the center of your 2D Fourier plot). I'm giving a synthetic example. I added some noise to make it more similar to a real image.In the 3rd line I'm showing a lowpass filter in the middle, multiply the FFT spectrum to the right with it and inverse transform to get the filtered image on the left. So I suppressed the low frequencies in the image and only the sharp portions stand out now. Try with your image.import numpy as npimport matplotlib.pyplot as p%matplotlib inlinen=200x=np.linspace(0,1,n)y=np.linspace(0,1,n)X,Y=np.meshgrid(x,y)z=np.zeros((n,n))z1= np.sin(2*np.pi*X*5)* np.cos(2*np.pi*Y*20) +1/20*np.random.random(np.shape(z))z2=np.copy(z1)for i in range(30): z2[ i*10: 3+i*10, 100+i*3:103+i*3]=2#Fast Fourier Transformation:def f(z): return np.fft.fftshift(np.fft.fft2(z))highpass=1-np.exp(- ((X-0.5)**2+(Y-0.5)**2)*5)print(np.shape(highpass))f2=f(z2)*highpassz3= np.absolute( np.fft.ifft2(f2)) #Shows Result of FFT:p.figure(figsize=(15,12))p.subplot(331)p.imshow( z1)p.colorbar()p.title('soft features only')p.subplot(333)p.imshow(np.abs( np.log10(f(z1)) ), cmap='gray')p.title('see the spatial frequencies +/-5 from center in x, +/-20 in y')p.colorbar()p.subplot(334)p.imshow( z2)p.colorbar()p.title('add some sharp feature')p.subplot(336)p.imshow(np.abs(np.log10(f(z2))), cmap='gray')p.title('higher frequencies appear ()')p.colorbar()p.subplot(337)p.imshow(z3)p.title('only high frequency content survived')p.colorbar()p.subplot(338)p.imshow( highpass)p.title('highpass, suppresses \n low frequencies')p.colorbar()p.subplot(339)p.imshow( np.log10(np.abs(f(z2)*highpass)), cmap='gray')p.title('FFT*highpass')p.colorbar()
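To reduce each image to a single comparable number, as asked in the follow-up, one possible metric (not the only one) is the mean log-magnitude of the spectrum outside a low-frequency disc. The function name and the cutoff value below are my own choices and would need tuning for real camera images:

```python
import numpy as np

def sharpness_score(gray, cutoff=0.1):
    """Mean log-magnitude of spectral content outside a low-frequency disc.
    Higher means more high-frequency energy, i.e. a sharper image
    (only meaningful when comparing shots of the same scene)."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    # normalized squared distance from the spectrum center
    r2 = ((y - h / 2) / h) ** 2 + ((x - w / 2) / w) ** 2
    highpass = r2 > cutoff ** 2
    return np.mean(np.log10(np.abs(f[highpass]) + 1e-12))

# a checkerboard (sharp) should outscore a flat gray image (the blurry extreme)
sharp = np.indices((32, 32)).sum(axis=0) % 2.0
flat = np.full((32, 32), 0.5)
print(sharpness_score(sharp) > sharpness_score(flat))  # True
```

With a metric like this, the focus sweep only needs one FFT per focus setting, and the per-image score can be compared directly instead of eyeballing spectra.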
Automatically writing headers using csv python I'm trying to write a motif-finding function which takes an amino acid fasta as input and outputs the motifs to an Excel file.My desired output looks like this..SeqName M1 Hits M2 HitsSeq1 MN[A-Z] 3 V[A-Z]R[ML] 2Seq2 MN[A-Z] 0 V[A-Z]R[ML] 5Seq3 MN[A-Z] 1 V[A-Z]R[ML] 0I've been trying to automatically generate the headers since the number of motifs that I'm searching for is more than 2. (In the example above, there are only two: M1 and M2.)The code that I've been working on looks like this..import refrom Bio import SeqIOimport csvimport collectionsdef SearchMotif(f1, motif, f2="motifs1.xls"): with open(f1, 'r') as fin, open(f2,'wb') as fout: writer = csv.writer(fout, delimiter = '\t') writer.writerow(['M%s'%(i+1)for i in range(0,len(motif),1)]) writer.writerow(['Hits' for i in range(0,len(motif),1)])And this generates M1 M2Hits HitsIs there any way that I could make my header row look like my desired output? Hits is static but the M columns increase with the number of motifs. So if I have 5 different types of motifs to search, it will be like..SeqName M1 Hits M2 Hits M3 Hits M4 Hits M5 Hits
It is simple. make header row,>>> headerrow = ['SeqName']>>> for i in range(1,6):... headerrow.append('M%d' % i)... headerrow.append('Hits')...>>> headerrow['SeqName', 'M1', 'Hits', 'M2', 'Hits', 'M3', 'Hits', 'M4', 'Hits', 'M5', 'Hits']and write.writer.writerow(headerrow)
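If you prefer building it in one expression, a flat list comprehension produces the same header row as the loop above (nmotifs stands in for len(motif) from the question's code):

```python
nmotifs = 5  # stand-in for len(motif) in the question's code
headerrow = ['SeqName'] + [col for i in range(1, nmotifs + 1)
                           for col in ('M%d' % i, 'Hits')]
print(headerrow)
# ['SeqName', 'M1', 'Hits', 'M2', 'Hits', 'M3', 'Hits', 'M4', 'Hits', 'M5', 'Hits']
```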
How to store HDF5 (HDF Store) in a Django model field I am currently working on a project where I generate pandas DataFrames as results of analysis. I am developing in Django and would like to use a "data" field in a "Results" model to store the pandas DataFrame.It appears that HDF5(HDF Store) is the most efficient way to store my pandas DataFrames. However, I do not know how to create the custom field in my model to save it. I will show simplified views.py and models.py below to illustrate.models.pyclass Result(model.Model): scenario = models.ForeignKey(Scenario) # HOW DO I Store HDFStore data = models.HDF5Field()views.pyclass AnalysisAPI(View): model = Result def get(self, request): request_dict = request.GET.dict() scenario_id = request_dict['scenario_id'] scenario = Scenario.objects.get(pk=scenario_id) result = self.model.objects.get(scenario=scenario) analysis_results_df = result.data['analysis_results_df'] return JsonResponse( analysis_results_df.to_json(orient="records") ) def post(self, request): request_dict = request.POST.dict() scenario_id = request_dict['scenario_id'] scenario = Scenario.objects.get(pk=scenario_id) record_list = request_dict['record_list'] analysis_results_df = run_analysis(record_list) data = HDFStore('store.h5') data['analysis_results_df'] = analysis_results_df new_result = self.model(scenario=scenario, data=data) new_result.save() return JsonResponse( dict(status="OK", message="Analysis results saved.") )I appreciate any help and I am also open to another storage method, such as Pickle, with similar performance provided I can use it with Django.
You can create a custom Model field that saves your data to a file in storage and saves the relative file path to the database.Here is how you could subclass models.CharField in your app's fields.py:import osfrom django.core.exceptions import ValidationErrorfrom django.core.files.storage import default_storagefrom django.db import modelsfrom django.utils.translation import gettext_lazy as _from pandas import DataFrame, read_hdfclass DataFrameField(models.CharField): """ custom field to save Pandas DataFrame to the hdf5 file format as advised in the official pandas documentation: http://pandas.pydata.org/pandas-docs/stable/io.html#io-perf """ attr_class = DataFrame default_error_messages = { "invalid": _("Please provide a DataFrame object"), } def __init__( self, verbose_name=None, name=None, upload_to="data", storage=None, unique_fields=[], **kwargs ): self.storage = storage or default_storage self.upload_to = upload_to self.unique_fields = unique_fields kwargs.setdefault("max_length", 100) super().__init__(verbose_name, name, **kwargs) def deconstruct(self): name, path, args, kwargs = super().deconstruct() if kwargs.get("max_length") == 100: del kwargs["max_length"] if self.upload_to != "data": kwargs["upload_to"] = self.upload_to if self.storage is not default_storage: kwargs["storage"] = self.storage kwargs["unique_fields"] = self.unique_fields return name, path, args, kwargsThe __init__ and deconstruct methods are very much inspired by the Django original FileField. There is an additional unique_fields parameter that is useful for creating predictable unique file names. def from_db_value(self, value, expression, connection): """ return a DataFrame object from the filepath saved in DB """ if value is None: return value return self.retrieve_dataframe(value) def get_absolute_path(self, value): """ return absolute path based on the value saved in the Database. 
""" return self.storage.path(value) def retrieve_dataframe(self, value): """ return the pandas DataFrame and add filepath as property to Dataframe """ # read dataframe from storage absolute_filepath = self.get_absolute_path(value) dataframe = read_hdf(absolute_filepath) # add relative filepath as instance property for later use dataframe.filepath = value return dataframeYou load the DataFrame to memory from storage with the from_db_value method based on the file path saved in the database.When retrieving the DataFrame, you also add the file path as instance property to it, so that you can use that value when saving the DataFrame back to the database. def pre_save(self, model_instance, add): """ save the dataframe field to an hdf5 field before saving the model """ dataframe = super().pre_save(model_instance, add) if dataframe is None: return dataframe if not isinstance(dataframe, DataFrame): raise ValidationError( self.error_messages["invalid"], code="invalid", ) self.save_dataframe_to_file(dataframe, model_instance) return dataframe def get_prep_value(self, value): """ save the value of the dataframe.filepath set in pre_save """ if value is None: return value # save only the filepath to the database if value.filepath: return value.filepath def save_dataframe_to_file(self, dataframe, model_instance): """ write the Dataframe into an hdf5 file in storage at filepath """ # try to retrieve the filepath set when loading from the database if not dataframe.get("filepath"): dataframe.filepath = self.generate_filepath(model_instance) full_filepath = self.storage.path(dataframe.filepath) # Create any intermediate directories that do not exist. # shamelessly copied from Django's original Storage class directory = os.path.dirname(full_filepath) if not os.path.exists(directory): try: if self.storage.directory_permissions_mode is not None: # os.makedirs applies the global umask, so we reset it, # for consistency with file_permissions_mode behavior. 
old_umask = os.umask(0) try: os.makedirs(directory, self.storage.directory_permissions_mode) finally: os.umask(old_umask) else: os.makedirs(directory) except FileExistsError: # There's a race between os.path.exists() and os.makedirs(). # If os.makedirs() fails with FileExistsError, the directory # was created concurrently. pass if not os.path.isdir(directory): raise IOError("%s exists and is not a directory." % directory) # save to storage dataframe.to_hdf(full_filepath, "df", mode="w", format="fixed") def generate_filepath(self, instance): """ return a filepath based on the model's class name, dataframe_field and unique fields """ # create filename based on instance and field name class_name = instance.__class__.__name__ # generate unique id from unique fields: unique_id_values = [] for field in self.unique_fields: unique_field_value = getattr(instance, field) # get field value or id if the field value is a related model instance unique_id_values.append( str(getattr(unique_field_value, "id", unique_field_value)) ) # filename, for example: route_data_<uuid>.h5 filename = "{class_name}_{field_name}_{unique_id}.h5".format( class_name=class_name.lower(), field_name=self.name, unique_id="".join(unique_id_values), ) # generate filepath dirname = self.upload_to filepath = os.path.join(dirname, filename) return self.storage.generate_filename(filepath)Save the DataFrame to an hdf5 file with the pre_save method and save the file path to the Database in get_prep_value.In my case it helped to use a uuid Model Field to create the unique file name, because for new model instances, the pk was not yet available in the pre-save method, but the uuid value was.You can then use this field in your models.py:from .fields import DataFrameField# track data as a pandas DataFramedata = DataFrameField(null=True, upload_to="data", unique_fields=["uuid"])Please note that you cannot use this field in the Django admin or in a Model form. 
That would require additional work on a custom form Widget to edit the DataFrame content in the front-end, probably as a table.Also beware that for tests, I had to override the MEDIA_ROOT setting with a temporary directory using tempfile to prevent creating useless files in the actual media folder.
Python Regex for extracting specific part from string I have the following string:SOURCEFILE: file_name.dc : 1 : log: the logging areaI am trying to store anything in between the third and the fourth colon in a variable and discard the rest.I've tried to make a regular expression to grab this, but so far I have this, which is wrong: ([^:]:[^:]*)I would appreciate some help with this and an explanation of the valid regex so I can learn from my mistake.
>>> import re>>> s = "SOURCEFILE: file_name.dc : 1 : log: the logging area">>> s1 = re.sub(r"[^\:]*\:[^\:]*\:[^\:]*\:([^\:]*)\:.*", r"\1", s)>>> print s1logHow it works: each [^\:]*\: consumes one field up to and including its colon, so after three of them the capture group ([^\:]*) grabs the fourth field; the trailing \:.* swallows the rest, and re.sub replaces the whole string with just the captured group. (The \: escapes are harmless but unnecessary, since : is not a regex metacharacter. Your own pattern failed because [^:] without a quantifier matches exactly one non-colon character, and nothing anchored it to the third colon.)
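If a regex isn't required, str.split is arguably clearer for a fixed delimiter like this (assuming the field layout is stable):

```python
s = "SOURCEFILE: file_name.dc : 1 : log: the logging area"
# split on colons and take the fourth field, stripping surrounding spaces
field = s.split(":")[3].strip()
print(field)  # log
```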
How to find a particular column's unique value count in python pandas? I have the following dataframe:,company,sector,marksa,b1,21b,b2,27c,b2,20a,b3,70I have to display the number of companies, the number of sectors, and the sum of marks. How do we get the count of unique values in a column in pandas?
I think you can use nunique and sum:print (pd.Series([df.company.nunique(), df.sector.nunique(), df.marks.sum()], index=df.columns))company 3sector 3marks 138dtype: int64print (pd.Series([df.company.nunique(), df.sector.nunique(), df.marks.sum()], index=df.columns).to_dict()){'company': 3, 'sector': 3, 'marks': 138}Or:print (pd.Series([df.company.nunique(), df.sector.nunique(), df.marks.sum()], index=df.columns).to_json()){"company":3,"sector":3,"marks":138}If need custom names:print (pd.Series([df.company.nunique(), df.sector.nunique(), df.marks.sum()], index=['comp','sec','mar']))comp 3sec 3mar 138dtype: int64
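An alternative producing the same numbers, if you prefer a single call that names one aggregation per column (df.agg with a dict):

```python
import pandas as pd

df = pd.DataFrame({'company': ['a', 'b', 'c', 'a'],
                   'sector': ['b1', 'b2', 'b2', 'b3'],
                   'marks': [21, 27, 20, 70]})

# one aggregation function per column in a single call
result = df.agg({'company': 'nunique', 'sector': 'nunique', 'marks': 'sum'})
print(result.to_dict())  # {'company': 3, 'sector': 3, 'marks': 138}
```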
Add or subtract to integer db field using sqlform in web2py I have set up the db table with the following fields:db.define_table('balance', Field('income', 'integer'), Field('income_description', "text"), Field('expenses', 'integer'), Field('expenses_discription', "text"), Field('loan', 'integer'), Field('loan_discription'))then the basic function with the form:def index(): form = SQLFORM(db.balance).process() if form.accepted: redirect(URL('data_display')) session.flash = 'Records Successfully Updated!' return locals()How can I add new amount to the income, the expenses or the loan each time I input and submit the form with a new integer? I want to achieve something like this that would look like this in simple Python:savings = (income - expenses) - loanSo each time I input a new amount for the income or the expenses or the loan, I would add that amount and update the record in the database.
What you need to use is a Computed Field:>>> db.define_table('item', Field('unit_price','double'), Field('quantity','integer'), Field('total_price', compute=lambda r: r['unit_price']*r['quantity']))>>> r = db.item.insert(unit_price=1.99, quantity=5)>>> print r.total_price9.95In your case it would be something like this:db.define_table('balance', Field('income', 'integer'), Field('income_description', "text"), Field('expenses', 'integer'), Field('expenses_discription', "text"), Field('loan', 'integer'), Field('loan_discription') Field('savings', compute=lambda r: (r['income'] - r['expenses']) -r['loan']))
Perlin Noise - Python's Ursina Game Engine Is there a way to incorporate Perlin Noise into my Minecraft Clone? I have tried many different things that did not work.Here is a snippet of my code:from ursina import *from ursina.prefabs.first_person_controller import FirstPersonControllerfrom ursina.shaders import camera_grayscale_shaderapp = Ursina()grass = 'textures/grass.jpg'class Voxel(Button): def __init__(self, position = (0,0,0), texture = grass): super().__init__( model='cube', texture=texture, color=color.color(0,0,random.uniform(.823,.984)), parent=scene, position=position, ) def input(self, key): if self.hovered: if key == 'right mouse down': voxel = Voxel(position = self.position + mouse.normal, texture = plank) if key == 'left mouse down': destroy(self)for z in range(16): for x in range(16): voxel = Voxel(position = (x,0,z))
To generate terrain using perlin noise, you can create a Terrain entity with your perlin noise image as the heightmap:from ursina import *app = Ursina()noise = 'perlin_noise_file' # file t = Terrain(noise) # noise must be a fileapp.run()To make perlin noise, you can use the perlin_noise library. Here is an example from the docs:import matplotlib.pyplot as pltfrom perlin_noise import PerlinNoisenoise = PerlinNoise(octaves=10, seed=1)xpix, ypix = 100, 100pic = [[noise([i/xpix, j/ypix]) for j in range(xpix)] for i in range(ypix)]plt.imshow(pic, cmap='gray')plt.show()
Save iteration in dataframe I have two dataframes:import numpy as npimport pandas as pdfrom sklearn.metrics import r2_score df = pd.DataFrame([{'A': -4, 'B': -3, 'C': -2, 'D': -1, 'E': 2, 'F': 4, 'G': 8, 'H': 6, 'I': -2}])df2 looks like this (just a cutout; in total there are ~100 rows).df2 = pd.DataFrame({'Date': [220412004, 220412004, 220412004, 220412006], 'A': [-0.15584, -0.11446, -0.1349, -0.0458], 'B': [-0.11826, -0.0833, -0.1025, -0.0216], 'C': [-0.0611, -0.0413, -0.0645, -0.0049], 'D': [-0.04461, -0.022693, -0.0410, 0.0051], 'E': [0.0927, 0.0705, 0.0923, 0.0512], 'F': [0.1453, 11117, 0.1325, 0.06205], 'G': [0.30077, 0.2274, 0.2688, 0.1077], 'H': [0.2449, 0.1860, 0.2274, 0.09328], 'I': [-0.0706, -0.0612, -0.0704, -0.02953]}) Date A B C D E F G H I3 220412004 -0.15584 -0.11826 -0.0611 -0.04461 0.0927 0.1453 0.30077 0.2449 -0.07064 220412004 -0.11446 -0.0833 -0.0413 -0.022693 0.0705 0.11117 0.2274 0.1860 -0.06125 220412004 -0.1349 -0.1025 -0.0645 -0.0410 0.0923 0.1325 0.2688 0.2274 -0.07047 220412006 -0.0458 -0.0216 -0.0049 0.0051 0.0512 0.06205 0.1077 0.09328 -0.02953Then I set:df2 = df2.set_index('Date')df2 = df2.astype(float)Now I iterate through all rows of df2 and make a linear regression:for index, row in df2.iterrows():reg = np.polyfit(df.values[0], row.values, 1)predict = np.poly1d(reg) # Slope and intercepttrend = np.polyval(reg, df)std = row.std() # Standard deviationr2 = np.round(r2_score(row.values, predict(df.T)), 5) #R-squaredso far so good.Now I would like to store the results into df2. This is my approach:df3 = pd.DataFrame([predict])df4 = pd.DataFrame([r2])df5 = pd.concat([df3, df4], axis = 1)df5.columns = ['Slope', 'Intercept', 'r2']result = pd.concat([df2, df5], axis = 1)However, this will just add a single new row to "result". But I want the result to be stored to the corresponding index. 
Any ideas?Edit:to make it clearer, this is how I want result to look like:result = pd.DataFrame({'Date': [220412004, 220412004, 220412004, 220412006], 'A': [-0.15584, -0.11446, -0.1349, -0.0458], 'B': [-0.11826, -0.0833, -0.1025, -0.0216], 'C': [-0.0611, -0.0413, -0.0645, -0.0049], 'D': [-0.04461, -0.022693, -0.0410, 0.0051], 'E': [0.0927, 0.0705, 0.0923, 0.0512], 'F': [0.1453, 11117, 0.1325, 0.06205], 'G': [0.30077, 0.2274, 0.2688, 0.1077], 'H': [0.2449, 0.1860, 0.2274, 0.09328], 'I': [-0.0706, -0.0612, -0.0704, -0.02953], 'Slope': [0.03834244, 235.48473307, 0.03481399, 0.01286896], 'Intercept': [0.00294672, 1025.92034249, 0.00324312, 0.01272759], 'r2': [0.99615000, 0.07415000, 0.99447000, 0.97297000]})
You can store the result in every loopfor index, row in df2.iterrows(): reg = np.polyfit(df.values[0], row.values, 1) predict = np.poly1d(reg) # Slope and intercept trend = np.polyval(reg, df) std = row.std() # Standard deviation r2 = np.round(r2_score(row.values, predict(df.T)), 5) #R-squared df2.loc[[index], 'Slope and intercept'] = pd.Series([predict], index=[index]) df2.loc[index, 'r2'] = r2print(df2) A B C D E F G H I Slope and intercept r2Date220412004 -0.15584 -0.11826 -0.0611 -0.044610 0.0927 0.14530 0.30077 0.24490 -0.07060 [0.03481399394856277] 0.99447220412004 -0.11446 -0.08330 -0.0413 -0.022693 0.0705 11117.00000 0.22740 0.18600 -0.06120 [0.03481399394856277] 0.99447220412004 -0.13490 -0.10250 -0.0645 -0.041000 0.0923 0.13250 0.26880 0.22740 -0.07040 [0.03481399394856277] 0.99447220412006 -0.04580 -0.02160 -0.0049 0.005100 0.0512 0.06205 0.10770 0.09328 -0.02953 [0.012868956127080184] 0.97297You can also try apply instead of iteratingdef cal(row): reg = np.polyfit(df.values[0], row.values, 1) predict = np.poly1d(reg) # Slope and intercept trend = np.polyval(reg, df) std = row.std() # Standard deviation r2 = np.round(r2_score(row.values, predict(df.T)), 5) #R-squared #return pd.Series([predict, r2]) return predict, r2df2[['Slope and intercept', 'r2']] = df2.apply(cal, axis=1, result_type='expand')
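As a side note, if the row-by-row loop ever becomes a bottleneck: np.polyfit accepts a 2-D y and fits one polynomial per column, so every row of the frame can be fitted in a single call by transposing the values (a sketch with toy data, not the question's frame):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
Y = np.array([[1.0, 3.0, 5.0],    # slope 2, intercept 1
              [2.0, 2.0, 2.0]])   # slope 0, intercept 2

# one polyfit call for every row: pass the rows as columns of y
coeffs = np.polyfit(x, Y.T, 1)    # shape (2, n_rows): slopes row, intercepts row
print(coeffs[:, 0])  # approx [2. 1.]
print(coeffs[:, 1])  # approx [0. 2.]
```

The per-row slope/intercept columns can then be assigned back to the DataFrame in one vectorized step instead of inside the loop.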
Binding list to params in Pandas read_sql_query with other params I've been trying to test various methods for making my code to run. To begin with, I have this list:member_list = [111,222,333,444,555,...]I tried to pass it into this query:query = pd.read_sql_query("""select member id ,yearmonthfrom queried_tablewhere yearmonth between ? and ? and member_id in ?""", db2conn, params = [201601, 201603, member_list])However, I get an error that says: 'Invalid parameter type. param-index=2 param-type=list', 'HY105'So I looked around and tried using formatted strings:query = pd.read_sql_query("""select member id ,yearmonthfrom queried_tablewhere yearmonth between ? and ? and member_id in (%s)""" % ','.join(['?']*len(member_list), db2conn, params = [201601, 201603, tuple(member_list)])Now, I get the error: 'The SQL contains 18622 parameter markers, but 3 parameters were supplied', 'HY000'because it's looking to fill in all the ? placeholders in the formatted string.So, ultimately, is there a way to somehow evaluate the list and pass each individual element to bind to the ? or is there another method I could use to get this to work?Btw, I'm using pyodbc as my connector.Thanks in advance!
Break this up into three parts to help isolate the problem and improve readability:Build the SQL stringSet parameter valuesExecute pandas.read_sql_queryBuild SQLFirst ensure ? placeholders are being set correctly. Use str.format with str.join and len to dynamically fill in ?s based on member_list length. Below examples assume 3 member_list elements.Examplemember_list = (1,2,3)sql = """select member_id, yearmonth from queried_table where yearmonth between {0} and {0} and member_id in ({1})"""sql = sql.format('?', ','.join('?' * len(member_list)))print(sql)Returnsselect member_id, yearmonthfrom queried_tablewhere yearmonth between ? and ?and member_id in (?,?,?)Set Parameter ValuesNow ensure parameter values are organized into a flat tupleExample# generator to flatten values of irregular nested sequences,# modified from answers http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-pythondef flatten(l): for el in l: try: yield from flatten(el) except TypeError: yield elparams = tuple(flatten((201601, 201603, member_list)))print(params)Returns(201601, 201603, 1, 2, 3)ExecuteFinally bring the sql and params values together in the read_sql_query call (pass params by keyword, since the third positional argument of read_sql_query is index_col)query = pd.read_sql_query(sql, db2conn, params=params)
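Since the nesting here is actually known in advance (two scalars plus one flat sequence), itertools.chain is a lighter alternative to the recursive flatten generator:

```python
from itertools import chain

member_list = (1, 2, 3)
# prepend the two date bounds, then splice in the member ids
params = tuple(chain([201601, 201603], member_list))
print(params)  # (201601, 201603, 1, 2, 3)
```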
Python - multiplying dataframes of different size I have two dataframes:df1 - is a pivot table that has totals for both columns and rows, both with default names "All"df2 - a df I created manually by specifying values and using the same index and column names as are used in the pivot table above. This table does not have totals.I need to multiply the first dataframe by the values in the second. I expect the totals return NaNs since totals don't exist in the second table. When I perform multiplication, I get the following error:ValueError: cannot join with no level specified and no overlapping namesWhen I try the same on dummy dataframes it works as expected:import pandas as pdimport numpy as nptable1 = np.matrix([[10, 20, 30, 60], [50, 60, 70, 180], [90, 10, 10, 110], [150, 90, 110, 350]])df1 = pd.DataFrame(data = table1, index = ['One','Two','Three', 'All'], columns =['A', 'B','C', 'All'] )print(df1)table2 = np.matrix([[1.0, 2.0, 3.0], [5.0, 6.0, 7.0], [2.0, 1.0, 5.0]])df2 = pd.DataFrame(data = table2, index = ['One','Two','Three'], columns =['A', 'B','C'] )print(df2)df3 = df1*df2print(df3)This gives me the following output: A B C AllOne 10 20 30 60Two 50 60 70 180Three 90 10 10 110All 150 90 110 350 A B COne 1.00 2.00 3.00Two 5.00 6.00 7.00Three 2.00 1.00 5.00 A All B CAll nan nan nan nanOne 10.00 nan 40.00 90.00Three 180.00 nan 10.00 50.00Two 250.00 nan 360.00 490.00So, visually, the only difference between df1 and df2 is the presence/absence of the column and row "All".And I think the only difference between my dummy dataframes and the real ones is that the real df1 was created with pd.pivot_table method:df1_real = pd.pivot_table(PY, values = ['Annual Pay'], index = ['PAR Rating'], columns = ['CR Range'], aggfunc = [np.sum], margins = True)I do need to keep the total as I'm using them in other calculations. I'm sure there is a workaround but I just really want to understand why the same code works on some dataframes of different sizes but not others. 
Or maybe an issue is something completely different. Thank you for reading. I realize it's a very long post..
IIUC, My Preferred Approachyou can use the mul method in order to pass the fill_value argument. In this case, you'll want a value of 1 (multiplicative identity) to preserve the value from the dataframe in which the value is not missing.df1.mul(df2, fill_value=1) A All B CAll 150.0 350.0 90.0 110.0One 10.0 60.0 40.0 90.0Three 180.0 110.0 10.0 50.0Two 250.0 180.0 360.0 490.0Alternate ApproachYou can also embrace the np.nan and use a follow-up combine_first to fill back in the missing bits from df1 (df1 * df2).combine_first(df1) A All B CAll 150.0 350.0 90.0 110.0One 10.0 60.0 40.0 90.0Three 180.0 110.0 10.0 50.0Two 250.0 180.0 360.0 490.0
Get the list of RGB pixel values of each superpixel l have an RGB image of dimension (224,224,3). l applied superpixel segmentation on it using SLIC algorithm.As follow : img= skimageIO.imread("first_image.jpeg")print('img shape', img.shape) # (224,224,3)segments_slic = slic(img, n_segments=1000, compactness=0.01, sigma=1) # Up to 1000 segmentssegments_slic.shape(224,224)Number of returned segments are :np.max(segments_slic)Out[49]: 595From 0 to 595. So, we have 596 superpixels (regions).Let's take a look at segments_slic[0]segments_slic[0]Out[51]: array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25])What l would like to get ?for each superpixel region make two arrays as follow:1) Array : contain the indexes of the pixels belonging to the same superpixel.For instance superpixel_list[0] contains all the indexes of the pixels belonging to superpixel 0 . superpixel_list[400] contains all the indexes of the pixels belonging to superpixel 4002)superpixel_pixel_values[0] : contains the pixel values (in RGB) of the pixels belonging to superpixel 0.For instance, let's say that pixels 0, 24 , 29, 53 belongs to the superpixel 0. 
Then we getsuperpixel[0]= [[223,118,33],[245,222,198],[98,17,255],[255,255,0]]# RGB values of pixels belonging to superpixel 0What is the efficient/optimized way to do that ? (Because l have l dataset of images to loop over)EDIT-1def sp_idx(s, index = True): u = np.unique(s) if index: return [np.where(s == i) for i in u] else: return [s[s == i] for i in u] #return [s[np.where(s == i)] for i in u] gives the same but is slowersuperpixel_list = sp_idx(segments_slic)superpixel = sp_idx(segments_slic, index = False)In superpixel_list we are supposed to get a list containing the index of pixels belonging to the same superpixel.For instancesuperpixel_list[0] is supposed to get all the pixel indexes of the pixel affected to superpixel 0however l get the following :superpixel_list[0]Out[73]: (array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13]), array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]))Why two arrays ?In superpixel[0] for instance we are supposed to get the RGB pixel values of each pixel affected to supepixel 0 as follow :for instance pixels 0, 24 , 29, 53 are affected to superpixel 0 then :superpixel[0]= [[223,118,33],[245,222,198],[98,17,255],[255,255,0]]However when l use your function l get the following :superpixel[0]Out[79]: array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])Thank you for your help
Can be done using np.where and the resulting indices.

def sp_idx(s):
    u = np.unique(s)
    return [np.where(s == i) for i in u]

superpixel_list = sp_idx(segments_slic)
superpixel = [img[idx] for idx in superpixel_list]

Regarding the "Why two arrays?" question in the EDIT: np.where on a 2-D label image returns a tuple of (row_indices, col_indices). That pair is exactly what NumPy fancy indexing expects, so img[idx] picks out the RGB values of all pixels belonging to that superpixel.
Convert a list of a list of strings with decimals into floats I have a list of lists that goes like this: Main_List = [['1.2','3.5'], ['5.8','8.3']] I am trying to convert one of the sublists into floats, here is what I did: Main_List[1] = [float(i) for i in Main_List[1]] but I keep getting an error "ValueError: could not convert string to float: ." I have tried several other methods but for some reason it keeps complaining about the dot. The lists I am trying to convert have been extracted from a csv file if that makes a difference, but I did print them and they look fine. What am I doing wrong?
I was able to figure it out: bind another name to the first sublist, then turn all its elements into floats. For example:

Sublist_1 = MainList[0]
Sublist_1 = list(map(float, Sublist_1))

(In Python 3, map returns a lazy iterator rather than a list, so wrap it in list() if you need an actual list. The initial Sublist_1 = [] line in my original attempt was redundant, since the next assignment rebinds the name anyway.)
Taking the average of a sliced list The problem I'm having is attempting to take the average of my list (derived from y, which is a list of sin values). However, when running the code, I get the error TypeError: float() argument must be a string or a number, not 'list'Any help you could offer would be greatly appreciatedfor k in range(len(y)-1-r): list_to_avg = [y[r+k:len(y)-1-r+k]] b =float(sum(list_to_avg, [])) a =float(len(list_to_avg)) z.append(b/a)
You have made list_to_avg a list that contains a list.Uselist_to_avg = y[r+k:len(y)-1-r+k]instead.
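As a runnable sketch of that fix (the window bounds here are simplified placeholders for the poster's r/k arithmetic):

```python
# Slice y directly (a flat list, not a list wrapped in another list),
# then average it with sum()/len().
def window_average(y, start, end):
    window = y[start:end]          # flat slice, so sum() gets numbers
    return sum(window) / len(window)

y = [0.0, 1.0, 2.0, 3.0, 4.0]
z = [window_average(y, k, k + 3) for k in range(len(y) - 2)]
print(z)   # [1.0, 2.0, 3.0]
```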
Datetime format of a Pandas dataframe column switching randomly I am using a dataframe which has a 'Date' column. I have used pd.to_datetime() to convert this column format to yyyy-mm-dd. However, this format is getting switched to some other format at intermittent dates in the dataframe (eg: yyyy-dd-mm).Date 2021-02-01 <----- this is 2nd Jan, 20212021-01-21 <----- this is 21st Jan, 2021Further, I have alto tried using the df['Date'].dt.strftime('%y-%m-%d'), but this too has not helped.I request some guidance on the following points:For any Date column, is it enough to just use pd.to_datetime() and be rest assured that all dates will be in correct format?Or do I need to manually state the datetime format explicitly alongwith the pd.to_[enter image description here][1]datetime() feature?
The problem comes from how pandas parses dates. When receiving 2021-02-01 it does not know if it is Feb 1st or Jan 2nd, so it applies its default decision rule: when the date starts with the year, the next field is taken to be the month, resulting in Feb 1st. This is not the case when parsing 2021-01-21; there is only one possible date, Jan 21st. Take a look at the to_datetime documentation and its parameters dayfirst or format, to force a given format when there are several possible parsings.
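A small illustration of forcing the parse with an explicit format (the format strings below are just examples):

```python
import pandas as pd

# The same ambiguous string parses differently once the format is explicit.
jan_2nd = pd.to_datetime('2021-02-01', format='%Y-%d-%m')  # year-day-month
feb_1st = pd.to_datetime('2021-02-01', format='%Y-%m-%d')  # year-month-day
print(jan_2nd.day, jan_2nd.month)   # 2 1
print(feb_1st.day, feb_1st.month)   # 1 2
```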
Why are the two tkinter entries using the same number? import osimport tkinterimport tkinter.font as tkFontfrom tkinter import *coord1 = "0,0"coord2 = "0,0"EQ = "y = mx + b"def tkinter_window(): global coord1Entry global coord2Entry global coord1 global coord tk = Tk() tk.title("Math Graph") #create font1 = tkFont.Font(family="Source Code Pro", size=16) font2 = tkFont.Font(family="Source Code Pro", size=10) coord1Label = Label(tk, text='X coordinate 1:\n( "x,y", no parentheses )', font=font1) coord2Label = Label(tk, text='Y coordinate 2:\n( "x,y", no parentheses )', font=font1)This is the part where i define the two entries that seem to use the same numbers: coord1Entry = Entry(tk, textvariable=coord1) coord2Entry = Entry(tk, textvariable=coord2)So the problem is, when i run the program they show nothing, as usual. But as soon as i enter one character in one of the entries, they both show the character(s). I don't understand why, they use different variables? Can someone help me? coordButton = Button(tk, text="Done! (use coordinates)", font=font1) equationLabel = Label(tk, text="Equation: y =", font=font1) equationEntry = Entry(tk, textvariable=EQ, font=font1) equationButton = Button(tk, text="Done! 
(use equation)", font=font1) iwantanswersCheckbox = Checkbutton(tk, text="I want m, x, b, intercept and x-intercept", font=font1) iwantgraphCheckbox = Checkbutton(tk, text="I want a graph", font=font1) info1Label = Label(tk, text="***Both boxes may be checked***", font=font2) #pack coord1Label.grid(row=0, column=0, padx=15, pady=15) coord2Label.grid(row=1, column=0, padx=15, pady=15) coord1Entry.grid(row=0, column=1, padx=5, pady=5) coord2Entry.grid(row=1, column=1, padx=5, pady=5) coordButton.grid(row=2, columnspan=2, padx=5, pady=15) equationLabel.grid(row=3, column=0, sticky=E, padx=5, pady=5) equationEntry.grid(row=3, column=1, padx=5, pady=5) equationButton.grid(row=4, columnspan=2, padx=5, pady=15) iwantanswersCheckbox.grid(row=5, columnspan=2, padx=5, pady=5) iwantgraphCheckbox.grid(row=6, columnspan=2) info1Label.grid(row=7, columnspan=2) os.system('''/usr/bin/osascript -e 'tell app "Finder" to set frontmost of process "Python" to true' ''') tk.mainloop()tkinter_window()def matplotlib_window(): import matplotlib.pyplot as plt coordX[0] = Xcoord1Entry.get() coordX[1] = Xcoord2Entry.get() coordY[0] = Ycoord1Entry.get() coordY[1] = Ycoord2Entry.get() plt.plot(coordX, coordY) plt.legend(loc=4) plt.xlabel("x") plt.ylabel("y") plt.show()Main area of code where the problem should be (as requested):import tkinterimport tkinter.font as tkFontfrom tkinter import *coord1 = "0,0"coord2 = "0,0"def tkinter_window(): global coord1Entry global coord2Entry global coord1 global coord tk = Tk() tk.title("Math Graph") #create font1 = tkFont.Font(family="Source Code Pro", size=16) font2 = tkFont.Font(family="Source Code Pro", size=10) coord1Label = Label(tk, text='X coordinate 1:\n( "x,y", no parentheses )', font=font1) coord2Label = Label(tk, text='Y coordinate 2:\n( "x,y", no parentheses )', font=font1) coord1Entry = Entry(tk, textvariable=coord1) coord2Entry = Entry(tk, textvariable=coord2) #pack coord1Label.grid(row=0, column=0, padx=15, pady=15) coord2Label.grid(row=1, 
column=0, padx=15, pady=15) coord1Entry.grid(row=0, column=1, padx=5, pady=5) coord2Entry.grid(row=1, column=1, padx=5, pady=5) tk.mainloop()tkinter_window()
It is because you are using identical strings for the textvariable option when you need to be using two different instances of one of the special tkinter variables (StringVar, etc.). By the way, you almost never need to use textvariable. My advice is to omit it since you're clearly not using it. This happens because the widget is just a thin wrapper around a widget implemented in an embedded Tcl interpreter. The string value of the textvariable option is treated as a global variable name in the embedded Tcl interpreter. Since both strings are the same ("0,0"), they become the same variable within the Tcl interpreter (and yes, "0,0" is perfectly valid as a Tcl variable name). This behavior is actually why textvariable can be such a powerful tool -- you can link two or more widgets together so that when a value changes in one, it is immediately reflected in the other. Plus, it is possible to set traces on these variables so that you can get a callback when the variable is read, written, or unset. However, this is much more useful when coding in Tcl, since in Tcl a textvariable can be a normal Tcl variable. In tkinter, it must be a special type of object -- an instance of StringVar, IntVar, DoubleVar, or BooleanVar -- so you can't use it with ordinary variables.
Django REST Framework custom headers in request object I have a problem viewing incoming custom headers from requests when I'm creating a new API view via the @api_view decorator. My custom API view looks like this: @api_view(['GET']) def TestView(request): print(request.META) return Response({'message': 'test'})What I'm expecting is doing something like curl --request GET \ --url http://localhost:8000/test \ --header 'custom: test'I'd see my custom header called custom to appear in the output. Instead, it's not there at all. From the documentation, it says the following for the request.META field:A dictionary containing all available HTTP headers. Available headers depend on the client and server, but here are some examples:Whereas, they don't appear at all in my output. Am I missing something here?If it's relevant, I register my URL as such:urlpatterns = [url(r'test', views.TestView, name='test'), ...]My end goal is to write a custom permission class that will parse the custom header and do something related to authentication with it, and either allow or deny the request. My POC here is just to show the basic example of what I'm dealing with. I can provide the output of the print(request.META) but it's just a wall of text without my expected header present.
Django prepends HTTP_ to the custom headers. This comes from the WSGI/CGI convention for exposing request headers in the environ, and it also helps mitigate the header-spoofing issues described here. Django also upper-cases them, so your custom header becomes HTTP_CUSTOM:

from rest_framework.decorators import api_view
from rest_framework.response import Response
import logging

logger = logging.getLogger('django.test')

@api_view(['GET'])
def TestView(request):
    logger.info('Meta: %s' % request.META['HTTP_CUSTOM'])
    return Response({'message': 'test'})

Correctly outputs (if the logging has been properly configured as described here):

Django version 2.0, using settings 'django_server.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
"GET /test HTTP/1.1" 301 0
Meta: test
"GET /test/ HTTP/1.1" 200 18

Also, as you can see in the output above, Django issues a 301 redirect when the API is hit: with APPEND_SLASH enabled (the default), /test is redirected to /test/. So you'll have to tell your cURL to follow the redirect using --location:

curl --location --request GET \
  --header 'custom: test' http://localhost:8000/test

Also note that all the hyphens (-) in your header's name are transformed to underscores (_), so if you call your header something like my-custom, Django will turn it into HTTP_MY_CUSTOM.
How to change the target directory for a screenshot using Selenium webdriver in Firefox or Chrome I want to make a screenshot of a webpage and save it in a custom location using Selenium webdriver with Python. I tried saving the screenshot to a custom location using both Firefox and Chrome but it always saves the screenshot in the project dir. Here is my Firefox version:from selenium import webdriverfrom selenium.webdriver.firefox.firefox_binary import FirefoxBinaryprofile = webdriver.FirefoxProfile()profile.set_preference("browser.download.folderList", 2)profile.set_preference("browser.download.dir", 'C:\\Users\\User\\WebstormProjects')binary = FirefoxBinary("C:\\Program Files\\Mozilla Firefox\\firefox.exe")def foxScreen(): driver = webdriver.Firefox(firefox_binary=binary, firefox_profile=profile) driver.get("http://google.com") driver.save_screenshot("foxScreen.png") driver.quit()if __name__ == '__main__': foxScreen()And here is my Chrome version:from selenium import webdriveroptions = webdriver.ChromeOptions()prefs = {"download.default_directory": r'C:\\Users\\User\\WebstormProjects', "directory_upgrade": True}options.add_experimental_option("prefs", prefs)chromedriver = "C:\\Users\\User\\Downloads\\chromedriver_win32\\chromedriver.exe"def chromeScreen(): driver = webdriver.Chrome(chrome_options=options, executable_path=chromedriver) driver.get("http://google.com") driver.save_screenshot("chromeScreen.png") driver.quit()if __name__ == '__main__': chromeScreen()I have tried different notations for the location I want the screenshot saved to but that does not seem to help. What should I change so it does not save the screenshot to the project directory but to a given custom location?
You need to consider a couple of facts as follows:profile.set_preference('key', 'value')set_preference(key, value) sets the preference that we want in the firefox_profile. This preference is in effect when a specific Firefox Profile is invoked.save_screenshot(filename)As per the documentation save_screenshot(filename) saves a screenshot of the current window to a PNG image file. This method returns False if there is any IOError, else returns True. Use full paths in your filename.Args:filename: The full path you wish to save your screenshot to. This should end with a .png extension.Usage:driver.save_screenshot(‘/Screenshots/foo.png’)So, save_screenshot(filename) expects the full path you wish to save your screenshot to. As you were using:driver.save_screenshot("foxScreen.png")Hence the screenshot was always saved within the project directory.SolutionTo save the screenshot in a different directory you need to pass the absolute path as follows:driver.save_screenshot("./my_directory/foo.png")ReferenceYou can find a detailed discussion in How to take screenshot with Selenium WebDriver
Correctly assigning a data structure in python I have the following class:class Node(): def __init__(self, symbol, rule, children): self.symbol = symbol self.rule = rule self.children = children def addChild(self,child): self.children.append(child)I use it to build parsing trees; now I'm trying to use this function:def simplify(n): if len(n.children) == 0: return n if len(n.children) > 1: for c in n.children: c = simplify(c) return n while len(n.children) == 1: print n.symbol, n = n.children[0] #What is wrong here? print n.symbol return nto simplify trees by removing internal nodes that have just one child. For instance: S S / \ should become / \ E X v X / vWhen I run the code, the two print statements show me that n was correctly replaced by n.children[0] , but when the tree is printed (right after this funtion is used), I see the same one. What's the problem here?
In your simplify() function, the parameter n is a reference to some specific node, and you can change what node it refers to; but reassigning n doesn't change any of the other structure. As a specific example, this loop actually does nothing:for c in n.children: # simplify has no side effects and leaves the input structure unchanged c = simplify(c) # c is never used again so the simplified result is lostThere's two reasonable approaches to solving this. One is to construct a new tree as the result of simplify:def simplify(n): if len(n.children) > 1: new_children = [simplify(c) for c in n.children] return Node(n.symbol, n.rule, new_children) # and other casesThis has the advantage that your data structure is immutable: if you have two references to the tree hanging around, you know that rewriting one isn't going to destroy the other; if you have a reference to a node in the middle of the tree, there's no risk of it unexpectedly becoming "orphaned".Still, it is common enough to see mutable data structures, and you could add your simplify method into the Node class to rewrite a node in place:class Node: def simplify(self): if len(self.children) == 1: return self.children[0].simplify() if len(self.children) > 1: self.children = [c.simplify() for c in self.children] return self
How can I run a python script without python.exe My goal is to have a python script that I can give to someone to run on their Windows machine that does not have Python installed. I don't want to package it up in an exe, because I want the underlying code to be easily read. I am updating an old VBScript and I want to mirror it. I am also using a few libraries.
Use pyinstaller to package it up into an exe. You can still maintain your source code: packaging won't remove or alter the original .py files, it just bundles a copy of them (along with the interpreter and your libraries) into the executable.
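For instance, a minimal PyInstaller run might look like this (your_script.py is a placeholder name; exact flags depend on your libraries):

```shell
pip install pyinstaller
# --onefile bundles the interpreter, your script and its dependencies
# into a single executable; your original .py source is left untouched.
pyinstaller --onefile your_script.py
# The result lands in ./dist/your_script.exe (on Windows)
```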
How to make a function use a variable outside the function, but in a method of a class I am trying to make a function that will be able to access a variable from outside the function. However, this variable needs to be defined in a class. I'll define a simplified function of what I am trying to do in code for clarity.class Stuff(): def __init__(self): print("Initialized") def forward(self,x): y=4 x=func(x) return xdef func(x): global y return x+ystuff = Stuff()print(stuff.forward(4))So, I am trying to make func(x) use the y defined in the "forward" method, but when I run this code, I get the error "global name y is not defined". Any help is much appreciated.
You can't, and you shouldn't. y here is a local variable of the forward method, so it only exists for the duration of that call; func has no way to see it. The clean fix is to pass y to func as an argument. You could do the opposite (declare global y inside forward so that func can read it), but don't: relying on globals like that is bad practice, always. Pass y to func instead.
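Applied to the example above, the parameter-passing version might look like this:

```python
# Pass y explicitly instead of reaching for global state.
class Stuff:
    def __init__(self):
        print("Initialized")

    def forward(self, x):
        y = 4
        return func(x, y)   # hand the local y to the helper

def func(x, y):
    return x + y

stuff = Stuff()
print(stuff.forward(4))   # 8
```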
How convert a String to a String with HTML entities? I am looking for a way, preferably in python, but PHP is also ok or even an online site, to convert a string like"Wählen"into a string like"Wählen"i.e. replacing each ISO 8859-1 character/symbol by its HTML entity.
echo htmlentities('Wählen', 0, 'utf-8');^ PHPPS: Learn the arguments based on where you need the encoded string to appear:// does not encode quotesecho htmlentities('"Wählen"', 0, 'utf-8');// encodes quotesecho htmlentities('"Wählen"', ENT_QUOTES, 'utf-8');
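Since the question also asks for Python: the standard library's html.entities table can produce named entities like &auml;. A sketch (falls back to a numeric reference for characters without a named entity):

```python
import html.entities

def to_named_entities(text):
    # Keep ASCII as-is; replace everything else with a named entity
    # when one exists (e.g. ä -> &auml;), else a numeric reference.
    parts = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            parts.append(ch)
        elif cp in html.entities.codepoint2name:
            parts.append('&%s;' % html.entities.codepoint2name[cp])
        else:
            parts.append('&#%d;' % cp)
    return ''.join(parts)

print(to_named_entities('Wählen'))   # W&auml;hlen
```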
How to create a list of elements where the start index is given in one list until I encounter a specific element in the larger list? I have 2 lists as follows: main_list = ["A", "B", "End of Block", "C", "D", "E", "F", "End of Block", ..., "End of Block", "Q", "R", ...] and index_list = [1, 4, 9, 10, ...]. I need to create a list that will pick up the starting index from the index list and select all elements from that particular index in the main_list until it encounters the string "End of Block" and put it as the first element in the output list, then picks the 2nd index and parses the main list until it reads "End of Block" and puts it as the 2nd element and so on. The resultant list should look like below: output_list = ["B", "DEF", ...] Is there a simple way to do it?
It could also be done using list comprehension:main_list=["A","B","End of Block", "C","D","E","F","End of Block"]index_list = [1,4]output_list = [ "".join(main_list[i:main_list.index("End of Block",i+1)]) for i in index_list ]print(output_list) # ['B', 'DEF']
Pandas Groupby -- efficient selection/filtering of groups based on multiple conditions? I am trying tofilter dataframe groups in Pandas, based on multiple (any) conditions.but I cannot seem to get to a fast Pandas 'native' one-liner.Here I generate an example dataframe of 2*n*n rows and 4 columns:import itertoolsimport randomn = 100lst = range(0, n)df = pd.DataFrame( {'A': list(itertools.chain.from_iterable(itertools.repeat(x, n*2) for x in lst)), 'B': list(itertools.chain.from_iterable(itertools.repeat(x, 1*2) for x in lst)) * n, 'C': random.choices(list(range(100)), k=2*n*n), 'D': random.choices(list(range(100)), k=2*n*n) })resulting in dataframes such as: A B C D0 0 0 26 491 0 0 29 802 0 1 70 923 0 1 7 24 1 0 90 115 1 0 19 46 1 1 29 47 1 1 31 95I want toselect groups grouped by A and B,filtered groups down to where any values in the group are greater than 50 in both columns C and D,A "native" Pandas one-liner would be the following:test.groupby([test.A, test.B]).filter(lambda x: ((x.C>50).any() & (x.D>50).any()) )which produces A B C D2 0 1 70 923 0 1 7 2This is all fine for small dataframes (say n < 20).But this solution takes quite long (for example, 4.58 s when n = 100) for large dataframes.I have an alternative, step-by-step solution which achieves the same result, but runs much faster (28.1 ms when n = 100):test_g = test.assign(key_C = test.C>50, key_D = test.D>50).groupby([test.A, test.B])test_C_bool = test_g.key_C.transform('any')test_D_bool = test_g.key_D.transform('any')test[test_C_bool & test_D_bool]but arguably a bit more ugly. My questions are:Is there a better "native" Pandas solution for this task? , andIs there a reason for the sub-optimal performance of my version of the "native" solution?Bonus question:In fact I only want to extract the groups and not together with their data. I.e., I only need A B 0 1in the above example. Is there a way to do this with Pandas without going through the intermediate step I did above?
This is similar to your second approach, but chained together:mask = (df[['C','D']].gt(50) # in the case you have different thresholds for `C`, `D` [50, 60] .all(axis=1) # check for both True on the rows .groupby([df['A'],df['B']]) # normal groupby .transform('max') # 'any' instead of 'max' also works )df.loc[mask]If you don't want the data, you can forgo the transform:mask = df[['C','D']].min(axis=1).gt(50).groupby([df['A'],df['B']]).any()mask[mask].index# out# MultiIndex([(0, 1)],# names=['A', 'B'])
Django serve file in memory instead of saving on disk I want to render plot (matplotlib) in one place of code and then in ViewSet serve to it user without saving on disk. I tried to use io library to keep file in memory, but it seems that always something is wrong.My code where I save plot on disk:def some_func(self): ...generating plot... filename = self.generate_path() # generate random name for file plt.savefig(filename, transparent=True) return filenameCode of ViewSet:class SomeViewsSet(ViewSet): def create(self, request): ... some code ... path = some_func() name = path.split('/')[-1] with open(path, 'rb') as file: response = HttpResponse(file, content_type=guess_type(path)[0]) response['Content-Length'] = len(response.content) response['Content-Disposition'] = f'attachment; filename={name}' return response
Make sure that you pass matplotlib a BytesIO object, and not a StringIO. Then get the bytes using getvalue(), and pass them to HttpResponse. If that's what you've already tried, please post your code and the error message you're seeing.
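The in-memory pattern itself can be sketched with the standard library alone (the plt.savefig call is shown only as a comment, since it is the sole matplotlib-specific part):

```python
import io

# Write the image into a BytesIO instead of a file on disk.
buffer = io.BytesIO()
# With matplotlib you would do: plt.savefig(buffer, format='png')
buffer.write(b'\x89PNG...fake image bytes')   # stand-in for savefig
png_bytes = buffer.getvalue()                 # raw bytes for the response
print(len(png_bytes))
```

HttpResponse(png_bytes, content_type='image/png') would then serve the plot without ever touching the filesystem.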
calculate the content of a variable I'm learning python, and I have encountered a problem.for i in input: operator = i.split()[0] number1 = i.split()[1] number2 = i.split()[2] equation = (number1 + ' ' + operator + ' ' + number2)This code is supposed to calculate a randomly generated input, for example:+ 9 16this one wants me to print the result of 9 + 16so I made code which converts the input into an equation but i have no idea how do tell the code to calculate it.Could anybody help me?
You don't need a loop:a = input()operator = a.split()[0]number1 = a.split()[1]number2 = a.split()[2]equation = (number1 + ' ' + operator + ' ' + number2)print(equation)
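To actually evaluate the input rather than only display it, one safe option (avoiding eval) is to map each operator symbol to a function:

```python
import operator

# Dispatch table: operator token -> function
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def calculate(line):
    op, a, b = line.split()
    return OPS[op](float(a), float(b))

print(calculate('+ 9 16'))   # 25.0
```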
how to scrape all the pages on a real estate website using pyton? I need some assistance scraping multiple pages for a real estate website. I have written the code to scrape page 1 successfully and attempted to implement code to scrape all 25 pages of it but am now stuck. Any tips/help would be greatly appreciated.import requestsfrom bs4 import BeautifulSoupfrom csv import writerbase_url = 'https://www.rew.ca/properties/areas/kelowna-bc'url = '/page/2'while url: response = requests.get(f"{base_url}{url}") soup = BeautifulSoup(response.text, "html.parser") listings = soup.find_all("article") with open("property4.csv", "w") as csv_file: csv_writer = writer(csv_file) csv_writer.writerow(["title", "type", "price", "location", "bedrooms", "bathrooms", "square feet", "link"]) for listing in listings: location = listing.find(class_="displaypanel-info").get_text().strip() price = listing.find(class_="displaypanel-title hidden-xs").get_text().strip() link = listing.find("a").get('href').strip() title = listing.find("a").get('title').strip() type = (listing.find(class_="clearfix hidden-xs").find(class_="displaypanel-info")).get_text() bedrooms = (listing.find_all("li")[2]).get_text() bathrooms = (listing.find_all("li")[3]).get_text() square_feet = (listing.find_all("li")[4]).get_text() csv_writer.writerow([title, type, price, location, bedrooms, bathrooms, square_feet, link]) next_btn = soup.find(class_="paginator-next_page paginator-control") url = next_btn.find("a")["href"] if "href" else None
You should increment the page number every time you scrape a page. Two fixes to your attempt: open the CSV once (opening it with mode "w" inside the loop would overwrite the file on every page), and drive the pagination with the page counter instead of the inner while loop. Try this:

import requests
from bs4 import BeautifulSoup
from csv import writer

base_url = 'https://www.rew.ca/properties/areas/kelowna-bc'

with open("property4.csv", "w", newline="") as csv_file:
    csv_writer = writer(csv_file)
    csv_writer.writerow(["title", "type", "price", "location", "bedrooms", "bathrooms", "square feet", "link"])
    for i in range(1, 26):
        response = requests.get(f"{base_url}/page/{i}")
        soup = BeautifulSoup(response.text, "html.parser")
        listings = soup.find_all("article")
        for listing in listings:
            location = listing.find(class_="displaypanel-info").get_text().strip()
            price = listing.find(class_="displaypanel-title hidden-xs").get_text().strip()
            link = listing.find("a").get('href').strip()
            title = listing.find("a").get('title').strip()
            type = (listing.find(class_="clearfix hidden-xs").find(class_="displaypanel-info")).get_text()
            bedrooms = (listing.find_all("li")[2]).get_text()
            bathrooms = (listing.find_all("li")[3]).get_text()
            square_feet = (listing.find_all("li")[4]).get_text()
            csv_writer.writerow([title, type, price, location, bedrooms, bathrooms, square_feet, link])
Django ckeditor upload image I'm trying to use ckeditor to upload an image. Looks like everything is set up by documentation, but still I'm getting an error while trying to upload an image. I think that there is a problem with static files. Looks like ckeditor doesn't know where to upload files, even though I've provided all needed parameters:MEDIA_URL = '/media/'MEDIA_ROOT = os.path.join(BASE_DIR, 'media')CKEDITOR_UPLOAD_PATH = 'uploads/'CKEDITOR_IMAGE_BACKEND = 'pillow'I'm getting this message:[23/Feb/2020 20:17:47] "POST /ckeditor/upload/&responseType=json HTTP/1.1" 302 0And here's what I get in browser. The red one is for "incorrect server response".
Looks like there is a problem with the 'static' folder location in my project. I've solved my problem by adding CKEDITOR_STORAGE_BACKEND = 'django.core.files.storage.FileSystemStorage' to my settings file. Not sure if it will work for you, but it definitely works for me, since 'FileSystemStorage' looks for the 'MEDIA_ROOT' setting by default.
How to insert values form sqlite to excel without brackets and in specific column (Python)? I am new to Python. I want to know how to add values into an excel file. I did some research on google but I still can't find the way to make it.This is what I have:wb = load_workbook('example.xlsx')ws = wb.activecon = sqlite3.connect(database=r'database.db')cur = con.cursor()cur.execute("Select colors from store")row= cur.fetchall()ws['B14'].value = str(row)With these code, it shows all the values in column B14 with brackets. What I want isEach value in a column within B14 to B33The value without bracketsAnyone help with this?
.fetchall returns a list of tuples, and str(row) gives you the string representation of that list.If you want each individual element of this list in its own cell you'll need to iterate over the list and modify the cell name in each iteration:row_num = 14for elem in cur.fetchall(): ws[f'B{row_num}'].value = elem[0] row_num += 1Another way, with enumerate:starting_row_num = 14for row_num, elem in enumerate(cur.fetchall(), starting_row_num): ws[f'B{row_num}'].value = elem[0]
Python: Cannot put json format(dict) values into a list I tried to extract a set of date values from a json input. And when i finished the extraction of F_Date , it was correct.2020-05-20T00:00:002020-05-18T00:00:002020-05-15T00:00:002020-05-13T00:00:00I set a list to contain the values, so I wanna use the index of list like List[0]to take the first value in the list or the other value in the list for further purpose. However, it failed. It gave me a column of first charactor. If I try List[1], ofc it gave me a column of 0 instead of 2020-05-18T00:00:00,which is what exactly I want to have. Any Idea what's wrong with my code? I fell confused to what type F_Date is now, it is not a list?Thanks in advance.F_Date = []for cell in json["Cells"]: if cell["ColumnName"] == "T": if cell["RowIndex"] + 1 > 9: F_Date = cell["DisplayValue"] print(F_Date[0])output:2222Json:json = { "SheetName": "price", "SheetIndex": 4, "Cells": [ { "ColumnName": "T", "RowName": "10", "Address": "T10", "ColumnIndex": 19, "RowIndex": 9, "Value": "2020-05-20T00:00:00", "DisplayValue": "2020-05-20T00:00:00", "ValueType": "Date" }, { "ColumnName": "U", "RowName": "10", "Address": "U10", "ColumnIndex": 20, "RowIndex": 9, "Value": 2.75, "DisplayValue": 2.75, "ValueType": "Numeric" }, { "ColumnName": "V", "RowName": "10", "Address": "V10", "ColumnIndex": 21, "RowIndex": 9, "Value": 2.15, "DisplayValue": 2.15, "ValueType": "Numeric" }, { "ColumnName": "T", "RowName": "11", "Address": "T11", "ColumnIndex": 19, "RowIndex": 10, "Value": "2020-05-18T00:00:00", "DisplayValue": "2020-05-18T00:00:00", "ValueType": "Date" }, { "ColumnName": "U", "RowName": "11", "Address": "U11", "ColumnIndex": 20, "RowIndex": 10, "Value": 2.75, "DisplayValue": 2.75, "ValueType": "Numeric" }, { "ColumnName": "V", "RowName": "11", "Address": "V11", "ColumnIndex": 21, "RowIndex": 10, "Value": 2.15, "DisplayValue": 2.15, "ValueType": "Numeric" }, { "ColumnName": "T", "RowName": "12", "Address": "T12", "ColumnIndex": 19, 
"RowIndex": 11, "Value": "2020-05-15T00:00:00", "DisplayValue": "2020-05-15T00:00:00", "ValueType": "Date" }, { "ColumnName": "U", "RowName": "12", "Address": "U12", "ColumnIndex": 20, "RowIndex": 11, "Value": 2.75, "DisplayValue": 2.75, "ValueType": "Numeric" }, { "ColumnName": "V", "RowName": "12", "Address": "V12", "ColumnIndex": 21, "RowIndex": 11, "Value": 2.15, "DisplayValue": 2.15, "ValueType": "Numeric" }, { "ColumnName": "T", "RowName": "13", "Address": "T13", "ColumnIndex": 19, "RowIndex": 12, "Value": "2020-05-13T00:00:00", "DisplayValue": "2020-05-13T00:00:00", "ValueType": "Date" } ]}
All you need to do is print F_Date instead of F_Date[0], which only prints the first character: a string can be indexed like a list of characters, and you are printing index 0.

print(F_Date)

If you are confused about what F_Date is: F_Date is your date string, as it is the value for that key in the dictionary you get.

F_Date = []
for cell in json["Cells"]:
    if cell["ColumnName"] == "T":
        if cell["RowIndex"] + 1 > 9:
            F_Date = cell["DisplayValue"]  # Here you set F_Date to the value
            print(F_Date[0])

I think what you meant to do is this:

F_Date = []
for cell in json["Cells"]:
    if cell["ColumnName"] == "T":
        if cell["RowIndex"] + 1 > 9:
            F_Date.append(cell["DisplayValue"])
            print(cell["DisplayValue"])
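For what it's worth, the corrected loop above can also be condensed into a single list comprehension. A small sketch, using a trimmed-down sample of the question's JSON (here stored in a plain dict named data, a name of my choosing):

```python
# Condensed version of the append loop, as a list comprehension.
# `data` stands in for the parsed JSON from the question (only a few cells kept).
data = {
    "Cells": [
        {"ColumnName": "T", "RowIndex": 9, "DisplayValue": "2020-05-20T00:00:00"},
        {"ColumnName": "U", "RowIndex": 9, "DisplayValue": 2.75},
        {"ColumnName": "T", "RowIndex": 10, "DisplayValue": "2020-05-18T00:00:00"},
    ]
}

f_dates = [
    cell["DisplayValue"]
    for cell in data["Cells"]
    if cell["ColumnName"] == "T" and cell["RowIndex"] + 1 > 9
]

print(f_dates[0])  # 2020-05-20T00:00:00
print(f_dates[1])  # 2020-05-18T00:00:00
```

With the list built this way, indexing like f_dates[0] returns whole date strings, which is what the question was after.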
How do i add a cooldown or a ratelimit to this event? discord.py @commands.Cog.listener() async def on_message(self, message): user_in = message.content.lower() if "gn" in user_in.split(" ") or "good night" in user_in : if message.author.bot: return if not message.guild: return await message.channel.send(f"Good Night, <@{message.author.id}>")I need to know how to add a cooldown to this event so people dont spam gn or goodnight
I would recommend you check out this post: Discord.py (Rewrite) How to get cooldowns working with on_message event?
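For reference, the approach in posts like that one boils down to remembering a per-user timestamp and ignoring messages that arrive too soon. A minimal sketch of that idea in plain Python, independent of discord.py (the names COOLDOWN_SECONDS and allow are mine; inside on_message you would call allow(message.author.id) and return early when it is False):

```python
import time

COOLDOWN_SECONDS = 30
_last_used = {}  # user id -> time of the last reply we sent them


def allow(user_id, now=None):
    """Return True if the user is off cooldown, and start a new cooldown."""
    now = time.monotonic() if now is None else now
    last = _last_used.get(user_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # still on cooldown: skip the "Good Night" reply
    _last_used[user_id] = now
    return True
```

The now parameter exists only to make the logic easy to test; in the event handler you would call allow(message.author.id) with no second argument.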
Why does MATLAB produce an error when calling a python script with "from tensorflow import keras"? I have the following python script (test_from_import.py)from tensorflow import keras#import tensorflow.kerasfrom tensorflow.keras import backend as Kthat I call from MATLAB (R2018a) with the following code:testDir = '.....' % Directory of 'test_from_import.py'addpath(testDir)% Specify Python Executable Directory.pcPythonExeDir = 'C:\Users\dmattioli\AppData\Local\Programs\Python\Python37\python.exe';[ver, exec, loaded] = pyversion(pcPythonExeDir);pyversion % Print to command line.% Ensure python-matlab integration code is on matlab path.pyDir = fullfile(matlabroot, 'toolbox', 'matlab', 'external', 'interfaces', 'python');addpath(pyDir);% Directory containing all relevant python libraries.pyLibraryDir = 'C:\Users\dmattioli\AppData\Local\Programs\Python\Python37\Lib\site-packages';% Add folders to python system path.insert(py.sys.path, int64(0), testDir);insert(py.sys.path, int64(0), pyDir);insert(py.sys.path, int64(0), pyLibraryDir);%% Call python script.py_test_mod = py.importlib.import_module('test_from_import')% % Using system call instead of matlab-python integration functionality.% [result, status] = python('test_from_import.py') % Does not return error.This produces an error message (see bottom of post) tracing back to the "from tensorflow import keras" line at the top.This error does not occur if/when you:Comment out the first line and uncomment the "import tensorflow.keras" line (the error then shifts to the "from tensorflow.keras import backend as K" line).Run the command "python test_from_import.py" in the command line, orRun the "[result, status] = ..." system call line instead of the "py_test_mod = ..." 
line (https://www.mathworks.com/matlabcentral/answers/153867-running-python-script-in-matlab), orFor various reasons, I'd prefer to resolve this problem rather than using one of those 3 alternatives.I installed everything using pip, with tensorflow being the first installation. Versions of software (Windows 10) are:Python 3.6.8 (3.7.3 has the same issue).h5py 2.90Keras 2.2.4Tensorflow 1.14.0>> py_test_mod = py.importlib.import_module('test_from_import')Error using h5r>init h5py.h5r (line 145)Python Error: AttributeError: type object 'h5py.h5r.Reference' has no attribute '__reduce_cython__'Error in h5r>init h5py._conv (line 21)Error in __init__><module> (line 36)from ._conv import register_converters as _register_convertersError in saving><module> (line 38) import h5pyError in network><module> (line 40)from tensorflow.python.keras.engine import savingError in training><module> (line 42)from tensorflow.python.keras.engine.network import NetworkError in multi_gpu_utils><module> (line 22)from tensorflow.python.keras.engine.training import ModelError in __init__><module> (line 38)from tensorflow.python.keras.utils.multi_gpu_utils import multi_gpu_modelError in advanced_activations><module> (line 27)from tensorflow.python.keras.utils import tf_utilsError in __init__><module> (line 29)from tensorflow.python.keras.layers.advanced_activations import LeakyReLUError in __init__><module> (line 26)from tensorflow.python.keras import layersError in __init__><module> (line 25)from tensorflow.python.keras import applicationsError in __init__><module> (line 82)from tensorflow.python import kerasError in __init__><module> (line 24)from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-importError in test_from_import><module> (line 1)from tensorflow import kerasError in <frozen importlib>_call_with_frames_removed (line 219)Error in <frozen importlib>exec_module (line 728)Error in <frozen importlib>_load_unlocked (line 677)Error in <frozen 
importlib>_find_and_load_unlocked (line 967)Error in <frozen importlib>_find_and_load (line 983)Error in <frozen importlib>_gcd_import (line 1006)Error in __init__>import_module (line 127) return _bootstrap._gcd_import(name[level:], package, level)
According to this issue in the h5py repository, the problem is some version incompatibility. The solution that worked for several people was downgrading to h5py v2.8.0.Installing a specific version using pip can be done on Windows using:pip install h5py==2.8.0 --force-reinstallSome additional information about using pip in this context can be found in this Q&A.
How to fix a Traceback problem in Dataframe Python on Ubuntu I tried to use DataFrame in Python. Commands are:import pandas as pdfrom numpy.random import uniformdf = pd.DataFrame(uniform(0,1,(3,4)), index = 'A B C D'.split(), columns='E F G H'.split())But unfortunately I get the following error. Does anybody have an idea how to fix this issue? --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in create_block_manager_from_blocks(blocks, axes) 1680 -> 1681 mgr = BlockManager(blocks, axes) 1682 mgr._consolidate_inplace() ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in init(self, blocks, axes, do_integrity_check) 142 if do_integrity_check: --> 143 self._verify_integrity() 144 ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in _verify_integrity(self) 344 if block._verify_integrity and block.shape[1:] != mgr_shape[1:]: --> 345 construction_error(tot_items, block.shape[1:], self.axes) 346 if len(self.items) != tot_items: ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in construction_error(tot_items, block_shape, axes, e) 1718 raise ValueError( -> 1719 "Shape of passed values is {0}, indices imply {1}".format(passed, implied) 1720 ) ValueError: Shape of passed values is (5, 4), indices imply (4, 4) During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) in 1 df = pd.DataFrame(uniform(0,1,(5,4)), 2 index = 'A B C D'.split(), ----> 3 columns='W X Y Z'.split()) ~/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in init(self, data, index, columns, dtype, copy) 438 mgr = init_dict({data.name: data}, index, columns, dtype=dtype) 439 else: --> 440 mgr = init_ndarray(data, index, columns, dtype=dtype, copy=copy) 441 442 # For data is list-like, or Iterable (will consume into list) 
~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in init_ndarray(values, index, columns, dtype, copy) 211 block_values = [values] 212 --> 213 return create_block_manager_from_blocks(block_values, [columns, index]) 214 215 ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in create_block_manager_from_blocks(blocks, axes) 1686 blocks = [getattr(b, "values", b) for b in blocks] 1687 tot_items = sum(b.shape[0] for b in blocks) -> 1688 construction_error(tot_items, blocks[0].shape[1:], axes, e) 1689 1690 ~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in construction_error(tot_items, block_shape, axes, e) 1717 raise ValueError("Empty data passed with indices specified.") 1718 raise ValueError( -> 1719 "Shape of passed values is {0}, indices imply {1}".format(passed, implied) 1720 ) 1721 ValueError: Shape of passed values is (5, 4), indices imply (4, 4)​​
You are creating a 3x4 matrix but providing 4 row indices. Provide only 3 row labels in your index:

import pandas as pd
from numpy.random import uniform

df = pd.DataFrame(uniform(0, 1, (3, 4)),
                  index='A B C'.split(),
                  columns='E F G H'.split())
beautifulsoup: Dropping text inside tags I am trying to extract strings from a html file using beautifulsoup. A query replies with label tags inside them, how can I get rid of those tags.from bs4 import BeautifulSoupimport requestswith open('/Desktop/filename.html') as html_file: soup = BeautifulSoup(html_file, 'lxml')string = soup.find('div', class_="col-sm-8 col-xs-6")print(string)Output-<div class="col-sm-8 col-xs-6"> Sherlock Holmes <br> <label for="AgentAddress" style="display: none;"> Detective's Address </label> 221B Baker Street London <br> <label for="AgentCityStateZip" style="display: none;"> City, State, Zip </label> London, United Kingdom </div>print(string.text) outputs Sherlock Holmes Detective's Address 221B Baker Street London City, State, Zip London, United Kingdom I am not interested in the text inside the <label></label> tags, how can I get rid of them so that the output is- Sherlock Holmes 221B Baker Street London London, United Kingdom
You can try decompose; for example, before the print, use this:

for label_element in string.find_all("label"):
    label_element.decompose()
TypeError: Expected tensorflow.python.framework.tensor_spec.TensorSpec, found numpy.ndarray I am getting the following error when i would like to migrate from TFF 0.12.0 to TFF 0.18.0,Knowing that I have an image dataset, Here is my sample_batchimages, labels = next(img_gen.flow_from_directory(path0,target_size=(224, 224), batch_size=2))sample_batch = (images,labels)...def model_fn(): keras_model = create_keras_model() return tff.learning.from_keras_model( keras_model, input_spec=sample_batch, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])So how can I modifiy my sample_batch to be correct with this version ? please Help !! thanks
In version 0.13.0 the sample_batch parameter was deprecated. The input_spec parameter must be a tff.Type or tf.TensorSpec as per the documentation.

To build a structure of tf.TensorSpec from a numpy.ndarray:

def tensor_spec_from_ndarray(a):
    return tf.TensorSpec(dtype=tf.dtypes.as_dtype(a.dtype), shape=a.shape)

sample_batch = (images, labels)  # assumes images and labels are np.ndarray
input_spec = tf.nest.map_structure(tensor_spec_from_ndarray, sample_batch)
Jpype: ModuleNotFoundError when import classes in jar ProblemI got an ModuleNotFoundError: No module named 'spoon' when I set the -Djava.class.path as a directory "jars/*".Project structureutils├── __init__.py├── jars│   └── spoon-core-9.2.0-beta-4.jar└── parse_utils.py# parse_utils.pyimport jpypeimport jpype.importsfrom jpype.types import *JARS_PATH = 'jars/*'jpype.startJVM(jpype.getDefaultJVMPath(), '-Djava.class.path=' + JARS_PATH)import spoon.Launcher as LauncherWhat I've triedJClass and JPackageI found a similar problem at stackoverflow Jpype import cannot find module in jar and I tried the top answer but failed.Launcher = jpype.JPackage('spoon').Launcher # AttributeError: Java package 'spoon' is not validLauncher = jpype.JClass('spoon.Launcher') # TypeError: Class spoon.Launcher is not foundLauncher = jpype.JClass("java.lang.Class").forName('spoon.Launcher') # java.lang.ClassNotFoundException: java.lang.ClassNotFoundException: spoon.Launcheruse jar pathJARS_PATH = 'jars/spoon-core-9.2.0-beta-4.jar'and I got :Traceback (most recent call last): File "ClassLoader.java", line 357, in java.lang.ClassLoader.loadClass File "Launcher.java", line 338, in sun.misc.Launcher$AppClassLoader.loadClass File "ClassLoader.java", line 424, in java.lang.ClassLoader.loadClass File "URLClassLoader.java", line 381, in java.net.URLClassLoader.findClassjava.lang.ClassNotFoundException: java.lang.ClassNotFoundException: com.martiansoftware.jsap.JSAPExceptionThe above exception was the direct cause of the following exception:Traceback (most recent call last): File "org.jpype.pkg.JPypePackage.java", line -1, in org.jpype.pkg.JPypePackage.getObject File "Class.java", line 348, in java.lang.Class.forName File "Class.java", line -2, in java.lang.Class.forName0Exception: Java ExceptionThe above exception was the direct cause of the following exception:Traceback (most recent call last): File "/Users/audrey/Documents/GitHub/xiaoven/codegex/utils/parse_utils.py", line 16, in <module> import 
spoon.Launcher as Launcher File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 971, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 914, in _find_spec File "/usr/local/Caskroom/miniconda/base/envs/codegex/lib/python3.8/site-packages/jpype/imports.py", line 194, in find_spec if not hasattr(base, parts[2]):java.lang.NoClassDefFoundError: java.lang.NoClassDefFoundError: com/martiansoftware/jsap/JSAPException
See this link:

I would start with checking to see if the jar was picked up in the path:

import jpype
jpype.startJVM(classpath="jars/*")
print(jpype.java.lang.System.getProperty("java.class.path"))

My guess is there is a syntax error in your test code, or the spoon jar is missing some dependency, but nothing stands out in your example. Do you have the required jar dependencies, including com.martiansoftware.jsap.JSAPException, in the jars directory?

After I downloaded the missing jars manually according to the error log (or the External Libraries list under IDEA's project view), importing the spoon classes succeeded.
how to use python to open a browser page and make it on top I need to use python to open a selenium browser page and make it shown as the top page. My command is:driver.execute_script('''window.open("https://www.abcxyzle.com", "_blank");''')The problem is the url portion of the command needs to be in a variable becuase my code needs iterate through many diffferent url:The result is:(1)if use the fullurl name: https://www.abcxyz.com",then it works exactly right.(2) when i replace that portion with testurl = https://www.abcxyzle.com",and my command is driver.execute_script('''window.open(testurl, "_blank");''')the I get the following error message"Message: javascript error: testurl is not defined."I could not understand what is wrong?
Try this:

testurl = "https://www.google.com"
driver.execute_script(f'''window.open("{testurl}", "_blank");''')
driver.switch_to.window(driver.window_handles[1])
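A side note on the f-string approach: quoting the URL by hand inside the JavaScript can break if the URL itself contains quotes. One hedged alternative (not from the answer above) is to let json.dumps produce a properly quoted and escaped JS string literal:

```python
import json

# json.dumps quotes and escapes the URL so it is a valid JS string literal,
# avoiding manual quote placement inside the f-string.
testurl = 'https://www.example.com/?q="hello"'
script = f'window.open({json.dumps(testurl)}, "_blank");'
# driver.execute_script(script)  # pass to Selenium as before
print(script)
```

This works because every JSON string is also a valid JavaScript string literal.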
Adafruit MM8451 & Raspberry PI SPI Error 121 with Buster Working with a Raspberry PI and interfacing with an Adafruit MMA8451 Accelerometer board. I am trying a fresh installation of Buster after I had all this working on Stretch. I have installed all the latest libraries and done all the latest updates. I am able to have the MMA8451 show up using sudo i2cdetect -y 1 at its correct location of 0x1D. When I try the example programs I normally get back this big block of error codes:Traceback (most recent call last): File "Downloads/simpletest.py", line 15, in <module> sensor = adafruit_mma8451.MMA8451(i2c) File "/usr/local/lib/python3.7/dist-packages/adafruit_mma8451.py", line 103, in __init__ while self._read_u8(_MMA8451_REG_CTRL_REG2) & 0x40 > 0: File "/usr/local/lib/python3.7/dist-packages/adafruit_mma8451.py", line 134, in _read_u8 self._read_into(address, self._BUFFER, count=1) File "/usr/local/lib/python3.7/dist-packages/adafruit_mma8451.py", line 130, in _read_into in_end=count) File "/usr/local/lib/python3.7/dist-packages/adafruit_bus_device/i2c_device.py", line 150, in write_then_readinto in_start=in_start, in_end=in_end) File "/usr/local/lib/python3.7/dist-packages/busio.py", line 89, in writeto_then_readfrom in_start=in_start, in_end=in_end, stop=stop) File "/usr/local/lib/python3.7/dist-packages/adafruit_blinka/microcontroller/generic_linux/i2c.py", line 61, in writeto_then_readfrom readin = self._i2c_bus.read_i2c_block_data(address, buffer_out[out_start:out_end], in_end-in_start) File "/usr/local/lib/python3.7/dist-packages/Adafruit_PureIO/smbus.py", line 227, in read_i2c_block_data ioctl(self._device.fileno(), I2C_RDWR, request)OSError: [Errno 121] Remote I/O errorThe maddening thing is that occasionally the whole thing will work. Is there something I can check? I had this working before so I believe it is a software issue rather than a hardware issue. 
I have tried both a Raspberry PI 3 and Raspberry PI 4 board, both have given the same error numerous times.
I think I found a fix. Unsure if this is caused by Buster or something else.

I went into /boot/config.txt and added:

core_freq=500
core_freq_min=500
dtparam=i2c_arm=on,i2c_arm_baudrate=10000

That seems to get it working every time instead of having communication errors.
Unable to use "Filter" in AWS Rest API request I am trying to use "Filter" in request parameters while sending REST API request to AWS. Surprisingly, below request parameter just works:request_parameters = 'Action=DescribeAvailabilityZones&Version=2016-11-15'However, as soon as I change it to:request_parameters = 'Action=DescribeAvailabilityZones&Filter.1.state=available&Version=2016-11-15'I get, "The parameter state is not recognized"I am picking up the Filter's syntax from hereAny suggestions please? Thanks in advance.
figured out the solution. The parameters list expects the filter to be passed in a key/value fashion. Below is the amendment which I found to be working:request_parameters = 'Action=DescribeAvailabilityZones&Filter.1.Name=state&Filter.1.Value=available&Version=2016-11-15'I also noticed that unless this option is present in the list of recognized filters, it wont work. This can be found here under specific Actions.Also, filters tags bear relation with tags in XML response. for e.g. the filter to list state of an AvailabilityZone is "state" but in the XML response it is tagged as <zoneState>.
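Building these numbered Filter.N.Name / Filter.N.Value parameters by hand gets error-prone once there are several filters. A hedged sketch of a small stdlib-only helper that flattens a filter dict into the query string (the function name build_query is mine; note the answer above used the short Filter.1.Value form, while the general multi-value form is Filter.N.Value.M):

```python
from urllib.parse import urlencode


def build_query(action, version, filters):
    """Flatten {name: [values]} into AWS's Filter.N.Name / Filter.N.Value.M keys."""
    params = {"Action": action, "Version": version}
    for i, (name, values) in enumerate(filters.items(), start=1):
        params[f"Filter.{i}.Name"] = name
        for j, value in enumerate(values, start=1):
            params[f"Filter.{i}.Value.{j}"] = value
    return urlencode(params)


qs = build_query("DescribeAvailabilityZones", "2016-11-15", {"state": ["available"]})
print(qs)
```

urlencode also takes care of percent-encoding any special characters in the filter values.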
How to find shared values among pandas dataframe rows and number of occurrences i'm dealing with a pretty large db, in particular with these two columns:First column features an id and second column values consists in lists of uids associated to the id. Uids on lists are can repeat themselves, they are not unique.What i'm trying to do is extracting a list of tuples (or other formats, it's not mandatory) with this structure:(inst_id1, inst_id2, shared_uids, times_each_uid_is_shared)Where:Shared_uids is list of uids shared, with or without duplicates;Times_each_uid_is_shared is a dictionary {'uid': 'number of times it occurs on both uids lists'}So, let's say i got:inst_1 | [uid_1, uid_2, uid_1, uid_3, uid_4]inst_2 | [uid_2, uid3, uid_1, uid_5, uid_3]i would like to come out with:(inst_1, inst_2, [uid_1, uid_2, uid_3], {'uid_1': 1, 'uid_2': 1, 'uid_3': 2})Or some other data structure gathering the same kind of informations.I wrote a function, the obvious one, that loops on the inst_IDs and make a set instersection between the associated uids lists, but it's pretty slow even in df.apply(lambda etc) fashion and i'm looking forward to a vectorized way of doing this task.Thank you in advance for all the suggestions.Stay safe,Alessandro
True, df.apply() can become extremely slow when it comes to large datasets. There is a library called Bodo which uses high-performance computing under the hood to speed up Pandas code. It works very well with user-defined functions. Here is an example: https://medium.com/bodo-ai/making-pandas-dataframe-apply-faster-with-bodo-bbae1c485bdfHere is the link to the installation of Bodo: https://docs.bodo.ai/latest/source/getting_started.html
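Separately from the performance question, the per-pair intersection itself can be expressed with collections.Counter, whose & operator keeps the minimum count of each shared element. A small sketch on plain Python lists mirroring the question's example (if the desired tally is something other than the per-uid minimum, e.g. occurrences in the second list only, the Counter arithmetic would need adjusting):

```python
from collections import Counter
from itertools import combinations

# Toy data mirroring the question's inst_id -> uid-list structure.
rows = {
    "inst_1": ["uid_1", "uid_2", "uid_1", "uid_3", "uid_4"],
    "inst_2": ["uid_2", "uid_3", "uid_1", "uid_5", "uid_3"],
}

results = []
for (id_a, uids_a), (id_b, uids_b) in combinations(rows.items(), 2):
    shared = Counter(uids_a) & Counter(uids_b)  # min count per shared uid
    if shared:
        results.append((id_a, id_b, sorted(shared), dict(shared)))

print(results[0])
```

For a real DataFrame, rows would be built from the id column and the uid-list column, e.g. dict(zip(df['inst_id'], df['uids'])).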
Python RegEx - Single vs multiple line test re.finditer() will pick up the (.com) if the below text is in a multiline string. The same function does not work if the text is in a single-line string (var ss). Can anyone please help me to understand?

import re

s = """
example (.com)
w3resource github (.com)
stackoverflow (.com)
"""

# ss = """ example (.com) w3resource github (.com) stackoverflow (.com) """

match = re.finditer(r'\(.+\)', s)
print(match)
for i in match:
    print(i)
The dot (.) matches any character except a newline, so in the first (multiline) case your greedy + stops when it hits each newline; in the single-line case the greedy + runs on to the last parenthesis and hence performs only a single match.

So your regex should be modified. Try replacing it with the pattern below, which does this: inside parentheses, match a literal dot (escaped here) followed by a word (one or more word characters), then the closing parenthesis. The output then matches as expected:

import re

s = """
example (.com)
w3resource github (.com)
stackoverflow (.com)
"""

ss = """ example (.com) w3resource github (.com) stackoverflow (.com) """

match = re.finditer(r'\(\.\w+\)', ss)  # changes are done here
print(match)
for i in match:
    print(i)
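Another common fix, for what it's worth, is to keep the original pattern but make the quantifier non-greedy (+?), so each parenthesised group is matched separately even on a single line:

```python
import re

ss = " example (.com) w3resource github (.com) stackoverflow (.com) "

# Greedy: one match spanning from the first "(" to the last ")".
greedy = re.findall(r'\(.+\)', ss)

# Non-greedy (+?): stops at the first ")", so each group matches separately.
lazy = re.findall(r'\(.+?\)', ss)

print(greedy)  # ['(.com) w3resource github (.com) stackoverflow (.com)']
print(lazy)    # ['(.com)', '(.com)', '(.com)']
```

Both this and the escaped-dot pattern work here; the \(\.\w+\) version is stricter about what may appear inside the parentheses.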
Create and append single column of 1’s, 0’s, and -1’s in csv file based on assessment of several other pre-existing columns signal_1 signal_2 signal_3 signal_4 0 0 0 0 1 1 0 -1 1 1 0 -1 1 0 -1 -1 0 0 -1 -1I have the signal data above in a csv file that I can pull into numpy arrays representing each column using the code below:"Assign all the data from particular columns to a variable." signal_1 = np.genfromtxt("%s.csv" % (i), delimiter=',', usecols=(13,), skip_header=1, unpack=True)signal_2 = np.genfromtxt("%s.csv" % (i), delimiter=',', usecols=(14,), skip_header=1, unpack=True)signal_3 = np.genfromtxt("%s.csv" % (i), delimiter=',', usecols=(15,), skip_header=1, unpack=True)signal_4 = np.genfromtxt("%s.csv" % (i), delimiter=',', usecols=(16,), skip_header=1, unpack=True)I want to append a fourth column named ‘Result’ based on the logic as follows:If signal_1 and signal_2 == 1, then result == 1,elIf signal_3 and signal_4 == -1, then result == -1,else result == 0Using the code below:"Determine the action to take." action_A = np.where(np.all(np.logical_and(signal_1, signal_2) > 0), 1, 0)action_B = np.where(np.all(np.logical_and(signal_3, signal_4) < 0), -1, 0)"Append the trading decision to the csv rate files."file_path = "%s.csv" % (i)df = pd.read_csv(file_path) if action_A: action_A = np.concatenate([np.zeros(len(df) - action_A.shape[0]), action_A]) df[result'] = pd.Series(action_A, index=df.index) # parameter index=False removes the index column inserted by index=df.index above. # removing the index ensures the signal calculation is on the same column after each iteration. 
df.to_csv(file_path, index=False)elif action_B: action_B = np.concatenate([np.zeros(len(df) - action_B.shape[0]), action_B]) df[result'] = pd.Series(action_B, index=df.index) df.to_csv(file_path, index=False)else: df[result'] = pd.Series(0, index=df.index) df.to_csv(file_path, index=False)But I get the table below (wrong one because “result” column is all zeroes):signal_1 signal_2 signal_3 signal_4 result 0 0 0 0 0 1 1 0 -1 0 1 1 0 -1 0 1 0 -1 -1 0 0 0 -1 -1 0Instead of the one below (the correct one):signal_1 signal_2 signal_3 signal_4 result 0 0 0 0 0 1 1 0 -1 1 1 1 0 -1 1 1 0 -1 -1 -1 0 0 -1 -1 -1
You can use numpy.select() as follows:

import numpy as np

df['result'] = np.select([(df['signal_1'] == 1) & (df['signal_2'] == 1),
                          (df['signal_3'] == -1) & (df['signal_4'] == -1)],
                         [1, -1])

Or, if you don't already have a df assembled with all the signal_n series, you can also use:

import numpy as np

df['result'] = np.select([(signal_1 == 1) & (signal_2 == 1),
                          (signal_3 == -1) & (signal_4 == -1)],
                         [1, -1])

Result:

print(df)

   signal_1  signal_2  signal_3  signal_4  result
0         0         0         0         0       0
1         1         1         0        -1       1
2         1         1         0        -1       1
3         1         0        -1        -1      -1
4         0         0        -1        -1      -1
how to access resource files after pip install in virtual env in python? Let says that I have this project structure:src |my_package __init__.py |utils __init__.py util.py |resources __init__.py my_resource.ymlIn util.py, I have this code which need the resource file to work:import yamlimport importlib.resourcesfrom my_package import resourcesclass Util: def merge_settings(self, settings: dict)->dict: with importlib.resources.path(resources, 'my_resource.yml') as p: with open(p) as file: default_settings = yaml.safe_load(file)and everything works fine in my development environment.Then I make a wheel with this code with my setup.py file:import setuptoolsimport globresource_folder = 'my_package/resources'setuptools.setup( name="my_package", version="0.3", packages=setuptools.find_packages(), data_files=[(resource_folder, glob.glob(resource_folder+r'/*.yml'))]then I create a wheel:python .\setup.py bdist_wheel and I finally install it to be used in another project, using a virtual environment with name my_env:(my_env) D:\dev pip install my_package-0.3-py3-none-any.whlBut my code is not running any more due to this line:importlib.resources.path(resources, 'my_resource.yml')The reason is found when exploring my_env folder, my_resource.yml is not in my_package anymore.my_env |my_package |resources my_resource.yml |Lib |site-packages |my_package |resources __init__.pyBut this location could be quite useful to modify easily this file... The how can I deal in the same time with a correct call of resources in my development environment and when using it after pip install ? I would like to always have access to the yml file for edition when required, even after pip install...Tks for your help
Your data_files is both mis-specified and not the setting you want (it's intended for non-package data). The keys in data_files are placed from the root of the prefix: say you install your package into ./venv, then instead of your data ending up at ./venv/lib/python#.#/site-packages/my_package/resources/... it is going to end up at venv/my_package/resources, which is definitely not what you want!

The actual setting that you want is package_data:

package_data={
    'my_package.resources': ['*.yml'],
},

The mapping maps from dotted package names to lists of globs and will place the files inside site-packages.

There's no need to use MANIFEST.in, etc., as these files are automatically included in your package.

For more on this, I made a video on the subject.
Why are models having there parent class names in admin Django I have created models like thisclass User(AbstractUser): login_count = models.PositiveIntegerField(default=0)class Supplier(User): company_name= models.CharField(max_length=30) company_domain=models.CharField(max_length=30) class Worker(User): ACCOUNT_TYPE = ( ('1', 'Admin'), ('2', 'Regular'), ) account_type = models.CharField(max_length=1, choices=ACCOUNT_TYPE)and in the users.admin.py, I haveadmin.site.register(Supplier)admin.site.register(Worker)Why is it that I have all models names as Users in the Django Admin? instead of Workers and Suppliers?
Because AbstractUser is an abstract model, its Meta class is inherited by all subclasses (docs).

You need to provide your own Meta class for each model and set the verbose_name and verbose_name_plural attributes to override the values set in AbstractUser's Meta class:

class Supplier(User):
    company_name = models.CharField(max_length=30)
    company_domain = models.CharField(max_length=30)

    class Meta:
        verbose_name = 'supplier'
        verbose_name_plural = 'suppliers'


class Worker(User):
    ACCOUNT_TYPE = (
        ('1', 'Admin'),
        ('2', 'Regular'),
    )
    account_type = models.CharField(max_length=1, choices=ACCOUNT_TYPE)

    class Meta:
        verbose_name = 'worker'
        verbose_name_plural = 'workers'
.datalog format using Z3 I'm trying to use the Z3 extension: muZ with fixed-point constraints following this tutorial: https://rise4fun.com/Z3/tutorial/fixedpoints.As marked in this tutorial, three different text-based input formats are accepted. The basic datalog is one of these accepted formats.I have programs of this form indicated in the tutorial:Z 64P0(x: Z) inputGt0(x : Z, y : Z) inputR(x : Z) printtuplesS(x : Z) printtuplesGt(x : Z, y : Z) printtuplesGt(x,y) :- Gt0(x,y).Gt(x,y) :- Gt(x,z), Gt(z,y).R(x) :- Gt(x,_).S(x) :- Gt(x,x0), Gt0(x,y), Gt0(y,z), P0(z).Gt0("a","b").Gt0("b","c").Gt0("c","d").Gt0("a1","b").Gt0("b","a1").Gt0("d","d1").Gt0("d1","d").P0("a1").How can I parse these programs using Z3Py (or Z3).
If you put that program text in a file (say a.datalog), you can directly call z3 on it. (Note that the extension has to be datalog).When I do that, I get:$ z3 a.datalogTuples in Gt: (x=a(0),y=b(1)) (x=b(1),y=c(2)) (x=c(2),y=d(3)) (x=a1(4),y=b(1)) (x=b(1),y=a1(4)) (x=d(3),y=d1(5)) (x=d1(5),y=d(3)) (x=a(0),y=c(2)) (x=a(0),y=a1(4)) (x=b(1),y=d(3)) (x=c(2),y=d1(5)) (x=a1(4),y=c(2)) (x=a1(4),y=a1(4)) (x=b(1),y=b(1)) (x=d(3),y=d(3)) (x=d1(5),y=d1(5)) (x=a(0),y=d(3)) (x=a1(4),y=d(3)) (x=b(1),y=d1(5)) (x=a(0),y=d1(5)) (x=a1(4),y=d1(5))Tuples in R: (x=a(0)) (x=b(1)) (x=c(2)) (x=a1(4)) (x=d(3)) (x=d1(5))Tuples in S: (x=a(0)) (x=a1(4))Time: 3msParsing: 0ms, other: 1msIs this what you're trying to do?
How to aggregate some data in pandas DataFrame I have dataframe like this: df = pd.DataFrame({'id': [115,120,200], 'category': ['a','a', 'b'], 'clust': [1, 2, 3]})I want to aggregate and count the amount of id of every category, which is in particular clust. For instance, result can also data frame where index row is clust and index column is category and values are amount of id
IIUC, let's use groupby and unstack:

import pandas as pd

df = pd.DataFrame({'id': [115, 120, 200], 'category': ['a', 'a', 'b'], 'clust': [1, 2, 3]})
df

Input DataFrame:

   category  clust   id
0         a      1  115
1         a      2  120
2         b      3  200

Group, aggregate and reshape:

df_out = df.groupby(['clust', 'category'])['id'].count().unstack()

Output:

category     a     b
clust
1          1.0   NaN
2          1.0   NaN
3          NaN   1.0
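If the NaN/float output of unstack is a nuisance, pd.crosstab computes the same contingency table with integer zeros directly. A hedged alternative, not part of the answer above:

```python
import pandas as pd

df = pd.DataFrame({'id': [115, 120, 200],
                   'category': ['a', 'a', 'b'],
                   'clust': [1, 2, 3]})

# crosstab counts id occurrences per (clust, category) pair,
# filling absent combinations with 0 instead of NaN.
table = pd.crosstab(df['clust'], df['category'])
print(table)
```

The result has clust as the row index and category as the columns, matching the layout the question asks for.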
Open Tkinter Toplevel only if it doesn't already exist

I am trying to create a Python app with a Tkinter UI and am currently having the following issue. I am trying to set up the UI such that a log is kept in the background, and when the user presses a button a Toplevel window appears. The window displays the log and appends updates to it in real time. So far all of that works properly. However, I want to make it so that if the Toplevel window is open, it can't be opened again.

Additionally, the main program will be fullscreen when it is being run. This means that if the log window is open and the user interacts with the main program again, the log window is no longer visible. Is there a way to keep the Toplevel window on top of the root window, even while the user is interacting with the root window?

Here is the code I have been fiddling with:

import tkinter as tk

class guiapp(tk.Frame):
    def __init__(self, master):
        tk.Frame.__init__(self, master)
        self.master = master
        self.value = 0.0
        self.alive = True
        self.list_for_toplevel = []
        btn = tk.Button(self.master, text = "Click", command = self.TextWindow)
        btn.pack()

    def TextWindow(self):
        #if not tk.Toplevel.winfo_exists(self.textWindow):
        self.textWindow = tk.Toplevel(self.master)
        self.textFrame = tk.Frame(self.textWindow)
        self.textFrame.pack()
        self.textArea = tk.Text(self.textWindow, height = 10, width = 30)
        self.textArea.pack(side = "left", fill = "y")
        bar = tk.Scrollbar(self.textWindow)
        bar.pack(side = "right", fill = "y")
        bar.config(command = self.textArea.yview)
        self.alive = True
        self.timed_loop()

    def timed_loop(self):
        if self.alive == True and tk.Toplevel.winfo_exists(self.textWindow):
            self.master.after(1000, self.timed_loop)
            self.value += 1
            self.list_for_toplevel.append(self.value)
            self.textArea.delete(1.0, "end-1c")
            for item in self.list_for_toplevel:
                self.textArea.insert('end', "{}\n".format(item))
            self.textArea.see('end')
        else:
            self.alive = False

if __name__ == "__main__":
    root = tk.Tk()
    myapp = guiapp(root)
    root.mainloop()

The line I have commented out in the TextWindow method (if not tk.Toplevel.winfo_exists(self.textWindow)) is what I attempted to use as an "if this exists, don't make the window" check. However, running it I get the error:

'guiapp' has no attribute 'textWindow'

I mean, I understand that the program doesn't have the attribute textWindow before it exists. That's the whole reason I was attempting to use winfo_exists() in the first place. I'm wondering if I should create an isOpen boolean, but the problem is that I don't know how to detect when a window closes. Any help is appreciated.
You just need to initialize self.textWindow in addition to checking whether it exists:

class guiapp(tk.Frame):
    ...
    self.textWindow = None
    ...
    def TextWindow(self):
        if self.textWindow is None or not self.textWindow.winfo_exists():
            self.textWindow = tk.Toplevel(self.master)
            ...
Python Dictionary to CSV Issue

I put together a python script to clean CSV files. The reformatting works, but the data rows the writer writes to the new CSV file are wrong. I am constructing a dictionary of all rows of data before writing using writer.writerows(). When I check the dictionary using print statements, the correct data is appending to the list. However, after appending, the incorrect values are in the dictionary.

import csv

data = []

with open(r'C:\\Data\\input.csv', 'r') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    street_fields = []  # Store new field names in list
    street_fields.append("startdate")
    street_fields.append("starttime")
    street_fields.append("sitecode")
    street_fields.append("recordtime")
    street_fields.append("direction")
    street_fields.append("turnright")
    street_fields.append("wentthrough")
    street_fields.append("turnleft")
    street_fields.append("pedestrians")
    for row in csv_reader:  # Read input rows
        if line_count == 0:
            startdate = row[1]  # Get Start Date from B1
            line_count += 1
        elif line_count == 1:
            starttime = row[1]  # Get Start Time from B2
            line_count += 1
        elif line_count == 2:
            sitecode = str(row[1])  # Get Site code from B3
            line_count += 1
        elif line_count == 3:
            street_count = len(row) - 3  # Determine number of streets in report
            streetnames = []
            i = 1
            while i < street_count:
                streetnames.append(row[i])  # Add streets to list
                i += 4
            line_count += 1
        elif line_count > 4:
            street_values = {}  # Create dictionary to store new row values
            n = 1
            for street in streetnames:
                turnright = 0 + n
                wentthrough = 1 + n
                turnleft = 2 + n
                pedestrians = 3 + n
                street_values["startdate"] = startdate
                street_values["starttime"] = starttime
                street_values["sitecode"] = sitecode
                street_values["recordtime"] = row[0]
                street_values["direction"] = street
                street_values["turnright"] = int(row[turnright])
                street_values["wentthrough"] = int(row[wentthrough])
                street_values["turnleft"] = int(row[turnleft])
                street_values["pedestrians"] = int(row[pedestrians])
                data.append(street_values)  # Append row dictionary to list
                #print(street_values)  ### UNCOMMENT TO SEE CORRECT ROW DATA ###
                #print(data)  ### UNCOMMENT TO SEE INCORRECT ROW DATA ###
                n += 4
            line_count += 1
        else:
            line_count += 1

with open(r'C:\\Data\\output.csv', 'w', newline='', encoding="utf-8") as w_scv_file:
    writer = csv.DictWriter(w_scv_file, fieldnames=street_fields)
    writer.writerow(dict((fn, fn) for fn in street_fields))  # Write headers to new CSV
    writer.writerows(data)  # Write data from list of dictionaries

An example of the list of dictionaries created (JSON):

[
  {"startdate": "11/9/2017", "starttime": "7:00", "sitecode": "012345", "recordtime": "7:00", "direction": "Cloud Dr. From North", "turnright": 0, "wentthrough": 2, "turnleft": 11, "pedestrians": 0},
  {"startdate": "11/9/2017", "starttime": "7:00", "sitecode": "012345", "recordtime": "7:00", "direction": "Florida Blvd. From East", "turnright": 4, "wentthrough": 433, "turnleft": 15, "pedestrians": 0},
  {"startdate": "11/9/2017", "starttime": "7:00", "sitecode": "012345", "recordtime": "7:00", "direction": "Cloud Dr. From South", "turnright": 15, "wentthrough": 4, "turnleft": 6, "pedestrians": 0},
  {"startdate": "11/9/2017", "starttime": "7:00", "sitecode": "012345", "recordtime": "7:00", "direction": "Florida Blvd. From West", "turnright": 2, "wentthrough": 219, "turnleft": 2, "pedestrians": 0},
  {"startdate": "11/9/2017", "starttime": "7:00", "sitecode": "012345", "recordtime": "7:15", "direction": "Cloud Dr. From North", "turnright": 1, "wentthrough": 3, "turnleft": 8, "pedestrians": 0}
]

What actually writes to the CSV:

Note the Direction field and data rows are incorrect. For some reason, when it loops through the streetnames list, the last street name and the corresponding row values persist for each individual record time. Do I need to delete my variables before re-assigning them values?
It looks like you are appending the same dictionary to the list over and over.

In general, when appending a number of separate dictionaries to a list, I would use mylist.append(mydict.copy()). Otherwise, when you later assign new values within a dictionary of the same name, you are really just updating your old dictionary — including the entries in your list that point to that same dictionary object (see mutable vs. immutable objects in Python).

In short: if you want the dictionary in the list to be a separate entity from the new one, create a copy using dict.copy() when appending it to the list (a shallow copy is enough here, since the values are strings and ints).
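A minimal sketch of the aliasing problem the answer describes, using a hypothetical "value" key rather than the street counts from the question:

```python
# Appending the same dict object: every list entry aliases one dict,
# so all entries end up showing the values from the last loop iteration.
rows = []
record = {}
for n in [1, 2, 3]:
    record["value"] = n
    rows.append(record)
print([r["value"] for r in rows])   # [3, 3, 3]

# Appending a copy keeps each row independent.
rows = []
record = {}
for n in [1, 2, 3]:
    record["value"] = n
    rows.append(record.copy())
print([r["value"] for r in rows])   # [1, 2, 3]
```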
This is the error that I am getting while executing from sklearn import preprocessing, cross_validation, svm:

Traceback (most recent call last):
  File "C:/Python27/12.py", line 4, in <module>
    from sklearn import preprocessing, cross_validation, svm
  File "C:\Python27\lib\site-packages\sklearn\__init__.py", line 57, in <module>
    from .base import clone
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 10, in <module>
    from scipy import sparse
ImportError: No module named scipy

Please help me resolve this.
Install this package using:

easy_install scipy

or

sudo apt-get install python-scipy

or

pip install scipy

or

conda install scipy

If you are using Windows, refer to:
https://stackoverflow.com/a/39577864/6633975
https://stackoverflow.com/a/32064281/6633975
run python in terminal using sublime

I use Linux Mint 17.3 and recently installed Sublime Text 3 (unregistered version). In order to run Python scripts in a terminal (the external terminal of the OS, not the internal one of the IDE), I found this somewhere:

Tools -> Build system -> New build system

type this:

{
    "cmd": ["gnome-terminal -e 'bash -c \"python3 -u $file;echo;echo Press ENTER to exit; read line\"'"],
    "shell": true
}

and save it as python3.sublime-build. After that I quit Sublime and relaunch it. I open a Python file, then select Tools -> Build system -> python3, and Tools -> Build. A terminal window should appear, but instead nothing happens. Any suggestions would be appreciated.
Go to Tools -> Build System -> New Build System and paste the following in the window that opens:

{
    "path": "/usr/local/bin",
    "cmd": ["python3", "-u", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python"
}

Then save it as, for example, pybuild.sublime-build, and go to Tools -> Build System -> whatever name you chose. Then press Ctrl+B on whatever Python file you want to run. This worked like a charm for me on Manjaro.
In Pandas, how to send the output from groupby transform to the original dataframe?

Consider the following example:

import pandas as pd
import numpy as np

df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'D' : np.random.randn(8)})

group = df.groupby(['A', 'B'])
agg_df = group.agg({'D': lambda x: x[x>0].sum(),
                    'D': lambda x: x[x<0].sum()})

Here I would like to get two additional variables in the original dataframe df: one that is the sum of positive elements in D, and one that is the sum of negative elements in D. Using agg is straightforward, as you can see in the code above. However, I would like to have these values repeated in the main dataframe for each line corresponding to a particular groupby combination. The naive syntax would be to use:

transform_df = group.transform({'D': lambda x: x[x>0].sum(),
                                'D': lambda x: x[x<0].sum()})

but that fails. What am I doing wrong here? Thanks
If expressed in two lines, selecting the D column explicitly, the logic becomes cleaner to write and read:

df['d_pos_sum'] = df.groupby(['A', 'B'])['D'].transform(lambda x: x[x>0].sum())
df['d_neg_sum'] = df.groupby(['A', 'B'])['D'].transform(lambda x: x[x<0].sum())
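A runnable sketch of the two transform calls on a small deterministic D column (the values are chosen here purely for illustration):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'bar', 'bar'],
                   'B': ['one', 'one', 'two', 'two'],
                   'D': [1.0, -2.0, 3.0, -4.0]})

# transform broadcasts each group's aggregate back onto every row of the group.
df['d_pos_sum'] = df.groupby(['A', 'B'])['D'].transform(lambda x: x[x > 0].sum())
df['d_neg_sum'] = df.groupby(['A', 'B'])['D'].transform(lambda x: x[x < 0].sum())
print(df)
```

Each ('foo', 'one') row gets d_pos_sum 1.0 and d_neg_sum -2.0; each ('bar', 'two') row gets 3.0 and -4.0.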
How to draw samples from two variables from population

I have a dataset with more female students than male students. I need to analyze which gender performs better on their test. Because their numbers are not equal, I need to draw equal-sized samples.

female=df.sample (df.query ("gender=='female'")=200)
male=df.sample (df.query ("gender=='male'")=200)

Is this code correct?
Not quite. Among other things, you have a syntax error. Assuming you want 200 samples from each population, try this:

female = df[df['gender'] == 'female'].sample(200)
male = df[df['gender'] == 'male'].sample(200)
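To illustrate the full workflow end to end, here is a sketch on synthetic data — the column names gender and score and the score distribution are assumptions for this example, not part of the original question:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=1000, p=[0.6, 0.4]),
    "score": rng.normal(70, 10, size=1000),
})

# Equal-sized samples from each group (random_state makes it reproducible).
female = df[df["gender"] == "female"].sample(200, random_state=1)
male = df[df["gender"] == "male"].sample(200, random_state=1)

print(len(female), len(male))                          # 200 200
print(female["score"].mean() - male["score"].mean())   # difference in mean scores
```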
Pandas DataFrame index - month and day only

I'd like to have a DataFrame with a DatetimeIndex, but I only want the months and days; not years. I'd like it to look like the following:

(index)  (values)
01-01    56.2
01-02    59.6
...
01-31    62.3
02-01    61.6
...
12-31    44.0

I've tried creating a date_range, but this seems to require the year input, so I can't seem to figure out how to achieve the above.
you can do it this way:

In [78]: df = pd.DataFrame({'val':np.random.rand(10)}, index=pd.date_range('2000-01-01', freq='10D', periods=10))

In [79]: df
Out[79]:
                 val
2000-01-01  0.422023
2000-01-11  0.215800
2000-01-21  0.186017
2000-01-31  0.804285
2000-02-10  0.014004
2000-02-20  0.296644
2000-03-01  0.048683
2000-03-11  0.239037
2000-03-21  0.129382
2000-03-31  0.963110

In [80]: df.index.dtype_str
Out[80]: 'datetime64[ns]'

In [81]: df.index.dtype
Out[81]: dtype('<M8[ns]')

In [82]: df.index = df.index.strftime('%m-%d')

In [83]: df
Out[83]:
            val
01-01  0.422023
01-11  0.215800
01-21  0.186017
01-31  0.804285
02-10  0.014004
02-20  0.296644
03-01  0.048683
03-11  0.239037
03-21  0.129382
03-31  0.963110

In [84]: df.index.dtype_str
Out[84]: 'object'

In [85]: df.index.dtype
Out[85]: dtype('O')

NOTE: the index dtype is a string (object) now

PS: of course you can do it in one step if you need:

In [86]: pd.date_range('2000-01-01', freq='10D', periods=5).strftime('%m-%d')
Out[86]: array(['01-01', '01-11', '01-21', '01-31', '02-10'], dtype='<U5')
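One caveat worth noting (an observation added here, not part of the original answer): after strftime the index values are plain zero-padded strings, which means lexicographic order still matches calendar order within a year:

```python
import pandas as pd

idx = pd.date_range('2000-01-01', freq='10D', periods=5)
labels = list(idx.strftime('%m-%d'))
print(labels)  # ['01-01', '01-11', '01-21', '01-31', '02-10']

# Zero-padding means string sort order equals chronological order.
print(sorted(labels) == labels)  # True
```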
Right way to set variables at python class

Which is the right way to work with variables inside a class?

1 - setting them as instance attributes, where we access them from the instance itself:

class NeuralNetwork(object):
    def __init__(self, topology):
        self.topology = topology
        self.buildLayers()

    def buildLayers(self):
        for layer in self.topology:
            #do thing

2 - passing them through the methods that need them, without assigning them to the instance if they are not really useful variables:

class NeuralNetwork(object):
    def __init__(self, topology):
        self.buildLayers(topology)

    def buildLayers(self, topology):
        for layer in topology:
            #do thing

3 - a mix of the above two:

class NeuralNetwork(object):
    def __init__(self, topology):
        self.topology = topology
        self.buildLayers(self.topology)  # or self.buildLayers(topology) ?

    def buildLayers(self, topology):
        for layer in topology:
            #do thing

I think that the first one is correct, but it doesn't let you reuse the function for different purposes without assigning a new value to the attribute, which would look like this:

self.topology = x
self.buildLayers()

That looks weird, and you don't really understand that changing self.topology is affecting the call of self.buildLayers().
In general, the first way is the "really object oriented" way, and much preferred over the second and the third.

If you want your buildLayers function to be able to change the topology occasionally, give it a parameter topology with default value None. As long as you don't pass that parameter when calling buildLayers, it will use self.topology. If you pass it, it will use the one you passed and (if you wish) change self.topology to it.

By the way, while it's wise to stick to rules like this in the beginning, there are no real dogmas in programming. As experience grows, you'll find that to each rule there are many perfectly sane exceptions. That's the fun of programming — you never stop learning from experience.
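A minimal sketch of that default-parameter pattern — the layer-building body here is a placeholder, since the original question elides it:

```python
class NeuralNetwork:
    def __init__(self, topology):
        self.topology = topology
        self.layers = []
        self.buildLayers()

    def buildLayers(self, topology=None):
        if topology is None:
            topology = self.topology    # default: use the instance attribute
        else:
            self.topology = topology    # optionally remember the override
        # placeholder for the real "do thing" loop
        self.layers = ["layer({})".format(size) for size in topology]

net = NeuralNetwork([2, 3, 1])
net.buildLayers([4, 4])   # rebuild with a different topology
print(net.topology)       # [4, 4]
```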
How to create table in hbase using pyspark?

I want to create a new HBase table (if it does not already exist in the namespace) from PySpark code, for storing data. Can someone help me with this task?
I think the easiest way is to use happybase. You can find the documentation here: happybase. Below is an example.

hbase(main):001:0> list
TABLE
emp
1 row(s) in 0.7750 seconds
=> ["emp"]

There is only one table, and I will create a new table called my_table using happybase:

>>> import happybase
>>> host = 'your host'
>>> connection = happybase.Connection(host = host) #not specify port
>>> connection.create_table(
...     'my_table',
...     {'col1': dict(),  # it uses defaults; if you want you can define column definitions
...      'col2': dict(),
...      'col3': dict()
...     }
... )

And check HBase:

hbase(main):002:0> list
TABLE
emp
my_table
2 row(s) in 0.0660 seconds
=> ["emp", "my_table"]

The new table is created. You can read tables via happybase as well:

>>> import happybase
>>> host = 'your host'
>>> connection = happybase.Connection(host = host) #not specify port
>>> table = connection.table('emp')
>>> table.row('1')
{b'personal data:name': b'raju'}
How to get the best estimator & parameters out from pipelined gridsearch and cross_val_score?

I'd like to find the best parameters for SVC, using a nested CV approach:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X, y = load_breast_cancer(return_X_y=True)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Imputer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

pipe_svc = make_pipeline(Imputer(), StandardScaler(), PCA(n_components=2), SVC(random_state=1))

param_range = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
param_grid = [{'svc__C': param_range, 'svc__kernel': ['linear']},
              {'svc__C': param_range, 'svc__gamma': param_range, 'svc__kernel': ['rbf']}]

gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', n_jobs=4, cv=2)

scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
scores
# how do I get the best parameters out from gridsearch after cross_val?

Out[]: array([0.925, 0.9375, 0.925, 0.95, 0.94871795])

gs.best_estimator_

Out[]: Pipeline(memory=None, steps=[('imputer', Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0)), ('standardscaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('pca', PCA(copy=True, iterated_power='auto', n_components=2, random_state=None, svd_solver='auto', tol=0.0, whiten=False)...ar', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False))])

The last line of code only gave 5 accuracy scores. gs.best_estimator_ doesn't yield any useful information either. What's the best way to combine GridSearchCV with cross_val_score in a pipeline?
Well, you don't have to use cross_val_score; you can get all the information and meta results during the cross-validation and, after finding the best estimator, from the search object itself. Please consider this example:

from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Imputer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

pipe_svc = make_pipeline(Imputer(), StandardScaler(), PCA(n_components=2), SVC(random_state=1))

param_range = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
param_grid = {'svc__C': [0.001, 0.01, 0.1, 1, 10, 100, 1000],
              'svc__kernel': ['linear', 'rbf'],
              'svc__gamma': [0.001, 0.01, 0.1, 1, 10, 100, 1000]}

cv = StratifiedKFold(n_splits=5)
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=cv, return_train_score=True)
gs.fit(X_train, y_train)

print("Best Estimator: \n{}\n".format(gs.best_estimator_))
print("Best Parameters: \n{}\n".format(gs.best_params_))
print("Best Test Score: \n{}\n".format(gs.best_score_))
print("Best Training Score: \n{}\n".format(gs.cv_results_['mean_train_score'][gs.best_index_]))
print("All Training Scores: \n{}\n".format(gs.cv_results_['mean_train_score']))
print("All Test Scores: \n{}\n".format(gs.cv_results_['mean_test_score']))
# # This prints out all results during Cross-Validation in detail
#print("All Meta Results During CV Search: \n{}\n".format(gs.cv_results_))

Output

Best Estimator:
Pipeline(memory=None, steps=[('imputer', Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0)), ('standardscaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('pca', PCA(copy=True, iterated_power='auto', n_components=2, random_state=None, svd_solver='auto', tol=0.0, whiten=False)...ar', max_iter=-1, probability=False, 
random_state=1, shrinking=True, tol=0.001, verbose=False))])Best Parameters: {'svc__gamma': 0.001, 'svc__kernel': 'linear', 'svc__C': 1}Best Test Score: 0.9422110552763819Best Training Score: 0.9440783896216558All Training Scores: [0.90012027 0.64070503 0.90012027 0.64070503 0.90012027 0.64070503 0.90012027 0.64070503 0.90012027 0.64070503 0.90012027 0.64070503 0.90012027 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.92587291 0.64070503 0.93779697 0.68906962 0.93779697 0.91582382 0.93779697 0.92901362 0.93779697 0.88879951 0.93779697 0.64070503 0.93779697 0.64070503 0.93779697 0.64070503 0.94407839 0.91394491 0.94407839 0.93277932 0.94407839 0.93968376 0.94407839 0.95413931 0.94407839 0.98052483 0.94407839 0.9949725 0.94407839 0.99937304 0.94533822 0.93090042 0.94533822 0.94345143 0.94533822 0.94911575 0.94533822 0.96293448 0.94533822 0.99434357 0.94533822 1. 0.94533822 1. 0.94533822 0.94219554 0.94533822 0.94219357 0.94533822 0.95099466 0.94533822 0.98052286 0.94533822 1. 0.94533822 1. 0.94533822 1. 0.94596518 0.9428225 0.94596518 0.94345537 0.94596518 0.95539323 0.94596518 0.99371858 0.94596518 1. 0.94596518 1. 0.94596518 1. 
]All Test Scores: [0.88944724 0.64070352 0.88944724 0.64070352 0.88944724 0.64070352 0.88944724 0.64070352 0.88944724 0.64070352 0.88944724 0.64070352 0.88944724 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.92713568 0.64070352 0.9321608 0.68090452 0.9321608 0.90954774 0.9321608 0.92211055 0.9321608 0.84422111 0.9321608 0.64070352 0.9321608 0.64070352 0.9321608 0.64070352 0.94221106 0.9120603 0.94221106 0.92713568 0.94221106 0.91959799 0.94221106 0.93969849 0.94221106 0.81407035 0.94221106 0.65075377 0.94221106 0.64572864 0.94221106 0.92964824 0.94221106 0.92964824 0.94221106 0.92462312 0.94221106 0.92211055 0.94221106 0.80653266 0.94221106 0.65326633 0.94221106 0.64572864 0.94221106 0.92964824 0.94221106 0.93969849 0.94221106 0.92713568 0.94221106 0.90954774 0.94221106 0.82663317 0.94221106 0.65326633 0.94221106 0.64572864 0.93969849 0.94221106 0.93969849 0.93467337 0.93969849 0.92964824 0.93969849 0.87939698 0.93969849 0.8241206 0.93969849 0.65326633 0.93969849 0.64572864]
Seaborn multiple barplots

I have a pandas dataframe that looks like this:

    class      men     woman  children
0   first  0.91468  0.667971  0.660562
1  second  0.30012  0.329380  0.882608
2   third  0.11899  0.189747  0.121259

How would I create a plot using seaborn that looks like this? Do I have to rearrange my data in some way?

[example grouped bar plot image] (source: mwaskom at stanford.edu)
Yes, you need to reshape the DataFrame:

df = pd.melt(df, id_vars="class", var_name="sex", value_name="survival rate")
df

Out:
    class       sex  survival rate
0   first       men       0.914680
1  second       men       0.300120
2   third       men       0.118990
3   first     woman       0.667971
4  second     woman       0.329380
5   third     woman       0.189747
6   first  children       0.660562
7  second  children       0.882608
8   third  children       0.121259

Now, you can use factorplot (v0.8.1 or earlier):

sns.factorplot(x='class', y='survival rate', hue='sex', data=df, kind='bar')

For versions 0.9.0 or later, as Matthew noted in the comments, you need to use the renamed version, catplot:

sns.catplot(x='class', y='survival rate', hue='sex', data=df, kind='bar')
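The reshape step can be checked without any plotting; here is a sketch reproducing the melt on the question's data:

```python
import pandas as pd

df = pd.DataFrame({'class': ['first', 'second', 'third'],
                   'men': [0.91468, 0.30012, 0.11899],
                   'woman': [0.667971, 0.329380, 0.189747],
                   'children': [0.660562, 0.882608, 0.121259]})

# Wide -> long: one row per (class, sex) pair, ready for hue='sex' plotting.
long_df = pd.melt(df, id_vars="class", var_name="sex", value_name="survival rate")
print(long_df.shape)             # (9, 3)
print(long_df.columns.tolist())  # ['class', 'sex', 'survival rate']
```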
How to sort list of tuples by several keys

I am doing an exercise on Python and lists with one problem. I have a list of tuples sorted by the second key:

[('f', 3), ('a', 3), ('d', 3), ('b', 2), ('c', 2)]

And I need to sort it by the second value numerically, and by the first value in alphabetical order. It must look like:

[('a', 3), ('d', 3), ('f', 3), ('b', 2), ('c', 2)]

When I used the sorted function I got:

[('a', 3), ('b', 2), ('c', 2), ('d', 3), ('f', 3)]

It sorted by the first element (and I lost the arrangement of the second). I also tried to use key:

def getKey(item):
    return item[0]

a = sorted(lis, key=getKey)

And it didn't help me either.
The sort() method is stable. Call it twice: first for the secondary key (alphabetically), then for the primary key (the number):

>>> lst = [('f', 3), ('a', 3), ('d', 3), ('b', 2), ('c', 2)]
>>> lst.sort()
>>> lst.sort(key=lambda kv: kv[1], reverse=True)
>>> lst
[('a', 3), ('d', 3), ('f', 3), ('b', 2), ('c', 2)]
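An alternative, not part of the original answer: since the second element is numeric, a single sorted() call with a tuple key works too — negate the number for descending order while the letter sorts ascending. (For non-numeric keys, the two-pass stable sort is the way to go.)

```python
lst = [('f', 3), ('a', 3), ('d', 3), ('b', 2), ('c', 2)]
result = sorted(lst, key=lambda kv: (-kv[1], kv[0]))
print(result)  # [('a', 3), ('d', 3), ('f', 3), ('b', 2), ('c', 2)]
```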
Converting JSONL file to CSV - "JSONDecodeError: Extra data"

I am using tweepy's StreamListener to collect Twitter data, and the code I am using generates a JSONL file with a bunch of metadata. Now I would like to convert the file into a CSV, for which I found code that does just that. Unfortunately, I have run into the error:

raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 7833)

I have read through other threads, and I reckon it has something to do with json.loads not being able to process multiple parts of data within the JSON file (which is of course the case for my JSON lines file). How can I circumvent this problem within the code? Or do I have to use a completely different approach to convert the file? (I am using Python 3.6, and the tweets I am streaming are mostly in Arabic.)

__author__ = 'seandolinar'

import json
import csv
import io

'''
creates a .csv file using a Twitter .json file
the fields have to be set manually
'''

data_json = io.open('stream_____.jsonl', mode='r', encoding='utf-8').read()  # reads in the JSON file
data_python = json.loads(data_json)

csv_out = io.open('tweets_out_utf8.csv', mode='w', encoding='utf-8')  # opens csv file

fields = u'created_at,text,screen_name,followers,friends,rt,fav'  # field names
csv_out.write(fields)
csv_out.write(u'\n')

for line in data_python:
    # writes a row and gets the fields from the json object
    # screen_name and followers/friends are found on the second level hence two get methods
    row = [line.get('created_at'),
           '"' + line.get('text').replace('"', '""') + '"',  # creates double quotes
           line.get('user').get('screen_name'),
           unicode(line.get('user').get('followers_count')),
           unicode(line.get('user').get('friends_count')),
           unicode(line.get('retweet_count')),
           unicode(line.get('favorite_count'))]

    row_joined = u','.join(row)
    csv_out.write(row_joined)
    csv_out.write(u'\n')

csv_out.close()
If the data file consists of multiple lines, each of which is a single JSON object, you can use a generator to decode the lines one at a time:

def extract_json(fileobj):
    # Using "with" ensures that fileobj is closed when we finish reading it.
    with fileobj:
        for line in fileobj:
            yield json.loads(line)

The only changes to your code are that the data_json file is not read explicitly, and data_python is the result of calling extract_json rather than json.loads. Here's the amended code:

import json
import csv
import io

'''
creates a .csv file using a Twitter .json file
the fields have to be set manually
'''

def extract_json(fileobj):
    """
    Iterates over an open JSONL file and yields decoded lines.
    Closes the file once it has been read completely.
    """
    with fileobj:
        for line in fileobj:
            yield json.loads(line)

data_json = io.open('stream_____.jsonl', mode='r', encoding='utf-8')  # opens the JSONL file
data_python = extract_json(data_json)

csv_out = io.open('tweets_out_utf8.csv', mode='w', encoding='utf-8')  # opens csv file

fields = u'created_at,text,screen_name,followers,friends,rt,fav'  # field names
csv_out.write(fields)
csv_out.write(u'\n')

for line in data_python:
    # writes a row and gets the fields from the json object
    # screen_name and followers/friends are found on the second level hence two get methods
    row = [line.get('created_at'),
           '"' + line.get('text').replace('"', '""') + '"',  # creates double quotes
           line.get('user').get('screen_name'),
           unicode(line.get('user').get('followers_count')),
           unicode(line.get('user').get('friends_count')),
           unicode(line.get('retweet_count')),
           unicode(line.get('favorite_count'))]

    row_joined = u','.join(row)
    csv_out.write(row_joined)
    csv_out.write(u'\n')

csv_out.close()
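A quick way to check the generator's behaviour without a real tweet file is to feed it an in-memory file object — io.StringIO here stands in for the opened JSONL file:

```python
import io
import json

def extract_json(fileobj):
    # Same line-by-line generator as in the answer above.
    with fileobj:
        for line in fileobj:
            yield json.loads(line)

fake_file = io.StringIO('{"text": "hello"}\n{"text": "world"}\n')
tweets = list(extract_json(fake_file))
print(tweets)  # [{'text': 'hello'}, {'text': 'world'}]
```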
Pandas - masking rows/columns between two dataframes where indexes are not shared

The Problem

I have two datasets that describe, let's say, the temperature at certain depths and at certain latitudes for a sea. The datasets are from two different models and therefore have differing resolution, with model 1 having a higher resolution for latitude and both models having different levels for the depth dimension. I've turned both datasets into pandas dataframes with depth as the vertical index and latitudes as the column labels. I want to mask out the rows (depths) and columns (latitudes) that are not shared between both dataframes, since I'll be taking a difference and don't want to interpolate data. I've found how to mask out certain values within rows and columns, but I want to mask out rows and columns in their entirety.

I've used np.intersect1d on the depths as lists to find which depths are not shared between the models, and I created a boolean list using a conditional statement showing True for every index where the value is unique to that dataframe. However, I'm not sure how to use this as a mask, or even if I can. DataFrame.mask says the "array conditional must be the same shape as self", but the array conditional is one-dimensional and the dataframe is two-dimensional. I'm not sure how to refer to the indexes of the dataframe only to apply the mask. I feel like I'm on the right track, but I'm not entirely sure, since I'm still new to pandas. (I've tried searching for similar questions, but none match my problem quite exactly from what I've seen.)

The Code (simplified working example)

Note - this was written in the Jupyter notebook environment.

import numpy as np
import pandas as pd

# Model 1 data
depthmod1 = [5, 10, 15, 20, 30, 50, 60, 80, 100]  # depth in meters
latmod1 = [50, 50.5, 51, 51.5, 52, 52.5, 53]  # latitude in degrees north
tmpumod1 = np.random.randint(273, 303, size=(len(depthmod1), len(latmod1)))  # temperature
dfmod1 = pd.DataFrame(tmpumod1, index=depthmod1, columns=latmod1)
print(dfmod1)

     50.0  50.5  51.0  51.5  52.0  52.5  53.0
5     299   300   300   293   285   293   273
10    273   288   293   292   290   302   273
15    277   279   284   302   280   294   284
20    291   295   277   276   295   279   274
30    281   284   284   275   295   284   282
50    284   276   291   282   286   295   295
60    298   294   289   294   285   289   288
80    285   284   275   298   287   277   300
100   292   295   294   273   291   276   290

# Model 2 data
depthmod2 = [5, 10, 15, 25, 35, 50, 60, 100]
latmod2 = [50, 51, 52, 53]
tmpumod2 = np.random.randint(273, 303, size=(len(depthmod2), len(latmod2)))
dfmod2 = pd.DataFrame(tmpumod2, index=depthmod2, columns=latmod2)
print(dfmod2)

      50   51   52   53
5    297  282  275  292
10   298  286  292  282
15   286  285  288  273
25   292  288  279  299
35   301  295  300  288
50   277  301  281  277
60   276  293  295  297
100  275  279  292  287

# Find shared depths
depthxsect = np.intersect1d(depthmod1, depthmod2)
print(depthxsect, depthxsect.shape)

Shared depths: [  5  10  15  50  60 100] (6,)

# Boolean mask for model 1
depthmask = dfmod1.index.isin(depthxsect) == False
print("Bool showing where mod1 index is NOT in mod2: ", depthmask)

Bool showing where mod1 index is NOT in mod2:  [False False False  True  True False False  True False]

# Mask data
dfmod1masked = dfmod1.mask(depthmask1, np.nan)
print(dfmod1masked)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-fedf013c2200> in <module>
----> 1 dfmod1masked = dfmod1.mask(depthmask1, np.nan)
      2 print(dfmod1masked)
[...]
ValueError: Array conditional must be same shape as self

The Question

How can I mask the rows by index such that I'm left only with the rows/indexes [5 10 15 50 60 100] usable in both dataframes? I'll be doing similar masking for the columns (latitudes), so hopefully the solution for the rows will work for the columns as well. I also do not want to merge the dataframes. They should remain separate unless a merge is needed for this.
depthxsect returns an np.array of the indices that you need. So you can skip creating the boolean array depthmask and just pass the array to your dataframe using .loc. You would use .mask if you were trying to keep all of the rows but return NaN values at the other indices.

After getting dfmod1 and depthxsect, you can simply use:

dfmod1.loc[depthxsect]

Full reproducible code:

import pandas as pd
import numpy as np

# Model 1 data
depthmod1 = [5, 10, 15, 20, 30, 50, 60, 80, 100]  # depth in meters
latmod1 = [50, 50.5, 51, 51.5, 52, 52.5, 53]  # latitude in degrees north
tmpumod1 = np.random.randint(273, 303, size=(len(depthmod1), len(latmod1)))  # temperature
dfmod1 = pd.DataFrame(tmpumod1, index=depthmod1, columns=latmod1)

depthmod2 = [5, 10, 15, 25, 35, 50, 60, 100]
latmod2 = [50, 51, 52, 53]
tmpumod2 = np.random.randint(273, 303, size=(len(depthmod2), len(latmod2)))
dfmod2 = pd.DataFrame(tmpumod2, index=depthmod2, columns=latmod2)

depthxsect = np.intersect1d(depthmod1, depthmod2)
dfmod1.loc[depthxsect]

Out[2]:
     50.0  50.5  51.0  51.5  52.0  52.5  53.0
5     284   291   280   287   297   286   277
10    294   279   302   283   284   298   291
15    278   296   286   298   279   275   286
50    284   281   297   290   302   299   280
60    290   301   302   298   283   286   287
100   285   283   297   287   289   282   283

I have included the approach you were trying as well. You have to specify the mask on a column; you were applying it to the entire dataframe:

import pandas as pd
import numpy as np

# Model 1 data
depthmod1 = [5, 10, 15, 20, 30, 50, 60, 80, 100]  # depth in meters
latmod1 = [50, 50.5, 51, 51.5, 52, 52.5, 53]  # latitude in degrees north
tmpumod1 = np.random.randint(273, 303, size=(len(depthmod1), len(latmod1)))  # temperature
dfmod1 = pd.DataFrame(tmpumod1, index=depthmod1, columns=latmod1)
dfmod1

depthmod2 = [5, 10, 15, 25, 35, 50, 60, 100]
latmod2 = [50, 51, 52, 53]
tmpumod2 = np.random.randint(273, 303, size=(len(depthmod2), len(latmod2)))
dfmod2 = pd.DataFrame(tmpumod2, index=depthmod2, columns=latmod2)

depthxsect = np.intersect1d(depthmod1, depthmod2)
depthmask = dfmod1.index.isin(depthxsect) == False
for col in dfmod1.columns:
    dfmod1[col] = dfmod1[col].mask(depthmask, np.nan)
dfmod1

Out[3]:
      50.0   50.5   51.0   51.5   52.0   52.5   53.0
5    289.0  274.0  297.0  274.0  277.0  278.0  277.0
10   282.0  280.0  277.0  302.0  297.0  289.0  278.0
15   300.0  282.0  297.0  297.0  300.0  279.0  291.0
20     NaN    NaN    NaN    NaN    NaN    NaN    NaN
30     NaN    NaN    NaN    NaN    NaN    NaN    NaN
50   285.0  297.0  292.0  301.0  296.0  289.0  291.0
60   295.0  299.0  278.0  295.0  299.0  293.0  277.0
80     NaN    NaN    NaN    NaN    NaN    NaN    NaN
100  292.0  293.0  289.0  291.0  289.0  276.0  286.0
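A stripped-down version of the .loc approach on deterministic data (the temperature values here are made up for illustration):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'temp': [280, 281, 282, 283]}, index=[5, 10, 20, 30])
df2_depths = [5, 20, 40]

# Keep only the depths present in both; np.intersect1d returns them sorted.
shared = np.intersect1d(df1.index, df2_depths)
subset = df1.loc[shared]
print(list(subset.index))        # [5, 20]
print(subset['temp'].tolist())   # [280, 282]
```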
Correct use of multiple windows for MVC pattern with Tkinter

I'm trying to make a Python program with a GUI in which various animations will be displayed on a canvas. I decided to use an MVC pattern and Tkinter. When launching my program, a window should pop up, and you have to choose the dimensions of the canvas before the GUI is displayed. However, I can't find an efficient way to do that. I tried with Dialog, Toplevel, Frame, etc., but as I am using an MVC pattern, I can't find how to link my controller functions with my GUI if the GUI window is opened after the settings window, as it is not instantiated yet.

class View(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        self.setting_window = SettingWindow(self)
        self.setting_window.ok_button.configure(command=self.open_editor)
        self.withdraw()
        self.mainloop()

    def open_editor(self):
        map_dim = self.setting_window.getValues()
        self.editor_window = Window(self, map_dim)
        self.setting_window.destroy()
The way I found to solve this problem was to create a single Tk() window and just unpack and pack the different frames. I guess it is not the best way to do it, but it worked!
Flask is not found despite it should have been installed

I am a beginner in Flask and I'm trying to code a web email application with Flask and Python. But right after trying to import Flask with the line from flask import Flask, it gives me the following error:

> Traceback (most recent call last):
>   File "C:\Users\fabia\OneDrive\Desktop\Ehemaligenseite OneDrive for Business_files\Email_Contact_Formular.py", line 1, in <module>
>     from flask import Flask
> ModuleNotFoundError: No module named 'flask'

But when I enter cmd there was the following answer:

> C:\Users\fabia>pip install flask
> Requirement already satisfied: flask in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (2.0.2)
> Requirement already satisfied: Werkzeug>=2.0 in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from flask) (2.0.2)
> Requirement already satisfied: Jinja2>=3.0 in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from flask) (3.0.3)
> Requirement already satisfied: itsdangerous>=2.0 in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from flask) (2.0.1)
> Requirement already satisfied: click>=7.1.2 in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from flask) (8.0.3)
> Requirement already satisfied: colorama in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from click>=7.1.2->flask) (0.4.4)
> Requirement already satisfied: MarkupSafe>=2.0 in c:\users\fabia\appdata\local\programs\python\python39\lib\site-packages (from Jinja2>=3.0->flask) (2.0.1)

So why can't PyCharm find Flask despite cmd saying it should be installed?
The pip install flask command in cmd installs Flask in the global environment. But PyCharm uses a virtual environment, so you need to install Flask in PyCharm's virtual environment: open PyCharm and in its terminal type pip install flask.
python function parameter control

def getBooks(self, name):
    query = "SELECT * FROM books"
    self.cursor.execute(query)
    books = self.cursor.fetchall()
    return books

I have a function called "getBooks", and this function is actually a combination of two functions: one version must work without taking the 'name' parameter, the other must take the 'name' parameter, because I have to change the SQL template according to 'name'. How can I provide that?
You can specify a default parameter of name to be None, and then treat the variable according to its value:

def getBooks(self, name=None):
    if name is None:
        ...
    else:
        ...
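A fuller sketch of how the two branches might look. The WHERE column and the ? placeholder style are assumptions here — the placeholder syntax varies by DB driver (e.g. %s for MySQLdb/psycopg2, ? for sqlite3):

```python
def get_books(cursor, name=None):
    # No name: fetch every book; with a name: filter on it.
    # Filtering on a column literally called "name" is an assumption about the schema.
    if name is None:
        cursor.execute("SELECT * FROM books")
    else:
        # Parameterized query: never interpolate name into the SQL string yourself.
        cursor.execute("SELECT * FROM books WHERE name = ?", (name,))
    return cursor.fetchall()
```

Inside the original class this would be called as self.getBooks() or self.getBooks("some title"); the same idea applies.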
Look for string in a range of lines and pick all the string after a particular and append it another file

I have file 1 with contents:

wire x;
wire y;
input a;
input b;
input c;
reg m;
reg n;

I have to put signals a, b, c only into another file, file 2, in the following manner:

assign inst.a=;
assign inst.b=;
assign inst.c=;

Can anyone please help me out on this issue using Perl or Python?
This one-line Perl program does it:

perl -lne 'print "assign inst.$1=;" if /^input\h+(\w+);/' 'file 1'
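Since the question allows Python too, here is an equivalent sketch; the regex mirrors the Perl one, and the filenames in the commented driver follow the question:

```python
import re

def extract_assigns(lines):
    # For every line of the form "input <name>;", emit "assign inst.<name>=;"
    out = []
    for line in lines:
        m = re.match(r'input\s+(\w+)\s*;', line)
        if m:
            out.append('assign inst.%s=;' % m.group(1))
    return out

# Typical driver (uncomment to run on real files):
# with open('file 1') as src, open('file 2', 'a') as dst:
#     dst.write('\n'.join(extract_assigns(src)) + '\n')
```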
How to use variable as value in xsl:apply-templates?

I'm trying to extract some data from an xml file and pass it to many html files based on specific nodes.

My source.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<products>
  <product>
    <id>1</id>
    <feature>Product description escaped html tags</feature>
  </product>
  <product>
    <id>2</id>
    <feature>Product description escaped html tags</feature>
  </product>
  .........................
  <product>
    <id>5</id>
    <feature>Product description escaped html tags</feature>
  </product>
</products>

Expected result: to have multiple html files with content like this:

<p>1</p>
Product description with html tags

I'm using this python code:

import lxml.etree as ET

doc = ET.parse('source.xml')
xslt = ET.parse('modify.xsl')
transform = ET.XSLT(xslt)
products = doc.findall('product')
for product in products:
    i = product.find('id')
    n = ET.XSLT.strparam(str(i))
    describ = transform(doc, item_num=n)
    f = open('file_nr_{}.html'.format(i.text), 'wb')
    f.write(describ)
    f.close()

My current stylesheet modify.xsl:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes" encoding="UTF-8" />
  <xsl:param name="item_num"/>

  <xsl:template match="/products">
    <xsl:apply-templates select="product[id=$item_num]" />
  </xsl:template>

  <xsl:template match="id" >
    <p><xsl:value-of select="."/></p>
    <xsl:value-of select="following-sibling::feature" disable-output-escaping="yes"/>
  </xsl:template>
</xsl:stylesheet>

..gives me multiple completely empty, zero-byte files.
But when i change the line:

<xsl:apply-templates select="product[id=$item_num]" />

to this:

<xsl:apply-templates select="product[id<4]" />

it gives me five files with the same content:

<p>1</p>
Product description with still escaped html tags
Product description with still escaped html tags
<p>2</p>
Product description with still escaped html tags
Product description with still escaped html tags
<p>3</p>
Product description with still escaped html tags
Product description with still escaped html tags

I don't know how to:
- properly use a variable in path matching only one <product> with a specific <id>
- effectively disable escaping
- properly ask google to find a solution... ;)

I tried this and this and this and this and searched here but I cannot use this knowledge in my case. Probably because I still don't understand how passing a variable value works. I'm trying to deal with this on my own since Friday, but the only result I have is a headache.. Please help.
Your attempt:

<xsl:template match="/products">
  <xsl:apply-templates select="product[id=$item_num]" />
</xsl:template>

<xsl:template match="id" >
  <p><xsl:value-of select="."/></p>
  <xsl:value-of select="following-sibling::feature" disable-output-escaping="yes"/>
</xsl:template>

applies templates to <product> nodes, but has a template for <id> nodes. Make a template for <product> nodes:

<xsl:template match="product">
  <p><xsl:value-of select="id"/></p>
  <xsl:value-of select="feature" disable-output-escaping="yes" />
</xsl:template>

Whenever XSLT cannot find a template for a specific node, it falls back to default behavior. Default behavior is "copy child text nodes to output, and apply templates to child elements". This explains why you see what you see with your code.

Regarding your other issue: .find('...') returns XML nodes, not string values, i.e. .find('id') finds an <id> element. You wanted to pass the .text of the found node as the XSLT parameter, not the node itself:

import lxml.etree as ET

doc = ET.parse('source.xml')
xslt = ET.parse('modify.xsl')
transform = ET.XSLT(xslt)
products = doc.findall('product')
for product in products:
    i = product.find('id').text
    describ = transform(doc, item_num=i)
    with open(f'file_nr_{i}.html', 'wb') as f:
        f.write(describ)
PANDAS reading dataframe from file properly

my file (text file) looks like:

 -1   1 2.99988E-02-4.93580E-17 4.28928E-17-2.01725E-16 4.57184E-18 1.54030E-16
 -1   2 2.99988E-02-4.93581E-17-3.85396E-17-2.02655E-16-4.41397E-17-2.23963E-16
 -1   3 2.99988E-02 2.47173E-17 4.28930E-17 1.60350E-16 5.28503E-17 1.53007E-16
...

i want to create a dataframe with header and index, so that it looks like:

   0  1  2  3  4  5  6
0  1  2.99988E-02-4.62001E-17 3.51002E-17-1.90612E-16 1.52704E-17 1.41065E-16
1  2  2.99988E-02-4.62001E-17-2.81042E-17-1.88765E-16-3.45762E-17-2.06278E-16
...

i tried this but it didn't work out:

df = pd.read_table(file_dir, delim_whitespace=True, header=None)

and

df = pd.read_table(file_dir, sep='s+', header=None)
This is not a delimited file, but a fixed-widths one. It used to be a common format in the '80s when we used the Fortran IV language...

But it is still supported by pandas with the read_fwf function:

df = pd.read_fwf(file_dir, header=None, widths=(3, 10) + 6 * (12,))

It should directly give:

     0    1         2  ...             5             6             7
0 -1.0  1.0  0.029999  ... -2.017250e-16  4.571840e-18  1.540300e-16
1 -1.0  2.0  0.029999  ... -2.026550e-16 -4.413970e-17 -2.239630e-16
2 -1.0  3.0  0.029999  ...  1.603500e-16  5.285030e-17  1.530070e-16
...
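As a side note, if pandas is not available, the same fixed-width split can be done with plain string slicing. The widths below are the same guess passed to read_fwf above, inferred from the sample lines:

```python
def parse_fwf_line(line, widths=(3, 10) + 6 * (12,)):
    # Cut the line into fixed-width fields and convert each to float.
    fields, pos = [], 0
    for w in widths:
        chunk = line[pos:pos + w].strip()
        if chunk:
            fields.append(float(chunk))
        pos += w
    return fields
```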
Converting all text files with multiple encodings in a directory into a utf-8 encoded text files

I am a new starter in Python and, in general, in coding. So any help is greatly appreciated.

I have more than 3000 text files in a single directory with multiple encodings. And I need to convert them into a single encoding (e.g. utf8) for further NLP work. When I checked the type of these files using shell, I identified the following encodings:

Algol 68 source text, ISO-8859 text, with very long lines
Algol 68 source text, Little-endian UTF-16 Unicode text, with very long lines
Algol 68 source text, Non-ISO extended-ASCII text, with very long lines
Algol 68 source text, Non-ISO extended-ASCII text, with very long lines, with LF, NEL line terminators
ASCII text
ASCII text, with very long lines
data
diff output text, ASCII text
ISO-8859 text, with very long lines
ISO-8859 text, with very long lines, with LF, NEL line terminators
Little-endian UTF-16 Unicode text, with very long lines
Non-ISO extended-ASCII text
Non-ISO extended-ASCII text, with very long lines
Non-ISO extended-ASCII text, with very long lines, with LF, NEL line terminators
UTF-8 Unicode (with BOM) text, with CRLF line terminators
UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators
UTF-8 Unicode text, with very long lines, with CRLF line terminators

Any ideas how to convert text files with the above mentioned encodings into text files with a utf-8 encoding?
I met the same problem as you. I used two steps to solve it; the code is below.

First, use the chardet package to identify the encoding of each file:

import os
import codecs
import chardet

for text in os.listdir(path):
    txtPATH = os.path.join(path, text)
    with open(txtPATH, 'rb') as f:
        data = f.read()
    coding = chardet.detect(data)['encoding']
    print(coding)

Second, still inside the loop, if the encoding of the text is not utf-8, rewrite the file in place with utf-8 encoding:

    if coding and coding.lower() != 'utf-8':
        print(txtPATH)
        print(coding)
        with codecs.open(txtPATH, "r", coding) as sourceFile:
            contents = sourceFile.read()
        with codecs.open(txtPATH, "w", "utf-8") as targetFile:
            targetFile.write(contents)

Hope this can help! Thanks
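If you'd rather avoid the chardet dependency, a cruder stdlib-only fallback is to try a short list of candidate encodings in order. This is a heuristic sketch and can mis-detect legacy 8-bit encodings, since e.g. latin-1 decodes any byte sequence without error:

```python
def to_utf8_text(data, candidates=('utf-8-sig', 'utf-16', 'cp1252')):
    # Try each candidate; the first one that decodes cleanly wins.
    # utf-8-sig also strips a UTF-8 BOM if one is present.
    for enc in candidates:
        try:
            return data.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: latin-1 maps every byte, so this never raises.
    return data.decode('latin-1')
```

The returned str can then be written back with open(path, 'w', encoding='utf-8').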
Chromedriver test work locally and on CI CD env python

What I have: CURRENT_BROWSER=chrome in the Windows environment variables.

def before_scenario(context, scenario):
    use_fixture(browser, context)

def after_scenario(context, scenario):
    context.cache.clear()
    context.driver.quit()

@fixture
def browser(context):
    browser_type = os.getenv('CURRENT_BROWSER', 'chrome')
    if browser_type is None:
        raise Exception(f"Unable to identify test browser which is {browser_type}")
    if browser_type == 'chrome':
        chrome_options = webdriver.ChromeOptions()
        chrome_options.add_argument('--headless')
        # chrome_options.add_argument('--incognito')
        context.driver = webdriver.Chrome(desired_capabilities=chrome_options.to_capabilities())
    if browser_type == 'firefox':
        pass
    yield context.driver

What I need is an answer on how to deal with the chromedriver on CI/CD (Azure DevOps). Should I also put an environment variable, similar to the browser one, into the PATH and do the same on CI/CD, or is there a different way to deal with the chromedriver? I need the above code to work both locally and on CI/CD, and I have never done that before. Locally I use the above code plus a chromedriver.exe added to the project structure.
If you are using a Microsoft-hosted agent: windows-latest, windows-2019 or vs2017-win2016, the Chrome Driver 87.0.4280.88 is already installed.If you want to use another version of Chrome Driver, you can download it using npm:- script: npm install chromedriver --chromedriver_version=LATESTClick this document for detailed information.If you are using a Self-hosted agent and the agent is on a machine that has already downloaded the Chrome Driver and configured PATH, you can use Chrome Driver just as you work on your own machine.
Group by and Aggregate with nested Field

I want to group by with a nested serializer field and compute some aggregate function on other fields.

My Models Classes:

class Country(models.Model):
    code = models.CharField(max_length=5, unique=True)
    name = models.CharField(max_length=50)

class Trade(models.Model):
    country = models.ForeignKey(Country, null=True, blank=True, on_delete=models.SET_NULL)
    date = models.DateField(auto_now=False, auto_now_add=False)
    exports = models.DecimalField(max_digits=15, decimal_places=2, default=0)
    imports = models.DecimalField(max_digits=15, decimal_places=2, default=0)

My Serializer Classes:

class CountrySerializers(serializers.ModelSerializer):
    class Meta:
        model = Country
        fields = '__all__'

class TradeAggregateSerializers(serializers.ModelSerializer):
    country = CountrySerializers(read_only=True)
    value = serializers.DecimalField(max_digits=10, decimal_places=2)

    class Meta:
        model = Trade
        fields = ('country', 'value')

I want to send import or export as a query parameter and apply an aggregate (avg) over it, shown by distinct countries.

My View Class:

class TradeAggragateViewSet(viewsets.ModelViewSet):
    queryset = Trade.objects.all()
    serializer_class = TradeAggregateSerializers

    def get_queryset(self):
        import_or_export = self.request.GET.get('data_type')
        queryset = self.queryset.values('country').annotate(value=models.Avg(import_or_export))
        return queryset

I want to get the data in a format like:

[
  {
    "country": {
      "id": ...,
      "code": ...,
      "name": ...
    },
    "value": ...
  },
  ...
]

But I'm having an error on the country serializer:

AttributeError: Got AttributeError when attempting to get a value for field `code` on serializer `CountrySerializers`. The serializer field might be named incorrectly and not match any attribute or key on the `int` instance. Original exception text was: 'int' object has no attribute 'code'.
I have found the solution. Actually, to_representation in the serializer class got only the id of the country, not its object, so I overrode to_representation:

class TradeAggregateSerializers(serializers.ModelSerializer):
    ......

    def to_representation(self, instance):
        # instance['country'] = some id, not an object
        # getting the actual object
        country = Country.objects.get(id=instance['country'])
        instance['country'] = country
        data = super(TradeAggregateSerializers, self).to_representation(instance)
        return data
Including many-to-many from another model in field in django admin interface

Greetings,

I'm sure there is a simple solution to what I'm trying to do but unfortunately I wasn't able to find it in the documentation. I have the following model (simplified version shown):

models.py:

class Student(models.Model):
    student_id = models.IntegerField(primary_key=True, unique=True, db_index=True, max_length=9)
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)

    def __unicode__(self):
        return u"%s %s" % (self.first_name, self.last_name)

class Course(models.Model):
    course_id = models.AutoField(primary_key=True, unique=True, db_index=True, max_length=4)
    title = models.CharField(max_length=50)
    dept = models.CharField(max_length=6)
    number = models.IntegerField(max_length=5)
    student_id = models.ManyToManyField(Student, blank=True)

    def __unicode__(self):
        return u"%s %s" % (self.dept, self.number)

What I wanted was to be able to add students to multiple classes in the admin interface, similar to the way that I can add students in the classes admin interface. If there is yet another way that seems more beneficial I would be interested in that as well.

Thanks in advance for any help!
You can use the inlines in your related model, or this blog post might be of some help.
Python script to copy file and directory from one remote server to another remote server

I am running a python script - ssh.py - on my local machine to transfer a file and directory from one remote server (ip = 35.189.168.20) to another remote server (ip = 10.243.96.94). This is how my code looks:

HOST = "35.189.168.207"
USER = "sovith"
PASS = "xxx"
destHost = "10.243.96.94"
destUser = "root"
destPass = "xxx"

# SSH Connection
client1 = paramiko.SSHClient()
client1.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client1.connect(HOST, username=USER, password=PASS)

# SFTP inside SSH server connection
with pysftp.Connection(host=destHost, username=destUser, password=destPass) as sftp:
    # put build directory - sftp.put_r(source, destination)
    sftp.put_r('/home/sovith/build', '/var/tmp')
    sftp.close()

client1.close()

Let me just tell you that all directory paths and everything are correct. I just feel that there's some logical mistake inside the code. The output i got after execution is:

Traceback (most recent call last):
  File "ssh.py", line 108, in <module>
    func()
  File "ssh.py", line 99, in func
    sftp.put_r('/home/sovith/nfmbuild', '/var/tmp')
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pysftp/__init__.py", line 427, in put_r
    os.chdir(localpath)
FileNotFoundError: [Errno 2] No such file or directory: '/home/sovith/build'

Can you just correct the mistake in my code or suggest any better method to accomplish the task? In simpler words, I want a script to copy files and directories between two remote servers.
That's not possible. Not the way you do it. The fact that you open a connection to one remote server does not make the following code magically work as if it were executed on that server. It still runs on the local machine, so the code is trying to upload local files (which do not exist). There's actually no way to transfer files between two remote SFTP servers from the local machine.

In general, you will need to download the files from the first server to a local temporary directory, and then upload them to the second server. See Python PySFTP transfer files from one remote server to another remote server.

Another option is to connect to one remote server using SSH and then run an SFTP client on that server to transfer the files to/from the second server. But that's not copying from one SFTP server to another SFTP server; that's copying from one SSH server to an SFTP server. You need SSH access, mere SFTP access is not enough. To execute a command on a remote SSH server, use pysftp Connection.execute. Though using pysftp to execute a command on a server is a bit overkill; you can use Paramiko directly instead: Python Paramiko - Run command. (pysftp is just a wrapper around Paramiko with more advanced SFTP functions.)
Django(djongo) can't connect to MongoDB Atlas after Heroku deployment

I managed to get it working locally (different cluster, separate settings.py), but not after deploying to Heroku.

Heroku - automatically adds a DATABASE_URL config var pointing at a postgresql database, and I cannot remove/edit it.

MongoDB Atlas - I've set the MongoDB Atlas cluster to allow IPs from everywhere. And the password has no funny characters.

django production settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'DBProd',
        'CLIENT': {
            'host': "mongodb+srv://XXX:YYY@ZZZ.pc4rx.mongodb.net/DBProd?retryWrites=true&w=majority",
        }
    }
}

I ran migrate straight after the deployment and it's all green OKs:

heroku run python manage.py migrate

Everything works function-wise, just that the data are not stored in the MongoDB Atlas cluster. There are lots of posts from various sites on this, but they all have different instructions... Some of the posts I tried to follow:

https://developer.mongodb.com/how-to/use-atlas-on-heroku/
Django + Heroku + MongoDB Atlas (Djongo) = DatabaseError with No Exception
Connecting Heroku App to Atlas MongoDB Cloud service

-- A very confused beginner
I've been having the same issue. Everything works fine locally; the problem is when deploying on Heroku. I have added 'authMechanism': 'SCRAM-SHA-1' and I have also configured MongoDB as my database by adding a MONGODB_URI config var. Heroku still autoconfigures the DATABASE_URL config var with a postgresql database, and I cannot remove/edit it.

In my Javascript code I used fetch('127.0.0.1:8000/<something>'), which I specified in my urls.py, and that way views.py read data from pymongo and returned it as a response. After deploying my app to Heroku (and hardcoding the 127.0.0.1:8000 to <myappname>.heroku.com) the same fetch seems to return [] instead of JSON containing the values from MongoDB.

This is the most similar issue I found in a post; I hope I'm not off topic.
Remove a row based on two empty columns in python pandas

I want to be able to remove rows that are empty in both columns NymexPlus and NymexMinus. Right now the code I have is:

df.dropna(subset=['NymexPlus'], inplace=True)

The thing about this code is that it will also delete rows where only NymexPlus is empty while NymexMinus still has a value, which I don't want to happen. Is there an If/AND statement that will work in terms of only getting rid of rows that are empty in both columns?
Use a list as subset parameter and how='all':df.dropna(subset=['NymexPlus', 'NymexMinus'], how='all', inplace=True)
Why is it a TypeError to use an arithmetic expression in %-style print formatting?

I tried to input a float number and output a simple result using two methods:

t = float(input())
print('{:.2f}'.format(1.0 - 0.95 ** t))
print('%.2f' % 1.0 - 0.95 ** t)

The first method worked but a TypeError occurred in the second one: unsupported operand type(s) for -: 'str' and 'float'. What's wrong with that?
On this line:

print('%.2f' % 1.0 - 0.95 ** t)

Python is trying to do '%.2f' % 1.0 first, then subtracting 0.95 ** t from the result. That's a problem because the first term is a string and the second one is a float.

Use parentheses to control the order of operations. That line should be:

print('%.2f' % (1.0 - 0.95 ** t))
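A minimal demonstration of the precedence at work — the % formatting operator binds more tightly than binary -, so without parentheses the string formatting happens first:

```python
t = 2.0

# ('%.2f' % 1.0) - 0.95 ** t  -> str minus float -> TypeError
try:
    '%.2f' % 1.0 - 0.95 ** t
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for -: 'str' and 'float'

# Parenthesized, the arithmetic runs first and only the result is formatted:
print('%.2f' % (1.0 - 0.95 ** t))  # 0.10
```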
How can I save the random output to use it in another function in python

I am new to python3. I am trying very hard to add the output of the functions. Is there any way that I can save the output of a random integer so that I can add them together? Please help me.

def dice():
    import random
    rollball = int(random.uniform(1, 6))
    return (rollball)

def dice2():
    import random
    rollball = int(random.uniform(1, 6))
    return (rollball)

print(dice())
input("you have 10 chances left")
print(dice2())
input("you have 9 chances left")
print(dice() + dice2())
# i want to print this last function but by adding only this first and second nothing else
Use a variable, or set a global variable for it:

import random

def dice():
    rollball = int(random.uniform(1, 6))
    return (rollball)

def dice2():
    rollball = int(random.uniform(1, 6))
    return (rollball)

roll1 = dice()
print(roll1)
input("you have 10 chances left")
roll2 = dice2()
print(roll2)
input("you have 9 chances left")
print(roll1 + roll2)  # i want to print this last function

Or use:

import random

roll1 = 0
roll2 = 0

def dice():
    global roll1
    rollball = int(random.uniform(1, 6))
    roll1 = rollball
    return (rollball)

def dice2():
    global roll2
    rollball = int(random.uniform(1, 6))
    roll2 = rollball
    return (rollball)

print(dice())
input("you have 10 chances left")
print(dice2())
input("you have 9 chances left")
print(roll1 + roll2)  # i want to print this last function
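One side note on the dice code itself (kept as-is above): int(random.uniform(1, 6)) truncates the float, so it effectively returns 1–5 and practically never rolls a 6. If a fair six-sided die is intended, random.randint is the idiomatic choice:

```python
import random

def roll_die(sides=6):
    # randint is inclusive on both ends, so this really can return 6.
    return random.randint(1, sides)
```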
trying to import png images to torchvision

I am attempting to import images for use with torch and torchvision. But I am receiving this error:

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "c:\python38\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "c:\python38\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 81, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.Image.Image'>

Based on this post, I am converting them to Tensor: https://discuss.pytorch.org/t/typeerror-default-collate-batch-must-contain-tensors-numpy-arrays-numbers-dicts-or-lists-found-class-imageio-core-util-array/62667

Here is my code:

import torch
import torchvision
import torchvision.transforms
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.ToTensor()
])

dataset = torchvision.datasets.ImageFolder('datasets')
dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True, num_workers=12)

tensor_dataset = []
for i, data in enumerate(dataloader, 0):
    Tensor = torch.tensor(data)
    tensor_dataset.append(Tensor.flatten)

The last part is from https://github.com/TerragonDE/PyTorch but I have had no success.
The data I am trying to load is from here: http://www.cvlibs.net/datasets/kitti/

How can I solve this?

UPDATE: Thanks @trialNerror, but now I am getting this error:

ValueError                                Traceback (most recent call last)
<ipython-input-6-aa72392b67e8> in <module>
      1 for i, data in enumerate(dataloader, 0):
----> 2     Tensor = torch.tensor(data)
      3     tensor_dataset.append(Tensor.flatten)

ValueError: only one element tensors can be converted to Python scalars

This is what I have found so far but am not sure how to apply it: https://discuss.pytorch.org/t/pytorch-autograd-grad-only-one-element-tensors-can-be-converted-to-python-scalars/56681

UPDATE 2: The reason why I didn't end up using the dataloader is because I end up getting this error:

num_epochs = 10
loss_values = list()

for epoch in range(1, num_epochs):
    for i, data in enumerate(train_array, 0):
        outputs = model(data.unsqueeze(0))
        loss = criterion(outputs, data.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print('Epoch - %d, loss - %0.5f ' % (epoch, loss.item()))
    loss_values.append(loss.item())

torch.Size([1, 16, 198, 660])
torch.Size([1, 32, 97, 328])
torch.Size([1, 1018112])

RuntimeError                              Traceback (most recent call last)
<ipython-input-106-5e6fa86df079> in <module>
      4 for epoch in range(1, num_epochs):
      5     for i, data in enumerate(train_array, 0):
----> 6         outputs = model(data.unsqueeze(0))
      7         loss = criterion(outputs, data.unsqueeze(0))
      8

c:\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input-90-467a3f84a03f> in forward(self, x)
     29         print(out.shape)
     30
---> 31         out = self.fc(out)
     32         print(out.shape)
     33

c:\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

c:\python38\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
     85
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88
     89     def extra_repr(self):

c:\python38\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
   1608     if input.dim() == 2 and bias is not None:
   1609         # fused op is marginally faster
-> 1610         ret = torch.addmm(bias, input, weight.t())
   1611     else:
   1612         output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [1 x 1018112], m2: [512 x 10] at C:\w\b\windows\pytorch\aten\src\TH/generic/THTensorMath.cpp:41

I realize that if you have m1: [a * b] and m2: [c * d] then b and c have to be the same value, but I am not sure what the best way is to resize my images.
Your transform variable is unused, it should be passed to the Dataset constructor:`dataset = torchvision.datasets.ImageFolder('datasets', transform=transform)`Because of that, the ToTensor is never applied to your data, and thus they remain PIL images, not tensors.
Selecting distinct random items from a list

I am trying to make a rummy program in python 3.8, and I have a set list of all the possible cards. How do I pick 13 random distinct cards such that once those cards are chosen by the player, the other player cannot receive them?

For example:

card = ['Ah','Ad','Ac','As','2h','2d','2c','2s','3h','3d','3c','3s',
        '4h','4d','4c','4s','5h','5d','5c','5s','6h','6d','6c','6s',
        '7h','7d','7c','7s','8h','8d','8c','8s','9d','9c','9h','9s',
        '10h','10d','10c','10s','Jh','Jd','Jc','Js','Qh','Qd','Qc','Qs',
        'Kh','Kd','Kc','Ks','Joker1','Joker2']

n1 = [random.choice(card), random.choice(card), random.choice(card),
      random.choice(card), random.choice(card), random.choice(card),
      random.choice(card), random.choice(card), random.choice(card),
      random.choice(card), random.choice(card), random.choice(card),
      random.choice(card)]

If this command is done, the cards are not distinct: 'Kc' can be repeated twice in the 13 cards. Can you please help me?
Use random.sample, which selects unique values:

from random import sample

card = ['Ah','Ad','Ac','As','2h','2d','2c','2s','3h','3d','3c','3s',
        '4h','4d','4c','4s','5h','5d','5c','5s','6h','6d','6c','6s',
        '7h','7d','7c','7s','8h','8d','8c','8s','9d','9c','9h','9s',
        '10h','10d','10c','10s','Jh','Jd','Jc','Js','Qh','Qd','Qc','Qs',
        'Kh','Kd','Kc','Ks','Joker1','Joker2']

# e.g.
player_1_selected = sample(card, 13)
# ['5s', '10c', 'As', '2h', 'Qh', 'Kc', '10s', '4h', 'Qc', '9h', '8c', '4d', '3s']
print(player_1_selected)

remaining_to_select = list(set(card) - set(player_1_selected))
# ['3c', 'Ac', 'Qs', '6h', '9s', '7s', '5c', '3h', 'Ad', 'Qd', '9d', '7h', '10d',
#  '6d', '2d', '3d', '5h', '7d', '6c', 'Kd', '2s', 'Jh', '8s', '9c', 'Kh', '6s',
#  'Ah', '10h', 'Jd', '7c', 'Ks', '4c', '2c', 'Joker2', '8h', '8d', 'Jc', 'Js',
#  'Joker1', '5d', '4s']
print(remaining_to_select)
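An alternative sketch for dealing several non-overlapping hands at once: shuffle a copy of the deck a single time and slice consecutive hands off the top. This also keeps the original card list intact:

```python
import random

def deal(deck, hand_size=13, hands=2):
    # Shuffle a copy so the caller's deck is untouched, then slice
    # consecutive, non-overlapping hands off the shuffled copy.
    shuffled = deck[:]
    random.shuffle(shuffled)
    return [shuffled[i * hand_size:(i + 1) * hand_size] for i in range(hands)]
```

Calling deal(card) returns two 13-card hands that are guaranteed to share no card.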
Retrieve data from django-rest to show in a form

I have two classes linked with a ForeignKey. My first class is "Categories", with "ID" and "Name" properties. The second class is "Documents", with "ID", "Title" and "User" properties; the last one is linked with a category defined in the first class.

I'm developing the front-end based on Vue, so I want to know how to retrieve data from django-rest to show it in a form. For example, when a user adds a new doc, he must choose between all category options. Note that categories will be common to all documents.

Example: Categories = [{"ID": 0, "Name": "Sports"}, {"ID": 1, "Name": "Music"}]

Form to add a new document:
Title: XXXX
Category: Sports, Music

models.py:

from django.db import models

class Category(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=200)

class Documents(models.Model):
    id = models.AutoField(primary_key=True)
    title = models.CharField(max_length=200)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

I think that if I get an answer like the one below it will resolve my problem:

{"docs": [{"ID": 1, "Title": "Doc1", "Category": "Sports"},
          {"ID": 2, "Title": "Doc2", "Category": "Music"}],
 "categories": [{"ID": 0, "Name": "Sports"}, {"ID": 1, "Name": "Music"}]}
You could make a separate request to get all the categories before you create the form. To do this, you need to create a CategorySerializer and a ListCategoryProvider:

class CategorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Category
        fields = {'name'}

class ListCategoryProvider(generics.ListAPIView):
    serializer_class = CategorySerializer

    def get_queryset(self):
        queryset = Category.objects.all()
        return queryset

Then you need to parse the JSON containing all the category names to display them in your form.
BeautifulSoup fails to parse long view state

I try to use BeautifulSoup4 to parse the html retrieved from http://exporter.nih.gov/ExPORTER_Catalog.aspx?index=0. If I print out the resulting soup, it ends like this:

kZXI9IjAi"/></form></body></html>

Searching for the last characters 9IjaI in the raw html, I found that it's in the middle of a huge viewstate. BeautifulSoup seems to have a problem with this. Any hint what I might be doing wrong or how to parse such a page?
BeautifulSoup uses a pluggable HTML parser to build the 'soup'; you need to try out different parsers, as each will treat a broken page differently. I had no problems parsing that page with any of the parsers, however:

>>> from bs4 import BeautifulSoup
>>> import requests
>>> r = requests.get('http://exporter.nih.gov/ExPORTER_Catalog.aspx?index=0')
>>> for parser in ('html.parser', 'lxml', 'html5lib'):
...     print repr(str(BeautifulSoup(r.text, parser))[-60:])
...
';\r\npageTracker._trackPageview();\r\n</script>\n</body>\n</html>\n'
'();\r\npageTracker._trackPageview();\r\n</script>\n</body></html>'
'();\npageTracker._trackPageview();\n</script>\n\n\n</body></html>'

Make sure you have the latest BeautifulSoup4 package installed; I have seen consistent problems in the 4.1 series solved in 4.2.