Tensorflow object detection training makes python crash

I am training the ssd_mobilenet_v1_coco_2017_11_17 model from the Tensorflow object detection model zoo. My dataset is satellite imagery and my aim is to detect vehicles in the images, but training fails with Python memory issues. I am training on CPU and my Windows 10 machine has 32 GB RAM. The TFRecord file for training is around 1.7 GB in size. I am unable to determine the reason for this failure. Please help.
Try reducing the batch size, queue capacity, and number of reader threads; on a CPU-only machine these are the usual causes of running out of memory during training.
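For reference, these knobs live in the pipeline config file used by the TF Object Detection API. A hedged sketch — field names and defaults vary between API versions, so check the proto definitions shipped with your checkout:

```
train_config {
  batch_size: 8          # lower this first when memory runs out
}
train_input_reader {
  num_readers: 2         # fewer reader threads -> less memory pressure
  queue_capacity: 500    # smaller queues hold fewer decoded examples
  min_after_dequeue: 250
  tf_record_input_reader {
    input_path: "train.record"
  }
}
```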
Python3 - Adding User Response to a List Instead of Over-writing Existing Data

I am trying to write a basic store-front script that loops until the customer says no to the question. Each time there's input of an Item Number, I'm trying to store it and then eventually be able to match those numbers up with the Item Name and the Price (not quite yet, though). I am just, now, trying to get it to add to the empty list "item_nums" instead of adding the last entry and over-writing the previous numbers.

```python
# STOREKEEPER
products = ['Notebook', 'Atari', 'TrapperKeeper', 'Jeans', 'Insects', 'Harbormaster', 'Lobotomy', 'PunkRock', 'HorseFeathers', 'Pants', 'Plants', 'Salami']
prices = ['$4.99', '$99.99', '$89.99', '$3.99', '$2.99', '$299.99', '$19.99', '$3.99', '$4.99', '$2.99', '$119.99', '$1.99']
SKUs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
item_nums = ()
quantity = []
response = ''

# MORE VARIABLES AND FUNCTIONS WILL GO HERE

print("Jay's House of Rip-Offs\n\n")

titles = ['Item Number', 'Item Name', 'Price']
data = [titles] + list(zip(SKUs, products, prices))
for i, d in enumerate(data):
    line = '|'.join(str(x).ljust(16) for x in d)
    print(line)
    if i == 0:
        print('-' * len(line))

response = str(input("Order products [Y / N]?: "))
while response != 'N':
    item_nums = input("Enter an item number: ")
    SKUs.append(item_nums)
    response = str(input("Order products [Y / N]?: "))
    if response == 'N':
        break
print("Here is the list of items you ordered: ", item_nums[0])
```
I'm not sure why you're appending to SKUs — you need a new list to track order numbers:

```python
orders = []
while str(input("Order products [Y / N]?: ")) != 'N':
    item_nums = input("Enter an item number: ")
    orders.append(item_nums)
print("Here is the list of items you ordered: ", orders)
```
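Once the order numbers live in their own list, matching them back to names and prices (the part the asker says is coming next) can be sketched with a dict keyed by SKU. The shortened inputs below are made up for illustration; note that values collected via input() are strings, so real code would convert them with int() first:

```python
products = ['Notebook', 'Atari', 'TrapperKeeper']
prices = ['$4.99', '$99.99', '$89.99']
SKUs = [1, 2, 3]

# one lookup table instead of three parallel lists
catalog = dict(zip(SKUs, zip(products, prices)))

orders = [2, 1, 2]  # what orders.append(int(item_nums)) would have collected
receipt = [(sku,) + catalog[sku] for sku in orders]
print(receipt)  # [(2, 'Atari', '$99.99'), (1, 'Notebook', '$4.99'), (2, 'Atari', '$99.99')]
```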
Postgresql failed to start

I am trying to use PostgreSQL on Ubuntu. I installed it and everything was working fine. However, I needed to change the location of my database due to space constraints, so I followed an online guide. I stopped PostgreSQL, created a new empty directory, and gave it permissions using

```shell
chown postgres:postgres /my/dir/path
```

That worked fine too. Then I used

```shell
initdb -D /my/dir/path
```

to initialize my database. I also changed the data path in the postgresql.conf file to my new directory. When I now try to start the database, it says: "The postgresql server failed to start, please check the log file." However, there is no log file! Something got screwed up when I changed the default directory. How do I fix this?
First: You may find it easier to manage your Pg installs on Ubuntu using the custom tools Ubuntu provides as part of pg_wrapper: pg_createcluster, pg_dropcluster, pg_ctlcluster, etc. These integrate with the Ubuntu startup scripts and move the configuration to /etc/postgresql/, where Ubuntu likes to keep it, instead of the PostgreSQL default of keeping it in the datadir. To move where the actual files are stored, use a symbolic link (see below).

When you have a problem, how are you starting PostgreSQL? If you're starting it via pg_ctl it should work fine, because you have to specify the data directory location. If you're using your distro package scripts, though, they don't know you've moved the data directory. On Ubuntu, you will need to change configuration in /etc/postgresql to tell the scripts where the data dir is, probably pg_ctl.conf or start.conf for the appropriate version. I'm not sure of the specifics, as I've never needed to do it. This is why:

There's a better way. Use a symbolic link from your old datadir location to the new one. PostgreSQL and the setup scripts will happily follow it, and you won't have to change any configuration:

```shell
cd /var/lib/postgresql/9.1
mv main main.old
ln -s /new/datadir/location main
```

I'm guessing "9.1" because you didn't give your Ubuntu version or your PostgreSQL version.

An alternative is to use mount -o bind to map your new datadir location into the old place, so nothing notices the difference. Then add the bind mount to /etc/fstab to make it persistent across reboots. You only need to do that if one of the tools doesn't like the symbolic link approach; I don't think that'll be an issue with pg_wrapper etc.

You should also note that since you've used initdb manually, your new datadir will have its configuration directly inside the datadir, not in /etc/postgresql/. It's way easier if you just use the Ubuntu cluster management scripts instead.
the same audio have different length using different tools (librosa, ffprobe)

I want to measure an audio file's duration. I'm using two different tools and got different values.

1. ffprobe. I'm using this line to get the duration:

```shell
ffprobe -i audio.m4a -show_entries format=duration -v quiet -of csv="p=0"
```

result: 780.320000 seconds

2. Librosa (Python library). I'm using these lines to get the duration:

```python
y1, sr1 = librosa.load(audio_path, sr=44100)
librosa.get_duration(y1, sr1) * 1000
```

result: 780329.7959183673 milliseconds

Does anyone know what's causing the difference?
This is unlikely to be just floating point error: the gap is about 9.8 ms, roughly 432 samples at 44.1 kHz, which is far too large for rounding differences. More likely the two tools disagree about how much audio the file actually contains. ffprobe reports the duration stored in the container metadata, while librosa decodes the stream and counts samples, and for AAC/m4a files the encoder delay and padding samples may or may not be trimmed depending on the decoder. Small discrepancies of this size between container-reported and decoded durations are normal for m4a.
python gspread updating multiple cells from response body

I am using this Python script to take a response from the Progresso API: http://docs.progresso.apiary.io/#reference/behaviour/behaviour-events-collection/get-behaviour-events

```python
from urllib2 import Request, urlopen
import smtplib
import gspread
from oauth2client.service_account import ServiceAccountCredentials

headers = {'Authorization': 'Bearer [CURRENT_TOKEN]'}
request = Request('https://private-anon-ae5edf57e7-progresso.apiary-mock.com/BMEvents/?Behaviour=new', headers=headers)
response_body = urlopen(request).read()

scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('ProgressoAPI-2f6ecaa6635c.json', scope)
gc = gspread.authorize(credentials)

wks = gc.open("Progresso Test").sheet1
wks.clear()
cell_list = wks.range('A1:H20')
for cell in cell_list:
    cell.value = response_body
wks.update_cells(cell_list)
```

I know the cell.value = response_body line is wrong and I don't know how I can get it right — I am stuck.
It appears in every cell like this:

```
"{ ""BehaviourEntryId"": 13798177, ""LearnerId"": 245277, ""LearnerCode"": ""2009-0080"", ""RegGroup"": ""U6-RWE"",
   ""Behaviour"": ""Negative"", ""IncidentDate"": ""2017-02-07"", ""Subject"": ""BE"", ""Location"": ""CLS"",
   ""Published"": ""Yes"", ""Creator"": ""DhDr"", ""Editor"": null, ""Assignee"": ""DiRo"", ""Status"": ""Completed"",
   ""Details"": [ { ""Category"": ""CL"", ""Type"": ""CLatt"", ""Severity"": ""S2"", ""point"": 0 },
                  { ""Category"": ""CL"", ""Type"": ""CLBEH"", ""Severity"": ""S2"", ""point"": 2 } ],
   ""Comments"": [ { ""BehaviourEntryCommentId"": 5648278, ""Confidential"": true, ""Comment"": ""Asked to go to the toilet and went to the one furthest away just to waste time."" },
                   { ""BehaviourEntryCommentId"": 5648279, ""Confidential"": false, ""Comment"": ""Spat gum out on floor"" },
                   { ""BehaviourEntryCommentId"": 5648280, ""Confidential"": false, ""Comment"": ""Was rude to memeber of Staff"" } ],
   ""Actions"": [ ""HTO"", ""ISO"" ]}"
```

How do I separate the text to how I want in the cell range and bulk update it?
If you mean something like two columns, with one cell being "BehaviourEntryId" and the one next to it 13798177, you can try something like this:

```python
import json

response = json.loads(response_body)  # decode the JSON response string, returns a dict
response_pairs = list(response.items())
for i in range(1, len(response_pairs) + 1):
    current_key, current_value = response_pairs[i - 1]
    wks.update_acell('A{}'.format(i), current_key)
    wks.update_acell('B{}'.format(i), current_value)
```
How to use multiple "or" in python code

My code below only prints "Remove Special Character", but if I leave only ("#") it runs very well.

```python
def name_character(word=input("Username: ")):
    if ("#") or ("$") or ("&") in word:
        return print("Remove Special Character")
    if word == "":
        return print("Enter Username")
    else:
        return print(word)

(name_character())
```
Try this:

```python
>>> username = "foo#"
>>> any(x in username for x in "#&$")
True
>>> username = "bar"
>>> any(x in username for x in "#&$")
False
```
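As for why the original if always fires: Python parses it as ("#") or (("$") or ("&" in word)), and a non-empty string literal is always truthy, so the condition is True regardless of word. A quick demonstration:

```python
word = "clean"
broken = ("#") or ("$") or ("&" in word)   # evaluates to "#", which is truthy
fixed = any(ch in word for ch in "#$&")    # membership tested per character
print(bool(broken), fixed)  # True False
```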
How to find position of character in a list with respect to other characters in a list in O(n) time?

Suppose I have the string PRIME as a list ['P','R','I','M','E']. If we iterate through the list, the first element 'P' has 3 elements less than it, namely ['I','M','E'], and the second element 'R' also has three elements less than it (note that we are only looking for smaller elements going forward in the list, so when looking for elements smaller than 'R', 'P' is not considered, as we are done with it). So the positional list for the above example would be [3, 3, 1, 1, 0]. I could do this in O(n**2) time using a nested loop, but is there any way to do it in O(n)? I tried something like this, but it failed horribly:

```python
for _ in range(int(input())):
    x = list(input())
    y = sorted(x)
    lis = []
    for _ in x:
        res = abs(y.index(_) - x.index(_))
        lis.append(res)
    print(lis)
```
Here is mine (not O(n), but not O(n^2) either, I guess):

```python
>>> def find_dict_position(s):
...     from collections import defaultdict
...     counter = defaultdict(int)
...     result = []
...     for e in s[::-1]:
...         less_count = sum(counter[c] for c in counter if c < e)
...         result.append(less_count)
...         counter[e] += 1
...     return reversed(result)
>>> list(find_dict_position('PRIME'))
[3, 3, 1, 1, 0]
```
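If the input is known to come from a fixed alphabet — say uppercase A–Z, which is an assumption about the data — the inner sum can be bounded by the alphabet size, giving O(26·n), i.e. effectively linear. A sketch:

```python
def positions_smaller_ahead(s):
    # counts[i] = occurrences of chr(ord('A') + i) seen so far (to the right)
    counts = [0] * 26
    out = []
    for ch in reversed(s):
        idx = ord(ch) - ord('A')
        out.append(sum(counts[:idx]))  # letters strictly smaller than ch
        counts[idx] += 1
    return out[::-1]

print(positions_smaller_ahead('PRIME'))  # [3, 3, 1, 1, 0]
```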
Class hierarchy in Python

I have a relay board connected to an Arduino via the Firmata protocol, using Python bindings. The communication works without problems using pyfirmata (https://github.com/tino/pyFirmata). The relay board has 16 relays. Every group of 3 relays is a channel. Every channel is connected to a Device Under Test (DUT) input or output. This is just a rough description of the purpose of the relay board. Below you can find a skeleton of the code:

```python
#!/usr/bin/env python

__version__ = '0.1'

# Fault Injection Unit
# Power is connected to Fault Bus 1
# Ground is connected to Fault Bus 2

from pyfirmata import Arduino


class FaultInsertionBoard(object):
    def __init__(self, comPort='COM3'):
        """Initialize the Fault Insertion Board.

        Open communication with the host via serial port.

        Arguments:
        comPort -- The serial port used to connect the board to the host.
        """
        self.board = Arduino(comPort)

    class Channel(object):
        def __init__(self, aChannel):
            """Create a Channel."""
            pass

        def NoFault():
            """Set the channel to the "No fault" condition:
            -- DUT channel connected to the testing system
            -- DUT channel disconnected from Fault bus 1
            -- DUT channel disconnected from Fault bus 2
            """
            pass

        def OpenCircuit():
            """Set the channel to the "Open Circuit fault" condition:
            -- DUT channel disconnected from the testing system
            -- DUT channel disconnected from Fault bus 1
            -- DUT channel disconnected from Fault bus 2
            """
            pass

        def ShortToGround():
            """Set the channel to the "Short to Ground fault" condition:
            -- DUT channel disconnected from the testing system
            -- DUT channel disconnected from Fault bus 1
            -- DUT channel connected to Fault bus 2
            """
            pass

        def ShortToPower():
            """Set the channel to the "Short to Power fault" condition:
            -- DUT channel disconnected from the testing system: channel relay is open
            -- DUT channel connected to Fault bus 1: Fault Bus 1 relay is closed
            -- DUT channel disconnected from Fault bus 2: Fault Bus 2 relay is open
            """
            pass


def main():
    FaultBoard = FaultInsertionBoard('COM3')
    VoutSensor = FaultBoard.Channel(0)
    IOutSensor = FaultBoard.Channel(1)

    VoutSensor.NoFault()
    IOutSensor.NoFault()
    VoutSensor.ShortToGround()
    IOutSensor.ShortToPower()


if __name__ == "__main__":
    main()
```

Where:
- FaultInsertionBoard is a simple wrapper of the Arduino class in pyFirmata.
- Channel(n) identifies the n-th group of three relays.
- NoFault, ShortToPower, ShortToGround are various configurations of the three relays of each channel (the actual configuration does not matter here).

Now the question: I have very good experience with embedded firmware written in C, far less with Python. Obviously the code above is not correct. Can someone suggest a class framework to get the above functionality? In other words, how can I write the Python code to drive the relays as described above?

PS: alternatively I could write something like this:

```python
FaultBoard = FaultInsertionBoard('COM3')
FaultBoard.Channel(0).NoFault()
```

but I think it is less elegant and clear.
On the one hand, your actual question is pretty general, and you should try to be more specific in the future. On the other hand, it is often difficult for a beginner to know where to start, so I will provide you with some design tips that should help you get through this challenge.

No nested classes

Nested classes are pretty much useless in Python. Perfectly legal, but pointless. They do not give you magical access to the containing class, and are not present in any of the instances (as they might be in Java). All that nesting does is make the namespace more complex. The first thing I would do is move Channel out of FaultInsertionBoard. A simple unindent will suffice. I will show you how to use it a bit further down.

Naming convention

Another thing to keep in mind is Python naming convention. While not a requirement, it is common to capitalize only class names, while everything else is lowercase with underscores between words (instead of camelCase). It is also conventional not to put spaces around the = in the definition of a default value for a function parameter. I will follow these conventions throughout this answer.

Inheritance vs containment

You should probably use inheritance rather than containment for FaultInsertionBoard:

```python
class FaultInsertionBoard(Arduino):
    pass
```

This will make FaultInsertionBoard have all the methods and attributes of Arduino. You can now do fault_board.method() instead of fault_board.board.method(), where method is some method of the Arduino class. You will probably need to define some additional initialization steps, like setting a default value for the com_port, and later setting up the channels.
You can define your own version of __init__ and call the parent class's implementation whenever you want:

```python
class FaultInsertionBoard(Arduino):
    def __init__(self, com_port='COM3'):
        super().__init__(com_port)
```

If you use Python 2.x, use super(FaultInsertionBoard, self).__init__.

Adding channels

To be able to actually access channel instances, you need to define some data structure to hold them, and initialize some channels up front. The data structure can be accessible directly as an attribute, or through a method that does some additional checking of the parameters. As I mentioned earlier, nesting the Channel class will not move you in this direction at all. In fact, since your Channel class probably needs access to its parent board, we will add a new initialization parameter to its constructor:

```python
class Channel:
    def __init__(self, channel_id, parent):
        self.id = channel_id
        self.parent = parent
```

You have a couple of options available to you. The simplest is to initialize a sequence of Channels in FaultInsertionBoard, which you can access via [] rather than ():

```python
class FaultInsertionBoard(Arduino):
    def __init__(self, com_port='COM3'):
        super().__init__(com_port)
        self.channels = []
        self.channels.append(Channel(0, self))
        self.channels.append(Channel(1, self))
        ...
```

Now main will look like this:

```python
def main():
    fault_board = FaultInsertionBoard('COM3')
    v_out_sensor = fault_board.channels[0]
    i_out_sensor = fault_board.channels[1]

    v_out_sensor.no_fault()
    v_out_sensor.short_to_ground()
    i_out_sensor.no_fault()
    i_out_sensor.short_to_ground()
```

If you absolutely want to use parentheses to access the channels as channel(0), etc., you can define a method in FaultInsertionBoard.
Keeping the __init__ method the same, you can add another method:

```python
def channel(self, index):
    # Check index if you want to, possibly raise an error if invalid
    return self.channels[index]
```

In this case main will look like this:

```python
def main():
    fault_board = FaultInsertionBoard('COM3')
    v_out_sensor = fault_board.channel(0)
    i_out_sensor = fault_board.channel(1)

    v_out_sensor.no_fault()
    v_out_sensor.short_to_ground()
    i_out_sensor.no_fault()
    i_out_sensor.short_to_ground()
```

The first method has the advantage of allowing you direct access to the sequence of Channel objects. Since you are applying the same operation to all the channels, you can iterate over them for an even simpler interface:

```python
def main():
    fault_board = FaultInsertionBoard('COM3')
    for channel in fault_board.channels:
        channel.no_fault()
        channel.short_to_ground()
```

Convenience methods

It appears that you use the operation x.no_fault(); x.short_to_ground() multiple times in your code. It is often helpful to create what is called a convenience method in that case. You could add the following to Channel:

```python
def reset(self):
    self.no_fault()
    self.short_to_ground()
```

And main could then look like this:

```python
def main():
    fault_board = FaultInsertionBoard('COM3')
    for channel in fault_board.channels:
        channel.reset()
```
how to split findall result which contains "," in data

```python
x = re.findall(r'FROM\s(.*?\s)(WHERE|INNER|OUTER|JOIN|GROUP)', data, re.DOTALL)
```

I am using the above expression to parse an Oracle SQL query and get the result. I get multiple matches and want to print each of them line by line. How can I do that? Some results even have "," in between them.
You can try this:

```python
for elt in x:
    print('\n'.join(elt.split(',')))
```

split returns a list of the comma-separated elements, which are then joined again with \n (newline). Therefore, you get one result per line.
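A quick sanity check with a made-up match list (the strings here are placeholders, not real findall output):

```python
x = ['col_a,col_b', 'col_c']
for elt in x:
    print('\n'.join(elt.split(',')))
# prints:
# col_a
# col_b
# col_c
```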
running bash command from python3

I am trying to remove some files from my Linux machine, except a few:

```shell
$ touch INCAR KPOINTS foo bar
$ ls
bar  foo  INCAR  KPOINTS
$ python3 mini.py
Job Done
$ ls
bar  foo  INCAR  KPOINTS
```

The mini.py is:

```python
#!/usr/bin/python3
import subprocess

subprocess.run(['rm', '-f', '!(INCAR|KPOINTS|PO*|*.sh)'])
print("Job Done")
```

As can be seen in the output of mini.py, it's not giving any error, but neither is it doing its job. What am I doing wrong here?
It doesn't work because the pattern is passed to rm literally — no shell ever expands it — and !() is an extended matching pattern that bash only enables explicitly:

```python
subprocess.run(['/bin/bash', '-O', 'extglob', '-c', 'rm -f !(INCAR|KPOINTS|PO*|*.sh)'])
```

Note this will remove the script itself...
Odoo Python : How to copy/forward a phone number into user's device using Odoo python backend?

Suppose I have a button in an Odoo form. This button triggers a Python method in the Odoo backend to search for the PIC (person in charge) and then forward/copy the PIC's phone_number to the user's device, so that the user can make a phone call to the PIC from their device.

The concern is that the PIC changes over time: the PIC is based on a shift schedule. So, if I attached the PIC or phone number in the form view, it would cause a problem: the user opens the form view, takes time to view the data, scrolls for a moment, and then clicks to call the PIC a minute later, but by then the PIC may be different. That's why I need to implement it in Python. It's like the href="tel:" in HTML, but I need to search for the phone_number first.

```python
def search_n_call_pic(self, input):
    # search the responsible_pic
    pic_object = self.search_pic(input)
    # get the pic's phone number
    pic_phone_number = pic_object.phone
    self.forward_this_phone_number_to_user_device(pic_phone_number)
```

I need to implement this method "forward_this_phone_number_to_user_device" so that it behaves like the HTML

```html
<a href="tel:pic_phone_number">pic_phone_number</a>
```

but I have to implement it in Python. Please help, thanks.
Let's say you have the pic_phone_number field on the res.users model, and you want to show a tel: URI on the crm.lead model. First you have to create a new computed field on crm.lead:

```python
pic_phone_number_uri = fields.Char(compute="_compute_pic_phone_number_uri")

def _compute_pic_phone_number_uri(self):
    user_phone = self.env.user.pic_phone_number
    user_uri = "tel:{}".format(user_phone)
    for record in self:
        record.pic_phone_number_uri = user_uri
```

Now just add this field to the form view with the url widget:

```xml
<field name="pic_phone_number_uri" widget="url" readonly="1" />
```

If you're on Odoo 13 or later you can use the phone widget and can skip adding tel: in the computation:

```xml
<field name="pic_phone_number_uri" widget="phone" readonly="1" />
```
Error printing a string: %d format: a number is required, not str

I am planning on taking a trip to Disney World late this summer, and I have been trying to make a program to calculate an approximate cost of the trip, for fun and to try to keep myself from getting too rusty. My problem is that when I try to display all of my calculated values, I keep receiving the error in the title. My code is:

```python
### Function to display costs
def Display(days, nights, building_type, person, room_cost, room_cost_person,
            DisneyPark, Hopper, IslandPark, IslandPTP, Island_parking,
            gas_cost, gas_cost_person, park_person, Total_cost_person,
            mpg, gas, downpay):
    print('''Cost of trip for a %i day/%i night stay in a %%s%%:

Number of people going:                  %i
Total room cost ($)                      %4.2f
Room cost/person ($)                     %4.2f
Price of Disney World tickets ($)        %4.2f
Price of hopper ticket-Disney ($)        %4.2f
Price of Universal ticket ($)            %4.2f
    Park-to-Park                         %%s%%
Cost to park at Universal/person ($)     %4.2f
Total cost of gas ($)                    %4.2f
Cost of gas/person ($)*                  %4.2f
Cost to park/person ($)                  %4.2f
Cost of groceries/person ($)^            %4.2f
Cost to eat out/person ($)^#             %4.2f
Souvenirs ($)^                           %4.2f
Total cost of trip/person ($)            %4.2f

*Factoring in round trip distance (1490 miles), mpg of %i, and average gas cost $%4.2f
#Covers eating out at night, eating in parks (butterbeer, etc), and eating while driving
^Note that these are estimates
%Note that the Villa housing requires a $%4.2f downpayment (refundable) that was not included in cost calculations
----------------------------------------------------------------------------------------
''' % (day, night, Building, person, room_cost, room_cost_person, DisneyPark,
       Hopper, IslandPark, IslandPTP, Island_parking, gas_cost, gas_cost_person,
       park_person, Groceries, Eat, Souvenirs, Total_cost_person, mpg, gas, downpay))
```

I've looked at the suggestions for this question: Python MySQLdb issues (TypeError: %d format: a number is required, not str), and I tried to make the changes stated, but they were not of help to me. I can individually print each value just fine, but when I try to print them all in this large block of text I get the error. I'd appreciate any insight anyone has to offer.
Likely the error is caused by one of the %i format specifiers receiving a string. For example:

```python
'this is %i' % '5'
```

This returns the same error: TypeError: %d format: a number is required, not str.
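The fix follows directly: either convert the value to a number before formatting, or use %s, which accepts any object. A small illustration:

```python
value = '5'                       # e.g. something read from input(), always a str
print('this is %i' % int(value))  # convert to a number first
print('this is %s' % value)       # or let %s handle the string
```

Both lines print "this is 5".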
Python conditional boolean

I'm trying to understand the possibilities with booleans in Python. I don't want to use an if statement:

```python
its_valid = True
```

but I want something like this:

```python
its_valid = True if taking_stones == 2 or taking_stones == 1
```

Is it possible in Python? And just out of curiosity, if not, in what language is it?

Edit, second question: is it possible to use a range as well (from 1 to 2)?

```python
its_valid = True if taking_stones 1:2
```
Equality comparisons return booleans, so there's no need to explicitly write True if {something true}. You can simply write:

```python
its_valid = (taking_stones == 2 or taking_stones == 1)
```

Or, if you want to check multiple values more succinctly:

```python
its_valid = (taking_stones in (1, 2))
```
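For the edit about a range, Python's chained comparisons cover that case directly:

```python
taking_stones = 2
its_valid = 1 <= taking_stones <= 2  # True for any value from 1 to 2 inclusive
print(its_valid)  # True
```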
XKCD Web Scraper - Automate the Boring Stuff

I'm currently on Chapter 11 of ATBS and working through the Web Scraper project. I can get it to run fine, however the web comics are never actually downloaded on my Mac.

```python
#! /usr/bin/env python3
# downloadXkcd.py - Downloads every single XKCD comic.

import requests, os, bs4

url = 'http://xkcd.com'             # starting URL
os.makedirs('xkcd', exist_ok=True)  # store comics in ./xkcd
while not url.endswith('#'):
    # TODO: Download the page
    print('Downloading page %s...' % url)
    res = requests.get(url)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text)

    # TODO: Find URL of image
    comicElem = soup.select('#comic img')
    if comicElem == []:
        print('Could not find comic image.')
    else:
        comicUrl = 'http:' + comicElem[0].get('src')
        # TODO: Download Image
        print('Downloading image %s' % (comicUrl))
        res = requests.get(comicUrl)
        res.raise_for_status()

        # TODO: Save image to ./xkcd
        imageFile = open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb')
        for chunk in res.iter_content(100000):
            imageFile.write(chunk)
        imageFile.close()

    # TODO: Get prev button URL
    prevLink = soup.select('a[rel="prev"]')[0]
    url = 'http://xkcd.com' + prevLink.get('href')

print('Done.')
```

What do I need to fix in order to get the comics to download? Thanks.
You seem to have left out the parser argument. Pass html.parser as follows:

```python
soup = bs4.BeautifulSoup(res.text, 'html.parser')
```
Cumulative custom function over grouped data in Python

I am looking to create a retention function over a pandas DataFrame which runs a cumulative function over grouped portions of the data. I want to do something similar to what the R plyr package does.

Say I have some dummy data such as:

```python
df = pd.DataFrame({'x': np.repeat(np.arange(1, 11), 5),
                   'y': np.tile(np.arange(1, 6), 10)})
```

This gives us (just showing the first 10 lines):

```
   x  y
0  1  1
1  1  2
2  1  3
3  1  4
4  1  5
5  2  1
6  2  2
7  2  3
8  2  4
9  2  5
```

In this case 'x' is the column I want to group by and 'y' is what I want to run the function over. The function is a retention function that applies some factor to the previous sum and adds that to the current value. In code form this is what the function should look like (there might be a better way):

```python
def retention(x, r):
    n = len(x)
    D = np.zeros(n)
    D[0] = x[0]
    for i in range(1, n):
        D[i] = r * D[i - 1] + x[i]
    return D
```

However, I want the function to essentially start over at the beginning of a new 'x' value. The result should look like this:

```
   x  y
0  1  1
1  1  2.25
2  1  3.5625
3  1  4.890625
4  1  6.22265625
5  2  1
6  2  2.25
7  2  3.5625
8  2  4.890625
9  2  6.22265625
```

I need the solution to be flexible enough so that I could group by any number of columns and have variable lengths for the groups. I've tried several methods but cannot get the solution. For example, this does not work:

```python
grouped = df.groupby('x')
grouped.apply(lambda x: retention(df['y'], .25))
```

NOTE: I have done this in R before using the plyr package:

```r
retention = function(x, r) {
  n = length(x)
  D = rep(0, n)
  D[1] = x[1]
  for (i in 2:n) {
    D[i] = r * D[i - 1] + x[i]
  }
  return(D)
}
x = rep(1:10, each = 5)
y = rep(1:5, 10)
df = data.frame(x, y)
ddply(df, .(x), summarize, y = retention(y, .25))
```
Interesting question. It appears that your decay factor, if we call it so, is 0.25. The following two steps do what is intended (first 10 observations printed; the result is called z):

```python
In [67]:
z = df.groupby('x').y.apply(lambda x: np.convolve(x, np.power(0.25, range(len(x)))[:len(x)], mode='full')[:len(x)])
print z
x
1     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
2     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
3     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
4     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
5     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
6     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
7     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
8     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
9     [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
10    [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
Name: y, dtype: object

In [68]:
print pd.concat([pd.DataFrame({'x': i, 'z': v}) for i, v in zip(z.index.values, z.values)]).head(10)
   x         z
0  1  1.000000
1  1  2.250000
2  1  3.562500
3  1  4.890625
4  1  6.222656
0  2  1.000000
1  2  2.250000
2  2  3.562500
3  2  4.890625
4  2  6.222656
```

Basically, the cumulative sum operation (with a factor) is done using numpy.convolve. The rest is straightforward: just groupby the data into groups, apply the convolve, and then concat the results together.
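For what it's worth, the original groupby attempt fails only because the lambda ignores its argument and closes over the whole df['y']. Passing each group's own values through transform keeps the loop-based retention function and restarts it per group. A sketch reusing the question's data:

```python
import numpy as np
import pandas as pd

def retention(x, r):
    # D[i] = r * D[i-1] + x[i], restarted from the first element
    D = np.zeros(len(x))
    D[0] = x[0]
    for i in range(1, len(x)):
        D[i] = r * D[i - 1] + x[i]
    return D

df = pd.DataFrame({'x': np.repeat(np.arange(1, 11), 5),
                   'y': np.tile(np.arange(1, 6), 10)})
# transform hands each group's y values to the function separately
df['z'] = df.groupby('x')['y'].transform(lambda s: retention(s.values, 0.25))
print(df['z'].head(5).tolist())  # [1.0, 2.25, 3.5625, 4.890625, 6.22265625]
```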
How do I code a data encryption program using Python?

Once I choose the appropriate encryption algorithm, what function would I use in Python to implement it in the security software I am working on? I can't figure out the logic.
You can use PyCrypto: https://pypi.python.org/pypi/pycrypto (note that PyCrypto is no longer maintained; its fork PyCryptodome is a largely drop-in replacement). It's simple to use:

```python
>>> from Crypto.Cipher import AES
>>> obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
>>> message = "The answer is no"
>>> ciphertext = obj.encrypt(message)
>>> ciphertext
'\xd6\x83\x8dd!VT\x92\xaa`A\x05\xe0\x9b\x8b\xf1'
>>> obj2 = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
>>> obj2.decrypt(ciphertext)
'The answer is no'
```
Cut peaks and troughs

Here is an algorithm I would like to implement using numpy:

1. For a given 1D array, calculate the maximum and the minimum over a sliding window.
2. Create a new array, with the first value equal to the first value in the given array.
3. For each subsequent value, clip the previous value inserted in the new array between the min and the max from the sliding window.

As an example, let's take the array a = [3, 4, 5, 4, 3, 2, 3, 3] and a sliding window of size 3. We find for min and max:

```
min = [3, 4, 3, 2, 2, 2]
max = [5, 5, 5, 4, 3, 3]
```

Now our output array will start with the first element from a, so it's 3. For the next value, I clip 3 (the last value inserted) between 4 and 5 (the min and max found at index 1). The result is 4. For the next value I clip 4 between 3 and 5. It's still 4. And so on. So we finally have:

```
output = [3, 4, 4, 4, 3, 3]
```

I cannot find a way to avoid using a Python for loop in my code. Here is what I have for the moment:

```python
def second_window(array, samples):
    sample_idx = samples - 1
    output = np.zeros_like(array[0:-sample_idx])
    start, stop = 0, len(array)
    last_value = array[0]
    # Sliding window is a deque of length 'samples'.
    sliding_window = deque(array[start : start + sample_idx], samples)
    for i in xrange(stop - start - sample_idx):
        # Get the next value in sliding window. After the first loop,
        # the left value gets discarded automatically.
        sliding_window.append(array[start + i + sample_idx])
        min_value, max_value = min(sliding_window), max(sliding_window)
        # Clip the last value between sliding window min and max
        last_value = min(max(last_value, min_value), max_value)
        output[start + i] = last_value
    return output
```

Would it be possible to achieve this result with only numpy?
I don't think you can. You can sometimes do this kind of iterative computation with unbuffered ufuncs, but this isn't one of those cases. But let me elaborate...

OK, first, the windowing and min/max calculations can be done much faster:

```python
>>> a = np.array([3, 4, 5, 4, 3, 2, 3, 3])
>>> len_a = len(a)
>>> win = 3
>>> win_a = as_strided(a, shape=(len_a - win + 1, win), strides=a.strides * 2)
>>> win_a
array([[3, 4, 5],
       [4, 5, 4],
       [5, 4, 3],
       [4, 3, 2],
       [3, 2, 3],
       [2, 3, 3]])
>>> min_ = np.min(win_a, axis=-1)
>>> max_ = np.max(win_a, axis=-1)
```

Now, let's create and fill up your output array:

```python
>>> out = np.empty((len_a - win + 1,), dtype=a.dtype)
>>> out[0] = a[0]
```

If np.clip were a ufunc, we could then try to do:

```python
>>> np.clip(out[:-1], min_[1:], max_[1:], out=out[1:])
array([4, 3, 3, 3, 3])
>>> out
array([3, 4, 3, 3, 3, 3])
```

But this doesn't work, because np.clip is not a ufunc, and there seems to be some buffering involved. And if you apply np.minimum and np.maximum separately, then it doesn't always work:

```python
>>> np.minimum(out[:-1], max_[1:], out=out[1:])
array([3, 3, 3, 3, 3])
>>> np.maximum(out[1:], min_[1:], out=out[1:])
array([4, 3, 3, 3, 3])
>>> out
array([3, 4, 3, 3, 3, 3])
```

although for your particular case reversing the order does work:

```python
>>> np.maximum(out[:-1], min_[1:], out=out[1:])
array([4, 4, 4, 4, 4])
>>> np.minimum(out[1:], max_[1:], out=out[1:])
array([4, 4, 4, 3, 3])
>>> out
array([3, 4, 4, 4, 3, 3])
```
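Putting the pieces from the answer together into one runnable function, with the sequential clip kept as an explicit loop — the part that resists vectorization, since each output feeds the next clip:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def second_window(a, win):
    a = np.asarray(a)
    # all length-win windows as rows, via a zero-copy strided view
    win_a = as_strided(a, shape=(len(a) - win + 1, win), strides=a.strides * 2)
    min_, max_ = win_a.min(axis=-1), win_a.max(axis=-1)
    out = np.empty(len(a) - win + 1, dtype=a.dtype)
    out[0] = a[0]
    for i in range(1, len(out)):  # sequential dependency: previous output feeds the next clip
        out[i] = min(max(out[i - 1], min_[i]), max_[i])
    return out

print(second_window([3, 4, 5, 4, 3, 2, 3, 3], 3).tolist())  # [3, 4, 4, 4, 3, 3]
```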
scrape an api result page with scrapy

I have this url whose response contains some JSON data: https://www.tripadvisor.com/TypeAheadJson?action=API&types=geo%2Cnbrhd%2Chotel%2Ctheme_park&legacy_format=true&urlList=true&strictParent=true&query=sadaf%20dubai%20hotel&max=6&name_depth=3&interleaved=true&scoreThreshold=0.5&strictAnd=false&typeahead1_5=true&disableMaxGroupSize=true&geoBoostFix=true&neighborhood_geos=true&details=true&link_type=hotel%2Cvr%2Ceat%2Cattr&rescue=true&uiOrigin=trip_search_Hotels&source=trip_search_Hotels&startTime=1516800919604&searchSessionId=BA939B3D93510DABB510328CBF3353131516800881576ssid&nearPages=true

Every time I paste this url in the browser with different queries, I get a nice JSON result. But in scrapy or the scrapy shell, I don't get any result. This is my scrapy spider class:

link = "https://www.tripadvisor.com/TypeAheadJson?action=API&types=geo%2Cnbrhd%2Chotel%2Ctheme_park&legacy_format=true&urlList=true&strictParent=true&query={}%20dubai%20hotel&max=6&name_depth=3&interleaved=true&scoreThreshold=0.5&strictAnd=false&typeahead1_5=true&disableMaxGroupSize=true&geoBoostFix=true&neighborhood_geos=true&details=true&link_type=hotel%2Cvr%2Ceat%2Cattr&rescue=true&uiOrigin=trip_search_Hotels&source=trip_search_Hotels&startTime=1516800919604&searchSessionId=BA939B3D93510DABB510328CBF3353131516800881576ssid&nearPages=true"

def start_requests(self):
    files = [f for f in listdir('results/') if isfile(join('results/', f))]
    for file in files:
        with open('results/' + file, 'r', encoding="utf8") as tour_info:
            tour = json.load(tour_info)
        for hotel in tour["hotels"]:
            yield scrapy.Request(self.link.format(hotel))

name = 'tripadvisor'
allowed_domains = ['tripadvisor.com']

def parse(self, response):
    print(response.body)

For this code, in scrapy shell, I get this result:

b'{"normalized":{"query":""},"query":{},"results":[],"partial_content":false}'

On the scrapy command line, by running the spider, I first got the Forbidden by robots.txt error for every url. I changed scrapy ROBOTSTXT_OBEY to False so it does not obey this file. Now I get [] for every request, but I should get a JSON object like this:

[
  {
    "urls":[
      {
        "url_type":"hotel",
        "name":"Sadaf Hotel, Dubai, United Arab Emirates",
        "type":"HOTEL",
        "url":"\/Hotel_Review-g295424-d633008-Reviews-Sadaf_Hotel-Dubai_Emirate_of_Dubai.html"
      }
    ],...
Try removing the sessionID from the URL and maybe check how "unfriendly" your settings.py is. (Also see this blog)

But it could be way easier to use Wget, like

wget 'https://www.tripadvisor.com/TypeAheadJson?action=API&types=geo%2Cnbrhd%2Chotel%2Ctheme_park&legacy_format=true&urlList=true&strictParent=true&query={}%20dubai%20hotel&max=6&name_depth=3&interleaved=true&scoreThreshold=0.5&strictAnd=false&typeahead1_5=true&disableMaxGroupSize=true&geoBoostFix=true&neighborhood_geos=true&details=true&link_type=hotel%2Cvr%2Ceat%2Cattr&rescue=true&uiOrigin=trip_search_Hotels&source=trip_search_Hotels&startTime=1516800919604&nearPages=true' -O results.json
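As a hedged sketch of what the settings.py changes could look like (the User-Agent string below is just an example browser string, not a required value — sites often block scrapy's default agent):

```python
# settings.py -- sketch, not a guaranteed fix: disable robots.txt
# enforcement for this endpoint and send browser-like headers.
ROBOTSTXT_OBEY = False

DEFAULT_REQUEST_HEADERS = {
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/63.0 Safari/537.36"),
}
```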
How to change Alexa talking speed? How to change Alexa talking/playback speed to super fast or super slow? Is there a way to manipulate the sampling rate of Alexa audio output?
Use the prosody tag of SSML to modify the speed, pitch and volume of response speech. Ex:

<speak>
  <prosody rate="medium">I speak in medium pace</prosody>.
  <prosody rate="slow">I speak in slow pace</prosody>.
  <prosody rate="fast">I speak in fast pace</prosody>.
  <prosody rate="x-fast">I speak in very fast pace</prosody>.
  <prosody rate="x-slow">I speak in very slow pace</prosody>.
</speak>

You can also give the speed of speech in %:

<prosody rate="50%">I speak in 50% slower pace</prosody>

Percentages greater than 100% increase the rate, percentages below 100% decrease the rate. More information here
Regular expression with different number of characters

I need to create a regular expression to validate strings. The strings can contain only a few characters, and each character can be repeated only a limited number of times. The regular expression should check the conditions below.

The string can have only a, b, c, d, e as characters.
The character 'a' can appear a maximum of 2 times.
The character 'b' can appear a maximum of 3 times.
The character 'c' can appear a maximum of 3 times.
The character 'd' can appear a maximum of 1 time.
The character 'e' can appear a maximum of 1 time.

I know that this can be achieved by string functions, but I am trying to do it with regular expressions. Any help on this is highly appreciated.
Likely, performance wise, the best way to do this is with Python native string operations. I would write like so:

lim = (('a',2),('b',3),('c',3),('d',1),('e',1))
results = {}
for s in [list_of_many_strings]:
    results[s] = bool(not(set(s)-set('abcde'))) and (not any(s.count(c)>x for c,x in lim))

This relies on str.count(sub[, start[, end]]) to count the occurrence of a substring in a string and the any function to test if any condition is true. Since you are interested in performance, you can time how long processing 100,000 strings might take with timeit:

import re

def f1(li):
    results={}
    lim=(('a',2),('b',3),('c',3),('d',1),('e',1))
    for s in li:
        results[s]=bool(not(set(s)-set('abcde'))) and (not any(s.count(c)>x for c,x in lim))
    return results

def f2(li):
    pat=re.compile(r'^a{0,2}b{0,3}c{0,3}d{0,1}e{0,1}$')
    results={}
    for s in li:
        results[s]=True if pat.search(''.join(sorted(s))) else False
    return results

def f3(li):
    pat=re.compile(r'^(?!.*[^a-e])(?!(?:.*a){3})(?!(?:.*b){4})(?!(?:.*c){4})(?!(?:.*d){2})(?!(?:.*e){2}).+')
    results={}
    for s in li:
        results[s]=True if pat.search(s) else False
    return results

if __name__=='__main__':
    import timeit
    import random
    s='abcdeabcdebc'
    li=[''.join(random.sample(s,8)) for _ in range(100000)]
    print(f1(li)==f2(li)==f3(li))
    for f in (f1,f2,f3):
        print("   {:^10s}{:.4f} secs".format(f.__name__, timeit.timeit("f(li)", setup="from __main__ import f, li", number=10)))

On my computer, this takes:

True
       f1    0.8519 secs
       f2    1.1235 secs
       f3    1.3070 secs
Pandas: How to (cleanly) unpivot two columns with same category?

I'm trying to unpivot two columns inside a pandas dataframe. The transformation I seek would be the inverse of this question. We start with a dataset that looks like this:

import pandas as pd
import numpy as np

df_orig = pd.DataFrame(data=np.random.randint(255, size=(4,5)),
                       columns=['accuracy','time_a','time_b','memory_a', 'memory_b'])
df_orig

   accuracy  time_a  time_b  memory_a  memory_b
0         6     118     170       102       239
1       241       9     166       159       162
2       164      70      76       228       121
3       228     121     135       128        92

I wish to unpivot both the memory and time columns, obtaining this dataset as a result:

df

    accuracy  memory category  time
0          6     102        a   118
1        241     159        a     9
2        164     228        a    70
3        228     128        a   121
12         6     239        b   170
13       241     162        b   166
14       164     121        b    76
15       228      92        b   135

So far I have managed to get my desired output using df.melt() twice plus some extra commands:

df = df_orig.copy()

# Unpivot memory columns
df = df.melt(id_vars=['accuracy','time_a', 'time_b'],
             value_vars=['memory_a', 'memory_b'],
             value_name='memory', var_name='mem_cat')

# Unpivot time columns
df = df.melt(id_vars=['accuracy','memory', 'mem_cat'],
             value_vars=['time_a', 'time_b'],
             value_name='time', var_name='time_cat')

# Keep only the 'a'/'b' as categories
df.mem_cat = df.mem_cat.str[-1]
df.time_cat = df.time_cat.str[-1]

# Keep only the rows whose categories match (DIRTY!)
df = df[df.mem_cat==df.time_cat]

# Remove the duplicated category column.
df = df.drop(columns='time_cat').rename(columns={"mem_cat":'category'})

Given how easy it was to solve the inverse question, I believe my code is way too complex. Can anyone do it better?
Use wide_to_long:

np.random.seed(123)
df_orig = pd.DataFrame(data=np.random.randint(255, size=(4,5)),
                       columns=['accuracy','time_a','time_b','memory_a', 'memory_b'])

df = (pd.wide_to_long(df_orig.reset_index(),
                      stubnames=['time','memory'],
                      i='index',
                      j='category',
                      sep='_',
                      suffix='\w+')
        .reset_index(level=1)
        .reset_index(drop=True)
        .rename_axis(None))
print (df)

  category  accuracy  time  memory
0        a       254   109      66
1        a        98   230      83
2        a       123    57     225
3        a       113   126      73
4        b       254   126     220
5        b        98    17     106
6        b       123   214      96
7        b       113    47      32
Content API for Shopping - No module named google_auth_httplib2

I'm following the link below with the aim of implementing a Google API for shopping: https://developers.google.com/shopping-content/guides/quickstart/making-an-api-call

But when I run the command below:

python -m shopping.content.products.my-insert

I encounter this error:

No module named google_auth_httplib2

However, whenever I run the following command to test if google auth is installed:

pip install --upgrade google-auth-oauthlib[tool]

I get the following message:

Requirement already satisfied

I'm not sure how to solve this problem; I would greatly appreciate everyone's assistance.
You should install google-auth-httplib2 and not google-auth-oauthlib:

pip install google-auth-httplib2
BeforeClass and AfterClass methods in Selenium Python

I am new to test automation in Selenium Python. I was trying to automate a simple login test and I want to do some things in an AfterClass method, like in Selenium Java. I have attached my code here.

test_login.py

import unittest
from helper_actions import load_page, type_username, type_password, click_login_button
from helper_assertions import get_welcome_text, read_error_message

class LoginTests(unittest.TestCase):
    def setUp(self):
        super().setUp()
        print("Run before each methods")
        load_page()

    def test_valid_username_and_password(self):
        print("test_valid_username_and_password")
        type_username("admin")
        type_password("Ptl@#321")
        click_login_button()
        assert get_welcome_text() == "Welcome Admin"

    def test_valid_username_and_invalid_password(self):
        print("test_valid_username_and_invalid_password")
        type_username("admin")
        type_password("admin")
        click_login_button()
        assert read_error_message() == "Invalid credentials"

    def tearDown(self):
        print("Run after each method")
        super().tearDown()

if __name__ == "__main__":
    unittest.main()

helper_actions.py

from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))

def load_page():
    driver.get("http://hrm.pragmatictestlabs.com/")

def type_username(username):
    driver.find_element(By.ID, "txtUsername").send_keys(username)

def type_password(password):
    driver.find_element(By.ID, "txtPassword").send_keys(password)

def click_login_button():
    driver.find_element(By.ID, "btnLogin").click()

helper_assertions.py

from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))

def get_welcome_text():
    return driver.find_element(By.ID, "welcome").text

def read_error_message():
    return driver.find_element(By.ID, "spanMessage").text

How can I add driver.quit() at the end of the tests run?
@Anupama Balasooriya, you can use setUpClass() and tearDownClass() for this purpose. You can refer to this link for details: https://docs.python.org/3/library/unittest.html
Confusion matrix get value error

I am trying to create a confusion matrix with scikit-learn for the epileptic data set from https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition

After preparation, doing cross validation and modeling, I got the result as follows (I tagged the screenshot). Now when I want to get the confusion matrix, I get this error:

from sklearn.metrics import confusion_matrix
conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)

How can I solve this problem?
You can convert both predicted and true labels to str:

conf = confusion_matrix(pred["y"].astype(str), pred["PredictedLabel"].astype(str))

To recreate a similar issue, consider the following case where predicted and true labels are different types:

import pandas as pd
from sklearn.metrics import confusion_matrix

pred = pd.DataFrame()
pred["y"] = [1,2,3]
pred["PredictedLabel"] = ['1','2','3']

conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)

It will give the error: ValueError: Mix of label input types (string and number). If you convert them both to str type (you may use other types such as int or float as well, though predicted and true labels both have to be the same type):

import pandas as pd
from sklearn.metrics import confusion_matrix

pred = pd.DataFrame()
pred["y"] = [1,2,3]
pred["PredictedLabel"] = ['1','2','3']

conf = confusion_matrix(pred["y"].astype(str), pred["PredictedLabel"].astype(str))
print(conf)

Result:

[[1 0 0]
 [0 1 0]
 [0 0 1]]
Pandas - resample rows based on another df index

I have a dataframe that looks like this:

zone  Datetime             Demand
48    2020-08-02 00:00:00  14292.550740
48    2020-08-02 01:00:00  14243.490740
48    2020-08-02 02:00:00   9130.840744
48    2020-08-02 03:00:00  10483.510740
48    2020-08-02 04:00:00  10014.970740

I want to resample (sum) the demand values according to another df index that looks like:

2020-08-02 03:00:00
2020-08-02 06:00:00
2020-08-02 07:00:00
2020-08-02 10:00:00

What is the best way to handle that?
I believe you need merge_asof:

print (df2)
                     a
2020-08-02 03:00:00  1
2020-08-02 06:00:00  2
2020-08-02 07:00:00  3
2020-08-02 10:00:00  4

df1['Datetime'] = pd.to_datetime(df1['Datetime'])
df2.index = pd.to_datetime(df2.index)

df = pd.merge_asof(df1,
                   df2.rename_axis('date2').reset_index(),
                   left_on='Datetime',
                   right_on='date2',
                   direction='forward')
print (df)
   zone            Datetime        Demand               date2  a
0    48 2020-08-02 00:00:00  14292.550740 2020-08-02 03:00:00  1
1    48 2020-08-02 01:00:00  14243.490740 2020-08-02 03:00:00  1
2    48 2020-08-02 02:00:00   9130.840744 2020-08-02 03:00:00  1
3    48 2020-08-02 03:00:00  10483.510740 2020-08-02 03:00:00  1
4    48 2020-08-02 04:00:00  10014.970740 2020-08-02 06:00:00  2

And then aggregate sum, e.g. if need by both columns:

df = df.groupby(['zone','date2'], as_index=False)['Demand'].sum()
print (df)
   zone               date2        Demand
0    48 2020-08-02 03:00:00  48150.392964
1    48 2020-08-02 06:00:00  10014.970740
Null or None function in Python

I am using the cv2.createTrackbar function in OpenCV to create a trackbar, but I don't want any callback to happen on change. I will be doing that in a separate loop where I get the trackbar value using cv2.getTrackbarPos(). But Python returns an error if I don't give a callable function as an argument to createTrackbar(). The documentation for OpenCV says: If the callback is the NULL pointer, no callbacks are called, but only value is updated.

I am guessing that is for the C++ implementation. Is there a similar null pointer or a null or None function in Python? I understand that I can just make a function that does nothing; just seeing if there is a more elegant way of doing this. I tried None and got the error that None is not callable.

import cv2

cv2.namedWindow("Window")
cv2.createTrackbar("Value", "Window", 100, 255, None)

#Do stuff here in a while loop

cv2.waitKey(0)
cv2.destroyAllWindows()
I think you might just need to define a quick function. You can use an anonymous lambda and avoid explicitly defining a function to return None:

cv2.createTrackbar("Value", "Window", 100, 255, lambda x: x)
Using If Statements To Check If Something Raises An Error

I'm trying to write a program that will solve questions about parametric equations for me. I'm trying to do the following:

I'm trying to find 2 answers to a parametric equation. The first answer will be the positive square root. The second answer will be the negative square root. If the first square root raises a math domain error, don't find the second answer. This is what I have so far:

def hitsGround(vertical_velocity, y_coordinate):
    h = vertical_velocity/-16/-2
    k = -16*(h)**2 + vertical_velocity*(h) + y_coordinate
    time = float(input("At what height do you want me to solve for time?: "))
    try:
        hits_height1 = math.sqrt((time - k)/-16) + h
    except ValueError:
        print("It won't reach this height.")
    else:
        print(f"It will take {hits_height1} seconds to hit said height.")
    try:
        hits_height2 = -math.sqrt((time - k)/16) + h
    except ValueError:
        print("There is no second time it will reach this height.")
    else:
        print(f"It will take {hits_height2} seconds to hit said height.")

Is there any way to use an if statement to check if the first equation raises a math domain error so I can make it so it doesn't find the second answer? Thanks!
You cannot test for a run-time exception with if; that's exactly what try-except does. However, when the illegal operation is so directly defined, you can test for that condition before you try the sqrt operation:

if (time - k)/-16 < 0:
    # no roots
else:
    # proceed to find roots.
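Applied to the question's function, the pre-check could look like this sketch (h and k follow the question's formulas; the example launch values at the bottom are made up for illustration):

```python
import math

def hits_ground(vertical_velocity, y_coordinate, time):
    # Vertex of the parabola, as in the question's code.
    h = vertical_velocity / -16 / -2
    k = -16 * h**2 + vertical_velocity * h + y_coordinate

    radicand = (time - k) / -16
    if radicand < 0:
        # Negative radicand: math.sqrt would raise "math domain error",
        # so skip both answers up front instead of catching ValueError.
        return None

    root = math.sqrt(radicand)
    return (h - root, h + root)  # earlier and later time at that height

# Example: launch at 64 ft/s from the ground, asking when height is 48 ft.
print(hits_ground(64, 0, 48))   # (1.0, 3.0)
print(hits_ground(64, 0, 100))  # None -- 100 ft is above the peak
```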
Issue while extracting data using Beautifulsoup

My objective is to convert an xls file to an xlsx file. The xls file which I am trying to convert is actually an html file containing tables (this xls file is obtained as a result of a query from Jira). To facilitate the conversion I have created a file handler, given that file handler to BeautifulSoup, extracted the table of interest, converted the extracted table to a string and given it to a pandas dataframe for further processing. This works fine, but when the file size is large, say around 80 MB, it takes a large amount of time to process. How do I overcome this?

import bs4, os
import pandas as pd

print('Begin')
fileName = 'TestSample.xls'
fileHandler = open(fileName, encoding='utf-8')
soup = bs4.BeautifulSoup(fileHandler, 'html.parser')
tbl = soup.find_all('table', id='issuetable')
df = pd.read_html(str(tbl))
df[0].to_excel("restult.xlsx", index=False)
print('Completed')
There is no good way for large files, but you can try different ways.

from simplified_scrapy import SimplifiedDoc

print('Begin')
fileName = 'TestSample.xls'
html = open(fileName, encoding='utf-8').read()
doc = SimplifiedDoc(html)
start = 0  # If a string can uniquely mark the starting position of data, the performance will be better
tbl = doc.getElement('table', attr='id', value='issuetable', start=start)
print(tbl.outerHtml)

Or block read:

f = open(fileName, encoding='utf-8')
html = ''
start = ''  # Start of block
end = ''  # End of block
for line in f.readlines():
    if html:  # already inside the block: keep appending until the end marker
        html += line
        if line.find(end) >= 0:
            break
    elif line.find(start) >= 0:  # found the start marker
        html = line
        if line.find(end) >= 0:
            break

doc = SimplifiedDoc(html)
tbl = doc.getElement('table', attr='id', value='issuetable')
print(tbl.outerHtml)
Is it possible to $.post inside a $.get in jquery (Flask + Python)

I'm relatively new to Flask in Python, so please bear with me if my question sounds stupid. I've a GET function like this below:

@app.route('/transactions', methods=['GET','POST'])
def get_transactions():
    ...

I've a text box (#transaction_year) on the html page and I'd need that input to be used within this above function (as a $.post). The above function is called by the following jQuery:

$('#get-transactions-btn').on('click', function (e) {
    $.get('/transactions', function (data) {
        ....
    });
});

The problem is when the button (get-transactions-btn) is clicked, the function (get_transactions) runs as a GET and fetches data. Is it possible to do something like below to run $.post before $.get is executed and still have the value from the input text box (#transaction_year)?

$('#get-transactions-btn').on('click', function (e) {
    $.get('/transactions', function (data) {
        var transaction_year = $("#transaction_year").val();
        $.post("/transactions", { transaction_year: transaction_year })
        ....
    });
});

So that I can use it within my function like below?

@app.route('/transactions', methods=['GET','POST'])
def get_transactions():
    # Store the value from post here
    transaction_year = float(request.form['transaction_year'])
    ...

Thank you and appreciate any feedback.
I've managed to get a workaround for this. I created a separate function to store the $.post value in a global variable input_year.

input_year = 1

@app.route('/transactions_input', methods=['POST'])
def get_transactions_input():
    global input_year
    transaction_year = float(request.form['transaction_year'])
    input_year = transaction_year
    return 'Done'

Updated jQuery to below:

$('#get-transactions-btn').on('click', function (e) {
    var transaction_year = $("#transaction_year").val();
    $.post("/transactions_input", { transaction_year: transaction_year })
    $.get('/transactions', function (data) {
        ....
    });
});

The global variable stores the input box value, and this can be used inside /transactions.
populate combobox editable human_name and line edit phone and email from mariadb

I have a table hr and 1 combobox with a list of hr. I want to show email and phone in hremail_lineEdit and hrphone_lineEdit, but I can only show phone in hrphone_lineEdit.

def hr_name(self):
    self._conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root', db='testhr', charset='utf8')
    self._cur = self._conn.cursor()
    sql_coop4hr = 'select human_name,email,phone from hr'
    count_coop4hr = self._cur.execute(sql_coop4hr)
    res_coop4hr = self._cur.fetchall()
    for row in res_coop4hr:
        un, email, phone = row
        self.hr_name_comboBox.addItem(un, email)

def hr_email(self):
    self.hremail_lineEdit.setText(self.hr_name_comboBox.currentData())  #email
    self.hrphone_lineEdit.setText(self.hr_name_comboBox.currentData())  #phone
There are several options:

Pass email and phone as a tuple (or list) to the userData:

self.hr_name_comboBox.clear()
for row in res_coop4hr:
    un, email, phone = row
    self.hr_name_comboBox.addItem(un, (email, phone))

def hr_email(self):
    email, phone = self.hr_name_comboBox.currentData()
    self.hremail_lineEdit.setText(email)
    self.hrphone_lineEdit.setText(phone)

Create 2 roles associated with each data:

EMAIL_ROLE = Qt.UserRole
PHONE_ROLE = Qt.UserRole + 1

self.hr_name_comboBox.clear()
for i, row in enumerate(res_coop4hr):
    un, email, phone = row
    self.hr_name_comboBox.insertItem(i, un)
    self.hr_name_comboBox.setItemData(i, email, EMAIL_ROLE)
    self.hr_name_comboBox.setItemData(i, phone, PHONE_ROLE)

def hr_email(self):
    index = self.hr_name_comboBox.currentIndex()
    email = self.hr_name_comboBox.itemData(index, EMAIL_ROLE)
    phone = self.hr_name_comboBox.itemData(index, PHONE_ROLE)
    self.hremail_lineEdit.setText(email)
    self.hrphone_lineEdit.setText(phone)
How to implement DBMS_METADATA.GET_DDL in cx_Oracle and python3 and get the ddl of the table?

This is the Oracle command I am using:

query = '''SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_SCHEMA') FROM DUAL;'''
cur.execute(query)

Now, how do I get the DDL of the table using cx_Oracle and python3? Please help; I am unable to extract the DDL.
The following code can be used to fetch the contents of the DDL from dbms_metadata:

import cx_Oracle

conn = cx_Oracle.connect("username/password@hostname/myservice")
cursor = conn.cursor()

def OutputTypeHandler(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize = cursor.arraysize)

cursor.outputtypehandler = OutputTypeHandler
cursor.execute("select dbms_metadata.get_ddl('TABLE', :tableName) from dual",
               tableName="THE_TABLE_NAME")
text, = cursor.fetchone()
print("DDL fetched of length:", len(text))
print(text)

The use of the output type handler is to eliminate the need to process the CLOB. Without it you would need to do str(lob) or lob.read() in order to get at its contents. Either way, however, you are not limited to 4,000 characters.
Slicing a multiindexed column dataframe to obtain a new data frame

import pandas as pd
import string
from random import randint

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
monthyAmounts = ["actual", "budgeted", "difference"]

summary = []
summary.append([randint(-1000, 15000) for x in range(0, len(months) * len(monthyAmounts))])
summary.append([randint(-1000, 15000) for x in range(0, len(months) * len(monthyAmounts))])
summary.append([randint(-1000, 15000) for x in range(0, len(months) * len(monthyAmounts))])

index = pd.Index(['Income', 'Expenses', 'Difference'], name='type')
columns = pd.MultiIndex.from_product([months, monthyAmounts], names=['month', 'category'])
summaryDF = pd.DataFrame(summary, index=index, columns=columns)

budgetMonths = pd.date_range("January, 2018", periods=12, freq='BM')

idx = pd.IndexSlice
budgetDifference = summaryDF.loc['Difference', idx[:, 'budgeted']].cumsum()
budgetActual = summaryDF.loc['Difference', idx[:, 'actual']].cumsum()

What I want is a dataframe containing just the actual & budgeted columns for the Difference row per month, and an additional column containing the months (I need this additional column for graph generation eventually). If I do just:

budgetDifference = pd.DataFrame({'difference': budgetDifference, 'months': budgetMonths})

what I end up with is a dataframe with the difference & month columns.

                difference     months
month category
Jan   budgeted        1097 2018-01-31
Feb   budgeted       11476 2018-02-28
Mar   budgeted       11143 2018-03-30
Apr   budgeted       25082 2018-04-30
May   budgeted       28019 2018-05-31
Jun   budgeted       37164 2018-06-29
Jul   budgeted       36747 2018-07-31
Aug   budgeted       44651 2018-08-31
Sep   budgeted       54283 2018-09-28
Oct   budgeted       62728 2018-10-31
Nov   budgeted       76144 2018-11-30
Dec   budgeted       77781 2018-12-31

However, when I try:

budgetDifference = pd.DataFrame({'difference': budgetDifference, 'actual': budgetActual, 'months': budgetMonths})

I get:

ValueError: array length 12 does not match index length 24

and I am not sure why.
You need to align indices for the series which constitute your dataframe:

res = pd.DataFrame({'difference': budgetDifference,
                    'months': budgetMonths,
                    'actual': pd.Series(budgetActual.values, index=budgetDifference.index)})
print(res)

                difference     months  actual
month category
Jan   budgeted        4057 2018-01-31    1592
Feb   budgeted        4550 2018-02-28    2211
Mar   budgeted        3847 2018-03-30    4096
Apr   budgeted       12970 2018-04-30    9588
May   budgeted       17459 2018-05-31   19623
Jun   budgeted       30884 2018-06-29   32347
Jul   budgeted       35258 2018-07-31   37205
Aug   budgeted       35823 2018-08-31   50234
Sep   budgeted       47599 2018-09-28   57188
Oct   budgeted       61258 2018-10-31   71096
Nov   budgeted       65914 2018-11-30   71904
Dec   budgeted       73814 2018-12-31   77308
Reducing overfitting in CNN by increasing training data size versus augmenting images (preprocessing data) using ImageDataGenerator

Does increasing the size of the training data help in reducing overfitting? Or is it suggested to go for image augmentation (data preprocessing) using the ImageDataGenerator in TensorFlow to skew or rotate images to decrease overfitting? Which method is better to reduce overfitting?
By means of image augmentation you are essentially increasing the size of your training data. If you have data which is different from your existing training data, then adding that data to your training set is good. So in short, both methods are good ways to overcome over-fitting.
SQLalchemy not committing changes when setting role

I'm creating tables using a sqlalchemy engine, but even though my create statements execute without error, the tables don't show up in the database when I try to set the role beforehand.

url = 'postgresql://{}:{}@{}:{}/{}'
url = url.format(user, password, host, port, db)
engine = sqlalchemy.create_engine(url)

# works fine
engine.execute("CREATE TABLE testpublic (id int, val text); \n\nINSERT INTO testpublic VALUES (1,'foo'), (2,'bar'), (3,'baz');")
r = engine.execute("select * from testpublic")
r.fetchall() # returns expected tuples
engine.execute("DROP TABLE testpublic;")

# appears to succeed/does NOT throw any error
engine.execute("SET ROLE read_write; CREATE table testpublic (id int, val text);")

# throws error "relation testpublic does not exist"
engine.execute("select * FROM testpublic")

For context, I am on python 3.6, sqlalchemy version 1.2.17 and postgres 11.1, and the role "read_write" absolutely exists and has all necessary permissions to create a table in public (I have no problem running the exact sequence above in pgAdmin). Does anyone know why this is the case and how to fix it?
The issue here is how sqlalchemy decides to issue a commit after each statement. If a text is passed to engine.execute, sqlalchemy will attempt to determine whether the text is DML or DDL using the following regex. You can find it in the sources here:

AUTOCOMMIT_REGEXP = re.compile(
    r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER)", re.I | re.UNICODE
)

This only detects the words if they're at the start of the text, ignoring any leading whitespace. So, while your first attempt # works fine, the second example fails to recognize that a commit needs to be issued after the statement is executed, because the first word is SET. Instead, sqlalchemy issues a rollback, so it # appears to succeed/does NOT throw any error.

The simplest solution is to manually commit, for example:

engine.execute("SET ROLE read_write; CREATE table testpublic (id int, val text); COMMIT;")

or wrap the sql in text and set autocommit=True, as shown in the documentation:

stmt = text('set role read_write; create table testpublic (id int, val text);').execution_options(autocommit=True)
engine.execute(stmt)
How to efficiently serialize python dict with known schema to binary?

I have a lot of python dicts with a known schema. For example, the schema is defined as a Pyspark StructType like this:

from pyspark.sql.types import *

dict_schema = StructType([
    StructField("upload_time", TimestampType(), True),
    StructField("name", StringType(), True),
    StructField("value", StringType(), True),
])

I want to efficiently serialize each dict object into a byte array. What serialization method will give me the smallest payload? I don't want to use pickle because the payload is very large (it embeds the schema into each serialized object).
You can use the built-in struct module. Simply "pack" the values:

import struct
struct.pack('Q10s20s', time, name, value)

That's assuming time is a 64-bit int, name is at most 10 characters and value is at most 20 characters (note that pack expects bytes, not str, for the s fields). You'll need to tune that. You might also consider storing the strings as null-terminated byte sequences if the names and values do not have consistent lengths (you don't want to waste space on padding).

Another good way is using NumPy, assuming the strings have fairly consistent lengths:

import numpy as np

a = np.empty(1000, [('time', 'u8'), ('name', 'S10'), ('value', 'S20')])
np.save(filename, a)

This will include a "schema" of sorts at the top of the file; you could write the raw array without that schema if you really want to.
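A runnable sketch of the struct approach with a round trip, using the field widths assumed above (8-byte unsigned time, 10-byte name, 20-byte value):

```python
import struct

# Fixed-width record: 8-byte unsigned time, 10-byte name, 20-byte value.
FMT = 'Q10s20s'

def pack_record(upload_time, name, value):
    # struct requires bytes for 's' fields; short strings are zero-padded.
    return struct.pack(FMT, upload_time, name.encode(), value.encode())

def unpack_record(payload):
    upload_time, name, value = struct.unpack(FMT, payload)
    # Strip the zero padding added by pack.
    return upload_time, name.rstrip(b'\0').decode(), value.rstrip(b'\0').decode()

payload = pack_record(1514764800, "sensor_a", "42.5")
print(len(payload))            # 38 bytes per record, no schema included
print(unpack_record(payload))  # (1514764800, 'sensor_a', '42.5')
```

The payload is a fixed 38 bytes per record regardless of content, which is the trade-off the answer mentions: predictable size versus wasted padding for short strings.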
Why it says ValueError for the below program with python3 but not python2

k=['d','e','f']
v=[4,5,6]
h=zip(k,v) #zipping
for i,j in h:
    print(i, ':', j)
(k,v)=zip(*h) #unzipping
print(k)
print(v)

output:

Traceback (most recent call last):
  File "hasht.py", line 6, in <module>
    (k,v)=zip(*h)
ValueError: not enough values to unpack (expected 2, got 0)
zip creates a list in Python 2, so your h is a value that you can inspect at any time. zip creates an iterator in Python 3, so your loop with the print statement exhausts h.Use h = list(zip(k, v)) to get the same behavior in both Python 2 and 3.
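A small sketch of the difference (the materialized list survives the first loop, while the Python 3 iterator is consumed by it):

```python
k = ['d', 'e', 'f']
v = [4, 5, 6]

# Python 3: zip returns a one-shot iterator.
h = zip(k, v)
for pair in h:
    pass                 # first pass consumes the iterator
print(list(h))           # [] -- nothing left, hence the ValueError

# Fix: materialize the pairs once, then reuse as often as needed.
h = list(zip(k, v))
for pair in h:
    pass
k2, v2 = zip(*h)         # unzipping still works after the loop
print(k2, v2)            # ('d', 'e', 'f') (4, 5, 6)
```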
Cannot insert row using Flask and SQLAlchemy

I am doing a CS50 project and the current task is to create a register page. The user fills in the form and presses a button, so the data has to be inserted into my PostgreSQL database. But when I fill the form and press 'Register' I get this error:

StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required for bind parameter u'surname' [SQL: u'INSERT INTO users (name, surrname, nickname, password) VALUES (%(name)s, %(surname)s,%(nickname)s, %(password)s'] [parameters: [{':password': u'', ':name': u'John', ':surname': u'Young', ':nickname': u'yolojohny1'}]] (Background on this error at: http://sqlalche.me/e/cd3x)

books.py

import os
from flask import Flask, session, render_template, request
from flask_session import Session
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)

# Check for environment variable
if not os.getenv("DATABASE_URL"):
    raise RuntimeError("DATABASE_URL is not set")

# Configure session to use filesystem
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

# Set up database
engine = create_engine(os.getenv("DATABASE_URL"))
db = scoped_session(sessionmaker(bind=engine))

@app.route("/")
def index():
    return render_template("register.html")

@app.route('/register', methods=["POST"])
def register():
    "REGISTRATION PROCESS"
    # Get form information
    name = request.form.get("name")
    surname = request.form.get("surname")
    nickname = request.form.get("nickname")
    password = request.form.get("password")
    db.execute("INSERT INTO users (name, surrname, nickname, password) VALUES (:name, :surname,"
               ":nickname, :password",
               {":name": name, ":surname": surname, ":nickname": nickname, ":password": password})
    db.commit()
    return render_template("success.html")

register.html:

{% extends "layout.html" %}

{% block title %}
    New User
{% endblock %}

{% block body %}
    <h1>Register new account</h1>
    <form action="{{ url_for('register') }}" method="post">
        <div class="form-group">
            <input class="form-control" name="name" placeholder="First Name">
        </div>
        <div class="form-group">
            <input class="form-control" name="surname" placeholder="Second Name">
        </div>
        <div class="form-group">
            <input class="form-control" name="nickname" placeholder="Nickname">
        </div>
        <div class="form-group">
            <input type="password" class="form-control" name="password" placeholder="Password">
        </div>
        <div class="formgroup">
            <button class="btn btn-primary">Create account!</button>
        </div>
    </form>
{% endblock %}

database:

CREATE TABLE "users" (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    surrname VARCHAR NOT NULL,
    nickname VARCHAR NOT NULL,
    password VARCHAR NOT NULL
)

What should I do? I googled this error but nothing helps.
you are almost there.db.execute("INSERT INTO users (name, surrname, nickname, password) VALUES (:name, :surname, :nickname, :password", {":name": name, ":surname": surname, ":nickname": nickname, ":password": password})Firstly, this doesn't execute as you are missing the closing parenthesis after your VALUES list. So to get the statement to execute I had to make it:db.execute("INSERT INTO users (name, surrname, nickname, password) VALUES (:name, :surname, :nickname, :password)", {":name": name, ":surname": surname, ":nickname": nickname, ":password": password})I imagine that is just a typo translating info into the questions, but nonetheless.Now, the colon (:), prior to the parameter names in your SQL statement is there to tell SQLAlchemy that that name is a bound parameter. You can read more about it here.. You don't need to include that in the keys of your parameter values dict, and that is why you are seeing the error. SQLAlchemy is looking for a parameter named surname but that doesn't exist in your supplied parameter values, :surname does, but ':surname' != 'surname' and so no value can be found for the parameter.This should get you across the line:db.execute("INSERT INTO users (name, surrname, nickname, password) VALUES (:name, :surname, :nickname, :password)", {"name": name, "surname": surname, "nickname": nickname, "password": password})
ModelMultipleChoiceField django and debug mode i'm trying to implement a ModelMultipleChoiceField in my application, like that: Linkmodel.pyclass Services(models.Model): id = models.AutoField(primary_key=True) type = models.CharField(max_length=300)class Professionals_Services(models.Model): professional = models.ForeignKey(User, on_delete=models.CASCADE) service = models.ForeignKey(Services, on_delete=models.CASCADE)form.pyclass ProfileServicesUpdateForm(forms.ModelForm): service = forms.ModelMultipleChoiceField(required=False, queryset=Services.objects.all()) class Meta: model = Professionals_Services fields = ['service'] def clean(self): # this condition only if the POST data is cleaned, right? cleaned_data = super(ProfileServicesUpdateForm, self).clean() print(cleaned_data.get('service'))view.pyclass EditProfileServicesView(CreateView): model = Professionals_Services form_class = ProfileServicesUpdateForm context_object_name = 'services' template_name = 'accounts/edit-profile.html' @method_decorator(login_required(login_url=reverse_lazy('professionals:login'))) def dispatch(self, request, *args, **kwargs): return super().dispatch(self.request, *args, **kwargs) def post(self, request, *args, **kwargs): form = self.form_class(data=request.POST) if form.is_valid(): services = form.save(commit=False) services.save()html<select class="ui search fluid dropdown" multiple="" name="service" id="id_service"> {% for service in services_list %} <option value="{{ service.id }}">{{ service.type }}</option> {% endfor %}</select>For development i'm using Pycham Professionals(latest version) with docker, when i run the application and i try to make a POST the answer is:Cannot assign "<QuerySet [<Services: Services object (2)>, <Services: Services object (5)>, <Services: Services object (6)>, <Services: Services object (7)>]>": "Professionals_Services.service" must be a "Services" instance.But if i run the application in debug mode and with a breakpoints on the if 
form.is_valid():the application works fineThat's because the validate is equal to Unknown not in debugyou know how to fix?
Your service is a ForeignKey: service = models.ForeignKey(Services, on_delete=models.CASCADE)A ForeignKey means that you select a single element, not multiple ones. You use a ManyToManyField [Django-doc] to select multiple elements:class Professionals_Services(models.Model): professional = models.ForeignKey(User, on_delete=models.CASCADE) service = models.ManyToManyField(Service)You should also not override the post method, and you can make use of the LoginRequiredMixin [Django-doc] to ensure that the user is logged in:from django.contrib.auth.mixins import LoginRequiredMixinclass EditProfileServicesView(LoginRequiredMixin, CreateView): login_url = reverse_lazy('professionals:login') model = Professionals_Services form_class = ProfileServicesUpdateForm context_object_name = 'services' template_name = 'accounts/edit-profile.html' def form_valid(self, form): form.instance.user = self.request.user return super().form_valid(form)In your Form you should also return the cleaned data:class ProfileServicesUpdateForm(forms.ModelForm): service = forms.ModelMultipleChoiceField(required=False, queryset=Services.objects.all()) class Meta: model = Professionals_Services fields = ['service'] def clean(self): # this condition only if the POST data is cleaned, right? cleaned_data = super(ProfileServicesUpdateForm, self).clean() print(cleaned_data.get('service')) return cleaned_dataNote: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.Note: Models in Django are written in PerlCase, not snake_case,so you might want to rename the model from Professionals_Services to ProfessionalService.Note: normally a Django model is given a singular name, so Services instead of Service.
Keeping variable names when exporting Pyomo into a .mps file So, I'm currently working with a Pyomo model with multiple instances that are being solved in parallel. The issue is that solving them takes Pyomo quite a long time (2 to 3 seconds, even though the solving step in Gurobi takes about 0.08 s). I've found that by exporting a Pyomo instance into an .mps file and then giving it to gurobipy I can get roughly a 30% increase in overall speed. The problem comes later, when I want to work with the variables of the solved model: I've noticed that when exporting the original instance from Pyomo into a .mps file, the variable names get lost; they all get named "x" (so, for example, model.Delta, model.Pg, model.Alpha, etc. get turned into x1, x2, ..., x9999 instead of Delta[0], Delta[1], ..., Alpha[99,99]). Is there a way to keep the original variable names when exporting the model?
Managed to solve it!To anyone who might find this useful, i passed a dictionary with "symbolic_solver_labels" as an io_options argument for the method, like this:instance.write(filename = str(es_) + ".mps", io_options = {"symbolic_solver_labels":True})Now my variables are correctly labeled in the .mps file!
How to specify the `--formats` sdist option inside setup.py? I try to create a zipped source python package on a linux distribution without specifying the --formats option to sdist on the command line (using an existing Jenkins pipeline which do not support this option).In the documentation here, it states:(assuming you haven’t specified any sdist options in the setup script or config file), sdist creates the archive of the default format for the current platform. The default format is a gzip’ed tar file (.tar.gz) on Unix, and ZIP file on Windows.But it doesn't say how should you specify sdist options in the setup script?
From the linked documentation previous topic:The basic syntax of the configuration file is simple:[command]option=value...where command is one of the Distutils commands (e.g. build_py, install), and option is one of the options that command supportsand later an example for build_ext --inplace[build_ext]inplace=1That means that you must write into the setup.cfg file:[sdist]formats=zipBeware: untested because I have no available Python2...
A question about the use of parenthesis with arguments def calculate_pythagoras(): pythagoras_list = list() for i in range(1, 101): for j in range(1, 101): c = (i ** 2 + j ** 2) ** 0.5 if (c == int(c)): pythagoras_list.append((i, j, int(c))) return pythagoras_listfor i in calculate_pythagoras(): print(i) pythagoras_list.append((i, j, int(c)))Correct:((i, j, int(c)))Incorrect:(i, j, int(c))Why do I get an error when I remove the outer parenthesis?
def calculate_pythagoras():
    pythagoras_list = []
    for i in range(1, 101):
        for j in range(1, 101):
            c = (i ** 2 + j ** 2) ** 0.5
            if c == int(c):
                # The inner parentheses build the tuple (i, j, int(c));
                # the outer ones are the call to append.
                pythagoras_list.append((i, j, int(c)))
    return pythagoras_list

for i in calculate_pythagoras():
    print(i)

list.append takes exactly one argument. With the extra parentheses you pass it a single tuple; without them, append(i, j, int(c)) passes three separate arguments and raises TypeError: append() takes exactly one argument (3 given).
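A minimal demonstration of the difference (the numbers are arbitrary):

```python
pythagoras_list = []

# Outer parentheses: the method call. Inner parentheses: a 3-tuple.
pythagoras_list.append((3, 4, 5))

# Without the inner parentheses, append receives three arguments
# and raises TypeError, since it accepts exactly one.
try:
    pythagoras_list.append(3, 4, 5)
except TypeError as exc:
    error_message = str(exc)
```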
How to parse a dataframe efficiently, while storing data (specific row, or multiple rows) in others dataframe using a specific pattern? How to parse data on all rows, and use this row to populate other dataframes with data from multiple rows ?I am trying to parse a csv file containing several data entry for training purpose as I am quite new to this technology.My data consist in 10 columns, and hunderds of rows.The first column is filled with a code that is either 10, 50, or 90.Example :Dataframe 1 :0110Power-22090End10Power-29090End10Power-44590End10Power-39050Clotho50Kronus90End10Power-55050Ares50Athena50Artemis50Demeter90EndAnd the list goes on..On one hand I want to be able to read the first cell, and to populate another dataframe directly if this is a code 10.On the other hand, I'd like to populate another dataframe with all the codes 50s, but I want to be able to get the data from the previous code 10, as it hold the type of Power that is used, and populate a new column on this dataframe.The new data frames are supposed to look like this:Dataframe 2 :0110Power-22010Power-29010Power-44510Power-39010Power-550Dataframe 3 :01250ClothoPower-39050KronusPower-39050AresPower-55050AthenaPower-55050ArtemisPower-55050DemeterPower-550So far, I was using iterrows, and I've read everywhere that it was a bad idea.. but i'm struggling implementing another method..In my code I just create two other dataframes, but I don't know yet a way to retrieve data from the previous cell. I would usually use a classic method, but I think it's rather archaic.for index, row in df.iterrows(): if (df.iat[index,0] == '10'): df2 = df2.append(df.loc[index], ignore_index = True) if (df.iat[index,0] == '50'): df3 = df3.append(df.loc[index], ignore_index = True)Any ideas ?(Update)
For df2, it's pretty simple:df2 = df.rename(columns={'Power/Character': 'Power'}) \ .loc[df['Code'] == 10, :]For df3, it's a bit more complex:# Extract power and fill forward valuespower = df.loc[df['Code'] == 10, 'Power/Character'].reindex(df.index).ffill()df3 = df.rename(columns={'Power/Character': 'Character'}) \ .assign(Power=power).loc[lambda x: x['Code'] == 50]Output:>>> df2 Code Power0 10 Power-2202 10 Power-2904 10 Power-4456 10 Power-39010 10 Power-550>>> df3 Code Character Power7 50 Clotho Power-3908 50 Kronus Power-39011 50 Ares Power-55012 50 Athena Power-55013 50 Artemis Power-55014 50 Demeter Power-550
Creating a 'normal distribution' like range in numpy I am trying to 'bin' an array into bins (similar to histogram). I have an input array input_array and a range bins = np.linspace(-200, 200, 200). The overall function looks something like this:def bin(arr): bins = np.linspace(-100, 100, 200) return np.histogram(arr, bins=bins)[0]So, bin([64, 19, 120, 55, 56, 108, 16, 84, 120, 44, 104, 79, 116, 31, 44, 12, 35, 68])would return:array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0])However, I want my bins to be more 'detailed' as I get close to 0... something similar to an indeal normal distribution. As a result, I could have more bins (i.e. short ranges) when I am close to 0 and as I move out towards the range, the bins are bigger. Is it possible?More specifically, rather than having equally wide bins in a range, can I have an array of range where the bins towards the centre are smaller than towards the extremes?I have already looked at answers like this and numpy.random.normal, but something is just not clicking right.
Use the inverse error function to generate the bins. You'll need to scale the bins to get the exact range you wantThis transform works because the inverse error function is flatter around zero than +/- one. from scipy.special import erfinverfinv(np.linspace(-1,1))# returns: array([ -inf, -1.14541135, -0.8853822 , -0.70933273, -0.56893556, -0.44805114, -0.3390617 , -0.23761485, -0.14085661, -0.0466774 , 0.0466774 , 0.14085661, 0.23761485, 0.3390617 , 0.44805114, 0.56893556, 0.70933273, 0.8853822 , 1.14541135, inf])
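One way to do the scaling the answer mentions, mapped onto the question's range: a sketch where the 0.99 endpoint is an arbitrary cutoff chosen only to keep erfinv finite, and the range is assumed symmetric around zero.

```python
import numpy as np
from scipy.special import erfinv

def gaussian_bins(high, n_bins):
    # Symmetric bin edges on [-high, high]: dense near 0, sparse near the ends.
    u = np.linspace(-0.99, 0.99, n_bins + 1)  # avoid erfinv(+/-1) = +/-inf
    edges = erfinv(u)
    return edges / edges[-1] * high           # rescale so edges span [-high, high]

bins = gaussian_bins(100, 20)
counts, _ = np.histogram([64, 19, -3, 2, 55, -56], bins=bins)
widths = np.diff(bins)  # center widths are much smaller than edge widths
```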
TypeError: object of type 'ID3TimeStamp' has no len() I have made this code, to get the year from an mp3, and if i print it, it works, but when i write to text box in my webpage, it gives an error(traceback below), but not always, sometimes the error not show, so i suspect it is from the way the mp3 is tagged:nfo_year = ''audio_filename = 'myfile.mp3'f = mutagen.File(audio_filename)audio = ID3(audio_filename) # path: path to file# Yeartry: nfo_year = audio['TDRC'].text[0] print(nfo_year)except: passtime.sleep(2)logger_internet.info('Writing Year...')AB_author = driver.find_element_by_name('year')AB_author.send_keys(nfo_year)Traceback (most recent call last): File "D:\AB\redacted.py", line 1252, in <module> AB_author.send_keys(nfo_year) File "D:\AB\venv\lib\site-packages\selenium\webdriver\remote\webelement.py", line 478, in send_keys {'text': "".join(keys_to_typing(value)), File "D:\AB\venv\lib\site-packages\selenium\webdriver\common\utils.py", line 150, in keys_to_typing for i in range(len(val)):TypeError: object of type 'ID3TimeStamp' has no len()my question is: an error on the mp3 tag or am i doing something wrong?
nfo_year is a timestamp object, of type ID3TimeStamp. You have to pass strings to AB_author.send_keys. Since print worked, you can try str(nfo_year).
Cant delete/drop columns from multiple files through looping in python I am facing some issue while trying to drop columns from multiple excel files in Python. I get the below error, when I am trying the same code on single file it works, but it doesn't work on multiple files while looping and I don't undersand why the error is [columns ] not found in axis . I am not sure what is the problem with my code. Any help much appreciated.import osimport globimport pandas as pdfrom pathlib import Pathfolder = (r"C:\Users\kc\Documents\Extracted")for file in Path(folder).glob('*.xlsx'): df = pd.read_excel(file) df2 = df.drop(columns=['BarrierFreeAttributes.BarrierFreeAttribute', 'ConsultationHours.ConsultationHoursTimeSpan', 'Location.Coordinates.Latitude_right', 'Location.Coordinates.Longitude_right'], axis=1) df2.to_excel(file.with_suffix('.xlsx'),index = False)error:---------------------------------------------------------------------------KeyError Traceback (most recent call last)<ipython-input-21-5d2d70719121> in <module> 12 cols = cols.map(lambda x: x.replace('.','-')) 13 df.columns = cols---> 14 df2 = df.drop(columns=['BarrierFreeAttributes.BarrierFreeAttribute', 'ConsultationHours.ConsultationHoursTimeSpan', 'Location.Coordinates.Latitude_right', 'Location.Coordinates.Longitude_right'], axis=1) 15 16 ~\Anaconda3\lib\site-packages\pandas\core\frame.py in drop(self, labels, axis, index, columns, level, inplace, errors) 4161 weight 1.0 0.8 4162 """-> 4163 return super().drop( 4164 labels=labels, 4165 axis=axis,~\Anaconda3\lib\site-packages\pandas\core\generic.py in drop(self, labels, axis, index, columns, level, inplace, errors) 3885 for axis, labels in axes.items(): 3886 if labels is not None:-> 3887 obj = obj._drop_axis(labels, axis, level=level, errors=errors) 3888 3889 if inplace:~\Anaconda3\lib\site-packages\pandas\core\generic.py in _drop_axis(self, labels, axis, level, errors) 3919 new_axis = axis.drop(labels, level=level, errors=errors) 3920 else:-> 3921 
new_axis = axis.drop(labels, errors=errors) 3922 result = self.reindex(**{axis_name: new_axis}) 3923 ~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in drop(self, labels, errors) 5280 if mask.any(): 5281 if errors != "ignore":-> 5282 raise KeyError(f"{labels[mask]} not found in axis") 5283 indexer = indexer[~mask] 5284 return self.delete(indexer)KeyError: "['BarrierFreeAttributes.BarrierFreeAttribute'\n 'ConsultationHours.ConsultationHoursTimeSpan'\n 'Location.Coordinates.Latitude_right'\n 'Location.Coordinates.Longitude_right'] not found in axis
It is very likely that an xlsx file in the extracted folder does not have the columns you are trying to drop. Add a check so the remaining files are still processed, and print the name of any file missing the columns:

import pandas as pd
from pathlib import Path

folder = r"C:\Users\kc\Documents\Extracted"
drop_list = ['BarrierFreeAttributes.BarrierFreeAttribute',
             'ConsultationHours.ConsultationHoursTimeSpan',
             'Location.Coordinates.Latitude_right',
             'Location.Coordinates.Longitude_right']

for file_name in Path(folder).glob('*.xlsx'):
    df = pd.read_excel(file_name)
    if all(item in df.columns for item in drop_list):
        df = df.drop(columns=drop_list)
    else:
        print(file_name)
    df.to_excel(file_name.with_suffix('.xlsx'), index=False)

This way you process the rest of your files and also print the suspect files that do not have all the columns you are looking to drop. Alternatively, df.drop(columns=drop_list, errors='ignore') drops whatever is present and silently skips missing labels. Also avoid naming a variable file: it shadowed a builtin in Python 2 and hurts readability.
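The direction of the membership test is easy to get backwards. A quick plain-Python check with illustrative column names:

```python
df_columns = ['A', 'B', 'C']   # stand-in for list(df.columns)
drop_list = ['B', 'C']

# Backwards: "is every DataFrame column in drop_list?" -- fails whenever
# the frame has any extra column, even though 'B' and 'C' are droppable.
backwards = all(item in drop_list for item in df_columns)

# Intended: "is every column we want to drop present in the DataFrame?"
intended = all(item in df_columns for item in drop_list)
```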
Pygame - Sprite Group movement doesnt work I'm currently trying to program an space invaders clone. I created an "Invaders"-Class with several attributes and I created an sprite group for all my enemy invaders.class Invader(pygame.sprite.Sprite): def __init__(self, settings, picture, x, y): super().__init__() self.settings = settings self.x = x self.y = y self.image = pygame.image.load(os.path.join(self.settings.imagepath, picture)).convert_alpha() self.image = pygame.transform.scale(self.image, (63,38)) self.rect = self.image.get_rect() self.rect.center = [self.x, self.y] def update(self): direction_change = False print(direction_change) if self.rect.x > 800: direction_change = True else: direction_change = False if direction_change == False: self.rect.x += 1 if direction_change == True: self.rect.x -= 1With the update function i move the sprite group. But when it moves to a specific point all sprite come together and it looks like this:Is there a way to move the group like a single object?
The moving direction has to be an attribute of the class Invader. Change the direction if the Sprite is at the left or the right of the window:class Invader(pygame.sprite.Sprite): def __init__(self, settings, picture, x, y): super().__init__() self.settings = settings self.x = x self.y = y self.image = pygame.image.load(os.path.join(self.settings.imagepath, picture)).convert_alpha() self.image = pygame.transform.scale(self.image, (63,38)) self.rect = self.image.get_rect() self.rect.center = [self.x, self.y] self.direction = 1 # <--- def update(self): if self.rect.right >= 800: self.direction = -1 if self.rect.left <= 0: self.direction = 1 self.rect.x += self.direction
Getting points in convex hull I have two overlapping sets of points T and B.I want to return all points from T that are within the convex hull of BI compute the convex hulls as followsfrom scipy.spatial import Convexhullimport numpy as npT=np.asarray(T)B=np.asarray(B)Thull = ConvexHull(T)Bhull = ConvexHull(B)How do I do the spatial query?
Here is an example of what you want using the function defined in the other question I posted in the comments:from scipy.spatial import Delaunayimport numpy as npimport matplotlib.pyplot as pltdef in_hull(p, hull): """ Test if points in `p` are in `hull` `p` should be a `NxK` coordinates of `N` points in `K` dimensions `hull` is either a scipy.spatial.Delaunay object or the `MxK` array of the coordinates of `M` points in `K`dimensions for which Delaunay triangulation will be computed """ if not isinstance(hull,Delaunay): hull = Delaunay(hull) return hull.find_simplex(p)>=0T = np.random.rand(30,2)B = T + np.array([[0.4, 0] for i in range(30)])plt.plot(T[:,0], T[:,1], 'o')plt.plot(B[:,0], B[:,1], 'o')new_points = T[in_hull(T,B)]plt.plot(new_points[:,0], new_points[:,1], 'x', markersize=8)This finds all points of T in the hull of B and saves them in new_points. I also plot it, so you see the result
How can I read a message (in HTML format) from gmail gmail-api using python (v3.7)? I tried to follow the orientation followed in the link below but I did not succeed.https://developers.google.com/gmail/api/v1/reference/users/messages/getCan someone help me by "being very specific" in what I should do to read the body message (in HTML format)?I already have the messagemessage contentmessage content typeThank you very much!
I ended up answering my own question... see below for what I did to read a message using Gmail's API. In some situations the best answer comes from a good night's rest, a bit of persistence, and a careful reading of the documentation.[Image with the code]Thank you very much @A.Wolf!
Streamlit crashes when it runs a turtle drawing more than once IntroI'm building a drawing app using the Streamlit library as a frontend and the Turtle library as the drawing engine.IssueStreamlit crashes and throw the following message when the drawing is invoked more than once:Exception ignored in: <function Image.__del__ at 0x0000017A47EF3558>Traceback (most recent call last): File "c:\users\johnsmith\anaconda3\lib\tkinter\__init__.py", line 3507, in __del__ self.tk.call('image', 'delete', self.name)RuntimeError: main thread is not in main loopTcl_AsyncDelete: async handler deleted by the wrong threadI want the user to be able to change the inputs and re-run the app as many times as they want without crashes.CodeFrontend:# frontend.pyimport streamlit as stfrom backend import *st.title("Turtle App")title = st.text_input("Canvas Title", value="My Canvas")width = st.number_input("Canvas Width", value=500)height = st.number_input("Canvas Height", value=500)length = st.number_input("Square Length", value=200)clicked = st.button("Paint")if clicked: canvas_builder(title, width, height, length)Backend:# backend.pyimport turtledef canvas_builder(title, canvas_width, canvas_height, square_length): CANVAS_COLOR = "red" PEN_COLOR = "black" scr = turtle.Screen() scr.screensize(canvas_width, canvas_height) scr.title(title) scr.bgcolor(CANVAS_COLOR) turtle.setworldcoordinates(0, 0, canvas_width, canvas_height) t = turtle.Turtle() t.color(PEN_COLOR) t.begin_fill() for i in range(4): t.forward(square_length) t.left(90) t.end_fill() turtle.done()Re-productionAll the file in the same folderFrom Conda's prompt in the same folder, run:streamlit run ./st.pyOpen the app in the browser as indicated by the shellPress the "Paint" button at the bottom of the UIClose the Turtle windowPress the "Paint" button againCheck the prompt of the streamlit app for the errorNotesIt seems to be an issue with tkinter processes. 
I tried to get hold of the root/main one and kill it, but it didn't work.Ideally, I would like to embed the the Turtle window inside the streamlit app as a plot.I don't want to replace streamlit and/or turtle.
I solved the problem by running turtle in a child process. New frontend.py code:import multiprocessingimport streamlit as stfrom backend import *st.title("Turtle App")title = st.text_input("Canvas Title", value="My Canvas")width = st.number_input("Canvas Width", value=500)height = st.number_input("Canvas Height", value=500)length = st.number_input("Square Length", value=200)clicked = st.button("Paint")t = multiprocessing.Process(target=canvas_builder, args=(title, width, height, length,))if clicked: t.start()
Remove font's shadow in Sankey Is it possible to remove the white shadow of the font in the following sankey diagram? import plotly.graph_objects as go fig = go.Figure(go.Sankey( arrangement = "snap", node = { "label": ["A", "B", "C", "D", "E", "F"], "x": [0.2, 0.1, 0.5, 0.7, 0.3, 0.5], "y": [0.7, 0.5, 0.2, 0.4, 0.2, 0.3], 'pad':10}, # 10 Pixels link = { "source": [0, 0, 1, 2, 5, 4, 3, 5], "target": [5, 3, 4, 3, 0, 2, 2, 3], "value": [1, 2, 1, 1, 1, 1, 1, 2]})) fig.show()
It certainly seems not to be possible. You can edit some text attributes through fig['data'][0]['textfont'], which looks like:sankey.Textfont({'color': '#2a3f5f', 'family': '"Open Sans", verdana, arial, sans-serif', 'size': 10})As you can see, sankey.Textfont has no attribute that can edit the properties of the "shadow". I've tried setting other values for 'family', but the shadow persists no matter what. Another peculiar detail is that the color can't be changed directly either; only 'size' and 'family' take effect.
python regular expression to get a token from the page I am trying to automate a few things in Python instead of manually doing the same thing again and again. Currently, I am stuck find the 'csrfmiddlewaretoken' from a site called dnsdumpster.com. I have written a regular expression for it, but it returns the whole tag that contains the 'csrfmiddlewaretoken'. I am only interested in the token alone (which is inside the 'value' parameters of the HTML tag). This is my code:import requestsimport reheaders = { 'Host' : 'dnsdumpster.com', 'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:80.0) Gecko/20100101 Firefox/80.0', 'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Language' : 'en-US,en;q=0.5', 'Accept-Encoding' : 'gzip, deflate', 'DNT' : '1', 'Upgrade-Insecure-Requests' : '1', 'Connection' : 'close'}proxies = { 'http' : 'http://127.0.0.1:8080'}with requests.Session() as s: url = 'https://dnsdumpster.com' response = s.get(url, headers=headers, proxies=proxies) response.encoding = 'utf-8' # Optional: requests infers this internally body = response.text csrfmiddlewaretoken = re.search('name="csrfmiddlewaretoken" value="[0-9a-zA-z]+', body) print(csrfmiddlewaretoken) # Embarassing way of getting the token print(body[2417:2481])I need help with the regular expression to get the token value alone.
You could use a capture group in the regex by adding parentheses:

match = re.search(r'name="csrfmiddlewaretoken" value="([0-9a-zA-Z]+)', body)
if match:
    csrfmiddlewaretoken = match.group(1)
else:
    ...  # deal with it

(Note the character class should be [0-9a-zA-Z]; the original [A-z] also matches the punctuation characters that sit between 'Z' and 'a' in ASCII.) The risk is that minor changes in the returned page could break your search. HTML attributes are unordered, and the page could switch them around while still technically not changing at all.
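A self-contained version of the capture, with a made-up token value:

```python
import re

body = '<input type="hidden" name="csrfmiddlewaretoken" value="Abc123xyz">'

# The parentheses form group 1, so group(1) is just the token value
match = re.search(r'name="csrfmiddlewaretoken" value="([0-9a-zA-Z]+)', body)
token = match.group(1) if match else None
```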
Get missing rows in dataframe I have a dataframe like this:Object PeriodA 202101A 202102A 202103A 202105A 202107B 202102B 202103B 202104B 202106Now I would like for each object to iterate and get the missing period between the min and the max of the object, and get something like:Object MissingValuesA 202104 / 202106B 202105To make the problem easier the object min is minimum 202101 and object max is maximum 202108.I am a bit lost on how I can do it.Can you help me?Thanks
You can convert the Period strings to Pandas period by dt.to_period(). Then group by Object and aggregate to get the missing periods for each group of Object. Finally, convert the list of missing periods to the desired layout, as follows:df['Period'] = pd.to_datetime(df['Period'], format='%Y%m').dt.to_period('M')df_out = df.groupby('Object')['Period'].agg(lambda x: sorted(list(set(pd.period_range(x.min(), x.max()).tolist()) - set(x))))df_out = df_out.apply(lambda x: ' / '.join(map(str, x))).str.replace('-', '').reset_index()Result:print(df_out) Object Period0 A 202104 / 2021061 B 202105EditIf you want the final layout of Period as a list of string e.g. ['202104','202106'] instead of '202104' / '202106', you can use:df['Period'] = pd.to_datetime(df['Period'], format='%Y%m').dt.to_period('M')df_out = df.groupby('Object')['Period'].agg(lambda x: sorted(list(set(pd.period_range(x.min(), x.max()).tolist()) - set(x))))df_out = df_out.apply(lambda x: [str(y).replace('-', '') for y in x]).reset_index()Result:print(df_out) Object Period0 A [202104, 202106]1 B [202105]
How to update certain dictionary key value in python I have a below dictionary and want to update certain value of that dictionaryfor example :my_dict = {'name':'Raju','surname':'XYZ','age':13,'dateofjoin':'12-Jul-2017'}The value for dateofjoin : 12-Jul-2017 need to update to 15-Aug-2017 and age : 13 to 18So my expected output is :my_dict = {'name':'Raju','surname':'XYZ','age':18,'dateofjoin':'15-Aug-2017'}
my_dict['dateofjoin'] = '15-Aug-2017'my_dict['age']=18This will directly update your existing dictionary. Once you run this, your output for my_dict will be{'name': 'Raju', 'surname': 'XYZ', 'age': 18, 'dateofjoin': '15-Aug-2017'}
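To change several keys in one call, dict.update also works (same dictionary as the question):

```python
my_dict = {'name': 'Raju', 'surname': 'XYZ', 'age': 13, 'dateofjoin': '12-Jul-2017'}

# update overwrites existing keys and leaves the rest untouched
my_dict.update({'age': 18, 'dateofjoin': '15-Aug-2017'})
```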
Getting most common element from list in Python I was able to use this piece of code to find the most common value if there was only one, however, it wouldn't work if there were multiple. I want it so that if there are multiple, it would just return None.numbers = [5, 3, 5, 3, 2, 6, 7]my_dict = {}for i in numbers: if i in my_dict: my_dict[i] += 1 else: my_dict[i] = 1print(max(my_dict, key=my_dict.get))
Sorting the list only tells you about the largest value, not the most frequent one. Use collections.Counter and compare the two highest counts; if they tie, there is more than one most common value:

from collections import Counter

counts = Counter(numbers).most_common(2)
if len(counts) > 1 and counts[0][1] == counts[1][1]:
    most_common = None  # several values share the top count
else:
    most_common = counts[0][0]
print(most_common)
Why can't I print a random number from 1 to a variable? I was making a python program where you type in an input, and it finds a random number from 1 to that input. It won't work for me for whatever reason.I tried using something like thisimport randoma = input('pick a number: ')print(random.randint(1, a))input("Press enter to exit")but it won't work for me, because as soon as it starts to print it, cmd prompt just closes.does anyone know how to fix this?
random.randint() takes two int arguments, while input() returns a str. So all you need to do is convert a into an int:

import random

a = input('pick a number: ')
print(random.randint(1, int(a)))
input("Press enter to exit")
Numpy.putmask with images I have an image converted to a ndarray with RGBA values. Suppose it's 50 x 50 x 4.I want to replace all the pixels with values array([255, 255, 255, 255]) for array([0, 0, 0, 0]). So:from numpy import *from PIL import Imagedef test(mask): mask = array(mask) find = array([255, 255, 255, 255]) replace = array([0, 0, 0, 0]) return putmask(mask, mask != find, replace)mask = Image.open('test.png')test(mask)What am I doing wrong? That gives me a ValueError: putmask: mask and data must be the same size. Yet if I change the arrays to numbers (find = 255, replace = 0) it works.
A more concise way to do this isimg = Image.open('test.png')a = numpy.array(img)a[(a == 255).all(axis=-1)] = 0img2 = Image.fromarray(a, mode='RGBA')More generally, if the items of find and repl are not all the same, you can also dofind = [1, 2, 3, 4]repl = [5, 6, 7, 8]a[(a == find).all(axis=-1)] = repl
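A minimal check of the row-wise match on a tiny array (no PIL needed):

```python
import numpy as np

a = np.array([[[255, 255, 255, 255],
               [ 10,  20,  30,  40]]], dtype=np.uint8)

# Pixels equal to [255, 255, 255, 255] across the last axis become zeros;
# everything else is left alone.
a[(a == 255).all(axis=-1)] = 0
```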
Find out which features are in my components after PCA I performed a PCA of my data. The data looks like the following:dfOut[60]: Drd1_exp1 Drd1_exp2 Drd1_exp3 ... M7_pppp M7_puuu Brain_Region0 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGr3 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGr4 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGr ... ... ... ... ... ... ...150475 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGr150478 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGr150479 -1.0 -1.0 -1.0 ... 0.0 0.0 BaGrI know used every row until 'Brain Regions' as features. I also standardized them.These features are different experiments, that give me information about a 3D image of a brain.I'll show you my code:from sklearn.preprocessing import StandardScalerx = df.loc[:, listend1].valuesy= df.loc[:, 'Brain_Region'].valuesx = StandardScaler().fit_transform(x)from sklearn.decomposition import PCApca = PCA(n_components=2)principalComponents = pca.fit_transform(x)principalDf = pd.DataFrame(data = principalComponents , columns = ['principal component 1', 'principal component 2'])finalDf = pd.concat([principalDf, df[['Brain_Region']]], axis = 1)I then plotted finalDF:My question now is: How can I find out, which features contribute to my Components? How can I find out, to interpret the data?
You can use pca.components_. It has shape (n_components, n_features), in your case (2, n_features), and represents the directions of maximum variance in the data. The magnitude of each entry in an eigenvector reflects the importance of the corresponding feature (higher magnitude means higher importance). You will have something like this:

[[0.522 0.26 0.58 0.56],
 [0.37 0.92 0.02 0.06]]

implying that for the first component (first row) the first, third and last features have a higher importance, while for the second component only the second feature is important.

Have a look at the sklearn PCA attributes description or at this post.

By the way, you can also use a Random Forest Classifier including the labels; after training you can explore the feature importances, e.g. this post.
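As a small illustration (using the made-up loadings matrix from above rather than the questioner's real data), you can rank features per component by absolute loading with np.argsort:

```python
import numpy as np

# Hypothetical loadings as returned by pca.components_: shape (n_components, n_features)
components = np.array([
    [0.522, 0.26, 0.58, 0.56],
    [0.37, 0.92, 0.02, 0.06],
])
# Hypothetical feature names for the four columns
feature_names = ["Drd1_exp1", "Drd1_exp2", "M7_pppp", "M7_puuu"]

# For each component, sort features by absolute loading, largest first
for i, row in enumerate(components):
    order = np.argsort(np.abs(row))[::-1]
    ranked = [feature_names[j] for j in order]
    print(f"PC{i + 1}: {ranked}")
```

The sign of a loading tells you the direction of the relationship; the absolute value is what matters for ranking importance.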
Limit the number of threads numpy 1.19.2 uses

Context

My computer mysteriously shuts down when all my cores go to 100% in htop. I am running a simple program using sklearn.impute.KNNImputer from scikit-learn. I have read that I need to limit the number of threads that numpy uses, because scikit-learn depends on numpy.

Question

How do I limit the number of threads numpy uses?

Things I have tried:

I tried the following in my python script (see bottom of question):

import os
os.environ["OMP_NUM_THREADS"] = "1"         # export OMP_NUM_THREADS=1
os.environ["OPENBLAS_NUM_THREADS"] = "1"    # export OPENBLAS_NUM_THREADS=1
os.environ["MKL_NUM_THREADS"] = "1"         # export MKL_NUM_THREADS=1
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"  # export VECLIB_MAXIMUM_THREADS=1
os.environ["NUMEXPR_NUM_THREADS"] = "1"     # export NUMEXPR_NUM_THREADS=1

Information

My numpy version is 1.19.2. Running np.__config__.show() returns:

blas_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/conor/anaconda3/envs/py383/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/conor/anaconda3/envs/py383/include']
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/conor/anaconda3/envs/py383/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/conor/anaconda3/envs/py383/include']
lapack_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/conor/anaconda3/envs/py383/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/conor/anaconda3/envs/py383/include']
lapack_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/conor/anaconda3/envs/py383/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/conor/anaconda3/envs/py383/include']

Here is a tiny program to run while looking at htop:

# knn imputation transform for the horse colic dataset
from numpy import isnan
from pandas import read_csv
from sklearn.impute import KNNImputer
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
X = dataframe.values
# define imputer
imputer = KNNImputer()
# fit on the dataset
imputer.fit(X)
# transform the dataset
Xtrans = imputer.transform(X)
import mkl
mkl.set_num_threads(1)

successfully restricts numpy 1.19.2 on Ubuntu 18.04 to one CPU.
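A related point worth checking (my assumption about why the environment-variable approach may not have taken effect for the questioner): the *_NUM_THREADS variables are read when numpy first loads its BLAS backend, so they must be set before the first import of numpy anywhere in the process, e.g.:

```python
import os

# These must be set before numpy (or anything that imports numpy,
# such as pandas or scikit-learn) is imported for the first time.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np  # only now import numpy

# The BLAS backend sees the limits set above, so this matrix
# multiply should stay on a single core.
a = np.random.rand(200, 200)
b = a @ a
print(b.shape)
```

If some other module in the script imports numpy earlier, the limits are silently ignored, which matches the symptom described in the question.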
bot.get_me() doesn't work and raises an error

I can manually interact with the bot through the URL. For example, when I send a request to api.telegram.org/bot<token>/getMe, the bot's basic info is returned. I even get correct results using the requests library in the python shell, but when I try bot.get_me() in the python shell it doesn't work and says this:

Traceback (most recent call last):
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 140, in _new_conn
    conn = connection.create_connection(
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\connection.py", line 83, in create_connection
    raise err
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\connection.py", line 73, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 614, in urlopen
    httplib_response = self._make_request(conn, method, url,
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 360, in _make_request
    self._validate_conn(conn)
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 857, in _validate_conn
    super(HTTPSConnectionPool, self)._validate_conn(conn)
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 289, in _validate_conn
    conn.connect()
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 284, in connect
    conn = self._new_conn()
  File "C:\Users\YM\AppData\Local\Programs\Python\Python38-32\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 144, in _new_conn
    raise ConnectTimeoutError(
telegram.vendor.ptb_urllib3.urllib3.exceptions.ConnectTimeoutError: (<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x024257F0>, 'Connection to api.telegram.org timed out. (connect timeout=5.0)')
Looks like you're having problems with your internet connection, i.e. the request could not be finished within the timeout of 5 seconds. Keep in mind that a lot of problems can happen unexpectedly in networking. In fact, python-telegram-bot has a wiki page dedicated to that topic. Of course you could first try to simply increase the timeout, e.g. by passing timeout=<some value greater than 5> to get_me.
How to add separate id to class and subclass in python

I have two classes: a Child class which inherits from an Ancestor class. My goal is to add an id attribute for each instance without overriding it. Code example:

class Ancestor(tensorflow.Module):
    _id = 0

    def __init__(self, some_list=None):
        super(Ancestor, self).__init__()
        self.id = Ancestor._id + 1
        Ancestor._id += 1

class Child(Ancestor):
    _id = 0

    def __init__(self):
        self.id = Child._id + 1
        Child._id += 1
        some_list = [char + str(self.id) for char in ["a", "b", "c"]]
        super(Child, self).__init__(some_list)

a1 = Ancestor()
a2 = Ancestor()
c1 = Child()
c2 = Child()
print(a1.id, a2.id, c1.id, c2.id)
>>> 1 2 3 4

I want this setup to print:

1 2 1 2

How can this be achieved?

Edited: "some_list" in the Child constructor is simply there to emphasize that Child must receive its id prior to calling Ancestor's super method.
Python 3 users: see the bottom for a better solution using __init_subclass__.

Don't hard-code the class name. Use type(self) to get access to the appropriate class for each instance.

class Ancestor(object):
    _id = 0

    def __init__(self):
        type(self)._id += 1
        self.id = self._id

class Child(Ancestor):
    _id = 0

    def __init__(self):
        super(Child, self).__init__()
        # Actual class-specific stuff here.
        # If there is none, you don't even need to override __init__

a1 = Ancestor()
a2 = Ancestor()
c1 = Child()
c2 = Child()
assert (a1.id, a2.id, c1.id, c2.id) == (1, 2, 1, 2)

Or, perhaps cleaner, make the attribute a generator rather than a value to be updated. Note that since the count instance maintains its own state, there is no need to assign anything back to the class attribute, and you can thus access _id via the instance rather than its type.

from itertools import count

class Ancestor(object):
    _id = count(1)

    def __init__(self, **kwargs):
        super(Ancestor, self).__init__(**kwargs)
        self.id = next(self._id)

class Child(Ancestor):
    _id = count(1)

Python 3

You can use __init_subclass__ to ensure that every descendant of Ancestor has its own ID generator without having to add it explicitly in the class definition.

from itertools import count

class CountedInstance:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._id = count(1)

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.id = next(self._id)

class Ancestor(CountedInstance):
    pass

class Child(Ancestor):
    pass
How do I print a variable which is a multiline string in the body of an email in python

I have this piece of code:

l = ["Jargon", "Hello", "This", "Is", "Great"]
result = "\n".join(l[1:])
print result

output:

Hello
This
Is
Great

And I am trying to print this to the body of an email as shown below, but I am getting the text as an attachment rather than as the body. Can anyone please tell me if I am missing something here?

msg = MIMEMultipart()
msg["From"] = emailfrom
msg["To"] = emailto
ctype, encoding = mimetypes.guess_type(fileToSend)
if ctype is None or encoding is not None:
    ctype = "application/octet-stream"
maintype, subtype = ctype.split("/", 1)
fp = open("file.csv", 'r')
attachment = MIMEBase(maintype, subtype)
attachment.set_payload(fp.read())
fp.close()
encoders.encode_base64(attachment)
attachment.add_header("Content-Disposition", "attachment", filename='file.csv')
msg.attach(attachment)
msg.attach(MIMEText(result, "plain"))
server = smtplib.SMTP("localhost")
server.sendmail(emailfrom, emailto, msg.as_string())
server.quit()
When using yagmail, it works as intended.

import yagmail

yag = yagmail.SMTP(user=conf_yag['user'], password=conf_yag['password'])

l = ["Jargon", "Hello", "This", "Is", "Great"]
result = "\n".join(l[1:])
yag.send(emailto, 'test from yagmail', result)

# including an attachment
yag.send(emailto, subject='test from yagmail', contents=result,
         attachments='somefile.txt')

where conf_yag stores your credentials, emailto is the receiver email address, and 'somefile.txt' is the file attachment.
Installing python package - error: Microsoft Visual C++ 14.0 or greater is required

I am trying to install one package - cvxpy - for python on Windows 10 and keep on getting errors related to C++ 14.0. I have followed similar questions and the answers posted:

I have updated to VS 2022 and the corresponding build tools.
I have installed MSVC v143 and Windows 10 SDK 10.0.19041.0 from the build tools and have rebooted the machine.
I have tried to launch the developer shell from the build tools command prompt and ran it as well.
Since the errors are related to lapack and blas with numpy, I have installed a conda environment where numpy is built with lapack and blas.

Nothing seems to work. The errors I am getting are:

pip install cvxpy
Collecting cvxpy
  Using cached cvxpy-1.1.17-cp39-cp39-win_amd64.whl (852 kB)
Requirement already satisfied: scipy>=1.1.0 in c:\users\kusari\appdata\local\programs\python\python39\lib\site-packages (from cvxpy) (1.7.1)
Collecting scs>=1.1.6
  Using cached scs-2.1.4.tar.gz (6.6 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting ecos>=2
  Using cached ecos-2.0.7.post1.tar.gz (126 kB)
  Preparing metadata (setup.py) ... done
Collecting osqp>=0.4.1
  Using cached osqp-0.6.2.post0-cp39-cp39-win_amd64.whl (162 kB)
Requirement already satisfied: numpy>=1.15 in c:\users\kusari\appdata\local\programs\python\python39\lib\site-packages (from cvxpy) (1.19.4)
Collecting qdldl
  Using cached qdldl-0.1.5.post0-cp39-cp39-win_amd64.whl (74 kB)
Building wheels for collected packages: ecos, scs
  Building wheel for ecos (setup.py) ...
error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Users\kusari\AppData\Local\Programs\Python\Python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\kusari\\AppData\\Local\\Temp\\2\\pip-install-rd0d64ho\\ecos_08dde659623a4480b5813511b29f1209\\setup.py'"'"'; __file__='"'"'C:\\Users\\kusari\\AppData\\Local\\Temp\\2\\pip-install-rd0d64ho\\ecos_08dde659623a4480b5813511b29f1209\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\kusari\AppData\Local\Temp\2\pip-wheel-ogl08c5t'
       cwd: C:\Users\kusari\AppData\Local\Temp\2\pip-install-rd0d64ho\ecos_08dde659623a4480b5813511b29f1209\
  Complete output (12 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.9
  creating build\lib.win-amd64-3.9\ecos
  copying src\ecos\ecos.py -> build\lib.win-amd64-3.9\ecos
  copying src\ecos\version.py -> build\lib.win-amd64-3.9\ecos
  copying src\ecos\__init__.py -> build\lib.win-amd64-3.9\ecos
  running build_ext
  building '_ecos' extension
  error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  ----------------------------------------
  ERROR: Failed building wheel for ecos
  Running setup.py clean for ecos
  Building wheel for scs (pyproject.toml) ...
error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Users\kusari\AppData\Local\Programs\Python\Python39\python.exe' 'C:\Users\kusari\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\kusari\AppData\Local\Temp\2\tmp9qqjrwh5'
       cwd: C:\Users\kusari\AppData\Local\Temp\2\pip-install-rd0d64ho\scs_9b8a5ab0fbd7442bab5358ce5e781772
  Complete output (82 lines):
  Namespace(scs=False, gpu=False, float32=False, extraverbose=False, gpu_atrans=True, int32=False, blas64=False)
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.9
  creating build\lib.win-amd64-3.9\scs
  copying src\__init__.py -> build\lib.win-amd64-3.9\scs
  running build_ext
  blas_mkl_info:
    NOT AVAILABLE
  blis_info:
    NOT AVAILABLE
  openblas_info:
      library_dirs = ['D:\\a\\1\\s\\numpy\\build\\openblas_info']
      libraries = ['openblas_info']
      language = f77
      define_macros = [('HAVE_CBLAS', None)]
  blas_opt_info:
      library_dirs = ['D:\\a\\1\\s\\numpy\\build\\openblas_info']
      libraries = ['openblas_info']
      language = f77
      define_macros = [('HAVE_CBLAS', None)]
  lapack_mkl_info:
    NOT AVAILABLE
  openblas_lapack_info:
      library_dirs = ['D:\\a\\1\\s\\numpy\\build\\openblas_lapack_info']
      libraries = ['openblas_lapack_info']
      language = f77
      define_macros = [('HAVE_CBLAS', None)]
  lapack_opt_info:
      library_dirs = ['D:\\a\\1\\s\\numpy\\build\\openblas_lapack_info']
      libraries = ['openblas_lapack_info']
      language = f77
      define_macros = [('HAVE_CBLAS', None)]
  Could not locate executable g77
  Could not locate executable f77
  Could not locate executable ifort
  Could not locate executable ifl
  Could not locate executable f90
  Could not locate executable DF
  Could not locate executable efl
  Could not locate executable gfortran
  Could not locate executable f95
  Could not locate executable g95
  Could not locate executable efort
  Could not locate executable efc
  Could not locate executable flang
  don't know how to compile Fortran code on platform
'nt'
  C:\Users\kusari\AppData\Local\Temp\2\pip-build-env-rade5sel\overlay\Lib\site-packages\numpy\distutils\system_info.py:1914: UserWarning:
      Optimized (vendor) Blas libraries are not found.
      Falls back to netlib Blas library which has worse performance.
      A better performance should be easily gained by switching Blas library.
    if self._calc_info(blas):
  C:\Users\kusari\AppData\Local\Temp\2\pip-build-env-rade5sel\overlay\Lib\site-packages\numpy\distutils\system_info.py:1914: UserWarning:
      Blas (http://www.netlib.org/blas/) libraries not found.
      Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable.
    if self._calc_info(blas):
  C:\Users\kusari\AppData\Local\Temp\2\pip-build-env-rade5sel\overlay\Lib\site-packages\numpy\distutils\system_info.py:1914: UserWarning:
      Blas (http://www.netlib.org/blas/) sources not found.
      Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable.
    if self._calc_info(blas):
  C:\Users\kusari\AppData\Local\Temp\2\pip-build-env-rade5sel\overlay\Lib\site-packages\numpy\distutils\system_info.py:1748: UserWarning:
      Lapack (http://www.netlib.org/lapack/) libraries not found.
      Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable.
    return getattr(self, '_calc_info_{}'.format(name))()
  C:\Users\kusari\AppData\Local\Temp\2\pip-build-env-rade5sel\overlay\Lib\site-packages\numpy\distutils\system_info.py:1748: UserWarning:
      Lapack (http://www.netlib.org/lapack/) sources not found.
      Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable.
    return getattr(self, '_calc_info_{}'.format(name))()
  {}
  {}
  error: Microsoft Visual C++ 14.0 or greater is required.
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  ----------------------------------------
  ERROR: Failed building wheel for scs
Failed to build ecos scs
ERROR: Could not build wheels for scs, which is required to install pyproject.toml-based projects

I am unable to install any packages which rely on MSVC internally. Thanks for your help.
I was recently struggling with this issue myself, where I had MSVC installed but I could not get python to detect it. This is what solved it for me: clear the registry key that is mentioned in this SO thread: https://stackoverflow.com/a/64389979/15379178

Some additional info: in short, there was an invalid path/file that my CMD was trying to load (unrelated to MSVC/Python) that was left there by an anaconda installation I had some time ago, giving me a "The system cannot find the path specified" error, and it was causing python to think that I didn't have MSVC installed. It took me quite some time to realise that the error was unrelated to MSVC/Python (I'd done quite a few reinstalls, but to no avail).

Since you mention that you actually have conda installed, this exact solution may not work for you. In that case, I suggest looking for similar things (invalid paths) that may be causing python to be indirectly affected like it was in my case. Hope this helps!
Deploy pytorch .pth model in a python script

After successfully training my yolact model using a custom dataset, I'm happy with the inference results outputted by eval.py using this command from the anaconda terminal:

python eval.py --trained_model=./weights/yolact_plus_resnet50_abrasion_39_10000.pth --config=yolact_resnet_abrasion_config --score_threshold=0.8 --top_k=15 --images=./images:output_images

Now I want to run this inference from my own python script instead of using the anaconda terminal. I want to be able to get the bounding boxes of detections made on webcam frames obtained by the code below. Any ideas?

import cv2

src = cv2.VideoCapture(0)
while True:
    ret, frame = src.read()
    cv2.imshow('frame', frame)
    key = cv2.waitKey(5)
    if key == 27:
        break

The eval.py code is here at the Yolact repository: https://github.com/dbolya/yolact/blob/master/eval.py
I will just write the pseudocode here for you.

Step 1: Try loading the model using the lines starting from here and ending here.
Step 2: Use this function for evaluation. Instead of cv2.imread, you just need to send your frame.
Step 3: Follow this function to get the bounding boxes, especially this line. Just trace back the 't' variable and you will get your bounding boxes.

Hope it helps. Let me know if you need more clarification.
Beautification of the print output of the console

I have output in the console. Unfortunately the texts are of different lengths, so the columns look shifted. Is there an option that aligns the values below each other, no matter how many characters are in front of them, so that the output looks the way I want it to look? I would prefer not to use another library for this.

print(65 * '_')
print('algorithm\t\t\tsil\t\tdbs')

results = ['Agglomerative Clustering', 0.8665, 0.4200]
formatter_result = "{:9s}\t\t{:.4f}\t{:.4f}"
print(formatter_result.format(*results))

results = ['k-Means', 0.9865, 0.1200]
formatter_result = "{:9s}\t\t{:.4f}\t{:.4f}"
print(formatter_result.format(*results))
print(65 * '_')

What I have:

_________________________________________________________________
algorithm            sil        dbs
Agglomerative Clustering        0.8665    0.4200
k-Means        0.9865    0.1200
_________________________________________________________________

What I want:

_________________________________________________________________
algorithm                   sil       dbs
Agglomerative Clustering    0.8665    0.4200
k-Means                     0.9865    0.1200
_________________________________________________________________

I looked at Printing Lists as Tabular Data and tried it, but it doesn't work for me:

print(65 * '_')
heading = ['algorithm', 'sil', 'dbs']
result1 = ['Agglomerative Clustering', 0.8665, 0.4200]
result2 = ['k-Means', 0.9865, 0.1200]
ab = np.array([heading, result1, result2])
for row in ab:
    print("{: >20} {: >20} {: >20}".format(*row))
print(65 * '_')

_________________________________________________________________
           algorithm                  sil                  dbs
Agglomerative Clustering               0.8665                 0.42
             k-Means               0.9865                 0.12
_________________________________________________________________
Hi, the problem is the \t characters: a tab only jumps to the next tab stop, so the columns drift depending on how long the text in front of them is. You can remove tabs one by one until you find a combination you prefer, e.g.:

print('for example, this is a tab \t\t\t there is going to be space between them')
print('for example, there is no tab here and it is going to be next to each other')
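A more robust approach (my suggestion, going beyond the original answer) is to drop tabs entirely and left-align each column to a fixed width with str.format, sizing the first column to the longest algorithm name:

```python
rows = [
    ['Agglomerative Clustering', 0.8665, 0.4200],
    ['k-Means', 0.9865, 0.1200],
]

# Width of the first column: longest name plus some padding
width = max(len(r[0]) for r in rows) + 4

line = '_' * 65
print(line)
print('{:<{w}}{:<10}{:<10}'.format('algorithm', 'sil', 'dbs', w=width))
for name, sil, dbs in rows:
    print('{:<{w}}{:<10.4f}{:<10.4f}'.format(name, sil, dbs, w=width))
print(line)
```

Because the width is computed from the data, the numbers always line up under their headers regardless of how long the names are.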
How to attach file buffer to Django Mail Queue

I have been trying to attach an xls buffer or a pdf buffer to a Django mail queue, but I couldn't. I've tried using FileResponse or HttpResponse and converting to a Django File object, but that fails too. This is what I tried:

new_message = MailerMessage()
new_message.subject = "Test"
new_message.to_address = "test@gmail.com"
new_message.from_address = "noreply@gmail.com"
file_attachment = FileResponse('file_content_buffer', content_type='application/vnd.ms-excel')
new_message.add_attachment(file_attachment)
new_message.save()

I got an error: 'FileResponse' object has no attribute 'file'. I know that the attachment method expects a File object, as the documentation says: https://django-mail-queue.readthedocs.io/en/latest/usage.html#attaching-files. Any idea? Thanks.

Solution: a little hacky, found by watching what the mailqueue code does. What do you think?

from mailqueue.models import MailerMessage, Attachment

new_message = MailerMessage()
new_message.subject = "Subject Mail"
new_message.to_address = 'test@gmail.com'
new_message.from_address = 'noreply@gmail.com'
new_message.save()

attachment = Attachment()
attachment.file_attachment.save('file.xlsx', ContentFile(excel_buffer), save=False)
attachment.email = new_message
attachment.original_filename = 'file.xlsx'
try:
    attachment.save()
except Exception as e:
    attachment.file_attachment.delete()
You don't need to use FileResponse to attach a buffer. Suppose you have a BytesIO object named buffer; you just need to convert it to a bytes-like object and attach that:

content = buffer.read()
msg.attach('File Name.xls', content)
msg.send()

(where msg is your message instance).
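The same idea works with the standard-library email module (shown here purely as an illustration of the buffer-to-bytes step, not as django-mail-queue itself): drain the buffer into bytes, then hand them to add_attachment.

```python
import io
from email.message import EmailMessage

# Pretend this buffer holds a generated spreadsheet
buffer = io.BytesIO(b"fake xls bytes")

msg = EmailMessage()
msg["Subject"] = "Report"
msg["From"] = "noreply@example.com"
msg["To"] = "test@example.com"
msg.set_content("Please find the report attached.")

# add_attachment wants bytes, so drain the buffer first
content = buffer.getvalue()
msg.add_attachment(content,
                   maintype="application",
                   subtype="vnd.ms-excel",
                   filename="report.xls")

print(msg.get_content_type())  # multipart/mixed
```

The key point carries over to any mail library: the attachment API wants the buffer's bytes, not the buffer (or response) object wrapping them.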
How can I check for a digit or character in an SQL table column?

I feel like this question has to have been answered somewhere, but I read through tons of posts and tried many variations of solutions and I can't get this to work. I have a database table called schedule and I simply want to select the rows that contain a specific digit in the days column using python. The days column contains some combination of numbers from 1 to 7, each representing a day of the week. It could be a single day or a combination of days ("1", "135", "1234567", etc.). Given a specific day index (e.g. 3), I want to select all the rows from schedule that have a 3 in the days column.

I made the column type VARCHAR, because I figured it would be easier to check a string than an integer value. This is my current statement, where day is the index (e.g. 3) of the selected day:

"SELECT * FROM schedule WHERE CONTAINS(days, '{}')".format(day)

I continually get this error:

You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(days, '3')'

What can I fix to make this work?
Do you use MySQL? MySQL doesn't have a 'CONTAINS' operator; use the 'LIKE' operator instead:

"SELECT * FROM schedule WHERE days LIKE '%{}%'".format(day)

% is the wildcard character that matches any sequence of characters.
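For illustration, here is the same pattern against an in-memory SQLite database (SQLite supports LIKE too, and the table/data below are made up). Note that passing the pattern as a bound parameter instead of formatting it into the SQL string avoids SQL injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE schedule (name TEXT, days TEXT)")
cur.executemany("INSERT INTO schedule VALUES (?, ?)",
                [("gym", "135"), ("class", "24"), ("work", "1234567")])

day = 3
# Build the LIKE pattern in Python, then pass it as a bound parameter
cur.execute("SELECT name FROM schedule WHERE days LIKE ?",
            (f"%{day}%",))
print(cur.fetchall())  # [('gym',), ('work',)]
```

With a MySQL driver such as mysql-connector or PyMySQL the idea is the same, only the placeholder style changes (%s instead of ?).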
Python Pandas Dataframe Datetime Range

Here is my code block:

import pandas as pd
import datetime as dt

first_day = dt.date(todays_year, todays_month, 1)
print(first_day)
> 2021-02-01
print(type(first_day))
> <class 'datetime.date'>

My code runs successfully as below:

df = pd.read_excel('AllServiceActivities.xlsx',
                   sheet_name='All Service Activities',
                   usecols=[7, 12, 13]).query(
    f'Resources.str.contains("{name} {surname}")', engine='python')

Yet I also want to do something like this ("Scheduled Start" is my column name):

df = pd.read_excel('AllServiceActivities.xlsx',
                   sheet_name='All Service Activities',
                   usecols=[7, 12, 13]).query(
    f'Scheduled Start >= {first_day}', engine='python')

As you can guess, it does not work. There are solutions such as Select DataFrame rows between two dates, but I want to use the query method because I don't want to pass all of the irrelevant data.

Edit (in order to generate a test):

dtr = [dt.datetime(2021,1,27,12,0), dt.datetime(2021,2,3,10,0), dt.datetime(2021,1,25,9,0),
       dt.datetime(2021,1,15,7,59), dt.datetime(2021,1,13,10,59), dt.datetime(2021,1,12,13,59),
       dt.datetime(2021,1,11,13,59), dt.datetime(2021,2,2,9,29), dt.datetime(2021,1,20,7,59),
       dt.datetime(2021,1,19,10,59), dt.datetime(2021,2,1,10,0), dt.datetime(2021,1,19,7,59),
       dt.datetime(2021,1,29,7,59), dt.datetime(2021,1,28,13,0), dt.datetime(2021,1,28,10,59),
       dt.datetime(2021,1,27,19,30), dt.datetime(2021,1,27,13,30), dt.datetime(2021,1,18,17,30),
       dt.datetime(2021,1,19,9,0), dt.datetime(2021,1,18,13,0), dt.datetime(2021,2,1,14,19),
       dt.datetime(2021,1,29,14,30), dt.datetime(2021,1,14,13,0), dt.datetime(2021,1,8,13,0),
       dt.datetime(2021,1,26,10,59), dt.datetime(2021,1,25,10,0), dt.datetime(2021,1,23,16,0),
       dt.datetime(2021,1,21,10,0), dt.datetime(2021,1,18,10,59), dt.datetime(2021,1,11,13,30),
       dt.datetime(2021,1,20,22,0), dt.datetime(2021,1,20,21,0), dt.datetime(2021,1,22,19,59),
       dt.datetime(2021,1,12,13,59), dt.datetime(2021,1,21,13,59), dt.datetime(2021,1,20,10,30),
       dt.datetime(2021,1,19,16,59), dt.datetime(2021,1,19,10,0), dt.datetime(2021,1,14,9,29),
       dt.datetime(2021,1,19,8,53), dt.datetime(2021,1,18,10,59), dt.datetime(2021,1,13,16,0),
       dt.datetime(2021,1,13,15,0), dt.datetime(2021,1,12,13,59), dt.datetime(2021,1,11,10,0),
       dt.datetime(2021,1,8,9,0), dt.datetime(2021,1,7,13,0), dt.datetime(2021,1,6,13,59),
       dt.datetime(2021,1,5,12,0), dt.datetime(2021,1,10,0,0), dt.datetime(2020,12,8,13,0),
       dt.datetime(2021,1,7,11,10), dt.datetime(2021,1,6,8,12), dt.datetime(2021,1,5,10,0),
       dt.datetime(2021,1,5,15,15), dt.datetime(2021,1,4,7,59)]

df1 = pd.DataFrame(dtr, columns=['Scheduled Start'])
df2 = df1.query("'Scheduled Start' >= @first_day")

Thanks!
Without a reproducible example it's hard to know for sure, but try this. It uses the @ character for referencing Python variables, and backticks around the column name because it contains a space (single quotes would make query() compare against the string literal 'Scheduled Start' instead of the column):

df = pd.read_excel('AllServiceActivities.xlsx',
                   sheet_name='All Service Activities',
                   usecols=[7, 12, 13]) \
       .query('`Scheduled Start` >= @first_day')
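A small self-contained demonstration (synthetic data standing in for the Excel file, and a pd.Timestamp to make the datetime comparison unambiguous):

```python
import datetime as dt
import pandas as pd

first_day = pd.Timestamp(2021, 2, 1)
df = pd.DataFrame({
    "Scheduled Start": [dt.datetime(2021, 1, 27, 12, 0),
                        dt.datetime(2021, 2, 3, 10, 0),
                        dt.datetime(2021, 2, 1, 10, 0)],
})

# Backticks let query() reference a column whose name contains a space;
# @first_day pulls in the local Python variable.
recent = df.query("`Scheduled Start` >= @first_day")
print(recent)
```

Backtick-quoting of column names in query() requires pandas 0.25 or newer.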
Anyone knows how to code this polynomial into python?

Can anyone help me write this polynomial in python, please? I've tried my best, but it's too hard.

P/s: sorry for my bad grammar, I'm from Vietnam.
Assuming you simply want to be able to get the y result for either one of those equations, you can just do the following:

import math

def y1(x):
    return 4 * (x * x + 10 * x * math.sqrt(x) + 3 * x + 1)

def y2(x):
    return (math.sin(math.pi * x * x) + math.sqrt(x * x + 1)) / (math.exp(2 * x) + math.cos(math.pi / 4 * x))

If you want to evaluate y1 or y2 for a given x, just call y1(x), for example:

print(y1(10))
print(y2(10))

If you want to be able to plot those equations in python, try using the python turtle module to do so.
Numpy element-wise addition with multiple arrays

I'd like to know if there is a more efficient/pythonic way to add multiple (2D) numpy arrays than:

def sum_multiple_arrays(list_of_arrays):
    a = np.zeros(shape=list_of_arrays[0].shape)  # initialize array of 0s
    for array in list_of_arrays:
        a += array
    return a

Ps: I am aware of np.add(), but it works only with 2 arrays.
np.sum(list_of_arrays, axis=0) should work. Or np.add.reduce(list_of_arrays).
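A quick sanity check of both forms (a small sketch with made-up arrays):

```python
import numpy as np

arrays = [np.ones((2, 3)), 2 * np.ones((2, 3)), 3 * np.ones((2, 3))]

# np.sum stacks the list along a new leading axis, then sums it away
total = np.sum(arrays, axis=0)

# np.add.reduce applies np.add repeatedly along axis 0
same = np.add.reduce(arrays)

print(total)
# [[6. 6. 6.]
#  [6. 6. 6.]]
assert np.array_equal(total, same)
```

Both avoid the explicit Python loop of the original function and keep the accumulation inside numpy.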
pyspark: count number of rows written

When I do

df: DataFrame = ...
df.write.parquet('some://location/')

can I track and report (for monitoring) the number of rows that was just written to some://location?

df.write.parquet('some://location/')
# I imagine something like:
spark_session.someWeirdApi().mostRecentOperation().number_of_rows_written
After doing some digging I found a way to do it: you can register a QueryExecutionListener (beware, this is annotated @DeveloperApi in the source) via py4j's callbacks, but you need to start the callback server and stop the gateway manually at the end of the run of your application.

This is inspired by a post in the cloudera community; I had to port it to a more recent spark version (this uses spark 3.0.1, the answer suggested over there uses the deprecated SQLContext) and to pyspark (using a py4j callback).

import numpy as np
import pandas as pd
from pyspark.sql import SparkSession, DataFrame


class Listener:
    def onSuccess(self, funcName, qe, durationNs):
        print("success", funcName, durationNs, qe.executedPlan().metrics())
        print("rows", qe.executedPlan().metrics().get("numOutputRows").value())
        print("files", qe.executedPlan().metrics().get("numFiles").value())
        print("bytes", qe.executedPlan().metrics().get("numOutputBytes").value())

    def onFailure(self, funcName, qe, exception):
        print("failure", funcName, exception, qe.executedPlan().metrics())

    class Java:
        implements = ["org.apache.spark.sql.util.QueryExecutionListener"]


def run():
    spark: SparkSession = SparkSession.builder.getOrCreate()
    df: DataFrame = spark.createDataFrame(
        pd.DataFrame(np.random.randn(20, 3), columns=["foo", "bar", "qux"]))

    gateway = spark.sparkContext._gateway
    gateway.start_callback_server()
    listener = Listener()
    spark._jsparkSession.listenerManager().register(listener)

    df.write.parquet("/tmp/file.parquet", mode='overwrite')

    spark._jsparkSession.listenerManager().unregister(listener)
    spark.stop()
    spark.sparkContext.stop()
    gateway.shutdown()


if __name__ == '__main__':
    run()
Python AES encryption program is returning Memory Error

So I am following a tutorial on AES implementation with python. It is a project which demanded implementing AES. The following code works fine on small files, but when I tried it on a 1 GB file, this error occurred:

  File "Desktop\AES\encrypting.py", line 73, in <module>
    encrypted = f.encrypt(encoded)
  File "\Python\Python38-32\lib\site-packages\cryptography\fernet.py", line 52, in encrypt
    return self._encrypt_from_parts(data, current_time, iv)
  File "\Python\Python38-32\lib\site-packages\cryptography\fernet.py", line 58, in _encrypt_from_parts
    padded_data = padder.update(data) + padder.finalize()
MemoryError

from cryptography.fernet import Fernet
# Fernet uses 128-bit AES in CBC mode.
import base64
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# The above modules are imported to aid us in implementing AES encryption.
# AES is a symmetric encryption method in which the key used to encrypt
# the data is the same one used to decrypt it, UNLIKE RSA where we have
# two different keys.

# Since we don't want to save keys to a file every time, we will create a
# password. We can take the password as input or assign it to a variable.
password = input("Enter your password. Make sure it is strong enough: ")
print("\n")
text_file = input("Enter the name of the text file you want to encrypt: ")
print("\n")

salt = b'o\x10\xce\xee\xefGE=\xc4\xfe`\xd6=\xd6\xad\xde5\x0f\xa1\xdf\xa0!\x8e[\xab'  # created using os.urandom(25)
# Salts are additional data used to protect data which might be similar.
# For instance, two users might have the same password. To safeguard against
# this we use a salt, which works as SHA256(salt + password). This way the
# salted values will differ for the two stored passwords and it will be
# computationally difficult for an attacker to retrieve the password.

password22 = password.encode()  # encoding the password

kdf = PBKDF2HMAC(  # Password-Based Key Derivation Function 2
    algorithm=hashes.SHA256(),
    length=32,
    salt=salt,
    iterations=100000,
    backend=default_backend())
key = base64.urlsafe_b64encode(kdf.derive(password22))
print('Following is the key generated based on your password \n')
print(key)

# This writes the key to a file, because it is not possible for everyone to
# remember long keys and the key is crucial for decryption.
file = open('encrypt.enc', 'wb')
file.write(key)
file.close()

# The text file which will be opened for encryption; all the text contained
# inside this file will be encrypted.
file = open(text_file, 'rb')
data = file.read()
encoded = data

# A new object of the Fernet class is created
f = Fernet(key)
encrypted = f.encrypt(encoded)
print('\n')
print('The encrypted message is as below \n\n')
print(encrypted)
print('\n\n')

key2 = input("Would you like to decrypt the encrypted message: y/n ")
key2 = input("Enter the name of the encryption file (it was created in the same directory where your code was executed, under the name encrypt.enc): ")
file = open('encrypt.enc', 'r')
key = file.read()
file.close()

f2 = Fernet(key)
print(key)
decrypted = f2.decrypt(encrypted)
print('The decrypted message is as below \n')
print(decrypted)
k = input("The above message is in bytes; would you like to convert it into a string: y/n ? ")
if k == 'y':
    print(decrypted.decode())
else:
    print("THANKS FOR USING OUR PROGRAM")

The plan is to encrypt a hard drive so that only an encrypted file remains and everything else is removed. Any suggestions as to how this program could encrypt the whole F drive along with its contents would be very helpful. Also, I am still struggling to understand this code very well, so an explanation would be a help as well.
Fernet buffers all output before returning to prevent misuse before the data is verified. However, this approach is not suitable for encryption of large files without additional framing.If you want to encrypt large files cryptography currently has no high level API for that, but you can use the streaming GCM API. This API lives in hazmat because it allows users to misuse it in a variety of ways: you could reuse a (key, nonce) pair, you could (easily) start processing data before validating the tag when decrypting, etc.
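If you do need to encrypt files too large to buffer, a minimal sketch of the chunked approach with the hazmat-layer AES-GCM API looks roughly like this (the function names, chunk size, and the buffered decrypt side are all illustrative; real code would frame chunk boundaries, stream the decryption as well, and must never reuse a (key, nonce) pair):

```python
import os
from io import BytesIO
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK = 64 * 1024  # process the file 64 KiB at a time, constant memory

def encrypt_stream(src, dst, key):
    # A fresh 12-byte nonce per file; reusing a (key, nonce) pair breaks GCM.
    nonce = os.urandom(12)
    enc = Cipher(algorithms.AES(key), modes.GCM(nonce),
                 backend=default_backend()).encryptor()
    dst.write(nonce)
    while True:
        chunk = src.read(CHUNK)
        if not chunk:
            break
        dst.write(enc.update(chunk))  # unlike Fernet, output is streamed
    enc.finalize()
    dst.write(enc.tag)  # 16-byte authentication tag appended at the end

def decrypt_stream(src, key):
    # Simplified: buffers the whole ciphertext to peel off nonce and tag.
    data = src.read()
    nonce, ct, tag = data[:12], data[12:-16], data[-16:]
    dec = Cipher(algorithms.AES(key), modes.GCM(nonce, tag),
                 backend=default_backend()).decryptor()
    return dec.update(ct) + dec.finalize()  # finalize() verifies the tag

key = os.urandom(32)  # AES-256
out = BytesIO()
payload = b"some large payload" * 1000
encrypt_stream(BytesIO(payload), out, key)
out.seek(0)
assert decrypt_stream(out, key) == payload
```

The PBKDF2 key derivation from the question could still produce the 32-byte key here; just skip the base64 step, since this API wants raw key bytes rather than a Fernet key.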
How to set the date format on wx.adv.DatePickerCtrl I am using the wxPython wx.adv.DatePickerCtrl and it presents the dates as "mm dd yyyy". I want "dd mm yyyy". How can I do this? I can see nothing in the docsif I use "date" on the command line (nix), I getSat 2 Jul 14:15:03 BST 2022import wximport wx.advimport datetimeclass MainFrame(wx.Frame): def __init__(self, *args, **kwargs): super().__init__(None, *args, **kwargs) self.Title = 'Date format' self.panel = MainPanel(self) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(self.panel) self.SetSizer(sizer) self.Center() self.Show()class MainPanel(wx.Panel): def __init__(self, parent, *args, **kwargs): super().__init__(parent, *args, **kwargs) date_picker = wx.adv.DatePickerCtrl(self) date_picker.SetValue(datetime.date.today()) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(date_picker) self.SetSizer(sizer)if __name__ == '__main__': wx_app = wx.App() MainFrame() wx_app.MainLoop()
Since wxPython aims at a truly native user interface, the control uses the user's system date format (on Windows, for instance, Settings, Time & Language, Regional format). To change what the control shows, change the operating system's regional date settings rather than looking for an API on the control itself.
Create table error in MYSQL database using Python I wanted to create a database of my data which have been stored in files .For this I used file handling to read data from the stored files and input into the table of my database. But it shows error given below.Is it the correct way to switch data storage option from files to database.The error occured is given below -:Traceback (most recent call last): File "C:/Users/sarth/AppData/Roaming/JetBrains/PyCharmCE2020.1/scratches/scratch_2.py", line 61, in <module> indatabase() File "C:/Users/sarth/AppData/Roaming/JetBrains/PyCharmCE2020.1/scratches/scratch_2.py", line 54, in indatabase cur.execute ("SELECT * FROM PETROL") File "C:\Users\sarth\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mysql\connector\cursor.py", line 566, in execute self._handle_result(self._connection.cmd_query(stmt)) File "C:\Users\sarth\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mysql\connector\connection.py", line 537, in cmd_query result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query)) File "C:\Users\sarth\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mysql\connector\connection.py", line 436, in _handle_result raise errors.get_exception(packet)mysql.connector.errors.ProgrammingError: 1146 (42S02): Table 'pump.petrol' doesn't existFailed to create table in MySQL: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AMOUNT FLOAT ,QUANTITY FLOAT ,Year VARCHAR(10) ,Date DATE' at line 1This is the code which reads data from files and then inputs in the database.import pickledef indatabase() : # FOR PETROL import mysql.connector try : conn = mysql.connector.connect (host='localhost' ,database='PUMP' ,user='asdad' , password='11234567') cur = conn.cursor ( ) cur.execute ("DROP TABLE IF EXISTS PETROL") cur.execute ("CREATE TABLE PETROL ( id INT AUTO_INCREMENT PRIMARY KEY , AMOUNT FLOAT ,QUANTITY FLOAT 
,Year VARCHAR(10) ,Date DATE") fp1 = open ("D:/Python/petrol/pdate/pdate.txt" , "rb+") while True: try : pdate = pickle.load (fp1) cur.execute ("INSERT INTO PETROL Date VALUES(?)" , (pdate)) except EOFError : fp1.close() fp3 = open ("D:/Python/petrol/pyear/pyear.txt" , "rb+") while True: try : pyear = pickle.load (fp3) cur.execute ("INSET INTO PETROL Year VALUES(?)" , (pyear)) except EOFError : fp3.close() fp4=open ("D:/Python/petrol/petrolamt/petrolamt.txt" , "rb+") while True: try : pamt = pickle.load (fp4) cur.execute ("INSERT INTO PETROL Amount VALUES(?)" , (pamt)) except EOFError : fp4.close() fp5 = open ("D:/Python/petrol/petrolqty/petrolqty.txt" , "rb+") while True: try : pqty = pickle.load (fp5) cur.execute ("INSERT INTO PETROL Quantity VALUES(?)" , (pqty)) except EOFError : fp5.close() conn.commit ( ) except mysql.connector.Error as error : print ("Failed to create table in MySQL: {}".format (error)) finally : if (conn.is_connected ( )) : cur.execute ("SELECT * FROM PETROL") for i in cur : print (i) cur.close ( ) conn.close ( ) print ("MySQL connection is closed")indatabase()
On this line:

CREATE TABLE PETROL ...

you are missing the closing parenthesis after Date DATE.
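For reference, a sketch of the statement with the parenthesis restored (column names as in the question; here it is just built as a string so the balance can be checked without a database):

```python
# The CREATE TABLE from the question, with the closing parenthesis
# added after "Date DATE".
create_petrol = (
    "CREATE TABLE PETROL ("
    " id INT AUTO_INCREMENT PRIMARY KEY,"
    " AMOUNT FLOAT,"
    " QUANTITY FLOAT,"
    " Year VARCHAR(10),"
    " Date DATE"
    ")"
)
# Quick sanity check that the parentheses now balance.
assert create_petrol.count("(") == create_petrol.count(")")
```

Note also that mysql.connector uses %s placeholders rather than ?, and the INSERT statements need a column list, e.g. cur.execute("INSERT INTO PETROL (Date) VALUES (%s)", (pdate,)).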
Scraping data from the tag using python I am looking for a way to scrape data from here to a list. The data I want to extract is inrangeSelector -> series -> dataIt is a collection of the price of a specific item at a certain time. I need to get rid of all the javascript code except for the data. I will then try to use this data for plotting and calculations.I am new to web-scraping and I am looking for a simple one-time solution. What would be the best way to approach this problem?document.addEventListener('DOMContentLoaded', function () { var myChart = Highcharts.stockChart('stocks-container', { rangeSelector: { selected: 1 }, yAxis: [{ labels: { align: 'left' }, height: '80%', resize: { enabled: true } }, { labels: { align: 'left' }, top: '80%', height: '20%', offset: 0 }], plotOptions: { column: { stacking: 'normal' } }, series: [ { name: 'Unit Price (Buy)', data: JSON.parse("[[1585902517017,187893.6],[1585906117013,193975.7],[1585909717026,189253.9],[1585913317001,195890.9],[1585916917027,197659.8],[1585920516999,201482.1],[1585924117021,198212.5],[1585927716997,208305.0],[1585929517008,207305.0],[1585933117021,193561.7],[1585936716979,199070.6],[1585938517019,195450.9],[1585942117009,195527.4],[1585945717007,195877.6],
You can parse the data with the re/json modules.

For example:

import re
import json
import requests

url = 'https://stonks.gg/products/search?input=Superior%20Fragment'
html_data = requests.get(url).text

d1 = json.loads(re.search(r'Unit Price \(Buy\).*?(\[\[.*?\]\])', html_data, flags=re.S).group(1))
d2 = json.loads(re.search(r'Unit Price \(Sell\).*?(\[\[.*?\]\])', html_data, flags=re.S).group(1))
d3 = json.loads(re.search(r'Instant Buy Volume.*?(\[\[.*?\]\])', html_data, flags=re.S).group(1))
d4 = json.loads(re.search(r'Instant Sell Volume.*?(\[\[.*?\]\])', html_data, flags=re.S).group(1))

print(d1)
print(d2)
print(d3)
print(d4)

Prints:

[[1585902517017, 187893.6], [1585906117013, 193975.7], [1585909717026, 189253.9], [1585913317001, 195890.9], [1585916917027, 197659.8], [1585920516999, 201482.1], [1585924117021, 198212.5], [1585927716997, 208305.0], [1585929517008, 207305.0], [1585933117021, 193561.7], [1585936716979, 199070.6], [1585938517019, 195450.9], [1585942117009, 195527.4], [1585945717007, 195877.6], [1585949317016, 198097.6], [1585952917006, 200590.3], [1585956517023, 198363.7], [1585958317074, 193681.3], [1585961917009, 199628.0], [1585967317017, 197546.9], [1585969117024, 195719.5], [1585972716979, 198053.2], [1585974516979, 195370.3], [1585976317029, 194257.0], [1585979917012, 195980.4], [1585981717045, 199915.4], [1585985316979, 199097.0], [1585987117024, 199425.4], [1585990717024, 198317.1], [1585994316979, 207382.3], [1585996117030, 199845.9], [1585999717009, 200711.5], ...

and so on.
How to swap items in the list at any point? l = [ 1 ,2 ,3, 4, 5 ,6 , 7,8, 9,10,11,12, 13,14,15,16, 17,18,19,20, 21,22,23,24 ]When swapping with next line is done at the middle.Intended Output: l = [ 1 ,2 ,7,8, 5 ,6 ,3,4, 9,10,15,16, 13,14,11,12, 17,18,23,24, 21,22,19,20 ]Working code: n = len(l) #length of listc = 4 # column lengthh =int(c/2) #middle crossover point for i in range(int(c/2) , n+1, int(2*c) ): l[i:i+h], l[i+c:i+(c+h)] = l[i+c:i+(c+h)],l[i:i+h]print (l)Now my code works only when crossover point is middle. I want to scale it to any crossover point . How do I do that ? For ex. if the crossover point is 2nd element , output should be:l = [ 1 ,6,7,8, 5 ,2,3,4, 9,14,15,16, 13,10,11,12, 17,22,23,24, 21,18,19,20 ]Also note that the length of column can be anything , in this example it is 4.
Here is one way to do that:

a = int(input())  # Swap index
cl = 4            # Amount of columns
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
l2 = l.copy()
for i, v in enumerate(l):
    if i in range(a, len(l), cl*2):
        l2[i] = l[i+cl]
        l2[i+cl] = l[i]
        l2[i+1] = l[i+cl+1]
        l2[i+cl+1] = l[i+1]
print(l2)

Input:

0

Output:

[5, 6, 3, 4, 1, 2, 7, 8, 13, 14, 11, 12, 9, 10, 15, 16, 21, 22, 19, 20, 17, 18, 23, 24]

Input:

1

Output:

[1, 6, 7, 4, 5, 2, 3, 8, 9, 14, 15, 12, 13, 10, 11, 16, 17, 22, 23, 20, 21, 18, 19, 24]

Input:

2

Output:

[1, 2, 7, 8, 5, 6, 3, 4, 9, 10, 15, 16, 13, 14, 11, 12, 17, 18, 23, 24, 21, 22, 19, 20]
Guess Counting Control I'm starting my studies in Python and I was assigned the Task to write code for a guessing game in which I have to control the total tries the player will have. I've described the functions, they're working (I believe...haha) but I can't make to "reset" the game when a wrong guess is input...I wrote this:guess_count = []count_control = 1def check_guess(letter,guess): if guess.isalpha() == False: print("Invalid!") return False elif guess.lower() < letter: print("Low") return False elif guess.lower() > letter: print("High") return False elif guess.lower() == letter: print("Correct!") return True else: print("anything")def letter_guess(guess): check_guess ('a',guess) while len(guess_count) <= 3: if check_guess == True: return True elif check_guess == False: guess_count.append(count_control) guess = input("Try again \n")letter_guess(input("test: "))UPDATE: I rewrote the code after some insights from other users and readings and came up with this:class Game:number_of_attempts = 3no_more_attempts = "Game Over"def attempt_down(self): #This will work as the counter of remaining lives. self.number_of_attempts -= 1 print('Remaining Lives:',self.number_of_attempts)def check_guess(self,letter): """ Requires letter - a letter that has to be guessed guess - a input from the user with the guessed letter """ while self.number_of_attempts > 0: guess = input ("Guess the letter: ") if guess.isalpha() == False: print("Invalid!") elif guess.lower() < letter: self.attempt_down() print("Low") print("Try Again!") elif guess.lower() > letter: self.attempt_down() print("High") print("Try Again!") elif guess.lower() == letter: print("Correct!") return True print (self.no_more_attempts) return False game = Game()""" This is used to run the game. Just insert the letter that has to be guessed."""teste1 = game.check_guess('g')teste2 = game.check_guess('r')
The rub is that you are tracking the state of the game in the global variables guess_count and count_control. This is an example of why Python and other languages provide classes and objects:

class Game:
    def __init__(self):
        self.guess_count = []
        self.count_control = 1

    @staticmethod
    def check_guess(letter, guess):
        if guess.isalpha() == False:
            print("Invalid!")
            return False
        elif guess.lower() < letter:
            print("Low")
            return False
        elif guess.lower() > letter:
            print("High")
            return False
        elif guess.lower() == letter:
            print("Correct!")
            return True
        else:
            print("anything")

    def letter_guess(self, guess):
        self.check_guess('a', guess)
        while len(self.guess_count) <= 3:
            if self.check_guess('a', guess) == True:
                return True
            elif self.check_guess('a', guess) == False:
                self.guess_count.append(self.count_control)
                guess = input("Try again \n")


game = Game()
game.letter_guess(input("test: "))

game = Game()
game.letter_guess(input("test: "))
How does Python compare two lists of unequal length? I am aware of the following:[1,2,3]<[1,2,4] is True because Python does an element-wise comparison from left to right and 3 < 4[1,2,3]<[1,3,4] is True because 2 < 3 so Python never even bothers to compare 3 and 4My question is how does Python's behavior change when I compare two lists of unequal length? [1,2,3]<[1,2,3,0] is True[1,2,3]<[1,2,3,4] is TrueThis led me to believe that the longer list is always greater than the shorter list. But then:[1,2,3]<[0,0,0,0] is FalseCan someone please explain how these comparisons are being done by Python? My hunch is that element-wise comparisons are first attempted and only if the first n elements are the same in both lists (where n is the number of elements in the shorter list) does Python consider the longer list to be greater. If someone could kindly confirm this or shed some light on the reason for this behavior, I'd be grateful.
The standard comparisons (<, <=, >, >=, ==, !=) work the same way for lists, tuples, and strings: the sequences are compared element by element, left to right.If the lengths differ, the element-wise comparison runs only up to the last element of the shorter sequenceIf the sequences are equal over that common prefix, their lengths are compared, i.e. the shorter one is smaller
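A few assertions illustrating both rules:

```python
# Element-wise comparison stops at the first differing pair:
assert [1, 2, 3] < [1, 2, 4]      # 3 < 4
assert [1, 2, 3] < [1, 3, 4]      # 2 < 3, later elements never compared

# When one list is a prefix of the other, the shorter list is smaller:
assert [1, 2, 3] < [1, 2, 3, 0]
assert [1, 2, 3] < [1, 2, 3, 4]

# But a differing element is decided before length is even considered:
assert not [1, 2, 3] < [0, 0, 0, 0]   # 1 > 0, so the comparison is False
```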
Python: How to get all of the local variables defined in another module Is there a way to get a reference to the local variables defined in a different module?for example, I have two files: framework.py and user_code.py:framework.py:from kivy.app import Appclass BASE_A: passclass MyApp(App): def on_start(self): '''Here I'd like to get a reference to sub-classes of BASE_A and instantiated objects of these sub-classes, defined in the file "user_code.py" such as a1, a2, as well as the class A itself, without explicitly passing them to MyApp's instance. '''user_code.py:from framework import MyAppclass A(BASE_A): passapp = MyApp()a1 = A()a2 = A()app.run()What I'd like to do is to somehow get a reference to the objects a1 and a2, as well as the class A, that were all defined in user_code.py. I'd like to use them in the method on_start, which is invoked in app.run().Is it possible, for example, to get a reference to the scope in which the MyApp object was defined (user_code.py)?Some background for anyone who's interested:I know it's a bit of an odd question, but the reason is:I'm writing a python framework for creating custom-made GUI control programs for self-made instruments, based on Arduino. It's called Instrumentino (sitting in GitHub) and I'm currently developing version 2.For people to use the framework, they need to define a system description file (user_code.py in the example) where they declare what parts they're using in their system (python objects), as well as what type of actions the system should perform (python classes).What I'm trying to achieve is to automatically identify these objects and classes in MyApp's on_start without asking the user to explicitly pass these objects and classes, in order to make the user code cleaner. Meaning to avoid code such as:app.add_object(a1)app.add_object(a2)app.add_class(A)
New-style classes in Python have a method named __subclasses__ which returns a list of all direct subclasses that have been defined so far. You can use that to get a hold of the A class in your example, just call BASE_A.__subclasses__() (if you're using Python 2, you'll also need to change BASE_A to inherit from object). See this question and its answers for more details (especially the functions to recursively get all subclasses).As for getting access to the instances, for that you probably should add some code to the base class, perhaps saving the instances created by __new__ into some kind of data structure (e.g. a weakset). See this question and its answers for more on that part. Actually, now that I think about it, if you put your instances into a centralized data structure somewhere (e.g. not in an attribute of each subclass), you might not need the function to search for the classes, since you can just inspect the type of the instances and find the subclasses that are being used.
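A small sketch of the recursive subclass lookup described above (the helper name is mine; the class names follow the question):

```python
class BASE_A:
    pass

class A(BASE_A):
    pass

class B(A):          # a sub-subclass, to show the recursion matters
    pass

def all_subclasses(cls):
    """Recursively collect every subclass of cls, direct or not."""
    subs = set(cls.__subclasses__())
    return subs.union(s for c in subs for s in all_subclasses(c))

assert all_subclasses(BASE_A) == {A, B}
```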
Why can't my two docker containers communicate? I have two docker containers running Flask, just a simple backend/frontend example bit I was running through to learn docker and flask.My frontend is:from flask import Flask, jsonifyimport requestsimport simplejsonimport jsonapp = Flask(__name__)@app.route('/')def hello(): uri = "http://127.0.0.1:5000/tasks" try: uResponse = requests.get(uri) except requests.ConnectionError: return "Connection Error" Jresponse = uResponse.text data = json.loads(Jresponse) fullString = "" title = data['tasks'][0]['title'] description = data['tasks'][0]['description'] fullString += title fullString += "<br/>" fullString += description return fullStringif __name__ == '__main__': app.run(debug=True,host="0.0.0.0", port=2000)This works fine when I run it locally, and it also works when I run it in my docker. I'm able to go to the frontend at localhost:2000 and I see valid data.My backend code is:from flask import Flask,request, jsonify, make_responsefrom flask import abortimport csvimport jsonimport flaskapp = Flask(__name__)tasks = [ { 'id': 1, 'title': u'example title 1', 'description': u'example description 1', 'done': False }, { 'id': 2, 'title': u'example title 2', 'description': u'example description 2', 'done': False }]@app.route('/tasks', methods=['GET'])def get_tasks(): return jsonify({'tasks': tasks})if __name__ == '__main__': app.run(debug=True,host="0.0.0.0", port=5000)If I run both of these without docker, and I go to the frontend at localhost:2000 I get what I expect, the 0th entry of tasks description and title.When I run these in docker, everything seems to work, but I get the Connection Error from the frontend. So, the requests.get(uri) seems to not be working once dockerized. Here's my docker filesFor BackendFROM ubuntu:latestMAINTAINER meRUN apt-get update -yRUN apt-get install -y python3-pip python3-dev build-essentialCOPY . 
/appWORKDIR /appRUN pip3 install -r requirements.txtENTRYPOINT ["python3"]CMD ["backend.py"]For FrontendFROM ubuntu:latestMAINTAINER meRUN apt-get update -yRUN apt-get install -y python3-pip python3-dev build-essentialCOPY . /appWORKDIR /appENV FLASK_APP=/app/frontend.pyENV FLASK_ENV=developmentRUN pip3 install -r requirements.txtENTRYPOINT ["python3"]CMD ["frontend.py"]So, it appears they both work individually, but can't communicate. Why is that? As if it isn't obvious, I'm new to docker.EDIT1Command for frontend: sudo docker run -d -p 2000:2000 frontendCommand for backend:sudo docker run -d -p 5000:5000 backendEDIT2I moved this to docker compose, and have the same issue it appears.docker-compose.ymlversion: '3'services: backend: build: context: backend/ dockerfile: compose/lib/backend/Dockerfile ports: - "5000:5000" frontend: build: context: lib/frontend/ dockerfile: compose/lib/frontend/Dockerfile ports: - "2000:2000"No changes to Docker files. According to the docs here it is correct that I don't need specific networking, and there is a default network created. However, this still works fine under just flask, but I can't get them to attach using docker or docker-compose.
Can you share the docker run command you're using?Are you exposing the ports with the -p flag?docker run -p 5000:5000 ...[Update]: Depending on your docker install and config, you may not be able to use that IP. Docker considers the 127.0.0.1 IP to mean "this container," not "this machine."A bridged network may address this, but I'd recommend using docker-compose instead (details below).If, for the purposes of this experiment, you are planning on running these two containers at the same time on the same machine always, you might want to look into docker-compose, which lets you run multiple containers from the same master command, and gives you nice bonus features like creating a virtual network for the two to connect to each other on, without external networks being able to access them. e.g. your data server can be visible only to your other flask container, but that one can be publicly exposed on your host machine's network interface.In the case of docker-compose, you can use the following (untested) docker-compose.yml as a start:version: 3services: backend: build: path/to/dockerfile ports: - 5000:5000 frontend: build: path/to/dockerfile ports: - 2000:2000
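One point worth making explicit (assuming the compose file above is what is running): inside the frontend container, 127.0.0.1 refers to that container itself, not to the host or the backend. On the default compose network, services resolve one another by service name, so the frontend's request URI would look roughly like:

```python
from urllib.parse import urlparse

# Inside frontend.py, when both services run under docker-compose:
# the hostname is the compose service name, not 127.0.0.1.
uri = "http://backend:5000/tasks"
parts = urlparse(uri)
print(parts.hostname, parts.port)   # backend 5000
```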
cx_oracle insert with select query not working I am trying to insert in database(Oracle) in python with cx_oracle. I need to select from table and insert into another table. insert_select_string = "INSERT INTO wf_measure_details(PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS) \ select PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS from wf_measure_details_stag where oozie_job_id = '{0}'.format(self.DAG_id)" conn.executemany(insert_select_string) conn.commit() insert_count = conn.rowcountBut I am getting below error. I do not have select parameter of data as data is getting from select query.Required argument 'parameters' (pos 2) not foundPlease suggest how to solve this
As mentioned by Chris in the comments to your question, you want to use cursor.execute() instead of cursor.executemany(). You also want to use bind variables instead of interpolated parameters in order to improve performance and reduce security risks. Take a look at the documentation. In your case you would want something like this (untested):cursor.execute(""" INSERT INTO wf_measure_details(PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS) select PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS from wf_measure_details_stag where oozie_job_id = :id""", id=self.DAG_id)
Python, Tkinter, Can not insert text in textbox Everything works except for the insertion of the text.Does anyone have an idea what the reason can be?def appendLog(txt): logOutput.insert("end", txt)root = tk.Tk()root.title("Blablabla")root.minsize(width=850, height=400)root.maxsize(width=850, height=400)root.configure(background="#181818")logOutput = tk.Text(root)logOutput.configure(fg="#aaaaaa", bg="#181818", font=("Courier", 13), state="disabled")logOutput.pack(fill="both")appendLog("\n--- START --- \n")
The main problem is state="disabled": a disabled Text widget rejects insert() calls, so you should delete state="disabled" from logOutput.configure(...) (or set the state back to "normal" just before inserting). You should also add root.mainloop() at the end of the code so the window actually runs. If you only need a single line of output, logOutput = tk.Text(root) can also be changed to logOutput = tk.Entry(root).

Here is my code:

import tkinter as tk

def appendLog(txt):
    logOutput.insert("end", txt)

root = tk.Tk()
root.title("Blablabla")
root.minsize(width=850, height=400)
root.maxsize(width=850, height=400)
root.configure(background="#181818")

logOutput = tk.Entry(root)
logOutput.configure(fg="#aaaaaa", bg="#181818", font=("Courier", 13))
logOutput.pack(fill="both")

appendLog("\n--- START --- \n")

root.mainloop()
Can I know the actual exception caught by the method? I have a scenario where I call a library method that catches any exception that occurs and re-throws another exception. Is there a way I can get the original exception? Please find a minimal reproducible code below:def f(): raise KeyError("key not found")def g(): try: f() except Exception as e: raise Exception(f'{e}') try: g()except KeyError: # Does not work print('excepted')Can I get that KeyError exception occurred?
When you raise an exception in an except clause, the new exception's __context__ attribute is set to the original exception.try: g()except Exception as e: print(type(e.__context__)) print(e.__context__)outputs<class 'KeyError'>'key not found'
How to combine np string array with float array python I would like to combine an array full of floats with an array full of strings. Is there a way to do this?(I am also having trouble rounding my floats, insert is changing them to scientific notation; I am unable to reproduce this with a small example)A=np.array([[1/3,257/35],[3,4],[5,6]],dtype=float)B=np.array([7,8,9],dtype=float)C=np.insert(A,A.shape[1],B,axis=1)print(np.arround(B,decimals=2))D=np.array(['name1','name2','name3'])How do I append D onto the end of C in the same way that I appended B onto A (insert D as the last column of C)?I suspect that there is a type issue between having strings and floats in the same array. It would also answer my questions if there were a way to change a float (or maybe a scientific number, my numbers are displayed as '5.02512563e-02') to a string with about 4 digits (.0502).I believe concatenate will not work, because the array dimensions are (3,3) and (,3). D is a 1-D array, D.T is no different than D. Also, when I plug this in I get "ValueError: all the input arrays must have same number of dimensions."I don't care about accuracy loss due to appending, as this is the last step before I print.
Use dtype=object in your numpy array; like bellow:np.array([1, 'a'], dtype=object)
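A sketch of two options on data shaped like the question's (the exact values are illustrative): keep a mixed object array, or format each float as a short string first and stack the name column on the end:

```python
import numpy as np

C = np.array([[1/3, 257/35, 7.0],
              [3.0, 4.0, 8.0],
              [5.0, 6.0, 9.0]])
D = np.array(['name1', 'name2', 'name3'])

# Option 1: a mixed object array (keeps the floats as floats)
E_obj = np.column_stack((C.astype(object), D))

# Option 2: format every float to 4 significant digits as a string,
# which also fixes the scientific-notation display issue
C_str = np.array([['%.4g' % v for v in row] for row in C])
E_str = np.column_stack((C_str, D))
print(E_str[0])   # ['0.3333' '7.343' '7' 'name1']
```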
How to load module into python with argv I want to load oneRunParams.py into my current program but won't know where it is till I run it. I want to have it as an input argument, accessed through argv. I was using:from oneRunParams import *I now want to replace this with something that will do the same only with the path to oneRunParams specified.
You can use __import__:

Here is test.py:

# test.py
import sys

filename = sys.argv[1]
f = __import__(filename[:-3])  # This removes the `.py` extension
f.test()

Here is test2.py:

# test2.py
def test():
    print('hello world')

Running the following on the command line:

python test.py test2.py

Gives the following output:

hello world

If you really want to load everything into local scope, you have to do the following:

filename = sys.argv[1]
f = __import__(filename[:-3], globals(), locals(), ['*'])
for k in dir(f):
    locals()[k] = getattr(f, k)
test()
Python Error - TypeError: bad operand type for unary -: 'NoneType' I have the next for loop inside a functiondef Cost_F(Y, Ypred, m): for i in range(0,m): # Y and Ypred X = np.matmul(-Y, np.log10(Ypred))Dimensions for Y and Ypred are both (10,1).Type of Y and Ypred => class 'numpy.matrixlib.defmatrix.matrix'Error from cmd => TypeError: bad operand type for unary -: 'NoneType'
The error means that the operand of the unary minus is None: by the time -Y is evaluated, Y is None, not a matrix. Unary negation has no implementation for NoneType, hence the TypeError. Check whatever assigns Y before that line; a common culprit is a helper function that modifies its data in place (or simply forgets a return statement) and therefore returns None. Once Y actually holds a matrix, -Y works fine and is equivalent to -1 * Y.
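A minimal, hypothetical reproduction of how Y usually ends up being None here: a helper builds the matrix but falls off the end without a return statement (the helper name is mine, not from the question):

```python
import numpy as np

def make_y():
    y = np.array([[1.0], [2.0]])
    # ... some processing ...
    # no return statement, so the caller gets None

Y = make_y()
try:
    X = -Y
except TypeError as e:
    print(e)   # bad operand type for unary -: 'NoneType'
```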
Google App Engine 413 error (Request Entity Too Large) I've implemented an app engine server in Python for processing html documents sent to it. It's all well and good when I run it locally, but when running off the App engine, I get the following error:"413. That’s an error. Your client issued a request that was too large. That’s all we know."The request is only 155KB, and I thought the app engine request limit was 10MB. I've verified that I haven't exceeded any of the daily quotas, so anyone know what might be going on?Thanks in advance!-Saswat
Looks like it was because I was making a GET request. Changing it to POST fixed it.
Regular expression to extract all sentences that start and end with the same word Given a string of sentences, I need to extract a list of all of the sentences which start and end with the same word.e.g.# sample texttext = "This is a sample sentence. well, I'll check that things are going well. another sentence starting with another. ..."# required result[ "well, I'll check that things are going well", "another sentence starting with another"]How can I make the match using back references and also capture the full sentence?I have tried the following regex but it's not working.re.findall("^[a-zA-Z](.*[a-zA-Z])?$", text)
import re

text = "This is a sample sentence. going to checking whether it is well going. another sentence starting with another."
sentences = re.split('[.!?]+', text)
result = []
for s in sentences:
    words = s.split()
    if len(words) > 0 and words[0] == words[-1]:
        result.append(s.strip())
print(result)
Debugging python using Textmate? I'd like to use TextMate for debugging python scripts. I'm looking for suggestions on the best way to accomplish this. I found these "solutions" -- is there a better approach?http://www.libertypages.com/clarktech/?p=192 Using Python 3.1 with TextMate I'd really like to find something as usable as the Eclipse PyDev plugin, if at all possible, with these key features:current debug line selected in TextMatevariable inspection in TextMatedebug commands as buttons in TextMate (step, stop, ...) customizable PYTHONPATH and/or launch script per projectThe last feature is to support my app engine testing, where I frequently launch a python shell bypython2.5 appengine_console.py my-app-id localhost:8080Finally, I am open to writing a plugin as a last resort if it is possible to achieve good integration. If you suggest custom development and have pointers on to assess the effort and get started, please include that in your answer.Thanks!
TextMate is not an IDE; it's just an awesome editor.Therefore you'll find that these features might not be available, and that you should just debug from the provided command line.
Pyspark - reducer task iterates over values I am working with pyspark for the first time.I want my reducer task to iterates over the values that return with the key from the mapper just like in java.I saw there is only option of accumulator and not iteration - like in add function add(data1,data2) => data1 is the accumulator.I want to get in my input a list with the values that belongs to the key.That's what i want to do. Anyone know if there is option of doing that?
Please use the reduceByKey function. In Python, it should look like:

from operator import add

rdd = sc.textFile(....)
res = rdd.map(...).reduceByKey(add)

Note: Spark and MR have fundamental differences, so it is suggested not to force-fit one to the other. Spark also supports pair functions pretty nicely; look for aggregateByKey if you want something fancier. Btw, the word count problem is discussed in depth (esp. the usage of flatMap) in the Spark docs, you may want to have a look.
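If the goal really is to iterate over every value for a key, the way a Hadoop reducer does, Spark's groupByKey gives you an iterable of values per key, while reduceByKey is preferred for associative operations because it pre-aggregates on the map side. The difference in shape, sketched in plain Python (no Spark needed; the data is made up):

```python
from collections import defaultdict
from functools import reduce
from operator import add

pairs = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]

# groupByKey-style: collect every value per key, then iterate over them
# (this is what an MR reducer sees)
grouped = defaultdict(list)
for k, v in pairs:
    grouped[k].append(v)
print(dict(grouped))                      # {'a': [1, 3], 'b': [2, 4]}

# reduceByKey-style: fold the values pairwise with an associative function
reduced = {k: reduce(add, vs) for k, vs in grouped.items()}
print(reduced)                            # {'a': 4, 'b': 6}
```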
Templates while exporting Jupyter notebook to PDF with nbconvert Someone knows how to use templates while exporting Jupyter notebook to PDF with nbconvert? Where did I get templates?Thanks
A few built-in formats are available by default: html, pdf, webpdf, script, latex.

You can use the command below to export your notebook:

$ jupyter nbconvert --to FORMAT notebook.ipynb

This converts notebook.ipynb to the FORMAT you want. To apply a custom template, pass its name with the --template flag, e.g.

$ jupyter nbconvert --to pdf --template TEMPLATE_NAME notebook.ipynb

Also, you can extract the code from a notebook into an executable script, i.e., for an IPython notebook extract the Python code cells into a Python script:

$ jupyter nbconvert --to script my_notebook.ipynb
Convert 2020-09-01T00:00:00-05:00 timestamp to dd-mm-yyyy Hi I am trying to convert the following time format2020-08-28T13:42:00.298363-05:00to28-Sept-2020I am using the following code but it does not work.from datetime import datetime start_time = "2020-08-28T13:42:00.298363-05:00"start_period_obj = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%f.%-s-%z) print(start_period_obj)and the output isFile "time conveter.py", line 19 start_period_obj = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%f.%-s-%z) ^SyntaxError: EOL while scanning string literal
Your code is missing the closing quote at the end of the datetime format string, which is causing the error message you see. You also have an issue with the actual format string, as pointed out by @ChrisCharley.

This:

start_period_obj = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%f.%-s-%z)

Should be:

start_period_obj = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%S.%f%z")

To then generate your desired format you can use:

start_period_obj.strftime("%d-%b-%Y")

See here for full details: https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior
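Putting the corrected pieces together on the timestamp from the question (note: %z only accepts an offset written with a colon, like -05:00, on Python 3.7+):

```python
from datetime import datetime

start_time = "2020-08-28T13:42:00.298363-05:00"
# Parse the ISO-style timestamp, including microseconds and UTC offset.
start_period_obj = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%S.%f%z")
print(start_period_obj.strftime("%d-%b-%Y"))   # 28-Aug-2020
```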
Is there a faster way I can count the number of occurrences of a number in a list?

I am trying to write a function that counts the occurrences of each number in a list, where the result is ordered by the number itself (from 0 to the maximum value in the list), not by the occurrences. Here's the function I wrote:

def sort_counts(sample):
    result = []
    for i in range(max(sample)+1):
        result.append(sample.count(i))
    return result

For example:

>>> sort_counts([1,2,2,3,3,4,1,1,1,1,2,5])
[0, 5, 3, 2, 1, 1]

I learned that sample.count runs slowly when there are more numbers in the list. Is there a faster/simpler way I can write this function?
Counter from the collections module is a nice way to count the number of occurrences of items in a list.

from collections import Counter

lst = [1,2,2,3,3,4,1,1,1,1,2,5]

# create a counter object
c = Counter(lst)

# get the counts
[c[i] for i in range(max(c)+1)]
# [0, 5, 3, 2, 1, 1]
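If you'd rather not import anything, the same result also comes from a single pass with a preallocated list — O(n) overall, instead of calling count() once per value as in the original function:

```python
def sort_counts(sample):
    # one pass: bump the slot for each value seen
    result = [0] * (max(sample) + 1)
    for x in sample:
        result[x] += 1
    return result

print(sort_counts([1, 2, 2, 3, 3, 4, 1, 1, 1, 1, 2, 5]))  # [0, 5, 3, 2, 1, 1]
```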
Calculate average of extreme values in NetCDF - Python

I have started working with large datasets from the Copernicus Marine Service. I am downloading the netCDF files through motuclient, and then I can process the data (using xarray) to calculate the mean value for each position of the grid. I would like to calculate the average of the 20 highest values (extremes). How can I accomplish that? Can I use xarray, or should I look for something else?

My code for calculating the average of all values is:

ds = xr.open_mfdataset(file, engine="rasterio")
yearly_data = ds.mean("time")
dask.array has topk and argtopk methods which you can use to find the largest (or smallest) k values across a chunked array. You could adapt this to xarray using the following:

In [52]: def topk_xr(da, n, dim):
    ...:     """get the largest n (or smallest if n is negative) along dim"""
    ...:     axis = da.get_axis_num(dim)
    ...:     largest = da.data.topk(n, axis=axis)
    ...:     dims = [d for d in da.dims if d != dim]
    ...:     dims.insert(axis, 'rank')
    ...:     res = xr.DataArray(
    ...:         largest,
    ...:         dims=dims,
    ...:         coords={
    ...:             'rank': range(0, abs(n)),
    ...:             **{d: da.coords[d] for d in da.dims if d != dim}
    ...:         },
    ...:     )
    ...:
    ...:     return res

You can then call this on a DataArray to get the top k values along whichever dimension you'd like:

In [54]: topk_xr(ds['myvar'], 20, dim='time')
Out[54]:
<xarray.DataArray 'topk_aggregate-aggregate-858fdf' (rank: 20, y: 10, x: 10)>
dask.array<topk_aggregate-aggregate, shape=(20, 10, 10), dtype=float64, chunksize=(20, 10, 10), chunktype=numpy.ndarray>
Coordinates:
  * rank     (rank) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
  * y        (y) int64 20 21 22 23 24 25 26 27 28 29
  * x        (x) int64 -110 -109 -108 -107 -106 -105 -104 -103 -102 -101

Similarly, you can map it across all arrays in a Dataset, assuming they are similarly shaped:

In [57]: ds.map(topk_xr, n=20, dim='time')
Out[57]:
<xarray.Dataset>
Dimensions:  (rank: 20, y: 10, x: 10)
Coordinates:
  * rank     (rank) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
  * y        (y) int64 20 21 22 23 24 25 26 27 28 29
  * x        (x) int64 -110 -109 -108 -107 -106 -105 -104 -103 -102 -101
Data variables:
    myarr    (rank, y, x) float64 dask.array<chunksize=(20, 10, 10), meta=np.ndarray>

If you wanted to find the positional indices of these maxima/minima, you could use argtopk instead of topk in the function.
Trying to use NER (Named Entity Recognition), but I can't get the server running

I've been following the instructions from this GitHub repo: https://github.com/caihaoyu/sner. I installed NER from the official website (https://nlp.stanford.edu/software/CRF-NER.html) and then installed the latest version of Java (JRE). However, when I try to get the NER server up and running, using the command in the sner repo README, I get this error: Could not find or load main class .ext.dirs=..lib.
Please see this documentation for using the Stanford CoreNLP server:

Overall info: https://stanfordnlp.github.io/CoreNLP/index.html
Server info: https://stanfordnlp.github.io/CoreNLP/corenlp-server.html

Sample command to start the server:

java -Xmx12g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000 -serverProperties myProperties.props

In myProperties.props you should set:

annotators = tokenize,ssplit,pos,lemma,ner

Sample server request:

wget --post-data 'The quick brown fox jumped over the lazy dog.' 'localhost:9000/?properties={"outputFormat":"json"}' -O -
Web development with just Python Flask

Is it fine to build a website with a Python backend that interacts with the database and uses Flask to render it as HTML? Flask can also get input from the HTML form, and from there Python can manipulate it. Are there any security concerns with this approach?
Yes, it is perfectly safe, provided you take safeguards and use best practices. Flask is my favorite Python web framework - extremely lightweight and flexible, and it makes the fewest assumptions about your application.
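One concrete example of the "safeguards and best practices" mentioned above: never splice form input into SQL strings; always use parameterized queries. The sketch below uses the stdlib sqlite3 module to stand in for whatever database the Flask site would use — the table and the hostile input string are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users;--"  # hostile form input

# Parameterized query: the driver escapes user_input, so it is stored
# as plain data instead of being interpreted as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the hostile string stored verbatim, table intact
```

The same `?` placeholder pattern applies when the value comes from `flask.request.form` instead of a literal.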
What's the proper way to use the Python module scholar.py?

I'm kind of new to both Python and the command line, but I'm trying to use the Python module https://github.com/ckreibich/scholar.py/blob/master/README.md in order to fetch certain results from Google Scholar. After a few changes (it couldn't find the module) I think I succeeded with the import; at least I didn't get any error message (but no confirmation either).

But then what to do? I tried writing

scholar.py -c 1 --author "albert einstein" --phrase "quantum theory"

both inside and outside of Python, but only get error messages such as:

File "", line 1
    scholar.py -c 1 --author "albert einstein" --phrase "quantum theory"
SyntaxError: invalid syntax

(The ^ points to the 1.) What is the proper way to use the module? Have I missed something?
The problem here is that you're trying to run a command intended for the command line inside of Python. You can't do that, and that's why you're getting a SyntaxError.

The problem you are having at the command line, as specified in your comment ("-bash: scholar.py: command not found"), is due to the fact that Linux can't run scripts that don't have executable permissions and are not in PATH. The easiest solution is to run it with python, but obviously make sure first that you're in the same folder as the scholar.py file, and then:

python scholar.py -c 1 --author "albert einstein" --phrase "quantum theory"

If that fails, perhaps the code only runs with Python 3, in which case try:

python3 scholar.py -c 1 --author "albert einstein" --phrase "quantum theory"

If you insist on running just the script without the python or python3 commands, you would normally add the "python shebang" at the start of the file, with #!/usr/bin/env python or #!/usr/bin/env python3, but I see that is already in the file. The next step is to set the file as executable:

chmod 770 scholar.py

Or, if that fails, use sudo to change the file permissions and ownership (requires root permissions; replace "youruser" with your actual username):

sudo chown youruser scholar.py
sudo chmod 770 scholar.py

And then you can run it like this, from the command line:

./scholar.py -c 1 --author "albert einstein" --phrase "quantum theory"
matplotlib + locale de_DE + LaTeX = space between decimal separator and number

If I run the following code with LaTeX enabled (usetex=True), I get a strange spacing between the decimal comma and the first following digit. Does anyone have an idea how to fix this?

import matplotlib.pyplot as plt
import locale

plt.style.use('classic')
locale.setlocale(locale.LC_NUMERIC, 'de_DE')
plt.rc('text', usetex=True)
font = {'family':'serif','size':14}
plt.rc('font',**font)
plt.rcParams['axes.formatter.use_locale'] = True

a=[.1,.2,.3,.4,.5]
b=[.1,.2,.3,.4,.5]
plt.plot(a,b)
plt.show()

See also the attached picture for clarification. Thanks!
Using the LaTeX package icomma solves the problem!

import matplotlib.pyplot as plt
import locale

plt.style.use('classic')
locale.setlocale(locale.LC_NUMERIC, 'de_DE')
plt.rc('text', usetex=True)
font = {'family':'serif','size':14}
plt.rc('font',**font)

# Add the following two lines to the initial code:
params = {'text.latex.preamble' : [r'\usepackage{icomma}']}
plt.rcParams.update(params)

plt.rcParams['axes.formatter.use_locale'] = True

a=[.1,.2,.3,.4,.5]
b=[.1,.2,.3,.4,.5]
plt.plot(a,b)
plt.show()
export dataframe excel directly to sharepoint (or a web page)

I created a DataFrame in a Jupyter notebook and I would like to export it directly to SharePoint as an Excel file. Is there a way to do that?
You can add the SharePoint site as a network drive on your local computer and then write to it with a regular file path. Here's a link to show you how to map the SharePoint to a network drive. From there, just use the file path for the drive you selected:

df.to_excel(r'Y:\Shared Documents\file_name.xlsx')
Improve speed parsing XML with elements and namespace, into Pandas

So I have a 52 MB XML file, which consists of 115139 elements.

from lxml import etree

tree = etree.parse(file)
root = tree.getroot()

In [76]: len(root)
Out[76]: 115139

I have this function that iterates over the elements within root and inserts each parsed element into a Pandas DataFrame.

def fnc_parse_xml(file, columns):
    start = datetime.datetime.now()
    df = pd.DataFrame(columns=columns)
    tree = etree.parse(file)
    root = tree.getroot()
    xmlns = './/{' + root.nsmap[None] + '}'
    for loc, e in enumerate(root):
        tot = []
        for column in columns:
            tot.append(e.find(xmlns + column).text)
        df.at[loc, columns] = tot
    end = datetime.datetime.now()
    diff = end - start
    return df, diff

This process works but it takes a lot of time. I have an i7 with 16 GB of RAM.

In [75]: diff.total_seconds()/60
Out[75]: 36.41769186666667

In [77]: len(df)
Out[77]: 115139

I'm pretty sure there is a better way of parsing a 52 MB XML file into a Pandas DataFrame. This is an extract of the XML file:

<findToFileResponse xmlns="xmlapi_1.0">
  <equipment.MediaIndependentStats>
    <rxOctets>0</rxOctets>
    <txOctets>0</txOctets>
    <inSpeed>10000000</inSpeed>
    <outSpeed>10000000</outSpeed>
    <time>1587080746395</time>
    <seconds>931265</seconds>
    <port>Port 3/1/6</port>
    <ip>192.168.157.204</ip>
    <name>RouterA</name>
  </equipment.MediaIndependentStats>
  <equipment.MediaIndependentStats>
    <rxOctets>0</rxOctets>
    <txOctets>0</txOctets>
    <inSpeed>100000</inSpeed>
    <outSpeed>100000</outSpeed>
    <time>1587080739924</time>
    <seconds>928831</seconds>
    <port>Port 1/1/1</port>
    <ip>192.168.154.63</ip>
    <name>RouterB</name>
  </equipment.MediaIndependentStats>
</findToFileResponse>

Any ideas on how to improve speed?
For the extract of the above XML, the function fnc_parse_xml(file, columns) returns this DataFrame:

In [83]: df
Out[83]:
  rxOctets txOctets   inSpeed  outSpeed           time seconds        port               ip     name
0        0        0  10000000  10000000  1587080746395  931265  Port 3/1/6  192.168.157.204  RouterA
1        0        0    100000    100000  1587080739924  928831  Port 1/1/1   192.168.154.63  RouterB
You declare an empty DataFrame, so you might get a speedup if you specify the index ahead of time; otherwise there is constant expansion of the DataFrame.

df = pd.DataFrame(index=range(0, len(root)))

You could also create the DataFrame at the end of the loop:

vals = [[e.find(xmlns + column).text for column in columns] for e in root]
df = pd.DataFrame(data=vals, columns=['rxOctets', ...])
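A runnable sketch of the second suggestion — collect the rows in plain Python lists first, then build the DataFrame once at the end. To keep it self-contained it parses a small inline sample with the stdlib xml.etree.ElementTree; with the real 52 MB file you would keep lxml and pass the filename to parse():

```python
import xml.etree.ElementTree as ET
import pandas as pd

xml = """<findToFileResponse xmlns="xmlapi_1.0">
  <equipment.MediaIndependentStats>
    <rxOctets>0</rxOctets><port>Port 3/1/6</port><name>RouterA</name>
  </equipment.MediaIndependentStats>
  <equipment.MediaIndependentStats>
    <rxOctets>0</rxOctets><port>Port 1/1/1</port><name>RouterB</name>
  </equipment.MediaIndependentStats>
</findToFileResponse>"""

columns = ['rxOctets', 'port', 'name']
root = ET.fromstring(xml)
xmlns = './/{xmlapi_1.0}'

# Build plain lists first: growing a DataFrame with .at[] reallocates
# repeatedly, which is what dominated the 36-minute run time.
vals = [[e.find(xmlns + c).text for c in columns] for e in root]
df = pd.DataFrame(data=vals, columns=columns)
print(df)
```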
How to replace/overwrite the default header of EmailMultiAlternatives

Environment: Ubuntu 18.10, Python 2.7.15, Django 1.11.16

I'm trying to send an email containing an inline image. I have the following code:

msg = EmailMultiAlternatives(some_subject, some_body, 'from@some-domain.com', ['to@some@domain'])
img_data = open('path/to/image.png', 'rb').read()
img = MIMEImage(img_data)
msg.attach(img)
msg.send()

(I've only included the code that I think is relevant, but I can add more on demand.)

The above works properly, and the image is displayed inline correctly on most of the email clients I tested (about 7 of them: mobile, desktop and webmail), with two exceptions: Mozilla Thunderbird 60 and some macOS native email client. On Thunderbird the image is not displayed inline but at the very end of the message. On the macOS client the image is displayed inline, but it is additionally displayed again at the very end of the message.

I composed and sent a test message from another email client, containing an inline image which was properly displayed on both Thunderbird and macOS. I compared the headers of this message with the headers of the message generated by my code. I noticed that the faulty message has its 'Content-Type' set to 'multipart/mixed', while the properly displayed message had the same header set to 'multipart/related'. I saved the faulty message in an eml file, manually changed the value of that header and then loaded the message in Thunderbird. The message was properly displayed and the image was in the right place. If I could set that header to the proper value, the problem would be solved.
So, my question is: is there any possibility to tell EmailMultiAlternatives to set 'Content-Type': 'multipart/related' instead of the default value of 'multipart/mixed'?

I tried to add the header like this, but it is not working:

msg = EmailMultiAlternatives(some_subject, some_body, 'from@some-domain.com', ['to@some@domain'], headers={'Content-Type' : 'multipart/related'})

I got the following error (I use Amazon SES):

400 Bad Request
<ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
  <Error>
    <Type>Sender</Type>
    <Code>InvalidParameterValue</Code>
    <Message>Duplicate header 'Content-Type'.</Message>
  </Error>
  <RequestId>xxxxxxxxxx</RequestId>
</ErrorResponse>

If I can't modify that header, do you suggest any alternatives?
If you look at the source code, you'll see that EmailMultiAlternatives is a subclass of EmailMessage, which itself has a class attribute:

mixed_subtype = 'mixed'

So if you create your own subclass to override this, you should get what you need:

class EmailMultiAlternativesRelated(EmailMultiAlternatives):
    mixed_subtype = 'related'

That's it. Now you just use this new class, and it will use "multipart/related".

(The _create_attachments() method passes this subtype to Python's SafeMIMEMultipart, which creates the actual headers for each attachment.)