Show usages of module in PyCharm

I have a module and am trying to find all the places any method defined in that module is used anywhere outside of it. Is there a function to do this? I'm aware of the Call Hierarchy tool window, but haven't managed to accomplish this exact use case.
In the Project view (where you see the file and directory structure), right-click the module you want to check and choose Find Usages from the context menu. Because this is Python, you find where the module is imported (so not really where you use the functions), and from there you can tell whether the module is used: an unused import line is shown in gray, or the unused functions are grayed out, depending on whether you are using `import module` or `from module import function1, function2`, etc. This is not 100% your use case (you will not find the function calls directly), but it is powerful enough to get a quick overview.
Swapping between menus in Python

How should I write this so I could constantly move between "menus"?

```python
#!/bin/env python
import os

class Menu:
    def __init__(self):
        self.menu = '1'

    def Main(self):
        os.system('clear')
        print "main menu"
        test = raw_input()
        if test == '2':
            self.menu = '2'

    def Sub(self):
        os.system('clear')
        print "sub menu"
        test = raw_input()
        if test == '1':
            self.menu = '1'

menu = Menu()
while menu.menu == '1':
    menu.Main()
while menu.menu == '2':
    menu.Sub()
```

At the moment I can swap once, i.e. I start with menu.Main(), enter '2', and menu.Sub() is shown. But then when I enter '1' the program quits. Why does it not go back to showing menu.Main()? Any thoughts welcome!

EDIT: just needed to put them in a main while loop
The first while loop runs and, when you enter '2', finishes. Therefore, the second while loop will begin to loop. In the second while loop, you enter '1', which causes the second while loop to finish (because menu.menu is now == '1'). Thus, the program finishes.

Instead, you'll probably want one value for menu (that is neither '1' nor '2') to act as the exit state, for example 'E'. Then, you can replace your two while loops with the following:

```python
while menu.menu != 'E':
    menu.Do()
```

The Do method will handle the menu state if it's 1 or 2:

```python
class Menu:
    def __init__(self):
        self.menu = '1'

    def Do(self):
        if self.menu == '1':
            self.Main()
        elif self.menu == '2':
            self.Sub()

    def Main(self):
        os.system('clear')
        print "main menu"
        test = raw_input()
        if test == '2':
            self.menu = '2'

    def Sub(self):
        os.system('clear')
        print "sub menu"
        test = raw_input()
        if test == '1':
            self.menu = '1'
```

You will still need to make it so that you can actually get to the 'E' case. I'll leave that as a task for you to finish.
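The if/elif dispatch above can also be written as a lookup table. A minimal sketch, rewritten for Python 3 (`input()` instead of `raw_input()`); the `read` parameter and state names are my own additions, used here so the scripted keypresses can stand in for real user input:

```python
class Menu:
    """State machine that dispatches on the current state via a dict."""

    def __init__(self, read=input):
        self.state = '1'
        self.read = read  # injectable input function, handy for testing

    def main(self):
        choice = self.read()
        if choice in ('2', 'E'):
            self.state = choice

    def sub(self):
        choice = self.read()
        if choice in ('1', 'E'):
            self.state = choice

    def run(self):
        handlers = {'1': self.main, '2': self.sub}
        while self.state != 'E':
            handlers[self.state]()

# Feed a scripted sequence of keypresses instead of real user input.
inputs = iter(['2', '1', 'E'])
menu = Menu(read=lambda: next(inputs))
menu.run()
print(menu.state)  # → E
```

Adding a new menu is then a matter of adding one method and one dict entry, rather than another elif branch.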
Getting specific indexed distinct values in nested lists

I have a nested list of around 1 million records like:

```python
l = [['a', 'b', 'c', ...], ['d', 'b', 'e', ...], ['f', 'z', 'g', ...], ...]
```

I want to get the distinct values of the inner lists at the second index, so that my resultant list is like:

```python
resultant = ['b', 'z', ...]
```

I have tried nested loops but it's not fast; any help will be appreciated!
Since you want the unique items, you can use collections.OrderedDict.fromkeys() in order to keep the order and the unique items (because it uses a hash table for keys), and use zip() to transpose the lists so you can grab the second items:

```python
from collections import OrderedDict

list(OrderedDict.fromkeys(zip(*my_lists)[1]))
```

In Python 3.x, since zip() returns an iterator, you can do this:

```python
cols = zip(*my_lists)
next(cols)                              # skip the first column
list(OrderedDict.fromkeys(next(cols)))  # unique items of the second column
```

Or use a generator expression within OrderedDict.fromkeys():

```python
list(OrderedDict.fromkeys(i[1] for i in my_lists))
```

Demo:

```python
>>> lst = [['a', 'b', 'c'], ['d', 'b', 'e'], ['f', 'z', 'g']]
>>> list(OrderedDict.fromkeys(sub[1] for sub in lst))
['b', 'z']
```
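On Python 3.7+, where plain dicts preserve insertion order, the same idea works without importing OrderedDict; a minimal sketch:

```python
lst = [['a', 'b', 'c'], ['d', 'b', 'e'], ['f', 'z', 'g']]

# dict.fromkeys keeps first-seen order and drops duplicates in one pass
result = list(dict.fromkeys(row[1] for row in lst))
print(result)  # → ['b', 'z']
```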
In TensorFlow, how can I add a tensor to another tensor with a different shape?

I use TensorFlow. I want to add a tensor A, whose shape is [64, 64] (= [batch size, values]), to another tensor B, whose shape is [64, 7, 7, 64]. I reshaped tensor A, but it should have the same number of elements as tensor B. So, how can I reshape or expand tensor A? Or is there any other way to add A to B? Specifically, I want to add the 64 values of A to all 64 values of B, 7*7 times.

I am sorry for my poor English. I cannot explain well, but I want some of you to understand what I want to say. Thank you.
Use broadcasting. Here you have an example:

```python
import tensorflow as tf
import numpy as np

A = tf.constant(np.arange(64 * 64), shape=(64, 64), dtype=tf.int32)
B = tf.ones(shape=(64, 7, 7, 64), dtype=tf.int32)

A_ = A[:, None, None, :]  # Shape=(64, 1, 1, 64)
result = A_ + B

with tf.Session() as sess:
    print(sess.run(result))
```
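The broadcasting rule here is the same one NumPy uses, so you can sanity-check the shapes without a TensorFlow session; a small sketch of the same idea:

```python
import numpy as np

A = np.arange(64 * 64).reshape(64, 64)      # [batch, values]
B = np.ones((64, 7, 7, 64), dtype=A.dtype)  # [batch, 7, 7, values]

# Insert two singleton axes so A aligns with B's middle dimensions;
# broadcasting then repeats A across the 7x7 grid.
result = A[:, None, None, :] + B
print(result.shape)  # → (64, 7, 7, 64)
```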
In Python, is it a bad practice to "reset" an iterator when __iter__ is called?

For example, let's say I have a class for iterating over the records inside a file:

```python
class MySpecialFile:
    ...
    def reset(self):
        self._handle.seek(0)

    def __iter__(self):
        self.reset()
        return self
```

EDIT: I just read this question months after I asked it and felt a little stupid :). As written in the answers below, an __iter__ method with side effects is a bad thing. If you want to iterate your object multiple times, you just need to return a new iterator object every time you call __iter__:

```python
class IterableFile:
    def __iter__(self):
        return FileIterator(self)
```

In retrospect, this is quite obvious, which is the reason why I feel stupid now, I guess. Not sure what was on my mind, but I think the reason I was reluctant to have two separate classes was that each new iterator required creating a new file handle (my SpecialFile class was just an interface on top of a text/binary file) and this felt "excessive" and weird to me at the time.
__iter__ is expected to have no side effects. By violating this assumption, your code breaks all sorts of things. For example, the standard test for whether a thing is iterable:

```python
try:
    iter(thing)
except TypeError:
    do_whatever()
```

will reset your file. Similarly, the itertools consume recipe:

```python
def consume(iterator, n=None):
    "Advance the iterator n-steps ahead. If n is None, consume entirely."
    # Use functions that consume iterators at C speed.
    if n is None:
        # feed the entire iterator into a zero-length deque
        collections.deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)
```

will produce an incorrect file position instead of advancing n records after consume(your_file, n). Skipping the first few records with next before a loop will also fail:

```python
f = MySpecialFile(whatever)
next(f)  # Skip a header, or try, anyway.
for record in f:  # We get the header anyway.
    uhoh()
```
Pythonic way to handle function overloading

I'm trying to write a function to invert a dictionary, but I'm having trouble finding the proper way to do it without rewriting code, using different methods and avoiding if/else at each iteration. What's the most Pythonic way to do it?

```python
def invert_dict(dic, type=None):
    if type == 'list':
        return _invert_dict_list(dic)
    return _invert_dict(dic)

# if there's only one value per key
def _invert_dict(dic):
    inverted = defaultdict()
    for k, v in dic.items():
        for item in v:
            inverted[item] = k
    return dict(inverted)

# if there are multiple values for the same key
def _invert_dict_list(dic):
    inverted = defaultdict(list)
    for k, v in dic.items():
        for item in v:
            inverted[item].append(k)
    return dict(inverted)
```
I won't comment on the actual implementation, but for the type-based branching there is functools.singledispatch:

```python
import functools

@functools.singledispatch
def inv_item(value, key, dest):
    < fallback implementation >

# special case based on type
@inv_item.register(list)
@inv_item.register(tuple)
def inv_sequence(value, key, dest):
    < handle sequence values >

...

def invert_dict(In):
    Out = {}
    for k, v in In.items():
        inv_item(v, k, Out)
    return Out
```
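Filling in the placeholders above, a complete runnable sketch might look like this. The choice of fallback behaviour (treating a scalar value as a single key) is my assumption, not from the original answer:

```python
import functools

@functools.singledispatch
def inv_item(value, key, dest):
    # fallback: a scalar value maps straight back to its key (assumption)
    dest[value] = key

@inv_item.register(list)
@inv_item.register(tuple)
def inv_sequence(value, key, dest):
    # sequence values: each item maps back to the key, collecting duplicates
    for item in value:
        dest.setdefault(item, []).append(key)

def invert_dict(mapping):
    out = {}
    for k, v in mapping.items():
        inv_item(v, k, out)
    return out

print(invert_dict({'a': 1, 'b': [2, 3], 'c': [2]}))
# → {1: 'a', 2: ['b', 'c'], 3: ['b']}
```

Note that singledispatch chooses the implementation based on the type of the first argument, which is why `value` comes first in the registered functions.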
Python Tkinter: clear frame and associate user's answer to a value

I am trying to clear the frame between each question that is asked in the multiple-choice survey, but I would like the frame itself to stay. I tried to create a "next" button that allows the user to skip to the next question, but the frame doesn't appear when I run my code (no error message is appearing either). Also, in order to store the user's answers to the questions, is it better to use the values (1, 2, 3, 4, 5) of the radiobuttons or the variables of the radiobuttons? Here is my code:

```python
from Tkinter import *

fenetre = Tk()
fenetre.title("Life Quiz")

from random import shuffle

# questions
quest_1 = "What is your favorite color?"
quest_2 = "Which of these animals do you like the most?"
quest_3 = "Which of these superpowers would you want to have the most?"
quest_4 = "Which food would you want to eat right now?"
quest_5 = "If you could only bring one thing on an island, what would it be?"

# answer list
list_reponse = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]

# associate list_reponse with IntVar()
list_reponse[0] = IntVar()
list_reponse[1] = IntVar()
list_reponse[2] = IntVar()
list_reponse[3] = IntVar()
list_reponse[4] = IntVar()

class Questions(object):
    # function to display the questions and the answer choices
    def __init__(self, master, n_question):
        self.master = master
        self.liste_choix = [
            [quest_1, "Blue", "White", "Yellow", "RAINBOWS", "Red"],
            [quest_2, "Blue Whale", "Pig", "Cat", "Unicorns", "Dog"],
            [quest_3, "Make it rain whenever you want", "Transform yourself into a ladder",
             "Invisibility when no one's around", "Read the mind of unicorns", "Ability to fly"],
            [quest_4, "Ice cream", "Sushi", "Salad", "Cupcakes", "Pizza"],
            [quest_5, "Nothing...", "A fishing rod", "A book", "A leprechaun", "A knife"]
        ]
        self.liste_choix_questions = self.liste_choix[n_question]
        shuffle(self.liste_choix_questions)
        self.display_frame = None
        self.next_question = 0

    def display_next(self):
        if self.next_question < len(self.liste_choix_questions):
            if self.display_frame:
                self.display_frame.destroy()
            self.display_frame = Frame(self.master)
            self.display_frame.grid(row=0, column=0)
            Label(self.display_frame, text=self.liste_choix_questions[0], bg="lightblue").grid(row=0, column=0)
            display_next(self)
            R1 = Radiobutton(fenetre, text=self.liste_choix[n_question][1], variable=list_reponse[n_question], value=1)
            R1.pack(anchor=W)
            R2 = Radiobutton(fenetre, text=self.liste_choix[n_question][2], variable=list_reponse[n_question], value=2)
            R2.pack(anchor=W)
            R3 = Radiobutton(fenetre, text=self.liste_choix[n_question][3], variable=list_reponse[n_question], value=3)
            R3.pack(anchor=W)
            R4 = Radiobutton(fenetre, text=self.liste_choix[n_question][4], variable=list_reponse[n_question], value=4)
            R4.pack(anchor=W)
            R5 = Radiobutton(fenetre, text=self.liste_choix[n_question][5], variable=list_reponse[n_question], value=5)
            R5.pack(anchor=W)
            Button(self.display_frame, text="next", command=display_next).grid(row=1, column=0)
            n_question += 1

Q = Questions(fenetre, 1)
fenetre.mainloop()
```
One solution is to use an inner frame to hold your radiobuttons. Then, you can delete the frame, which will delete all of the radiobuttons. Finally, you can recreate the frame with new radiobuttons.

Another solution is to simply iterate over the radiobuttons, reconfiguring them for the current question (assuming all questions have the same number of choices).
C++ loop through registry recursively is slow

I have an annoying problem with my code; probably I am doing something wrong, because my Python implementation is much faster!

C++ implementation problems:

- Iterating over HKEY_CLASSES_ROOT takes a lot of RAM; I assume it's because the C++ implementation uses lots of variables. Fixed
- It's also slow, much slower than the Python implementation of the code. Fixed
- The code is even slower when trying to iterate over HKEY_CLASSES_ROOT. Fixed

New questions: thanks to Nam Nguyen I understood what was causing the leaks in my code, which directly affected execution time; the code below is the fixed one. How come the C++ implementation runs only as fast as my Python implementation?

C++ implementation:

```cpp
#include <iostream>
#include <windows.h>
#include <stdio.h>
#include <tchar.h>
#include <string>

using namespace std;

#define MAX_KEY_LENGTH 255

int RecurseOpenRegEx(HKEY hkey, string subKey = "", string search = "nothing", DWORD sum = 0)
{
    TCHAR csubKey[MAX_KEY_LENGTH];
    DWORD nSubKeys = 0;
    DWORD pathLength = MAX_PATH;
    TCHAR storeKeyName[MAX_KEY_LENGTH];
    DWORD keyLength;
    HKEY hKey = hkey; // somehow I need to reassign HKEY, otherwise it won't pass it with the function; this is bigger than me though...
    const char * ccsearch = search.c_str();
    const char * ccsubKey;
    if (subKey != "")
    {
        ccsubKey = subKey.c_str();
        copy(subKey.begin(), subKey.end(), csubKey); // convert string to TCHAR
    }
    if (RegOpenKeyEx(hkey, ccsubKey, 0, KEY_READ, &hkey) == ERROR_SUCCESS)
    {
        if (RegQueryInfoKey(hkey, csubKey, &pathLength, NULL, &nSubKeys, NULL, NULL, NULL, NULL, NULL, NULL, NULL) == ERROR_SUCCESS)
        {
            sum += nSubKeys;
            for (DWORD subKeyIndex = 0; subKeyIndex < nSubKeys; subKeyIndex++)
            {
                keyLength = MAX_KEY_LENGTH;
                if (RegEnumKeyEx(hkey, subKeyIndex, storeKeyName, &keyLength, NULL, NULL, NULL, NULL) == ERROR_SUCCESS)
                {
                    string sKeyName = storeKeyName; // Convert TCHAR to string explicitly
                    if (subKey != "")
                    {
                        sKeyName = subKey + "\\" + sKeyName;
                    }
                    sum += RecurseOpenRegEx(hKey, sKeyName);
                }
            }
        }
    }
    RegCloseKey(hkey); // Now closing the right key
    return sum;
}

int main()
{
    cout << "sum of all keys: " << RecurseOpenRegEx(HKEY_LOCAL_MACHINE);
    return 0;
}
```

Python implementation:

```python
import winreg

def recurseRegistrySearch(key, keySearch="", subkey="", subkeypath="", x=0):
    key = winreg.OpenKey(key, subkey, 0)
    y = winreg.QueryInfoKey(key)[0]
    x += y
    for items in range(x):
        try:
            subkey = winreg.EnumKey(key, items)
            if ((keySearch.lower() in subkey.lower()) and (keySearch != "")):
                print(subkeypath + "\\" + subkey)
            x += recurseRegistrySearch(key, keySearch, subkey, subkeypath=subkeypath + "\\" + subkey)
        except WindowsError:
            pass
    return x

print("sum of all keys: {0}".format(recurseRegistrySearch(winreg.HKEY_CLASSES_ROOT)))
```
There is a resource leak in your code: you open hkey but you close hKey (note the difference in the case of k and K).

On a side note, you store the opened registry key into hkey itself. And it happens that hkey is the passed-in parameter, shared among all calls to RecurseOpenRegEx. That is why you "somehow need to reassign HKEY".

Basically, what I can advise you now is to immediately clean up your code. Bugs like these are hard to spot when your code is too difficult to read. Once that's done, I believe you will find it easier to debug/trace.
Can't automate calendar in Selenium Python

I'm trying to automate a calendar with Selenium, but can't find a way through. The structure of the calendar HTML is like this: 6 divs containing a tags for each day; some days are disabled.

My take on this is that I'll select all non-disabled a tags and loop over them, and if elem.text is equal to the certain date which I'll provide to the function as a parameter, Selenium will select that date. This is my code:

```python
def pick_date(self, date, month, year):
    month_status = WebDriverWait(self.driver, 10).until(
        EC.visibility_of_element_located((By.XPATH, '//*[@id="caltitle"]')))
    _month, _year = (month_status.text).split()
    next_month_btn = WebDriverWait(self.driver, 10).until(
        EC.visibility_of_element_located((By.XPATH, '//*[@id="next-month"]')))
    all_active_dates = self.driver.find_elements(By.XPATH, '//*[@id="calweeks"]/div')
    print(_month, _year)
    for i, week_row in enumerate(all_active_dates):
        for day in week_row.find_elements(By.XPATH, f'//*[@id="calweeks"]/div[{i + 1}]/a[not(@attr="caldisabled") and not(@attr="caloff")]'):
            if day.text == date:
                day.click()
            print(day.text)
```

However, this line:

```python
for day in week_row.find_elements(By.XPATH, f'//*[@id="calweeks"]/div[{i + 1}]/a[not(@attr="caldisabled") and not(@attr="caloff")]'):
```

is selecting all the a tags and not only the non-disabled ones. The output is something like this:

```
June 2022
29 30 31 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 1 2 3 4 5 6 7 8 9
```

Any help would be appreciated, because I've been bashing my head on this issue for so long.
Get all active elements (note that the disabled/off markers are classes, so match them with contains(@class, ...) rather than an attr attribute):

```python
all_active_dates = self.driver.find_elements(
    By.XPATH,
    '//*[@id="calweeks"]//a[not(contains(@class, "caldisabled")) and not(contains(@class, "caloff"))]')
```

and then just enumerate through all_active_dates and simply print(active_date.text).
MySQL ON UPDATE not triggering for Django/TastyPie REST API

We have a resource table which has a field last_updated, which we set up with MySQL Workbench to have the following properties:

- Datatype: TIMESTAMP
- NN (Not Null) is checked
- Default: CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP

When I modify a row through the Workbench and apply it, the last_updated field properly updates. When I use the REST API we've set up and issue a PUT:

```python
update = requests.put('http://127.0.0.1:8000/api/resources/16',
                      data=json.dumps(dict(status="/api/status/4", timeout=timeout_time)),
                      headers=HEADER)
```

I can properly change any of the values (including status and timeout, and receive a 204 response), but last_updated does not update. Django's model documentation says in this case it should be sending an UPDATE.

Anyone have any ideas on why it's missing these updates? I can provide further details regarding our specific Django/tastypie setup, but as long as they are issuing an UPDATE, they should be triggering the database's ON UPDATE.
I suspect that the UPDATE statement issued by Django may be including an assignment to the last_updated column. This is just a guess; there's not enough information provided. But if the Django model contains the last_updated column, and that column is fetched from the database into the model, I believe a save() will assign a value to the last_updated column in the UPDATE statement. See: https://docs.djangoproject.com/en/1.9/ref/models/instances/#specifying-which-fields-to-save

Consider the behavior when we issue an UPDATE statement like this:

```sql
UPDATE mytable
   SET last_updated = last_updated
     , some_col = 'some_value'
 WHERE id = 42
```

Because the UPDATE statement is assigning a value to the last_updated column, the automatic assignment to the timestamp column won't happen. The value assigned in the statement takes precedence.

To get the automatic assignment to last_updated, that column has to be omitted from the SET clause, e.g.

```sql
UPDATE mytable
   SET some_col = 'some_value'
 WHERE id = 42
```

To debug this, you'd want to inspect the actual SQL statement.
How to check input against a huge list?

This is my code:

```python
while True:
    print(vehiclelist)
    reg = input('Enter registration number of vehicle: ')
    if reg in vehiclelist:
        break
    else:
        print("Invalid")
```

But it keeps showing it's invalid. This is the output:

```
[Car('SJV1883R', 'Honda', 'Civic', 60.00), Car('SJZ2987A', 'Toyota', 'Altis', 60.00), Car('SKA4370H', 'Honda', 'Accord', 80.00), Car('SKD8024M', 'Toyota', 'Camry', 80.00), Car('SKH5922D', 'BMW', '320i', 90.00), Car('SKM5139C', 'BMW', '520i', 100.00), Car('SKP8899H', 'Mercedes', 'S500', 300.00), Truck('GB3221K', 'Tata', 'Magic', 200.00), Truck('YB8283M', 'Isuzu', 'NPR', 250.00), Truck('YK5133H', 'Isuzu', 'NQR', 300.00)]
Enter registration number of vehicle: SJZ2987A
Invalid
```

Any idea how I can check the input? This is my vehicle class:

```python
class Vehicle():
    def __init__(self, regNo, make, model, dailyRate, available):
        self.regNo = regNo
        self.make = make
        self.model = model
        self.dailyRate = dailyRate
        self.available = available

    @property
    def dailyRate(self):
        return self.__dailyRate

    @dailyRate.setter
    def dailyRate(self, dailyRate):
        if dailyRate < 0:
            self.__dailyRate = 0
        else:
            self.__dailyRate = dailyRate

    def __repr__(self):
        return "Vehicle('{:s}', '{:s}', '{:s}', {:.2f}, '{:s}')".format(self.regNo, self.make, self.model, self.dailyRate, self.available)
```
The problem here is that vehiclelist is a list of Vehicle objects, and you cannot directly search for a registration number inside a list of Vehicle objects.

A better design pattern is to use a dictionary in which regNo appears as the key and the Vehicle object appears as the value. You may change your code as follows:

```python
vehicle_details = {vehicle.regNo: vehicle for vehicle in vehiclelist}

while True:
    reg = input('Enter registration number of vehicle: ')
    if reg in vehicle_details:
        break
    else:
        print("Invalid")
```
Centroid of a set of positions on a toroidally wrapped (x- and y-wrapping) 2D array?

I have a flat Euclidean rectangular surface, but when a point moves to the right boundary, it will appear at the left boundary (at the same y value), and vice versa for moving across the top and bottom boundaries. I'd like to be able to calculate the centroid of a set of points. The set of points in question is mostly clumped together.

I know I can calculate the centroid for a set of points by averaging all the x and y values. How would I do it on a wrap-around map?
If the cluster size is relatively small (smaller than half of the grid), you can use a simple approach. Let the surface width and height be W and H. Imagine that the surface dimensions are tripled, so you have -W..2*W and -H..2*H axis ranges, and unroll the wrapped values:

```
XMin = X[0]; XMax = X[0]
(the same for Y)
for i = 1..N-1
    Check distance from X[i] to XMax and XMin
    Get the largest of them, D
    If D is larger than W/2, shift X[i] by W
      //example1: W=100, XMin = 70, XMax = 90, X[i]=10 => X[i] = 10+100 = 110
      //example2: W=100, XMin = 5,  XMax = 12, X[i]=98 => X[i] = 98-100 = -2
    (the same for Y)
    update Min/Max
Calc (W + Average(X[i])) %% W   //modulo operation
```
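A direct Python translation of this unrolling idea for one axis (apply it to x and y independently); the helper name and the running min/max bookkeeping are my own, so treat it as a sketch:

```python
def wrapped_centroid_1d(coords, width):
    """Centroid of points on a circular axis of size `width`,
    assuming the cluster spans less than half the axis."""
    xmin = xmax = coords[0]
    unrolled = [coords[0]]
    for x in coords[1:]:
        # If x is more than width/2 away from the current extremes,
        # shift it by a full period so the cluster stays contiguous.
        if max(abs(x - xmin), abs(x - xmax)) > width / 2:
            x += width if x < xmin else -width
        unrolled.append(x)
        xmin, xmax = min(xmin, x), max(xmax, x)
    mean = sum(unrolled) / len(unrolled)
    return (width + mean) % width  # map back into 0..width

# Cluster straddling the right/left seam of a 100-wide surface:
print(wrapped_centroid_1d([95, 98, 2, 5], 100))  # → 0.0
```

The points 2 and 5 get unrolled to 102 and 105, the plain mean of [95, 98, 102, 105] is 100, and the final modulo maps it back onto the seam at 0.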
Test method with a mock response, without creating data

I'm testing, with unittest, a method, createData, which creates something in my database:

```python
def createData(self, content):
    logging.info("Creating data...")
    request = requests.post(self.url, data=content)
    if request.status_code == 201:
        logging.info("Data created")
    else:
        logging.error("Data not created")
    return request
```

So I created two tests: one where I fail at creating data, with self.assertNotEqual(201, badRequest.status_code), and another where I succeed, with self.assertEqual(201, goodRequest.status_code). Of course, afterwards, I delete this data.

I want to make this test without creating any data. So I mock the response like this:

```python
import unittest, logging
from data import Data as data
from unittest.mock import Mock

class TestData(unittest.TestCase):
    def testCreateDataSuccess(self):
        mock_response = Mock()
        mock_response.status_code = 201
        with self.assertLogs() as captured:
            data.createData(data, goodContent).return_value = mock_response
            self.assertEqual(201, mock_response.status_code)
            self.assertEqual(captured.records[1].levelname, 'INFO')
```

However, despite the mock, a data record is created in my database. Could you tell me what I didn't understand? Thank you for your help!
Well, I found how to resolve this problem: using the patch decorator. I guess it "defuses" requests.post in data, substituting the response with the configured mock:

```python
import unittest, logging
from data import Data as data
from unittest.mock import patch

class TestData(unittest.TestCase):
    @patch('data.requests.post')
    def testCreateDataSuccess(self, mock_post):
        mock_post.return_value.status_code = 201
        with self.assertLogs() as captured:
            response = data.createData(data, goodContent)
            self.assertEqual(201, response.status_code)
            self.assertEqual(captured.records[1].levelname, 'INFO')
```
Error while trying to compile Python script to exe

I'm trying to compile a Python script to an exe. My script, hello.py:

```python
print("hello")
```

My setup.py:

```python
from distutils.core import setup
import py2exe, sys, os

sys.argv.append('py2exe')

setup(
    name = 'hello',
    description = 'hello script',
    version = '1.0',
    options = {'py2exe': {'bundle_files': 1, 'compressed': True, 'dist_dir': ".", 'dll_excludes': ['w9xpopen.exe']}},
    console = [{'script': r"hello.py"}],
    zipfile = None,
)
```

I run:

```
py -3 setup.py install
```

The error:

```
py -3 setup.py install
running install
running build
running install_egg_info
Removing C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\hello-1.0-py3.7.egg-info
Writing C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\hello-1.0-py3.7.egg-info
running py2exe
Traceback (most recent call last):
  File "setup.py", line 19, in <module>
    zipfile = None,
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\distutils_buildexe.py", line 188, in run
    self._run()
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\distutils_buildexe.py", line 267, in _run
    builder.analyze()
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\runtime.py", line 160, in analyze
    self.mf.import_hook(modname)
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\mf3.py", line 120, in import_hook
    module = self._gcd_import(name)
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\mf3.py", line 274, in _gcd_import
    return self._find_and_load(name)
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\mf3.py", line 357, in _find_and_load
    self._scan_code(module.__code__, module)
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\mf3.py", line 388, in _scan_code
    for what, args in self._scan_opcodes(code):
  File "C:\Users\alonat\AppData\Local\Programs\Python\Python37-32\lib\site-packages\py2exe\mf3.py", line 417, in _scan_opcodes
    yield "store", (names[oparg],)
IndexError: tuple index out of range
```

Do you know how to resolve this error?
py2exe seems to support only up to Python 3.4 (thanks Michael Butscher). However, there are other libraries, such as PyInstaller, which work just fine and are compatible with a variety of Python versions (from Python 2.7 to 3.5+).

Check it out, it's actually really easy :) https://pyinstaller.readthedocs.io/en/stable/
How to call functions from a Python module in a different folder?

My project structure is something like this:

```
/project-name
    app.py
    /python-modules
        login.py
        home.py
```

I want to import some functions present in login.py and home.py into app.py. I tried to run `from python-modules.login import *`, but no luck.
Simple solution: just create a blank (empty) __init__.py file in the python-modules folder to make it a package. Note, however, that a hyphen is not valid in a Python identifier, so `from python-modules.login import *` is a syntax error no matter what; rename the folder to something like python_modules first, and then `from python_modules.login import *` will work.
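A quick self-contained check of that setup, building the suggested layout in a temporary directory so it runs anywhere (with the folder renamed to python_modules as suggested):

```python
import os
import sys
import tempfile
import textwrap

# Build the layout on disk: a python_modules package containing login.py.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'python_modules')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()  # marks it as a package
with open(os.path.join(pkg, 'login.py'), 'w') as f:
    f.write(textwrap.dedent('''
        def greet(name):
            return "hello " + name
    '''))

# app.py would live next to python_modules/, so its folder is on sys.path;
# we simulate that here by adding the root directory explicitly.
sys.path.insert(0, root)
from python_modules.login import greet

print(greet("world"))  # → hello world
```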
Using numpy repeat simultaneously on arrays with distinct multiplicities but the same dimension

I have two trivial arrays of the same length, tmp_reds and tmp_blues:

```python
npts = 4
tmp_reds = np.array(['red', 'red', 'red', 'red'])
tmp_blues = np.array(['blue', 'blue', 'blue', 'blue'])
```

I am using np.repeat to create multiplicity:

```python
red_occupations = [1, 0, 1, 2]
blue_occupations = [0, 2, 0, 1]
x = np.repeat(tmp_reds, red_occupations)
y = np.repeat(tmp_blues, blue_occupations)
print(x)
# ['red' 'red' 'red' 'red']
print(y)
# ['blue' 'blue' 'blue']
```

What I want is the following composite of x and y:

```python
desired_array = np.array(['red', 'blue', 'blue', 'red', 'red', 'red', 'blue'])
```

So, desired_array is defined in the following manner:

(1) Multiplicity from the first element of red_occupations is applied
(2) Multiplicity from the first element of blue_occupations is applied
(3) Multiplicity from the second element of red_occupations is applied
(4) Multiplicity from the second element of blue_occupations is applied
...
(2*npts-1) Multiplicity from the npts element of red_occupations is applied
(2*npts) Multiplicity from the npts element of blue_occupations is applied

So this seems like a straightforward generalization of the normal usage of np.repeat: normally, np.repeat does exactly the above, but with a single array. Does anyone know some clever way to use multidimensional arrays that are then flattened, or some other similar trick, that can accomplish this with np.repeat?

I could always create desired_array without numpy, using a simple zipped for loop and successive list appends. However, the actual problem has npts ~ 1e7, and speed is critical.
For a generic case:

```python
# Two 1D color arrays
tmp1 = np.array(['red', 'red', 'red', 'green'])
tmp2 = np.array(['white', 'black', 'blue', 'blue'])

# Multiplicity arrays
color1_occupations = [1, 0, 1, 2]
color2_occupations = [0, 2, 0, 1]

# Stack those two color arrays and two multiplicity arrays separately
tmp12 = np.column_stack((tmp1, tmp2))
color_occupations = np.column_stack((color1_occupations, color2_occupations))

# Use np.repeat to get stacked multiplicities for the stacked color arrays
out = np.repeat(tmp12, color_occupations.ravel())
```

giving us:

```python
In [180]: out
Out[180]:
array(['red', 'black', 'black', 'red', 'green', 'green', 'blue'],
      dtype='|S5')
```
Iterate Excel files and output in one folder in Python

I have a folder and subfolder structure as follows:

```
D:/src
├─ xyz.xlsx
├─ dist
│  ├─ xyz.xlsx
│  ├─ xxx.zip
│  └─ xxy.xlsx
├─ lib
│  ├─ xy.rar
│  └─ xyx.xlsx
├─ test
│  ├─ xyy.xlsx
│  ├─ x.xls
│  └─ xyz.xlsx
```

I want to extract all Excel files (xls or xlsx) from the source directory and subdirectories, drop duplicates based on Excel file names, and put all the unique files in the D:/dst directory. How can I achieve the following result in Python? Thanks.

Expected result:

```
D:/dst
├─ xyz.xlsx
├─ xxy.xlsx
├─ xyx.xlsx
├─ xyy.xlsx
├─ x.xls
```

Here is what I have tried:

```python
import os

for root, dirs, files in os.walk(src, topdown=False):
    for file in files:
        if file.endswith('.xlsx') or file.endswith('.xls'):
            #print(os.path.join(root, file))
            try:
                df0 = pd.read_excel(os.path.join(root, file))
                #print(df0)
            except:
                continue
            df1 = pd.DataFrame(columns = [columns_selected])
            df1 = df1.append(df0, ignore_index = True)
            print(df1)
            df1.to_excel('test.xlsx', index = False)
```
I think this will do what you want:

```python
import os
import shutil

src = os.path.abspath(r'.\_src')
dst = os.path.abspath(r'.\_dst')

wanted = {'.xls', '.xlsx'}
copied = set()

for root, dirs, filenames in os.walk(src, topdown=False):
    for filename in filenames:
        ext = os.path.splitext(filename)[1]
        if ext in wanted and filename not in copied:
            src_filepath = os.path.join(root, filename)
            shutil.copy(src_filepath, dst)
            copied.add(filename)
```
Implementing ping using Python

I'm trying to ping a range of servers and I want to store the output of the ping. This is as far as I have got:

```python
import subprocess

string_part = 'ping -W 2 -c 2 64.233.'

for i in range(160, 165):
    for j in range(0, 5):
        prompt = string_part + str(i) + '.' + str(j)
        result = subprocess.call(prompt, shell=True)
```

I thought that if I added print(result) after this, it would print the result. However, it only returns 1. I don't want to use threads as of now. I think I'm missing something! :(
subprocess.call() returns the exit code of the process. To get the stdout output of the ping command, use a pipe and Popen.communicate() instead:

```python
string_part = 'ping -W 2 -c 2 64.233.{}.{}'

for i in range(160, 165):
    for j in range(5):
        prompt = string_part.format(i, j)
        proc = subprocess.Popen(prompt, shell=True, stdout=subprocess.PIPE)
        result, _ = proc.communicate()
```

You can and should avoid using the shell; just pass in the arguments as a list:

```python
command = ['ping', '-W', '2', '-c', '2']
ip_template = '64.233.{}.{}'

for i in range(160, 165):
    for j in range(5):
        ip_address = ip_template.format(i, j)
        proc = subprocess.Popen(command + [ip_address], stdout=subprocess.PIPE)
        result, _ = proc.communicate()
```
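On Python 3.5+, subprocess.run wraps the Popen/communicate dance into one call. A minimal sketch, demonstrated with a portable command rather than ping (since ping's flags and output vary by platform and this may run anywhere):

```python
import subprocess
import sys

# capture_output=True wires up stdout/stderr pipes and waits for the process
proc = subprocess.run(
    [sys.executable, '-c', 'print("2 packets transmitted")'],
    capture_output=True,
    text=True,  # decode bytes to str
)

print(proc.returncode)       # → 0
print(proc.stdout.strip())   # → 2 packets transmitted
```

For the actual ping case, you would swap the argument list for something like `['ping', '-W', '2', '-c', '2', ip_address]` and read `proc.stdout` the same way.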
How can I execute Python scripts using Anaconda's version of Python?

I recently downloaded the Anaconda distribution for Python. I noticed that if I write and execute a Python script (by double-clicking on its icon), my computer (running Windows 8) will execute it using my old version of Python rather than Anaconda's version. So, for example, if my script contains import matplotlib, I will receive an error. Is there a way to get my scripts to use Anaconda's version of Python instead?

I know that I can just open Anaconda's version of Python in the command prompt and manually import it, but I'd like to set things up so that I can just double-click on a .py file and Anaconda's version of Python is automatically used.
I know this is old, but none of the answers here is a real solution if you want to be able to double-click Python files and have the correct interpreter used without modifying your PYTHONPATH or PATH every time you want to use a different interpreter. Sure, from the command line, activate my-environment works, but OP specifically asked about double-clicking.

In this case, the correct thing to do is use the Python launcher for Windows. Then, all you have to do is add

```
#! path\to\interpreter\python.exe
```

to the top of your script. Unfortunately, although the launcher comes standard with Python 3.3+, it is not included with Anaconda (see Python & Windows: Where is the python launcher?), and the simplest thing to do is to install it separately from here.
how to implement CURL cli using python to consume REST services There are a lot of questions posted on how to consume REST services with Python, but none of them worked for me. Currently, with the below curl CLI I can get the authentication token.

curl cli:

curl -v --user username:pass1234 -H "content-type: application/json" -X POST -d "" https://mywebsite/api/v1/auth/token-services --insecure

When I execute the above CLI I get the JSON response below.

output snip from above curl cli:

< HTTP/1.1 200 OK
< Server: nginx/1.4.2
< Date: Mon, 14 Apr 2014 23:22:41 GMT
< Content-Type: application/json
< Content-Length: 201
< Connection: keep-alive
Connection #0 to host <ipaddress> left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
{"kind": "object#auth-token", "expiry-time": "Mon Apr 14 23:37:41 2014", "token-id": "l3CvWcEr5rKvooOaCymFvy2qp3cY18XCs4JrW4EvPww=", "link": "https://mywebsite/api/v1/auth/token-services/1634484805"}

NOW MY QUESTION is: how do I achieve this using Python, and what libraries should I use? I need to extract the token-id from the JSON response, so that I can use that token for further requests to consume the REST services. If someone could post a Python code snippet for this, that would be great.
Have a look at the following HOWTO of the python documentation: HOWTO Fetch Internet Resources Using urllib2. There you also find a section with an code example for Basic authentication. The HOWTO describes how you can use the module urllib2.Other useful libraries:requestsmechanize
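With the third-party requests library the curl call maps almost one-to-one (the URL and credentials below are the placeholders from the question; the network call is shown in comments since it needs a live server). The token extraction itself is runnable against the sample response from the question:

```python
import json

# With requests (third-party), the curl command maps roughly to:
#   import requests
#   resp = requests.post("https://mywebsite/api/v1/auth/token-services",
#                        auth=("username", "pass1234"),
#                        headers={"content-type": "application/json"},
#                        data="", verify=False)   # verify=False ~ curl --insecure
#   body = resp.json()

# Extracting token-id from the sample JSON response shown in the question:
sample = ('{"kind": "object#auth-token", "expiry-time": "Mon Apr 14 23:37:41 2014", '
          '"token-id": "l3CvWcEr5rKvooOaCymFvy2qp3cY18XCs4JrW4EvPww=", '
          '"link": "https://mywebsite/api/v1/auth/token-services/1634484805"}')
token_id = json.loads(sample)["token-id"]
print(token_id)
```

The token can then be sent on subsequent requests, typically in a header, depending on what the API expects.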
ipython: %paste over ssh connection In ipython >=0.11, the %paste command is required to paste indented commands. However, if I run an ipython shell in a remote terminal, the buffer %paste refers to is on the remote machine rather than the local machine. Is there any way around this?
I think this is exactly what %cpaste is for (I am always forgetting about all the things IPython does). %cpaste enters a state allowing you to paste already formatted or indented code, and it will strip leading indentation and prompts, so you can copy/paste indented code from files, or even from an interactive Python session including leading >>> or In [1] which will be stripped.
Smart date interpretation I can't remember which application I was using, but I do recall it having really neat date parsing/interpretation.For example, you could type in 'two days ago' or 'tomorrow' and it would understand.Any libraries to suggest? Bonus points if usable from Python.
Perhaps you are thinking of PHP's strtotime() function, the Swiss Army Knife of date parsing:

Man, what did I do before strtotime(). Oh, I know, I had a 482 line function to parse date formats and return timestamps. And I still could not do really cool stuff. Like tonight I needed to figure out when Thanksgiving was in the US. I knew it was the 4th Thursday in November. So, I started with some math stuff and checking what day of the week Nov. 1 would fall on. All that was making my head hurt. So, I just tried this for fun.

strtotime("thursday, november ".date("Y")." + 3 weeks")

That gives me Thanksgiving. Awesome.

Sadly, there does not appear to be a Python equivalent. The closest thing I could find is the dateutil.parser module.
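Note that dateutil.parser handles explicit dates but not relative phrases like "two days ago". For a handful of fixed phrases, a stdlib sketch is enough (the phrase table and base date here are assumptions for illustration):

```python
from datetime import date, timedelta

# Hypothetical phrase table; extend as needed.
PHRASES = {"today": 0, "tomorrow": 1, "yesterday": -1, "two days ago": -2}

def interpret(phrase, base):
    """Resolve a relative-date phrase against a base date."""
    return base + timedelta(days=PHRASES[phrase])

base = date(2020, 1, 10)                 # fixed base date for reproducibility
print(interpret("two days ago", base))   # 2020-01-08
print(interpret("tomorrow", base))       # 2020-01-11
```

For genuinely free-form input ("4th Thursday in November") a real grammar is needed, which is what strtotime() provides and this sketch does not.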
TypeError: grid_configure() missing 1 required positional argument: 'self'

prompt = ">>"

from tkinter import *

root = Tk()

userName = Entry()
myLabel = Label(root, text="UserName")

userName.grid(row=0)
myLabel = Label.grid(row=0, column=1)

root.mainloop()

TypeError: grid_configure() missing 1 required positional argument: 'self'
This statement is incorrect:

myLabel = Label.grid(row=0, column=1)

At the very least it needs to be this:

myLabel = Label().grid(row=0, column=1)

Though, if you want myLabel to be anything other than None, you need to use two lines:

myLabel = Label()
myLabel.grid(row=0, column=1)

Though, if you want to use the previous definition of myLabel, maybe you need to simply omit myLabel = Label(), since that creates a new empty label.
Looping through Where statement until a result is found (SQL) Problem Summary: I'm using Python to send a series of queries to a database (one by one) from a loop until a non-empty result set is found. The query has three conditions that must be met, and they're placed in a where statement. Every iteration of the loop changes the conditions from a specific condition to a more generic one.

Details: Assume the conditions are keywords based on a pre-made list ordered by accuracy such as:

Option  KEYWORD1  KEYWORD2  KEYWORD3
1       exact     exact     exact     # most accurate!
2       generic   exact     exact     # accurate
3       generic   generic   exact     # close enough
4       generic   generic   generic   # close
5       generic+  generic   generic   # almost there
.... and so on.

On the database side, I have a description column that should contain all three keywords, either in their specific form or a generic form. When I run the loop in Python, this is what actually happens:

-- The first sql statement will be like
Select * From MyTable
Where Description LIKE 'keyword1-exact%'
  AND Description LIKE 'keyword2-exact%'
  AND Description LIKE 'keyword3-exact%'

-- if no results, the second sql statement will be like
Select * From MyTable
Where Description LIKE 'keyword1-generic%'
  AND Description LIKE 'keyword2-exact%'
  AND Description LIKE 'keyword3-exact%'

-- if no results, the third sql statement will be like
Select * From MyTable
Where Description LIKE 'keyword1-generic%'
  AND Description LIKE 'keyword2-generic%'
  AND Description LIKE 'keyword3-exact%'

-- and so on until a non-empty result set is found or all keywords were used

I'm using the approach above to get the most accurate results with the minimum amount of irrelevant ones (the more generic the keywords, the more irrelevant results show up, and those need additional processing).

Question: My approach above is doing exactly what I want, but I'm sure that it's not efficient.
What would be the proper way to do this operation in a query instead of Python loop (knowing that I only have a read access to the database so I can't store procedures)?
Here is an idea:

select top 1 *
from
(
    select MyTable.*,
        accuracy = case
            when description like keyword1 + '%'
             and description like keyword2 + '%'
             and description like keyword3 + '%'
            then accuracy
        end
    -- an example of data from MyTable
    from (select description = 'exact') MyTable
    cross join (values
        -- generate full list like this in python
        -- or read it from a table if it is in database
        (1, ('exact'), ('exact'), ('exact')),
        (2, ('generic'), ('exact'), ('exact')),
        (3, ('generic'), ('generic'), ('exact'))
    ) t(accuracy, keyword1, keyword2, keyword3)
) t
where accuracy is not null
order by accuracy
Patch all functions in module with decorator I have a python module with lots of functions and I want to apply a decorator to all of them. Is there a way to monkey-patch every function in the module with this decorator, without repeating the decorator line for each one?

In other words, I want to replace this:

@logging_decorator(args)
def func_1(): pass

@logging_decorator(args)
def func_2(): pass

@logging_decorator(args)
def func_3(): pass

@logging_decorator(args)
def func_n(): pass

With this:

def patch_func():
    # get all functions of this module
    # apply @logging_decorator to all (or not all) of them
    pass

def func_1(): pass
def func_2(): pass
def func_3(): pass
def func_n(): pass
I'm really not certain that this is a good idea. Explicit is better than implicit, after all.

With that said, something like this should work, using inspect to find which members of a module can be decorated, and using __dict__ to manipulate the module's contents:

import inspect

def decorate_module(module, decorator):
    for name, member in inspect.getmembers(module):
        if inspect.getmodule(member) == module and callable(member):
            if member == decorate_module or member == decorator:
                continue
            module.__dict__[name] = decorator(member)

Sample usage:

import sys

def simple_logger(f):
    def wrapper(*args, **kwargs):
        print("calling " + f.__name__)
        f(*args, **kwargs)
    return wrapper

def do_something():
    pass

decorate_module(sys.modules[__name__], simple_logger)
do_something()
Access to a dict's values I'm currently building small manager (learning python) using Pythons Flask, which uses the Jinja2 template engine. I'm using Peewee to talk to my database.I have a dictionary of pins, which all contains info on the given pin. The info on the pins comes directly from Peewee, like so:pins = {}pins[3] = Pin.get(Pin.id == 3)pins[5] = Pin.get(Pin.id == 5)pins[7] = Pin.get(Pin.id == 7)pins[8] = Pin.get(Pin.id == 8)(Only using these four as an example)In my template I'm iterating through these four pins and would like to display my DB information (e.g. Description, state and id), so I've written this code: {% for pin in pins %} {% if pin.state %} <input type="checkbox" checked="checked" data-toggle="toggle" data-pin="{{ pin.id }}"> {% else %} <input type="checkbox" data-toggle="toggle" data-pin="{{ pin.id }}" ?> {% endif %} {{ pin.description }} <br>{% endfor %}According to the jinja2 website and this question the code should work, since I'm accessing pin.state, pin.id and pin.description inside the loop. But it doesn't - No matter which property I try to show, it just gives me nothing when using {{ pin.description }} to access the property. I've noticed the following, which may give some helpIf I make a {{ pin }} inside the loop it prints the current key ofthe dictionary.If I make a {{ pins[pin].description }} inside the loop it prints thecorrect description. According to my understanding of the documentation and the linked question, it should be possible to show the current values, using {{ pin.description }} inside the loop.Can anyone shed some light on what my mistake is?
The issue is that you have made a dictionary, not a list. When you iterate through a dictionary via for pin in pins, you are iterating through the keys, not the values - so in each iteration you get one of 3, 5, 7 etc. Those values obviously don't have properties like description.Instead, use the values() method:{% for pin in pins.values() %}Or you might consider just using a list in the first place.
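The same key-versus-value distinction in plain Python (plain dicts with made-up descriptions stand in for the Peewee Pin rows):

```python
# Plain dicts stand in for the Peewee Pin rows; the descriptions are invented.
pins = {3: {"id": 3, "description": "Front door"},
        5: {"id": 5, "description": "Garage"}}

# Iterating the dict directly yields the keys (3, 5) ...
print(list(pins))                                  # [3, 5]

# ... while .values() yields the pin objects themselves.
print([p["description"] for p in pins.values()])   # ['Front door', 'Garage']
```

This is exactly what Jinja2's for loop does under the hood, which is why `{{ pin }}` printed the dictionary key in the template.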
save a html string to PDF in python I have an HTML string which I want to store as a PDF file in Python. I am using PDFkit for that purpose. Below is the code which I tried. In the code I am also trying to serve the image via a Tornado server.

class MainHandler(RequestHandler):
    def get(self):
        self.write('hello!')

class ImageHandler(RequestHandler):
    def get(self):
        d = {}
        d["mov1"] = 1
        d["mov2"] = 10
        d["mov3"] = 40
        d["mov4"] = 3
        py.bar(range(len(d)), d.values(), align="center")
        py.xticks(range(len(d)), d.keys())
        io = StringIO()
        py.savefig(io, format='svg')
        self.set_header("Content-Type", "image/svg+xml")
        print io.getvalue()
        config = pdfkit.configuration(wkhtmltopdf='E:\\wkhtmltopdf\\bin')
        pdfkit.from_string(io.getvalue(), "E:\\hello.pdf", configuration=config)  # Error here
        self.write(io.getvalue())

app = Application([
    url(r"/", MainHandler),
    url(r"/Image", ImageHandler)
])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

I have installed wkhtmltopdf in the E drive.
I am getting an exception:

ERROR:tornado.application:Uncaught exception GET /Image (::1)
HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/Image', version='HTTP/1.1', remote_ip='::1', headers={'Accept-Language': 'en-US,en;q=0.8', 'Accept-Encoding': 'gzip, deflate, sdch', 'Host': 'localhost:8888', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'Connection': 'keep-alive', 'Cookie': '_ga=GA1.1.359367893.1418721747', 'If-None-Match': '"ee884f005691a9736e5e380cc68cd4c9679bf2a7"'})
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\tornado\web.py", line 1332, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "E:\eclipse_workspace\Visualisation\module1\mod1.py", line 61, in get
    config = pdfkit.configuration(wkhtmltopdf='E:\\wkhtmltopdf\\bin')
  File "C:\Python27\lib\site-packages\pdfkit\api.py", line 79, in configuration
    return Configuration(**kwargs)
  File "C:\Python27\lib\site-packages\pdfkit\configuration.py", line 27, in __init__
    'https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf' % self.wkhtmltopdf)
IOError: No wkhtmltopdf executable found: "E:\wkhtmltopdf\bin"
If this file exists please check that this process can read it. Otherwise please install wkhtmltopdf - https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf
ERROR:tornado.access:500 GET /Image (::1) 246.00ms

I am open to using other packages; I also want to know what mistake I am making.
This link says that, given the above error, the path should be set to the location of the wkhtmltopdf binary, which I thought I had done. But it worked for me when I changed the config to

config = pdfkit.configuration(wkhtmltopdf='E:\\wkhtmltopdf\\bin\\wkhtmltopdf.exe')
Reading Regular Expressions from a text file I'm currently trying to write a function that takes two inputs:

1 - The URL for a web page
2 - The name of a text file containing some regular expressions

My function should read the text file line by line (each line being a different regex) and then execute the given regex on the web page source code. However, I've run into trouble doing this:

Example: Suppose I want the address contained on a Yelp page with URL = http://www.yelp.com/biz/liberty-grill-cork, where the regex is \<address\>\s*([^<]*)\\b\s*<. In Python, I then run:

address = re.search('\<address\>\s*([^<]*)\\b\s*<', web_page_source_code)

The above will work; however, if I just write the regex in a text file as is, and then read the regex from the text file, then it won't work. So reading the regex from a text file is what is causing the problem. How can I rectify this?

EDIT: This is how I'm reading the regexes from the text file:

with open("test_file.txt", "r") as file:
    for regex in file:
        address = re.search(regex, web_page_source_code)

Just to add, the reason I want to read regexes from a text file is so that my function code can stay the same and I can alter my list of regexes easily. If anyone can suggest any alternatives, that would be great.
Your string has some backslashes and other things escaped to avoid special meaning in the Python string literal, not only in the regex itself.

You can easily verify what happens when you print the string you load from the file. If your backslashes doubled, you did it wrong.

The text you want in the file is:

\<address\>\s*([^<]*)\b\s*<

Here's how you can check it:

In [1]: a = open('testfile.txt')
In [2]: line = a.readline()

-- this is the line as you'd see it in python code when properly escaped
In [3]: line
Out[3]: '\\<address\\>\\s*([^<]*)\\b\\s*<\n'

-- this is what it actually means (what re will use)
In [4]: print(line)
\<address\>\s*([^<]*)\b\s*<
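A runnable sketch of the whole round trip, where io.StringIO stands in for the regex file and the HTML fragment is made up for illustration. Note the rstrip("\n"): lines read from a file keep their trailing newline, which would otherwise become part of the pattern:

```python
import io
import re

# StringIO stands in for test_file.txt: one unescaped pattern per line.
# (Doubled backslashes here are Python-literal escaping; the "file"
# contents are the single-backslash pattern from the answer above.)
regex_file = io.StringIO("<address>\\s*([^<]*)\\b\\s*<\n")

# A made-up fragment of page source, for illustration only.
web_page_source_code = "<address>\n 12 Main St, Cork<br></address>"

for line in regex_file:
    pattern = line.rstrip("\n")   # drop the trailing newline first
    match = re.search(pattern, web_page_source_code)
    if match:
        print(match.group(1).strip())   # 12 Main St, Cork
```

Forgetting the rstrip is the most common reason patterns "work inline but not from a file".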
N by N spiral matrix (1 to square(N)) - Unexpected Output While trying to create an N by N spiral matrix of the numbers 1 to square(N), using the usual algorithm for a spiral matrix, there is an unexpected output in one of the rows which cannot be found even on rechecking.

def getSpiralOrder(N):
    matrix = [[0 for i in range(N)] for j in range(N)]
    c = 1
    rS = 0
    rE = len(matrix)
    cS = 0
    cE = len(matrix[0])
    while (rS < rE and cS < cE):
        for i in range(cS, cE):
            matrix[rS][i] = c
            c = c + 1
        rS += 1
        for i in range(rS, rE):
            matrix[i][cE - 1] = c
            c = c + 1
        cE -= 1
        for i in range(cS, cE):
            matrix[rE - 1][cE - i - 1] = c
            c = c + 1
        rE -= 1
        for i in range(rS, rE):
            matrix[rE - i][cS] = c
            c = c + 1
        cS += 1
    return (matrix)

n = int(input())
print(getSpiralOrder(n))

Output should be:
[[1, 2, 3, 4], [12, 13, 14, 5], [11, 16, 15, 6], [10, 9, 8, 7]]

But output coming is:
[[1, 2, 3, 4], [12, 13, 14, 5], [16, 0, 15, 6], [10, 9, 8, 7]]

Turns out all rows are correct except the third one.
Your last two for loops are wrong:

def getSpiralOrder(N):
    matrix = [[0 for i in range(N)] for j in range(N)]
    c = 1
    rS = 0
    rE = len(matrix)
    cS = 0
    cE = len(matrix[0])
    while (rS < rE and cS < cE):
        for i in range(cS, cE):
            matrix[rS][i] = c
            c = c + 1
        rS += 1
        for i in range(rS, rE):
            matrix[i][cE - 1] = c
            c = c + 1
        cE -= 1
        # should go from cE - 1, not cE - cS - 1
        for i in range(cE - cS):
            matrix[rE - 1][cE - i - 1] = c
            c = c + 1
        rE -= 1
        # similar here
        for i in range(rE - rS):
            matrix[rE - i - 1][cS] = c
            c = c + 1
        cS += 1
    return (matrix)
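A quick sanity check of the corrected loop bounds against the expected output from the question, for N=4:

```python
def getSpiralOrder(N):
    # Corrected spiral fill: the bottom and left passes iterate over
    # the remaining width (cE - cS) and height (rE - rS) respectively.
    matrix = [[0] * N for _ in range(N)]
    c, rS, rE, cS, cE = 1, 0, N, 0, N
    while rS < rE and cS < cE:
        for i in range(cS, cE):            # top row, left to right
            matrix[rS][i] = c; c += 1
        rS += 1
        for i in range(rS, rE):            # right column, top to bottom
            matrix[i][cE - 1] = c; c += 1
        cE -= 1
        for i in range(cE - cS):           # bottom row, right to left
            matrix[rE - 1][cE - i - 1] = c; c += 1
        rE -= 1
        for i in range(rE - rS):           # left column, bottom to top
            matrix[rE - i - 1][cS] = c; c += 1
        cS += 1
    return matrix

print(getSpiralOrder(4))
# [[1, 2, 3, 4], [12, 13, 14, 5], [11, 16, 15, 6], [10, 9, 8, 7]]
```

This matches the expected output, with 16 landing in the center instead of overwriting the left column.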
Stack dimensions of numpy array I have a numpy array of shape (2500, 16, 32, 24), and I want to make it into a ( something, 24) array, but I don't want numpy to shuffle my values. The 32 x 24 dimension at the end represent images and I want the corresponding elements to be consistent. Any ideas?EDIT: Ok , I wasn't clear enough. (something, 24) = (1280000, 24).
Use arr.reshape(-1, arr.shape[-1]), or, if you know the last dimension will be 24, arr.reshape(-1, 24).
How to get basic interactive pyqtgraph plot to work in IPython REPL or Jupyter console? Typingimport pyqtgraph as pgpg.plot([1,2,3,2,3])into the standard Python REPL opens a window containing a plot of the data. Typing exactly the same code into the IPython REPL or the Jupyter console, opens no such window.[The window can be made to appear by typing pg.QtGui.QApplication.exec_(), but then the REPL is blocked.Alternatively, the window appears when an attempt is made to exit the REPL, and confirmation is being required.Both of these are highly unsatisfactory.]How can basic interactive pyqtgraph plotting be made to work with the IPython REPL?[The described behaviour was observed in IPython 5.1.0 and Jupyter 5.0.0 with Python 3.5 and 3.6 and PyQt4 and PyQt5 (no PySide)]
As sugested by @titusjan, the solution is to type%gui qt(or some variation on that theme: see what other options are available with %gui?) in IPython before issuing any pyqtgraph (or PyQt in general) commands.
Swap indexes using slices? I know that you can swap 2 single indexes in Python:

r = ['1', '2', '3', '4', '5', '6', '7', '8']
r[2], r[4] = r[4], r[2]

output:
['1', '2', '5', '4', '3', '6', '7', '8']

But why can't you swap 2 slices of indexes in Python?

r = ['1', '2', '3', '4', '5', '6', '7', '8']

I want to swap the numbers 3 + 4 with 5 + 6 + 7 in r:

r[2:4], r[4:7] = r[4:7], r[2:4]

output:
['1', '2', '5', '6', '3', '4', '7', '8']

expected output:
['1', '2', '5', '6', '7', '3', '4', '8']

What did I do wrong?
The slicing is working as it should. You are replacing slices of different lengths. r[2:4] is two items, and r[4:7] is three items.

>>> r = ['1', '2', '3', '4', '5', '6', '7', '8']
>>> r[2:4]
['3', '4']
>>> r[4:7]
['5', '6', '7']

So when ['3', '4'] is replaced, it can only fit ['5', '6'], and when ['5', '6', '7'] is replaced, it only gets ['3', '4']. So you have ['1', '2', then the next two elements are the first two elements from ['5', '6', '7'], which is just ['5', '6', then the two elements from ['3', '4' go next, then the remaining '7', '8'].

If you want to replace the slices, you have to start the slices at the right places and allocate an appropriate size in the array for each slice:

>>> r = ['1', '2', '3', '4', '5', '6', '7', '8']
>>> r[2:5], r[5:7] = r[4:7], r[2:4]
>>> r
['1', '2', '5', '6', '7', '3', '4', '8']

old index: 4 5 6 2 3
new index: 2 3 4 5 6
win32 ExportAsFixedFormat I was trying to change the footers of my Excel file and then convert it to PDF with the win32 package in Python 3.6.

It actually works on my home PC; on my work PC only the PDF exporting part gives an error. I am wondering if the MS Office version matters, since I have the newest at home and Excel 2007 at work.

Here is my code:

import win32com.client as win32

excel = win32.gencache.EnsureDispatch('Excel.Application')
file_path = r"C:\Mydir\DSexample\Myfile.xlsx"
wb = excel.Workbooks.Open(file_path)
ws = wb.ActiveSheet
ws.PageSetup.CenterFooter = '&"Arial,Regular"&8new address'
ws.PageSetup.LeftFooter = '&"Arial,Regular"&8new date'
path_to_pdf = list()
excel.Visible = True
if ws.Cells(24, 7).Value[-2] == "R":
    print(type(str(ws.Cells(24, 7).Value[:-3])))
    path_to_pdf.append("C:\\Mydir\\DSexample\\" + str(ws.Cells(24, 7).Value[:-3]).strip() + ".pdf")
    path_to_pdf.append("C:\\Mydir\\DSexample\\" + str(ws.Cells(24, 7).Value[:-3]).strip() + "R.pdf")
wb.SaveAs(r"C:\Mydir\DSexample\new.xlsx")
for i in range(0, len(path_to_pdf)):
    ws.ExportAsFixedFormat(0, path_to_pdf[i])
wb.Close()

The error I got is:

Traceback (most recent call last):
  File "C:/Users/Guest/PycharmProjects/DScreator/DScreater.py", line 27, in
    ws.ExportAsFixedFormat(0, path_to_pdf[i], From=1, To=1, OpenAfterPublish=False)
  File "C:\Users\Guest\AppData\Local\Programs\Python\Python36\lib\site-packages\win32com\gen_py\00020813-0000-0000-C000-000000000046x0x1x6_Worksheet.py", line 115, in ExportAsFixedFormat
    , To, OpenAfterPublish, FixedFormatExtClassPtr)
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147024809), None)
I just found out the answer. For Excel 2007 on my work desktop, I needed to download the add-in for ExportAsFixedFormat(). Here is the link: https://www.microsoft.com/en-us/download/confirmation.aspx?id=7

Hope this helps you as well. :)
How to bundle Python for AWS Lambda I have a project I'd like to run on AWS Lambda but it is exceeding the 50MB zipped limit. Right now it is at 128MB zipped and the project folder with the virtual environment sits at 623MB and includes (top users of space):

scipy (~187MB)
pandas (~108MB)
numpy (~74.4MB)
lambda_packages (~71.4MB)

Without the virtualenv the project is <2MB. The requirements.txt is:

click==6.7
cycler==0.10.0
ecdsa==0.13
Flask==0.12.2
Flask-Cors==3.0.3
future==0.16.0
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
matplotlib==2.1.2
mpmath==1.0.0
numericalunits==1.19
numpy==1.14.0
pandas==0.22.0
pycryptodome==3.4.7
pyparsing==2.2.0
python-dateutil==2.6.1
python-dotenv==0.7.1
python-jose==2.0.2
pytz==2017.3
scipy==1.0.0
six==1.11.0
sympy==1.1.1
Werkzeug==0.14.1
xlrd==1.1.0

I deploy using Zappa, so my understanding of the whole infrastructure is limited. My understanding is that some (very few) of the libraries do not get uploaded, so for e.g. numpy, that part does not get uploaded and Amazon's version, already available in that environment, gets used.

I propose the following workflow (without using S3 buckets for slim_handler):

delete all the files that match "test_*.py" in all packages
manually tree shake scipy, as I only use scipy.minimize, by deleting most of it and re-running my tests
minify all the code and obfuscate using pyminifier
zappa deploy

Or:

run compileall to get .pyc files
delete all *.py files and let zappa upload .pyc files instead
zappa deploy

I've had issues with slim_handler: true — either my connection drops and the upload fails, or some other error occurs and at ~25% of the upload to S3 I get Could not connect to the endpoint URL. For the purposes of this question, I'd like to get the dependencies down to manageable levels.

Nevertheless, over half a gig of dependencies with the main app being less than 2MB has to be some sort of record.

My questions are:

What is the unzipped limit for AWS?
Is it 250MB or 500MB?
Am I on the right track with the above method for reducing package sizes?
Is it possible to go a step further and use .pyz files?
Are there any standard utilities out there that help with the above?
Is there no tree shaking library for python?
The limit in AWS for unpacked code is 250MB (as seen here: https://hackernoon.com/exploring-the-aws-lambda-deployment-limits-9a8384b0bec3).

I would suggest going for the second method and compiling everything.

I think you should also consider using the Serverless Framework. It does not force you to create a virtualenv, which is very heavy. I've seen that all your packages can be compressed down to 83MB (just the packages).

My workaround would be:

use the Serverless Framework (consider moving from Flask directly to API Gateway)
install your packages locally in the same folder using:
pip install -r requirements.txt -t .
try your method of compiling to .pyc files, and remove the others.
Deploy: sls deploy

Hope it helps.
Is there a way to continue my script after running an if statement to catch anomaly in Python? I have tried searching online but I can't seem to find anything that would answer this question.

I currently have a script running, and I am using an if statement to catch an anomaly:

if test <= limit:
    return True

This works as intended, but I am looking to reduce said code to a single line like

return True if test <= limit else [continue with script]

But I have no idea what to put in the [continue with script] part.
Sure, you can do that, at least in principle, although it's not really a single line of code. At all. And it's almost certainly not what you want:

class LimitException(Exception):
    pass

def LimitExceptionRaiser(msg):
    raise LimitException(msg)

def f(test, limit):
    try:
        return True if test < limit else LimitExceptionRaiser("Must continue")
    except LimitException:
        pass
    return False

if __name__ == "__main__":
    print(f(1, 2))  # -> True
    print(f(2, 1))  # -> False

Is this solution pretty? Will the next guy who has to maintain your code praise you? Seems unlikely.

Whatever follows else must be a value, so I can't simply put raise Exception() there, but have to trick the interpreter into thinking that it gets a value, namely, the return value of some other function LimitExceptionRaiser. But this function doesn't return a value; it stops the current line of execution and yanks you forcefully out of it.

Once again,

def f(test, limit):
    if test < limit:
        return True
    return False

seems like a much saner solution (to clarify, I assume that you do some additional computation inside the function; otherwise, you could simply do return test < limit).
File Counter in python or last 5 files I am trying to get a list of the 5 most recently imported pictures in a folder to feed into a function loading them into a contact sheet.

I have a program bringing in files and it uses a counter to name them 00001, 00002 etc... I was thinking something like

while blah:
    N = '0000'x
    x = x+1

but this breaks down when you get above 10, because now it's 00010. I feel like I should be able to figure this out, but it's totally escaping me.

On the other hand, if I could just have the program load the X newest files into the function, that would also work great.

Image names are run through this formula:

imgs = [Image.open(fn).resize((photow, photoh)) for fn in fnames]

so I think they need to be in the format ('00001.jpg', '00002.jpg', '00003.jpg', ...).

Thanks for the help.
>>> '{:05}'.format(10)
'00010'

As for sorting files based on this numbering:

import os

# List all files in current directory
files = os.listdir('.')
recent_images = sorted(files)[-5:]
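An end-to-end sketch: zero-padded names sort lexicographically in numeric order, so the five highest-numbered files are simply the tail of the sorted listing (the temporary directory and .jpg names are just for illustration):

```python
import os
import tempfile

# Create seven zero-padded dummy files in a scratch directory.
d = tempfile.mkdtemp()
for i in range(1, 8):
    open(os.path.join(d, '{:05}.jpg'.format(i)), 'w').close()

# Lexicographic sort == numeric sort, thanks to the zero padding.
fnames = sorted(os.listdir(d))[-5:]
print(fnames)   # ['00003.jpg', '00004.jpg', '00005.jpg', '00006.jpg', '00007.jpg']
```

If "most recently imported" should mean modification time rather than the counter in the name, sort with key=lambda f: os.path.getmtime(os.path.join(d, f)) instead.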
Modifying number of ticks on Pandas hourly time axis If I have the following example Python code using a Pandas dataframe:

import pandas as pd
from datetime import datetime
from numpy.random import randn

ts = pd.DataFrame(randn(1000),
                  index=pd.date_range('1/1/2000 00:00:00', freq='H', periods=1000),
                  columns=['Data'])
ts['Time'] = ts.index.map(lambda t: t.time())
ts = ts.groupby('Time').mean()
ts.plot(x_compat=True, figsize=(20, 10))

The output plot is: What is the most elegant way to get the X-Axis ticks to automatically space themselves hourly or bi-hourly? x_compat=True has no impact.
You can pass the argument xticks to ts.plot(). Giving the right interval, you can plot hourly or bi-hourly ticks (arange here is numpy's np.arange):

max_sec = 90000
ts.plot(x_compat=True, figsize=(20, 10), xticks=np.arange(0, max_sec, 3600))   # hourly
ts.plot(x_compat=True, figsize=(20, 10), xticks=np.arange(0, max_sec, 7200))   # bi-hourly

Here max_sec is the maximum value of the x axis, in seconds.
Binary Search Tree Python, Implementing Delete I am trying to implement a binary search tree in python, but I can't find a solution for delete. If the item is in a leaf, that is simple, but what if the item I want to delete has 2 children which also have other children and so on? How can I find its successor, so that I can replace it? Are there any simple recursive solutions?

class Node:
    def __init__(self, data=None, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

class BinarySearchTree:
    def __init__(self, root=None):
        self.root = Node(root)

    def add(self, data, node):
        if node == None:
            node = Node(data)
            return True
        if data < node.data:
            if node.left == None:
                node.left = Node(data)
                return True
            else:
                self.add(data, node.left)
        elif data > node.data:
            if node.right == None:
                node.right = Node(data)
                return True
            else:
                self.add(data, node.right)

    def preorder(self, node):
        if node != None:
            print(node.data)
            self.preorder(node.left)
            self.preorder(node.right)

    def inorder(self, node):
        if node != None:
            self.inorder(node.left)
            print(node.data)
            self.inorder(node.right)

    def postorder(self, node):
        if node != None:
            self.postorder(node.left)
            self.postorder(node.right)
            print(node.data)

    def retreive(self, item):
        node = self.root
        while node != None:
            if node.data == item:
                break
            elif item < node.data:
                if node.left != None:
                    if node.left.data == item:
                        node.left = None
                        return True
                    node = node.left
            else:
                if node.right != None:
                    if node.right.data == item:
                        node.right = None
                        return True
                    node = node.right
        if node == None:
            return False

tree = BinarySearchTree()
root = Node(3)
tree.add(55, root)
tree.add(5, root)
tree.add(13, root)
tree.add(2, root)
tree.add(3, root)
tree.preorder(root)
tree.postorder(root)
tree.inorder(root)

Also, if you have any suggestions for what I've written so far, I'd really appreciate it. Thanks in advance.
If this isn't homework, you might use one of these:https://pypi.python.org/pypi/treap/https://pypi.python.org/pypi/red-black-tree-modBoth implement deletion. Both deal well with sorted or unsorted inputs.The red-black tree module has a BinaryTree class that RedBlackTree inherits from.
finding cosine using python I must write a function that computes and returns the cosine of an angle using the first 10 terms of the following series: cosx = 1 - (x**2)/2! + (x**4)/4! - (x**6)/6! ...

I can't use the factorial function, but I can use the fact that if the previous denominator was n!, the current denominator would be n!(n+1)(n+2). I'm trying to use an accumulator loop, but I'm having a hard time with the fact that it alternates from positive to negative, and also having trouble with the denominator. This is what I have thus far. Any help with the denominator and accumulator loop?

def factorial(x):
    if (x == 0):
        return 1
    return x * factorial(x-1)

def cosine(angle):
    cosx = 1
    sign = -1
    for i in range(2, 20, 2):
        cosx = cosx + (sign * (angle**i)) / factorial(i)
        sign = -sign
    return cosx
Maybe something like this:

#! /usr/bin/python3.2

def cos(a):
    d = 1
    c = 1
    for i in range(2, 20, 2):
        d *= i * (i - 1)
        sign = -1 if i % 4 else 1
        print('adding {} * a ** {} / {}'.format(sign, i, d))
        c += sign * a ** i / d
        print('cosine is now {}'.format(c))
    return c

cos(1.0)

Basically d (as in Denominator) is your accumulator.
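The same running-denominator idea without any factorial function, checked against math.cos (a sketch; 10 terms as the question requires):

```python
import math

def cos_series(a, terms=10):
    # Build each denominator from the previous one: 2!, then 2!*3*4 = 4!, etc.
    c, d, sign = 1.0, 1.0, 1
    for n in range(1, terms):
        d *= (2 * n - 1) * (2 * n)   # previous factorial times the next two integers
        sign = -sign                  # terms alternate - + - + ...
        c += sign * a ** (2 * n) / d
    return c

print(cos_series(1.0))   # very close to math.cos(1.0)
```

With 10 terms the series is accurate to far beyond double precision for angles near zero; for large angles, reduce the argument into [-pi, pi] first.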
ndarray.view() doesn't work if entries are multidimensional? I have a multidimensional NumPy array of shape (n, i, j, k, ...) and I'd like to view it as a list of length n with entries of shape (i, j, k, ...). (Makes subsequent computations easier.) Now, view() works well, but only if the entries are scalars or vectors. It fails if the entries are matrices:

import numpy as np

# okay
a = np.random.rand(7)
b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * 1)))

# okay
a = np.random.rand(7, 2)
b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * 2)))

# fail
a = np.random.rand(7, 2, 3)
b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * 2 * 3)))

ValueError: When changing to a larger dtype, its size must be a divisor of the total size in bytes of the last axis of the array.

Any hint on what's going wrong here?
What are you doing with view as type np.void? Are you trying to manipulate the bytes?

If you just want to convert a shape (n, i, j, k, ...) array into a python list of n elements of shape (i, j, k, ...), why not just use np.split?

a = np.random.uniform(size=(7, 2, 3))
new_a_shape = (-1,) + a.shape[2:]  # collapse the two leading dimensions
b = np.split(a.reshape(new_a_shape), a.shape[0])

results in

[array([[0.07452901, 0.4068812 , 0.87074721],
        [0.23739783, 0.67298745, 0.96651022]]),
 array([[0.94348732, 0.004426  , 0.75326683],
        [0.35342782, 0.98006333, 0.46106085]]),
 array([[0.92510793, 0.90671794, 0.34370929],
        [0.42120306, 0.41411307, 0.94946278]]),
 array([[0.49097638, 0.53814849, 0.10396104],
        [0.19402855, 0.31811608, 0.13399161]]),
 array([[0.91071595, 0.3167224 , 0.42786695],
        [0.62646702, 0.68093573, 0.24839687]]),
 array([[0.95266909, 0.13838363, 0.10488899],
        [0.40992171, 0.29816304, 0.31875   ]]),
 array([[0.03371908, 0.29406525, 0.65801023],
        [0.93566766, 0.84455742, 0.40625077]])]
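An even simpler alternative, sketched with a small array: iterating (or list()-ing) a NumPy array walks its first axis, yielding views of shape (i, j, k, ...) directly, with no reshape or split needed:

```python
import numpy as np

a = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# list() walks the first axis, giving n views of shape (3, 4).
b = list(a)
print(len(b), b[0].shape)   # 2 (3, 4)
```

The pieces are views into the original array, so no data is copied; mutating b[0] mutates a.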
Image on a button I expect the same output for both of the scripts below. But I don't get the image on the button when I execute Script 1. However, Script 2 works well.Script 1from Tkinter import * class fe: def __init__(self,master): self.b=Button(master,justify = LEFT) photo=PhotoImage(file="mine32.gif") self.b.config(image=photo,width="10",height="10") self.b.pack(side=LEFT)root = Tk()front_end=fe(root)root.mainloop()Script 2from Tkinter import *root=Tk()b=Button(root,justify = LEFT)photo=PhotoImage(file="mine32.gif")b.config(image=photo,width="10",height="10")b.pack(side=LEFT)root.mainloop()
The only reference to the image object is a local variable. When __init__ exits, the local variable is garbage collected so the image is destroyed. In the second example, because the image is created at the global level it never goes out of scope and is therefore never garbage collected.To work around this, save a reference to the image. For example, instead of photo use self.photo.
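So in the first script the fix is to keep the reference alive on the instance (self.photo = photo). The garbage-collection behaviour itself can be sketched without Tk at all, using weakref to watch the object die (the class names here are illustrative stand-ins, not Tkinter API):

```python
import weakref

class FakeImage:
    """Stand-in for PhotoImage."""
    pass

class Keeps:
    def __init__(self):
        self.photo = FakeImage()      # instance attribute: survives __init__

class Drops:
    def __init__(self):
        photo = FakeImage()           # local variable: collected on return
        self.ref = weakref.ref(photo)

k = Keeps()
d = Drops()
print(d.ref())  # None - the local-only object was garbage collected
```

This is exactly what happens to the PhotoImage in Script 1: once __init__ returns, nothing references it, so Tk has nothing left to draw.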
Python Git and Git-Lab create project and push code/commit We are building an automation process using python in which we clone a base source code repository and add necessary changes to it and add the new code to a new git repository and push it to our private gitlab server.So far I'm using git library to clone, and initial a new repository and make an initial commit. however I'm not able to figure out "how to create and push the new repository to our private gitlab server.import gitlabimport gitimport jsonimport shutil"""GitLab API"""gl = gitlab.Gitlab('gitlab url of company', private_token='the token', api_version=4)gl.auth()"""Cloning the Base Code"""git.Repo.clone_from("url","path to save")"""Getting data"""data_json = getData()""" Copy Base Code to New Folder with Doctor Name"""repo_name = "new_repo"shutil.copytree("./base_code/public", "{}/public".format(repo_name))shutil.copytree("./base_code/src", "{}/src".format(repo_name),dirs_exist_ok=True)shutil.copy("./base_code/.gitignore", "{}/".format(repo_name))shutil.copy("./base_code/package-lock.json", "{}/".format(repo_name))shutil.copy("./base_code/package.json", "{}/".format(repo_name))shutil.copy("./base_code/README.md", "{}/".format(repo_name))"""Generate JSON File and save it new folder"""with open("./{}/src/data.json".format(repo_name), 'w') as fout: json_dumps_str = json.dumps(data_json, indent=4) print(json_dumps_str, file=fout)"""Create new git repo in the new folder"""new_repo = git.Repo.init('{}'.format(repo_name))"""Adding all the files to Staged Scenario""" new_repo.index.add(['.'])"""Commit the changes"""new_repo.index.commit('Initial commit.')"""Create Project of the new Repository"""How to create project and push the code to the new project?
This is how I created a new project in gitlab in python using python-gitlab libary.gl = gitlab.Gitlab('gitlab website url', private_token='token', api_version=4)gl.auth()gl.projects.list()"""Create Project of the new Repository"""response = gl.projects.create({"name":"project name","namespace_id":"group-id if required"})The documentation of python-gitlab doesnt have any direct sample codefor this use-case.
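For the push itself (the remaining step of the question), one approach is to add the freshly created project as a remote of the local repo and push to it. The sketch below assumes GitPython (already used in the question) and embeds the private token as an oauth2 credential in the clone URL, which is one of several ways to authenticate an HTTP push to GitLab; the function names are mine:

```python
def authed_url(project_http_url, token):
    # Embed the token as an oauth2 credential in the clone URL.
    return project_http_url.replace("https://", f"https://oauth2:{token}@")

def push_to_new_project(repo_path, project, token, branch="master"):
    """project is the object returned by gl.projects.create(...)."""
    import git  # GitPython, already used in the question
    repo = git.Repo(repo_path)
    origin = repo.create_remote("origin",
                                authed_url(project.http_url_to_repo, token))
    origin.push(branch)
```

After gl.projects.create(...) returns, something like push_to_new_project(repo_name, response, "the token") would push the initial commit to the new project.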
Why is Image not shown properly as a Button background in Kivy? I'm trying to convert this kv code into my own class<BaseScreen>: # This is GridLayout cols: 4 rows: 4 padding: 25 Button: size_hint_x: None size_hint_y: None Image: source: "business_bookcover.png" x: self.parent.x y: self.parent.y width: self.parent.width height: self.parent.height keep_ratio: FalseThe problem is that I'm trying to make "clickable image", but when I append widget to the button, image is on the default position (0,0), totally out of the button position. Is there any workaround how to do it?This is my tryclass Book(Button): def __init__(self, **kwargs): super().__init__(**kwargs) self.size_hint = (None, None) book_cover_image_source = kwargs.get('cover') or BLANK_BOOK_COVER book_cover = Image(source=book_cover_image_source) book_cover.pos = self.pos book_cover.width = self.width book_cover.height = self.height book_cover.allow_stretch = True book_cover.keep_ratio = False self.add_widget(book_cover)
In kv languaje properties used in the expression (x:, y:, width:) will be observed. When the parent's size/pos change, child widget change accordingly. You must suply this event bindings in the Python class:class Book(Button): def __init__(self, cover=BLANK_BOOK_COVER, **kwargs): super(Book, self).__init__(**kwargs) self.size_hint = (None, None) self.book_cover = Image(source=cover) self.book_cover.allow_stretch = True self.book_cover.keep_ratio = False self.add_widget(self.book_cover) def on_size(self, *args): self.book_cover.size = self.size def on_pos(self, *args): self.book_cover.pos = self.posA simpler option to get a clickable image is to have your class Book inherit from ButtonBehabior and Image classes:from kivy.uix.behaviors import ButtonBehaviorfrom kivy.uix.image import Imageclass Book(ButtonBehavior, Image): def __init__(self, cover=BLANK_BOOK_COVER, **kwargs): super(Book, self).__init__(**kwargs) self.source = cover self.size_hint = (None, None) self.allow_stretch = True self.keep_ratio = False
How to receive a message from a certain user in discord, then wait for another certain user's reply, then send a custom message in python I'm new to python and discord bots. I created a bot for my friend's server and I'm stuck on a certain event.I want to be able to:Wait for a certain user to send a random messageWait for a certain user to respond to that message (if the someone else replies before the user I want, I want the code to stop runningHave the bot send a messageHere's my code, it may be incorrect, but I have never done something like this before. I would love it if someone replied with the full event function. Thanks!@client.eventasync def on_message(message): if (message.author.id == "User who sends the first message"): async def on_message(message): if (message.author.id == "User who replies"): await message.channel.send("Test worked") print(on_message) else: return else: return
there is an inbuilt function for that, wait_for. Note that the check receives the new message, and that you pass the function itself to wait_for, not a call to it:

def check(m):
    return m.channel.id == message.channel.id and m.author == message.author

reply = await bot.wait_for('message', check=check)

(Compare m.author.id against the ID of the specific user you want the reply from, instead of the original author, if that is what you need.)
How to fix Index value 1 is out of range. Can't get variable to pass properly through twitter get status function Appended variable not working with get status to retrieve tweet textsI have a list of tweets id's, probably around 50,000 in an excel file on my computer. I want to create a piece of code that will allow me to extract the text from the tweets so I can then analyse...I have created a variable 'tweetref' to store the tweet id's that I can pass to get status etc to get the tweet text. I am told many of these tweets might not exist anymore and I can't tell which one from the id which is why I have done 'pass' on the except, hoping to ignore all the fails and just get the ones that work. Using firehose api to gather is too expensive for me.It didn't spit out any text even though manually replacing tweetref' in 'tweet = api.get_status(tweetref)' - with the commented number below (38387433561128960), prints an actual tweetI tried to get the 2nd index from tweet ref which resulted in a 'list index out of range' - not sure why since there should by over twenty variables in the list. Not sure what i've done wrong?EDIT - Have changed "tweetref.append(datalist[30:50])"to "tweetref.extend(datalist[30:50])" This helpfully results in all ID's becoming individual elements in "tweetref" and allows me to call on indexes properly. 
However, despite this, the second "for" loop with get status still isn't printing any text from the tweets# Import twitter related packagesimport jsonimport tweepyfrom tweepy.streaming import StreamListenerfrom tweepy import OAuthHandlerfrom tweepy import Stream#import request style packagesimport requestsfrom urllib.request import urlopen, Request# Import excel related packagesimport xlrdimport openpyxl# Import visualisation packagesimport matplotlib.pyplot as pltimport seaborn as snsimport pandas as pd# Store OAuth authentication credentials in relevant variablesaccess_token = "private"access_token_secret = "private"consumer_key = "private"consumer_secret = "private"# Pass OAuth details to tweepy's OAuth handlerauth = tweepy.OAuthHandler("private", "private")auth.set_access_token("private", "private")api = tweepy.API(auth)# Read and write to exceldataFileUrl = R"C:/Users/ebaba/Desktop/algeria1.xlsx"# Create pandas data frame out of Tweet ID Column of filedata = pd.read_excel(dataFileUrl, usecols = ['Tweet'])# Convert data frame into a listdatalist = data.values.tolist()tweetref = []for t in range (0,20): tweetref.append(datalist[30:50]) print(tweetref[1])for i in range (0,1): try: tweet = api.get_status(tweetref)#38387433561128960 - Example Working Tweet - N.44 print(tweet.text) except: passExpected result would include the tweet 'RT @mattseaton: Another fascinating dispatch from inside the pro-democracy movement in Algiers, from Karima Bennounewhich is in the datalist[30:50] rangeActual ResultTraceback (most recent call last): File "C:\Users\ebaba\Desktop\example6.py", line 56, in <module> print(tweetref[1])IndexError: list index out of range[Finished in 16.902s]
The first time through this for loop:tweetref = []for t in range (0,20): tweetref.append(datalist[30:50]) print(tweetref[1])your code appends a list to tweetref that was previously empty. So that list of (maybe) 20 items becomes element 0 of tweetref. That is why you get index out of range when your code tries to access tweetref[1].If you want all (maybe) 20 elements from datalist to become individual elements of tweetref then you need to do one of tweetref.extend(datalist[30:50])or tweetref += datalist[30:50]Appending a list to the previously empty tweetref results in a list with one element which is itself a list of (maybe) 20 elements.
Odoo 10 - Cannot unblock a work center in dashboard, neither on individual work orders my work center is blocked :But when I try to unblock, I always get this error : (the same error happens if I try to unblock the work order).Please let me know if more details needed, I am new in stack overflow.Thanks in advanceOdoo Server ErrorTraceback (most recent call last):File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 642, in \_handle_exceptionreturn super(JsonRequest, self).\_handle_exception(exception)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 684, in dispatchresult = self.\_call_function(\*\*self.params)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 334, in \_call_functionreturn checked_call(self.db, \*args, \*\*kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\service\\model.py", line 101, in wrapperreturn f(dbname, \*args, \*\*kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 327, in checked_callresult = self.endpoint(\*a, \*\*kw)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 942, in __call__return self.method(\*args, \*\*kw)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\http.py", line 507, in response_wrapresponse = f(\*args, \*\*kw)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\addons\\web\\controllers\\main.py", line 899, in call_buttonaction = self.\_call_kw(model, method, args, {})File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\addons\\web\\controllers\\main.py", line 887, in \_call_kwreturn call_kw(request.env\[model\], method, args, kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\api.py", line 689, in call_kwreturn call_kw_multi(method, model, args, kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\api.py", line 680, in call_kw_multiresult = method(recs, \*args, \*\*kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\addons\\mrp\\models\\mrp_workcenter.py", line 160, in unblocktimes.write({'date_end': 
fields.Datetime.now()})File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\models.py", line 3592, in writeself.\_write(old_vals)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\models.py", line 3823, in \_writeself.recompute()File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\models.py", line 5378, in recomputerecs.browse(ids).\_write(dict(vals))File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\models.py", line 3693, in \_writecr.execute(query, params + (sub_ids,))File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\sql_db.py", line 154, in wrapperreturn f(self, \*args, \*\*kwargs)File "D:\\Dropbox\\myProjects\\Python\\Odoo10\\odoo\\sql_db.py", line 231, in executeres = self.\_obj.execute(query, params)NumericValueOutOfRange: integer out of rangeI was expecting the work center would became unblocked, and all the work orders related to it.I am not able to find the problem, the export of tables all seem OK
When unblocking a workcenter Odoo will set "now" as date_end to every productivity record (model mrp.workcenter.productivity) without an end date belonging to the workcenter. That's the part in your traceback with "unblock" at the end.That itself triggers a recomputation of the duration field on the productivity records. If i'm not wrong the duration is computed in minutes as difference of date_end and date_start. It seems there are records producing very high numbers.That could be because you either have very bad data on those records or you have implemented another computation method which is computing wrong.
Populate data from another panda df conditionally? So I have a df1 with string objects in its 'Name' column.Then there is a df2 with 'Categories' and 'Regex'.df2.Regex holds regular expressions.What I need to do is to:add a 'Category' column to df1;populate it with df2.Categories strings when their regex returns a match.I'm new to Pandas and I suppose I might be taking this matter from a completely wrong angle. Feel free to throw my approach in the bin and send me on a better course ;-)import pandas as pdimport numpy as npflowers = {'Name':['Blue rose', 'Red rose', 'White rose', 'Green tulip', 'Rosy tulip', 'Yellow tulip']}types = {'Categories':['Rose', 'Tulip'], 'Regex':[r'rose', r'tulip']df1 = pd.DataFrame(flowers)df2 = pd.DataFrame(types)df1['Category'] = ???I tried random things but none that produced any good results...For example:for x in df2.values: df['Portfolio'] = np.where(df.INSTRUMENT_NAME.str.contains(x[1]), x[0], 0)Does not work because the for loop rewrites all the data added by previous iterations. Also np.where does not allow to simply pass when its condition is not met (or at least I don't know how to get it to work like that)So to be clear, the expected result is:df1i Name Category0 Blue rose Rose1 Red rose Rose2 White rose Rose3 Green tulip Tulip4 Rosy tulip Tulip5 Yellow tulip Tulip
In your solution it is possible to use DataFrame.loc to set values only for rows matched by the condition:

for cat, reg in df2.values:
    mask = df1['Name'].str.contains(reg)
    df1.loc[mask, 'Category'] = cat

print (df1)
           Name Category
0     Blue rose     Rose
1      Red rose     Rose
2    White rose     Rose
3   Green tulip    Tulip
4    Rosy tulip    Tulip
5  Yellow tulip    Tulip

Or it is possible to use Series.str.extract with all the values of Regex and then Series.map:

s = df2.set_index('Regex')['Categories']
df1['Category'] = df1['Name'].str.extract(f'({"|".join(s.index)})', expand=False).map(s)

print (df1)
           Name Category
0     Blue rose     Rose
1      Red rose     Rose
2    White rose     Rose
3   Green tulip    Tulip
4    Rosy tulip    Tulip
5  Yellow tulip    Tulip
How to plot two different graphs on a single figure in pandas? I am trying to get two different plots as one plot. I will not write down my entire code (is so long), but based on the two small codes below, i get two different time series and I want to put these together in one figure. My code for the first plot:plt.figure(figsize=(15,4))i = plt.plot(july/july.mean(),label='G')my code for my second plot: spi3 = pd.read_csv('SPI3.csv',header=0,parse_dates=True)spi3.plot(y='spi',figsize=(16,4))
Quick dirty fix would be to plot dictionaries at first, only then plot with plt.plot. Also, if you want to plot in the same figure, define figsize only in the first figure you are plotting. (Therefore plt.figure is ommitted completely.)spi3.plot(y='spi',figsize=(16,4))plt.plot(july/july.mean(),label='G')
Grade not displayed successfully using python I am a beginner in python programming. I scripted a system to compute student marks.Everything works as intended, but I get fail displayed once. Also, if average is more than 50 I also get a fail message. I can't understand why. Here is my code from tkinter import * def Ok(): result = int(e1.get()) + int(e2.get()) + int(e3.get()) totText.set(result) average = result/3 avgText.set(average) if (average > 50) : grade = "pass" else : grade = "fail" gradeText.set(grade) root = Tk() root.title("Calculator") root.geometry("300x400") global e1 global e2 global e3 global totText global avgText global gradeText totText = StringVar() avgText = StringVar() gradeText = StringVar() Label(root, text="Marks1").place(x=10, y=10) Label(root, text="Marks2").place(x=10, y=40) Label(root, text="Marks3").place(x=10, y=80) Label(root, text="Total:").place(x=10, y=110) Label(root, text="Avg:").place(x=10, y=140) Label(root, text="Grade:").place(x=10, y=180) e1 = Entry(root) e1.place(x=100, y=10) e2 = Entry(root) e2.place(x=100, y=40) e3 = Entry(root) e3.place(x=100, y=80) result = Label(root, text="", textvariable=totText).place(x=100, y=110) avg = Label(root, text="", textvariable=avgText).place(x=100, y=140) grade = Label(root, text="", textvariable=gradeText).place(x=100, y=180) Button(root, text="Cal", command=Ok ,height = 1, width = 3).place(x=10, y=220) marks1 = Entry(root) marks2 = Entry(root) marks3 = Entry(root) root.mainloop()
Format your code so:

if (average > 50):
    grade = "pass"
else:
    grade = "fail"
gradeText.set(grade)

Instead of:

if (average > 50):
    grade = "pass"
else:
    grade = "fail"
    gradeText.set(grade)

As you can see, in your version gradeText is set inside the else branch, so it is only updated on a fail; move it outside the else so it runs in both cases.

Edit: formatting code in Python is especially important (as in every other language, but here indentation is part of the syntax), so be careful.
How to define custom function to generate summary stats in pydatatable? I'm trying to build a custom function to generate a summary stats for a given field as showed in the code snippet.def estadistica_dt_summario(dt,col,por): dt_summary= dt[{'mean_of_specific_col':mean(col),'median_of_specific_col':median(col)},by(por)] return dt_summaryWhere:dt - datatable frame objectcol - field to be calculated (mean,median etc etc)por - field to be aggregatedHere I'm calling on the function.estadistica_dt_summario(comida_dt,"co2_emission","food_category")It's not working as expected and could any one of yours please let me know how to get it achieved in pydatatable way?
you can try this out:

def estadistica_dt_summario(DT, col, por):
    dt_summary = DT[:, {'mean_of_specific_col': mean(f[col]),
                        'median_of_specific_col': median(f[col])},
                    by(f[por])]
    return dt_summary

Remember to make use of f expressions when you are passing in the fields to a function, and note the leading : row selector in DT[i, j, by].
In pandas dataframe keep repeated values that are only in a group, if value is repeated after other value then print some message Example Dataframe:A1A1A1 #these values are ok because these are repeated continuouslyA2A3A4A1 #this is duplicate value as this is not in continuationA5
Use:#test if duplciated, first dupe is Falsedf['dup'] = df['col'].duplicated()#consecutive groupsdf['g'] = df['col'].ne(df['col'].shift()).cumsum()#test if not all Trues per groupsdf['new'] = ~df.groupby('g')['dup'].transform('all')print (df) col dup g new0 A1 False 1 True1 A1 True 1 True2 A1 True 1 True3 A2 False 2 True4 A2 True 2 True5 A3 False 3 True6 A4 False 4 True7 A1 True 5 False8 A5 False 6 TrueIf need test only alone repeated values:print (df) col0 A11 A12 A13 A24 A25 A36 A47 A18 A19 A510 A1#same like first solutiondf['dup'] = df['col'].duplicated()df['g'] = df['col'].ne(df['col'].shift()).cumsum()df['rem1'] = ~df.groupby('g')['dup'].transform('all')#test if all dupes by groups gdf['rem2'] = df['g'].duplicated(keep=False)#chain by | for bitwise ORdf['new'] = df['rem1'] | df['rem2']print (df) col dup g rem1 rem2 new0 A1 False 1 True True True1 A1 True 1 True True True2 A1 True 1 True True True3 A2 False 2 True True True4 A2 True 2 True True True5 A3 False 3 True False True6 A4 False 4 True False True7 A1 True 5 False True True8 A1 True 5 False True True9 A5 False 6 True False True10 A1 True 7 False False False
Query function not working with spaces and parenthesis in column names I have a dataframe with spaces and parenthesis in column names.I am trying to use query method to get the results. It is working fine with target_names column but getting error for sepal length (cm).import pandas as pdfrom sklearn import datasetsiris = datasets.load_iris()x = pd.DataFrame(iris['data'], columns=iris['feature_names'])y = pd.DataFrame(iris['target'], columns=['target_names'])data1 = pd.concat([x,y], axis=1)data1.query('`sepal length (cm)` > 5')For this I am getting this error: File "<unknown>", line 1 petal_length_(_cm_)_BACKTICK_QUOTED_STRING >5 ^SyntaxError: invalid syntax
It is a known issue in the pandas library. It accepts spaces but not special characters like $, ( etc. One possible solution is to rename the columns and then use the query function call.
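A workaround, then, is to sanitize the column names before calling query. A sketch (the regex simply replaces every non-word character with an underscore, so "sepal length (cm)" becomes "sepal_length__cm_"; the data here is a small illustrative frame, not the full iris set):

```python
import re
import pandas as pd

df = pd.DataFrame({"sepal length (cm)": [4.9, 5.1, 5.8],
                   "target_names": [0, 1, 2]})

# Replace every character that is not a letter, digit or underscore
df = df.rename(columns=lambda c: re.sub(r"\W", "_", c))

result = df.query("sepal_length__cm_ > 5")
```

After the rename, any column can be used in query without backticks.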
How to make a discord bot find, and move to the current voice channel of a specific user I've been playing around with making fun discord bots for my friends, and we had the idea to create a bot that every 10 seconds checks the location of one of our friends and follows him to whatever voice chat he joins.I've been unable to sort through the discord.py documentation to figure out even how to join a voice channel.Please help, the documentation is absolutely horrid to search through compared to any other I've used.
To join a VoiceChannel, you can just use the VoiceChannel.connect function.Bot accounts cannot have friends on Discord. What do you call "friends"? Are there just user accounts you defined in your code? Your question isn't very clear.Here is my answer to your question, as I understand it:import discordimport asynciofriends = [] # Here you put the IDs of the people you consider as friendsclient = discord.Client()@client.eventasync def on_voice_state_update(member, before, after): channel = after.channel # Voice channel bot_connection = member.guild.voice_client # Bot connection if channel and member.id in friends: # If a friend connected to a voice channel if bot_connection: # Move to new channel if bot was connected to a previous one await bot_connection.move_to(channel) else: # If bot was not connected, connect it await channel.connect() if not channel and bot_connection: # Disconnect if member has left await bot_connection.disconnect()I here used the on_voice_state_update event, and I checked if the id of the member is in the friends list, and if the member is in a voice channel now.
How to extract specific attributes value from multiple tags in xml using python xml:<?xml version="1.0" encoding="UTF-8"?><Page xmlns="http://gigabyte.com/documoto/Statuslist/1.6" xmlns:xs="http://www.w3.org/2001/XMLSchema" hashKey="MDAwNTgxMzQtQS0xLjEuc3Zn" pageFile="status-1.1.svg" tenantKey="Staus"> <Stage description="SPREADER,GB/DD" locale="en" name="SPREADER,GB/DD"/> <File Price="0.0" Id="1" item="1" stage_status="true" ForPage="true" Number="05051401"> <Stage description="" locale="n" name="DANGER"/> </File> <File Price="0.0" Id="2" item="2" stage_status="true" ForPage="true" Number="05051402"> <Stage description="" locale="n" name="SPINNERS"/> </File> <File Price="0.0" Id="3" item="3" stage_status="true" ForPage="true" Number="05051404"> <Stage description="" locale="n" name="CAUTION"/> </File></Page>Expected Output in table format is:Id,item,stage_status,Number1,1,True,05051401, ,DANGER1,1,True,05051402, ,SPINNERS1,1,True,05051404, ,CAUTIONI tried this code:import csvimport xml.etree.ElementTree as ETtree = ET.parse("status-1.1.xml")root = tree.getroot()with open('Data.csv', 'w') as f: w = csv.DictWriter(f, fieldnames=('Id', 'item', 'stage_status', 'Number','description','name')) w.writerheader() w.writerows(e.attrib for e in root.findall('.//Page/File/Stage'))I'm trying to get values from both File and stage tags.
from bs4 import BeautifulSoup as Soupimport pandas as pdxml = '''<?xml version="1.0" encoding="UTF-8"?><Page xmlns="http://gigabyte.com/documoto/Statuslist/1.6" xmlns:xs="http://www.w3.org/2001/XMLSchema" hashKey="MDAwNTgxMzQtQS0xLjEuc3Zn" pageFile="status-1.1.svg" tenantKey="Staus"> <Stage description="SPREADER,GB/DD" locale="en" name="SPREADER,GB/DD"/> <File Price="0.0" Id="1" item="1" stage_status="true" ForPage="true" Number="05051401"> <Stage description="" locale="n" name="DANGER"/> </File> <File Price="0.0" Id="2" item="2" stage_status="true" ForPage="true" Number="05051402"> <Stage description="" locale="n" name="SPINNERS"/> </File> <File Price="0.0" Id="3" item="3" stage_status="true" ForPage="true" Number="05051404"> <Stage description="" locale="n" name="CAUTION"/> </File></Page>'''xml_data = Soup(xml, features="lxml")params = ['id','item','stage_status','number']all_data = []for i in xml_data.findAll("file"): tmp_dict = dict(zip(params,[i['id'],i['item'],i.find('stage')['name'],i['number']])) all_data.append(tmp_dict)df = pd.DataFrame(all_data)dfOutput: id item stage_status number0 1 1 DANGER 050514011 2 2 SPINNERS 050514022 3 3 CAUTION 05051404
Applying a function to each couple of elements of a column in a pandas data frame I have the need to create a new column of a pandas data frame applying a function to each couple of consecutive elements.The first element if the new column has to be a nan.Let's assume that the function is the sum of the elements divided by 3.Here's an example to clarify what I need:a b new_column1 2 nan3 4 25 5 3the operation is on column b:first is nan, 2 = (2 + 4) / 3 == f(2,4), 3 = (4 + 5) / 3 == f(4,5)Can anyone help me? Thank you really much
I believe this solution should now give you the desired result:# We are going to assign a new columndf = df.assign( # based on a function that we will apply new_column=df.apply( # If our row index is not 0: --> if row.name !=0 # we take the value of column["b"] --> row["b] # we add the value located at the current row index -1 --> df["b"].iat[row.name -1] # then we divide by 3 without rest --> //3 lambda row: (row["b"] + df["b"].iat[row.name - 1])//3 if row.name != 0 else "", axis=1 ))
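The same pairwise operation can also be written without apply, by aligning the column with a shifted copy of itself; shift(1) naturally produces the leading NaN the question asks for:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 3, 5], "b": [2, 4, 5]})

# f(previous, current) = (previous + current) / 3, NaN on the first row
df["new_column"] = (df["b"].shift(1) + df["b"]) / 3
```

This vectorised form avoids the per-row lambda entirely and is usually faster on large frames.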
How to add 2dp to Plotly Go Sunburst Objective of this Task: 1)Plotting a hierarhical sunburst (year -> product category -> product subcategory) 2)Label showing percentage with 1/2 d.p. 3)Continous colour scale based on total amount of salesI was using Plotly Express to create a sunburst initially but I realised that the percentage shown in the chart does not sum up to 100% as shown below(33 + 33 + 30 + 5 = 101%)Plotly express sunburst chartThen I tried using Plotly Go to plot the sunburst, I first define a function to create a dataframe, then plotting the sunburst with the newly created df. The function works fine but I do not know why does the figure not showing up. I am stucked with .Function code:levels = ['prod_subcat', 'prod_cat', 'year'] # levels used for the hierarchical chart#color_columns = 'total_amt'value_column = 'total_amt'def build_hierarchical_dataframe(valid_trans, levels, value_column, color_column = None): """ Build a hierarchy of levels for Sunburst or Treemap charts. Levels are given starting from the bottom to the top of the hierarchy, ie the last level corresponds to the root. 
""" df_all_trees = pd.DataFrame(columns=['id', 'parent', 'value']) for i, level in enumerate(levels): df_tree = pd.DataFrame(columns=['id', 'parent', 'value']) dfg = valid_trans.groupby(levels[i:]).sum() dfg = dfg.reset_index() df_tree['id'] = dfg[level].copy() if i < len(levels) - 1: df_tree['parent'] = dfg[levels[i+1]].copy() else: df_tree['parent'] = 'total' df_tree['value'] = dfg[value_column] df_all_trees = df_all_trees.append(df_tree, ignore_index=True) total = pd.Series(dict(id='total', parent='', value=valid_trans[value_column].sum())) df_all_trees = df_all_trees.append(total, ignore_index=True) return df_all_treesDataframe for plotting sunburst:DataFrameCode for plotting Plotly Go Sunburst:fig.add_trace(go.Sunburst( labels=df_all_trees['id'], parents=df_all_trees['parent'], values=df_all_trees['value'], branchvalues='total', marker=dict( colorscale='RdBu'), hovertemplate='<b>%{label} </b><br> Percent: %{value:.2f}', maxdepth=2 ))fig.show()Result of Plotly Go: Missing FigureCode of Subset Dataframe for this task:c_names = ['year','prod_cat','prod_subcat','total_amt']var = { 'year': [2011,2011,2011,2011,2011,2011,2012,2012,2012,2012,2012,2012,2012,2012,2012,2012,2013,2013,2013,2013,2013,2013,2014,2014], 'prod_cat': ['Bags','Books','Books','Clothing','Clothing','Home and kitchen','Books','Books','Clothing','Clothing','Electronics','Electronics','Footwear','Footwear','Home and kitchen','Home and kitchen','Books','Books','Clothing','Electronics','Home and kitchen','Home and kitchen','Bags','Bags'], 'prod_subcat': ['Mens','Academic','Fiction','Mens','Women','Furnishing','Non-Fiction','Non-Fiction','Kids','Women','Audio and video','Computers','Mens','Women','Furnishing','Kitchen','Academic','Non-Fiction','Women','Mobiles','Bath','Furnishing','Mens','Women'], 'total_amt': 
[3443.18,5922.8,1049.75,1602.25,6497.4,3287.375,6342.7,2243.15,4760.34,2124.915,5878.6,1264.12,433.16,287.3,1221.025,3867.5,2897.31,2400.06,285.09,5707.325,5585.775,2103.92,3391.245,281.775]}valid_trans = pd.DataFrame(data = var, columns = c_names)
To achieve 2dp percentages it's a simple case of updating the trace. You can use plotly express or graph objects. If using graph objects, using plotly express to structure the inputs to go makes the coding far simpler.

plotly express does the structuring:

pxfig = px.sunburst(valid_trans, path=['year','prod_cat'] #,'prod_subcat']
                    , values='total_amt')

2dp percent:

pxfig.update_layout(margin=dict(t=0, l=0, r=0, b=0)).update_traces(
    texttemplate="%{label}<br>%{percentEntry:.2%}")

graph_objects, reusing the structuring from plotly express:

fig = go.Figure(go.Sunburst(
    ids=pxfig.data[0]["ids"],
    labels=pxfig.data[0]["labels"],
    parents=pxfig.data[0]["parents"],
    values=pxfig.data[0]["values"],
    branchvalues="total",
    texttemplate="%{label}<br>%{percentEntry:.2%}"))
fig.update_layout(margin=dict(t=0, l=0, r=0, b=0))
How could I create many radio inputs with for loop? Here's my code:{% for answer in value %} <div class="answer"> <input type="radio" name="answer-checkbox" value="{{ answer.id }}"> {{ answer }} </div>{% endfor %}I want to create multiple sets of questions and answers, within each set only one answer could be selected at a time, how could I do this?
I assume that value is an object list, and for each object answer there is answer.id representing the answer you use, and answer.label you use for display.try the following:<div class="answer">{% for answer in value %} <input type="radio" id="{{ answer.id }}" name="answer-checkbox" value="{{ answer.id }}"> <label for={{ answer.id }}>{{ answer.label }}</label><br>{% endfor %}</div>Good Luck
Distinguish Person's names from Organization names in structured table column Are there any solutions to distinguish person names from organization names?I was thinking NER, however the data are stored in a structured table (and are not unstructured sentences). Specifically, the NAME column lists person and organization names (which I'm trying to distinguish). In the below example, I would like to produce the values listed within the PERSON column, based on the values listed within the NAME column.NAMEPERSONTom HanksTRUENissan MotorsFALSERyan ReynoldsTRUETeslaFALSEJeff's CafeFALSE
If you have reason to believe that all entries are sufficiently known (i.e., common brands and celebrities), you could utilize distant learning approaches with Wikipedia as a source.Essentially, you search for each entry on Wikipedia, and utilize results from unique search results (i.e., searching for "Tesla" would lead to at least two uniquely different pages, namely the auto company, and the inventor). Most importantly, each page has assigned category tags at the bottom, which make it trivial to classify a certain article by using string matches with a few synonyms (e.g., Apple has the category tag "Companies liste on NASDAQ", so you could just say all categories that have "company"/"companies" in the name are referring to articles that are companies.With the results from the unique search terms, you should be able to construct a large enough training corpus with relatively high certainty of having accurate ground truth data, which you can in turn use to train a "conventional" ML model on the task.Caveat:I should note that it gets significantly harder if you are doing this with names that are relatively unknown and likely won't have a Wikipedia article. In that case, I could only think of utilizing dictionaries of frequently used first/last names in your respective country. It is certainly more prone to false positive/negatives (e.g., "Toyota" is also a very common last name in Japan, although Westerners likely refer to the car maker). Similarly, names that are less common (or have a different spelling) will also not be picked up and left out.
How to calculate the correlation between two cols of dataframe in pandas

I am working on a method for calculating the correlation between two columns of data from a dataset. The dataset is constructed of 4 columns A1, A2, A3, and Class. My goal is to remove A3 if the correlation between A1 & A3 is greater than 0.6 or if the correlation between A1 & A3 is less than 0.6. A sample of the dataset is given below:

A1,A2,A3,Class
2,0.4631338,1.5,3
8,0.7460648,3.0,3
6,0.264391038,2.5,2
5,0.4406713,2.3,1
2,0.410438159,1.5,3
2,0.302901816,1.5,2
6,0.275869396,2.5,3
8,0.084782428,3.0,3

The python program that I am using for this project is written like so:

from numpy.core.defchararray import count
import pandas as pd
import numpy as np
import numpy as np

def main():
    s = pd.read_csv('A1-dm.csv')
    print(calculate_correlation(s))

def calculate_correlation(s):
    # if correlation > 0.6 or correlation < 0.6 remove A3
    s = s['A1','A3']
    print(s)
    # return s.corr()

main()

When I run my code I get the following error:

File "C:\Users\physe\AppData\Roaming\Python\Python36\site-packages\pandas\core\indexes\base.py", line 2897, in get_loc
    raise KeyError(key) from err
KeyError: ('A1', 'A3')

I've reviewed the documentation here. The issue that I'm facing is constructing a dataframe out of 'A1' and 'A3'. How can this be done in pandas? Thanks in advance.
It was pretty straightforward once I found this:

def calculate_correlation(s):
    # if correlation > 0.6 or correlation < 0.6 remove A3
    s = s[['A1','A3']]
    return s.corr()
Find if the docker daemon is running

I need to check from Python code whether the docker daemon is running, OS-independently. Is it possible to achieve? Otherwise, it will also be okay to read the OS and execute a check for each platform individually.
If it was some Linux system I would try to launch systemctl status docker to check if the service is running. To make this platform-independent you can call some docker command which needs the docker daemon running, like docker ps. It should return a table of running processes when the daemon is running; otherwise it will show the message: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? To launch these commands use Popen from the subprocess library. About running commands and retrieving output you can read here.
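A minimal sketch of that idea, using the subprocess module (here the modern subprocess.run rather than Popen; the helper name docker_daemon_running is my own, and it assumes the docker CLI is on PATH):

```python
import subprocess

def docker_daemon_running(cli="docker"):
    """Return True if the docker daemon answers, False otherwise.

    `docker info` needs a live daemon; a non-zero exit code (or a
    missing docker binary) means the daemon is unreachable.
    """
    try:
        result = subprocess.run(
            [cli, "info"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=10,
        )
        return result.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False
```

This works the same way on Linux, macOS, and Windows, since the docker CLI itself is cross-platform.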
Replace end string text with new string text

How to replace just the end of the text with new text?

From:
/var/www/html/file.php
/home/www/html/data.php

To:
/var/www/html/module.php
/home/www/html/module.php
Another possibility with str.rsplit():

l = ['/var/www/html/file.php','/home/www/html/data.php']

for item in l:
    print( item.rsplit('/', 1)[0] + '/module.php' )

Prints:

/var/www/html/module.php
/home/www/html/module.php

Or using re:

import re

for item in l:
    print( re.sub(r'(.*/)(.*?)$', r'\1module.php', item) )

EDIT: To write to file:

l = ['/var/www/html/file.php','/home/www/html/data.php']

import re

with open('data.txt', 'w') as f_out:
    for item in l:
        f_out.write( re.sub(r'(.*/)(.*?)$', r'\1module.php', item) + '\n')
Modularizing python code

I have around 2000 lines of code in a python script. I decided to clean up the code and moved all the helpers into a helpers.py file and all the configs and imports into a config.py file. Here is my main file:

from config import *
from helpers import *
from modules import *

And in my config file I've written:

import threading as th

And then in my modules file I am extending a thread class:

class A(th.Thread):
...

I get an error that th is not defined. And when I import config in my modules file, it works fine. I don't have a clear picture on how imports work here. Also, is there any best practice to do it?
Read import threading as th as th = __import__("threading"): it's an assignment first and foremost. Thus, you have to do the import in every file where you're using the variable.

PS: import * is best avoided.
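To see that an import is just a per-module name binding, here is a small self-contained sketch that fakes the config/modules layout with types.ModuleType (the module names are stand-ins for the asker's files):

```python
import types

# stand-in for config.py: its namespace gets its own 'th' binding
config = types.ModuleType("config")
exec("import threading as th", vars(config))

# stand-in for modules.py: the 'th' bound in config is NOT visible here,
# so modules must import threading itself (or do: from config import *)
modules = types.ModuleType("modules")
exec(
    "import threading as th\n"
    "class A(th.Thread):\n"
    "    pass\n",
    vars(modules),
)

worker = modules.A()  # works, because modules did its own import
```

Each exec populates one module's namespace, mimicking what running the real files would do; the binding made in config never leaks into modules.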
How can I check a file has been copied fully to a folder before moving it using python I'm currently working on a project that adds images to a folder. As they're added they also need to be moved (in groups of four) to a secondary folder overwriting the images that are already in there (if any). I have it sort of working using watchdog.py to monitor the first folder. When the 'on_created' event fires I take the file path of the newly added image and copy it to the second folder using shutil.copy(), incrementing a counter and using the counter value to rename the image as it copies (so it becomes folder/1.jpg). When the counter reaches 4 it resets to 0 and the most recent 4 images are displayed on a web page. All these folders are in the local filesystem on the same drive.My problem is that sometimes it seems the event fires before the image is fully saved in the first folder (the images are around 1Mb but vary slightly so I can't check file size) which results in a partial or corrupted image being copied to the second folder. At worst it throws an IOError saying the file isn't even there.Any suggestions. I'm using OSX 10.11, Python 2.7. The images are all Jpegs.
I see multiple solutions:

When you first create your images in the first folder, add a suffix to their name, for instance filexxx.jpg.part, and when they are fully written just rename them, removing the .part. Then in your watchdog, be sure not to work on files ending with .part.

Alternatively, in your watchdog, test the image file: for example, try to load the file with an image library, and catch the exceptions.
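A minimal sketch of the first suggestion using only the standard library (the helper names are mine; in a real setup the rename happens in the producer and the filter in the watchdog handler, e.g. via its ignore_patterns=['*.part'] option):

```python
import os

def safe_write(path, data):
    """Write to a temporary '.part' name, then rename into place.

    os.replace is atomic on the same filesystem, so a watcher never
    sees a half-written file under the final name.
    """
    tmp = path + ".part"
    with open(tmp, "wb") as f:
        f.write(data)
    os.replace(tmp, path)

def is_complete(path):
    """Watcher-side filter: skip files that are still being written."""
    return not path.endswith(".part")
```

The watchdog's on_created handler would then simply return early whenever is_complete(event.src_path) is False.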
Analytics: split-summarize records

Consider the following hypothetical accounting records for staff activities in a publishing company:

Name     Activity      Begin-date   End-date
---------------------------------------------------------
Hasan    Proofreading  2015-01-27   2015-02-09
Susan    Writing       2015-02-01   2015-02-15
Peter    Editing       2015-01-01   2015-02-21
Paul     Editing       2015-01-24   2015-01-30
Stefan   Proofreading  2015-01-08   2015-01-08
...

These represent activities that each person is doing, including the beginning and ending dates (inclusive dates). Let's say that this company's exec wants to know how many man-days were spent in different activities for each month. The desired report may look like this:

Month    Activity      Man-hours
----------------------------------------
2015-01  Proofreading  720
2015-01  Editing       1283
2015-01  Writing       473
2015-02  Proofreading  1101
2015-02  Editing       893
2015-02  Writing       573
...

Assuming the Python Pandas analytics framework, can we do this relying (mostly) on pandas' API, rather than doing low-level, "bit-by-bit" programming? The issue with this query is that the "begin" and "end" times of each record can straddle several months (not just one month), so those records will need to be "split", or "exploded", into multiple records (each covering a period of one month); then we can use the usual "groupby & sum" aggregation to do the final reduction.

Having never been formally trained in SQL or databases, I don't know if there is such a concept in data analytics, so I don't know the proper name. In Spark, I think this can be done, because RDD flatMap can return multiple elements out of a single element.

Thanks,
Wirawan
First, create a dense long dataframe with one row for each day between each begin date and end date. To do so, Pandas has pd.date_range, which generates a DatetimeIndex from two dates. Assuming people don't work on weekends, let's use a business-day frequency, but you can use any frequency useful for your case. From these ranges we do a bit of reformatting with stack and some index resets. It leads to:

df = (df.set_index(['name', 'activity'])
        .apply(lambda r: pd.Series(pd.date_range(r['begindate'], r['enddate'], freq='B')), axis=1)
        .stack()
        .rename('date')
        .reset_index(level=-1, drop=True)
        .reset_index())

Out[73]:
     name      activity       date
0   Hasan  Proofreading 2015-01-27
1   Hasan  Proofreading 2015-01-28
2   Hasan  Proofreading 2015-01-29
3   Hasan  Proofreading 2015-01-30
4   Hasan  Proofreading 2015-02-02
..    ...           ...        ...
10  Susan       Writing 2015-02-02
11  Susan       Writing 2015-02-03
..    ...           ...        ...

Now you can do your monthly aggregation. Convert the dates to monthly periods and group by them:

df.groupby(['activity', df.date.dt.to_period('M')]).size()

Out[97]:
activity      date
Editing       2015-01    27
              2015-02    15
Proofreading  2015-01     5
              2015-02     6
Writing       2015-02    10
Django internal API Client/Server Authentication or not?

I have a django project, in which I expose a few API endpoints (API endpoint = answers to GET/POST, returns a JSON response; correct me if I'm wrong in my definition). Those endpoints are used by me on the front end, like updating counts or getting updated content, or a myriad other things. I handle the representation logic on the server side, in templates, and in some cases send a rendered-to-string template to the client. So here are the questions I'm trying to answer:

Do I need to have some kind of authentication between the clients and the server?
Is django cross-origin protection enough?
Where, in this picture, do packages like django-oauth-toolkit and django-rest-framework fit?
If I don't add any authentication between clients and server, am I leaving my server open to attacks?

Furthermore, what goes for server-to-server connections? Both servers are under my control.
I would strongly recommend using django-tastypie for server-to-client communication. I have used it in numerous applications, both server-to-server and server-to-client. This allows you to apply django security as well as some more logic regarding the authorization process. It also offers out of the box:

throttling
serialization in json, xml, and other formats
authentication (basic, apikey, customized and others)
validation
authorization
pagination
caching

So, as an overall overview, I would suggest building on such a framework, which would make your internal API more interoperable for future extensions and more secure. To specifically answer your question now: I would never enable any server API without at least some basic authentication/authorization. Hopefully I have answered your questions about how you can address all of your worries above with a framework. The django-rest-framework that you ask about is also really advanced and easy to use, but I prefer tastypie for the reasons I explained. I hope I helped a bit!
TypeError: unhashable type: 'list' when creating a new column

import pandas as pd

data = {'A': [1,2], 'B': [[1,1,1,2,2,4,4,4,4], [5, 4, 8, 1, 1, 1, 3, 2, 4, 2, 2, 2, 1, 1, 1]]}
df = pd.DataFrame(data)

A  B
1  [1, 1, 1, 2, 2, 4, 4, 4, 4]
2  [5, 4, 8, 1, 1, 1, 3, 2, 4, 2, 2, 2, 1, 1, 1]

def top_frequent(a):
    import numpy
    k = {}
    for j in a:
        if j in k:
            k[j] += 1
        else:
            k[j] = 1
    occ = []
    for key, val in k.items():
        occ.append(val)
    Z = numpy.percentile(occ, 75, interpolation='higher')
    print(Z)
    bucket = [[] for l in range(len(a)+1)]
    for key, val in k.items():
        if val >= Z:
            if val != 1:
                bucket[val].append(key)
    res = []
    for i in reversed(range(len(bucket))):
        if bucket[i]:
            res.extend(bucket[i])
    return res

df['C'] = df.apply(top_frequent(df['B']))

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_13728/2052560572.py in <module>
     28     return res
     29
---> 30 df['C'] = df.apply(top_frequent(df['B']))

~\AppData\Local\Temp/ipykernel_13728/2052560572.py in top_frequent(ids)
      4     k = {}
      5     for j in ids:
----> 6         if j in k:
      7             k[j] +=1
      8         else:

TypeError: unhashable type: 'list'

When I apply the function on just one row it works fine. But when I apply it to all lines I get this error:

TypeError: unhashable type: 'list'
The problem is that when you pass df['B'] into top_frequent(), df['B'] is a column of lists; you can view it as a list of lists. So in your for j in a:, you are getting items from the outer list. For a list of lists, what you get is a list. Then in k[j], you are using a list as a key, which is not supported by Python dicts. So it gives you the error TypeError: unhashable type: 'list'.

You can try

df['C'] = df['B'].apply(top_frequent)
# or
df['C'] = df.apply(lambda row: top_frequent(row['B']), axis=1)

Besides, you can use a more pandas way to do this

df['C'] = df['B'].apply(lambda x: (lambda y: (y[y==y.max()].index.tolist()))(pd.Series(x).value_counts()))
"AttributeError: partially initialized module 'pytube' has no attribute 'YouTube' (most likely due to a circular import)"

Here is the code:

import pytube as p

video_url = input("Enter the link: ")
youtube = p.YouTube(video_url)
filters = youtube.streams.filter(progressive=True, file_extension="mp4")
filters.get_highest_resolution().download("MyPath")

I tried to write a code to download a youtube video. But it's throwing me an error saying:

AttributeError: partially initialized module 'pytube' has no attribute 'YouTube' (most likely due to a circular import)

I even copy-pasted code from the internet, re-installed Python, and re-installed pytube, but none of it worked. What's even more frustrating is that it was working fine when I executed it a few months before.
AttributeError: partially initialized module 'pytube' has no attribute 'YouTube' (most likely due to a circular import)

Did you notice how your file is named? pytube.py. This may have caused the circular import, since Python is trying to import the pytube.py file itself. I would suggest you read this and this; the first one is exactly your case. So the short answer is: Change your file name! And when I say change, I mean that you have to rename your file, not create a new one.
Python-2.x: list directory without os.listdir()

With os.listdir(some_dir), we can get all the files from some_dir, but sometimes there would be 20M files (no sub-dirs) under some_dir, and it would take a long time for os.listdir() to return 20M strings. (We don't think it's a wise option to put 20M files under a single directory, but they're really there and it's out of my control...) Is there any other generator-like method to do the list operation, like this: once it finds a file, yield it; we fetch it and then the next file. I have tried os.walk(); it's really a generator-style tool, but it also calls os.listdir() to do the list operation, and it does not handle unicode file names well (UTF-8 names along with GBK names).
If you have python 3.5+ you can use os.scandir(); see the documentation for scandir.
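For illustration, a small generator built on os.scandir (Python 3.5+; on Python 2 the third-party scandir package offers the same API). The helper name iter_files is my own:

```python
import os

def iter_files(path):
    """Lazily yield the names of regular files in `path`.

    os.scandir streams directory entries instead of building the whole
    20M-entry list up front, so the first result arrives immediately.
    """
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_file():
                yield entry.name
```

Because entry.is_file() uses the metadata the OS already returned with the directory entry, this usually avoids an extra stat call per file as well.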
error with python thread

I try to use thread in my script but I get this error:

Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr

My script:

# -*- coding: utf-8 -*-
import tweepy
import thread

consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""

def deleteThread(api, objectId):
    try:
        api.destroy_status(objectId)
        print "Deleted:", objectId
    except:
        print "Failed to delete:", objectId

def oauth_login(consumer_key, consumer_secret):
    """Authenticate with twitter using OAuth"""
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth_url = auth.get_authorization_url()
    verify_code = raw_input("Authenticate at %s and then enter you verification code here > " % auth_url)
    auth.get_access_token(verify_code)
    return tweepy.API(auth)

def batch_delete(api):
    print "You are about to Delete all tweets from the account @%s." % api.verify_credentials().screen_name
    print "Does this sound ok? There is no undo! Type yes to carry out this action."
    do_delete = raw_input("> ")
    if do_delete.lower() == 'yes':
        for status in tweepy.Cursor(api.user_timeline).items():
            try:
                thread.start_new_thread( deleteThread, (api, status.id, ) )
            except:
                print "Failed to delete:", status.id

if __name__ == "__main__":
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    print "Authenticated as: %s" % api.me().screen_name
    batch_delete(api)

How to solve this problem?
Update

I just solved my problem by adding time.sleep(1) before

thread.start_new_thread( deleteThread, (api, status.id, ) )
python: how to get the source from a code object?

Let's suppose we have a python string (not a file, a string, no files):

TheString = "k=abs(x)+y"

OK? Now we compile the string into a piece of python bytecode:

Binary = compile( TheString , "<string>" , "exec" )

Now the problem: how can I get from Binary, supposing I don't know TheString, a string that represents the original string object? Shortly: what is the function that is opposite to compile()?
Without the source code, you can only approximate the code. You can disassemble the compiled bytecode with the dis module, then reconstruct the source code as an approximation:

>>> import dis
>>> TheString = "k=abs(x)+y"
>>> Binary = compile( TheString , "<string>" , "exec" )
>>> dis.dis(Binary)
  1           0 LOAD_NAME                0 (abs)
              3 LOAD_NAME                1 (x)
              6 CALL_FUNCTION            1
              9 LOAD_NAME                2 (y)
             12 BINARY_ADD
             13 STORE_NAME               3 (k)
             16 LOAD_CONST               0 (None)
             19 RETURN_VALUE

From the disassembly we can see there was 1 line, where a function named abs() is being called with one argument named x. The result is added to another name y, and the result is stored in k.

Projects like uncompyle6 (building on top of the work of many others) do just that; they decompile the python bytecode and reconstruct Python code from it.
Successive matrix multiplication in R using TensorFlow

A simple question. Suppose I have an m x m matrix (mat) which I'd like to raise to the power of n, meaning mat %*% mat %*% mat %*% ... (and suppose I'd like to keep all the intermediate products). To do this using tensorflow (since I want to make it efficient using GPUs), would the code be:

library(tensorflow)
set.seed(1)
m <- 9  # in my real data m is pretty big though
mat <- matrix(rnorm(m*m), m, m)
tf.mat <- tf$constant(mat)
tf.prod.mat <- tf$constant(mat)
n <- 4
prod.list <- vector(mode="list", n)
for(i in 1:n){
  mat.mat.prod <- tf$matmul(tf.prod.mat, tf.mat)
  sess <- tf$Session()
  prod.list[[i]] <- sess$run(mat.mat.prod)
  tf.prod.mat <- tf$constant(prod.list[[i]])
}
sess$close()

Or am I not using tensorflow appropriately?
Have you tried the expm library?

> library(expm)
> mat %^% 2  # raise to power of 2
parsing JSON which contains "objects"

I'm getting data from an application that returns what seems to be JSON, but with some "objects". For instance:

{"rgEvtData":[new VisData(0,0,1,0,1,0,0,0,0,-1),new VisData(0,1,1,1,1,0,0,0,0,-1),new VisData(0,2,1,2,1,0,0,0,0,-1),new VisData(0,3,2,0,1,0,0,0,0,-1),new VisData(0,4,2,1,1,0,0,0,0,-1),new VisData(0,5,2,2,1,0,0,0,0,-1),new VisData(0,6,2,3,1,0,0,0,0,-1),new VisData(0,7,3,0,1,0,0,0,0,-1),new VisData(0,8,3,1,1,0,0,0,0,-1)]}

Any idea if I can parse it in Python without dirty workarounds (i.e., replace() or regexp)?
No, you can't.

Even if python could parse it, what would it do with the VisDatas? I think your only option (except the stick approach mentioned) is to translate this string into valid JSON somehow. For example, replacing new VisData(...) with [...], or {"class": "VisData", "args": [...]} if you have multiple classnames. But you said you don't want that.

Update

I have an example, I think it is what you need. It handles custom classes in the format you provided. It would also handle multiple classes and any number/type of constructor arguments.

import re
import json

# our python VisData class
class VisData(object):
    def __init__(self, *args):
        self.args = args

# object hook to convert our {"class":"VisData","args":[...]} dict to VisData instances
def object_hook(obj):
    # if we recognize our object describer dict
    if len(obj) == 2 and "class" in obj and "args" in obj:
        # instantiate our classes by name
        clazz = globals()[obj["class"]]
        args = obj["args"]
        return clazz(*args)
    return obj

# input
input_string = '{"rgEvtData":[new VisData(0,0,1,0,1,0,0,0,0,-1),new VisData(0,1,1,1,1,0,0,0,0,-1)]}'

# make it json
json_string = re.sub(r'new (\w+)\(([^\)]*)\)', r'{"class":"\1","args":[\2]}', input_string)

# parse it with our object hook
data = json.loads(json_string, object_hook=object_hook)

# result
print(data)                       # -> {u'rgEvtData': [<__main__.VisData object at 0x1065d8210>, <__main__.VisData object at 0x1065d8250>]}
print(data["rgEvtData"][0])       # -> <__main__.VisData object at 0x1065d8210>
print(data["rgEvtData"][0].args)  # -> (0, 0, 1, 0, 1, 0, 0, 0, 0, -1)
How to get arguments string in a function?

In Python, is it possible to define a function get_arg_str that implements this:

def get_arg_str(arg):
    # do something here

mydict = {'key': 3}
str_arg = get_arg_str(mydict['key'])
# then str_arg should be the string "mydict['key']"
Yes, it's possible using introspection. Something along the lines of:

def get_arg_str(object, namespace):
    return [name for name in namespace if namespace[name] is object]

mydict = 'foo'
print(get_arg_str(mydict, globals()))
# ['mydict']

You can build out from there using string interpolation.
Python blocked thread termination method?

I have a question in Python programming. I am writing code that has a thread. This thread is a blocked thread. Blocked thread means: a thread is waiting for an event. If the event is not set, this thread must wait until the event is set. My expectation is that the blocked thread must wait for the event without any timeout! After starting the blocked thread, I write a forever loop to calculate a counter. The problem is: when I want to terminate my Python program by Ctrl+C, I cannot terminate the blocked thread correctly. This thread is still alive! My code is here:

import threading
import time

def wait_for_event(e):
    while True:
        """Wait for the event to be set before doing anything"""
        e.wait()
        e.clear()
        print "In wait_for_event"

e = threading.Event()
t1 = threading.Thread(name='block', target=wait_for_event, args=(e,))
t1.start()

# Check t1 thread is alive or not
print "Before while True. t1 is alive: %s" % t1.is_alive()

counter = 0
while True:
    try:
        time.sleep(1)
        counter = counter + 1
        print "counter: %d " % counter
    except KeyboardInterrupt:
        print "In KeyboardInterrupt branch"
        break

print "Out of while True"

# Check t1 thread is alive
print "After while True. t1 is alive: %s" % t1.is_alive()

Output:

$ python thread_test1.py
Before while True. t1 is alive: True
counter: 1
counter: 2
counter: 3
^C
In KeyboardInterrupt branch
Out of while True
After while True. t1 is alive: True

Could anyone give me some help? I want to ask 2 questions:

1. Can I stop a blocked thread by Ctrl+C? If I can, please give me a feasible direction.
2. If we stop the Python program by Ctrl+\ or reset the hardware (for example, the PC) that is running the Python program, can the blocked thread be terminated or not?
Ctrl+C stops only the main thread. Your threads aren't in daemon mode; that's why they keep running, and that's what keeps the process alive. First make your threads daemon:

t1 = threading.Thread(name='block', target=wait_for_event, args=(e,))
t1.daemon = True
t1.start()

Similarly for your other threads. But there is another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:

import time

while True:
    time.sleep(1)

Please have a look at this; I hope you will get your other answers.
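A compact, runnable sketch of the daemon-plus-event pattern (Python 3 syntax, unlike the Python 2 code in the question; here the event is set programmatically so the script finishes on its own):

```python
import threading

seen = []
evt = threading.Event()

def wait_for_event(ev):
    ev.wait()              # blocks here until the event is set
    seen.append("woke up")

t1 = threading.Thread(name="block", target=wait_for_event, args=(evt,))
t1.daemon = True           # a daemon thread cannot keep the process alive
t1.start()

evt.set()                  # release the waiting thread
t1.join(timeout=5)         # worker finishes promptly once released
```

With daemon=True, hitting Ctrl+C in the main thread ends the whole process even while the worker is still blocked inside ev.wait().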
Constrain logic in Linear programming

I'm trying to build a linear optimization model for a production unit. I have a binary decision variable X(i)(j), where i is the hour of day j. The constraint I need to introduce is a limitation on downtime (the minimum time period the production unit needs to be turned off between two starts). For example:

Hours:  1 2 3 4 5 6 7 8 9 10 11 12
On/off: 0 1 0 1 1 0 1 1 1 0  0  1

I cannot run hour 4 or 7 because the time period between 2 and 4 / 5 and 7 is one. I can run hour 12 since I have a two-hour gap after hour 9. How do I enforce this constraint in linear programming/optimization?
I think you are asking for a way to model: "at least two consecutive periods of down time". A simple formulation is to forbid the pattern:

t  t+1  t+2
1   0    1

This can be written as a linear inequality:

x(t) - x(t+1) + x(t+2) <= 1

One way to convince yourself this is correct is to just enumerate the patterns:

x(t)  x(t+1)  x(t+2)  LHS
 0      0       0      0
 0      0       1      1
 0      1       0     -1
 0      1       1      0
 1      0       0      1
 1      0       1      2   <--- to be excluded
 1      1       0      0
 1      1       1      1

With x(t) - x(t+1) + x(t+2) <= 1 we exactly exclude the pattern 101 but allow all others.

Similarly, "at least two consecutive periods of up time" can be handled by excluding the pattern

t  t+1  t+2
0   1    0

or

-x(t) + x(t+1) - x(t+2) <= 0

Note: one way to derive the second from the first constraint is to observe that forbidding the pattern 010 is the same as saying y(t)=1-x(t) and excluding 101 in terms of y(t). In other words:

(1-x(t)) - (1-x(t+1)) + (1-x(t+2)) <= 1

This is identical to

-x(t) + x(t+1) - x(t+2) <= 0

In the comments it is argued this method does not work. That is based on a substantial misunderstanding of this method. The pattern 100 (i.e. x(1)=1, x(2)=0, x(3)=0) is not allowed because

-x(0) + x(1) - x(2) <= 0

where x(0) is the status before we start our planning period. This is historic data. If x(0)=0 we have x(1)-x(2)<=0, disallowing 10. I.e. this method is correct (if not, a lot of my models would fail).
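The enumeration argument above can be checked by brute force. A quick Python sketch (a verification aid, not part of the LP model itself):

```python
from itertools import product

# every 3-period 0/1 pattern that violates x(t) - x(t+1) + x(t+2) <= 1
down_violations = [p for p in product((0, 1), repeat=3)
                   if p[0] - p[1] + p[2] > 1]

# every pattern that violates -x(t) + x(t+1) - x(t+2) <= 0
up_violations = [p for p in product((0, 1), repeat=3)
                 if -p[0] + p[1] - p[2] > 0]

print(down_violations)  # only the pattern 1,0,1 is cut off
print(up_violations)    # only the pattern 0,1,0 is cut off
```

Each cut removes exactly one of the eight 0/1 patterns, matching the enumeration table: the first forbids a single-period down time, the second a single-period up time.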
Setting Jupyter Notebook Server Using Mac

I have a mac and want to set it up as a Jupyter Notebook server so that I can connect to it with a browser through the internet when I'm not at home. But I cannot find instructions online. Is my idea feasible and easy to realize? Thanks!
Set host to 0.0.0.0 to expose the server to your internal network. Then follow these instructions to open it up externally.https://medium.com/botfuel/how-to-expose-a-local-development-server-to-the-internet-c31532d741cc
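For the classic notebook server, that host setting can live in ~/.jupyter/jupyter_notebook_config.py (a sketch; generate the file first with jupyter notebook --generate-config, and note the option names assume the classic NotebookApp):

```python
# jupyter_notebook_config.py -- sketch of the relevant settings
c.NotebookApp.ip = '0.0.0.0'        # listen on all interfaces, not just localhost
c.NotebookApp.port = 8888           # the port you will forward on your router
c.NotebookApp.open_browser = False  # headless server: don't pop a local browser
```

Keep password/token authentication enabled (or better, put the server behind TLS or an SSH tunnel) before exposing it beyond your LAN.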
Vectorized-oriented definition of functions (Python)

I want to integrate a function using quadpy. I noticed that quadpy passes numpy arrays as arguments to the function. For example, if I define f = lambda x: x**2, then quadpy will integrate passing a vector like x = [0, 0.3, 0.7, 1]. The function I want to integrate is long (300 lines of code) and when I coded it I was thinking of passing two real numbers as arguments, not vectors. Now that I want to integrate or plot, it seems very important that my function can handle vectors. Which methods or tricks do you know to vectorize a function? numpy.vectorize does not seem to work in every case. In my case, one of the problems is:

def f(t):
    U = np.array([[1, 0,          0        ],
                  [0, np.cos(t),  np.sin(t)],
                  [0, -np.sin(t), np.cos(t)]])
    V = np.array([[np.cos(t),  0, np.sin(t)],
                  [0,          1, 0        ],
                  [-np.sin(t), 0, np.cos(t)]])
    return U @ V

When I run the code, quadpy replaces, say, np.cos(t) by an array, so the system throws an error telling me that I should specify 'dtype=object', and then the program collapses when it tries to compute U @ V over vectors. How can I deal with this kind of problem? Another problem is that I multiply, say, constant(t)*vector(l), so the constant becomes a vector, vector(l) becomes an array of arrays, and the system starts to complain again. Should I always define my functions bearing vectorization in mind?
Check https://github.com/sigma-py/quadpy/wiki/Dimensionality-of-input-and-output-arrays on what the dimensions of the input array mean, and how to construct an output array.
Color code a column based on values in another column in Excel using pandas

I have a pandas data frame that I am then writing into an Excel sheet:

Measure  value  lower limit  upper limit
A        1      0.1          1.2
B        2      0.5          1.5
C        10     1            100

I would like to color code the column "value" based on the condition that "value" is contained between "lower limit" and "upper limit". What I have done so far is to create an extra column called "within limits" to check that the condition is true or false, but for this case I can only find solutions in pandas that are color coding the column "within limits" itself, and not the "value" column. Is there a way to color code based on another column's value?
You can use a custom function:

def color(df):
    out = pd.DataFrame(None, index=df.index, columns=df.columns)
    out['value'] = (df['value']
                    .between(df['lower limit'], df['upper limit'])
                    .map({True: 'background-color: yellow'})
                    )
    return out

df.style.apply(color, axis=None)

With parameters:

def color(df, value, low, high, color='red'):
    out = pd.DataFrame(None, index=df.index, columns=df.columns)
    out[value] = (df[value]
                  .between(df[low], df[high])
                  .map({True: f'background-color: {color}'})
                  )
    return out

df.style.apply(color, value='value', low='lower limit', high='upper limit', color='yellow', axis=None)

output:
iterate over dictionaries in module

I import a module that only contains several dictionaries. How can I iterate over those? Something along the lines of

import moduleX as data

for d in data:
    # do stuff with d

this obviously does not work as the module is not iterable. Is there a way to extract all dicts from the module as a collection and iterate through that collection?
My answer:

import moduleX as data

for k, v in data.__dict__.iteritems():
    if isinstance(v, dict) and not k.startswith('_'):
        # do something
        pass
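On Python 3, dict.iteritems is gone; the same filter works with vars() and .items(). A self-contained sketch (moduleX is faked here with types.ModuleType so the snippet runs on its own):

```python
import types

# stand-in for the imported moduleX from the question
data = types.ModuleType("moduleX")
data.first = {"a": 1}
data.second = {"b": 2}
data.not_a_dict = 42

# keep only public dict attributes, as in the answer above
found = {k: v for k, v in vars(data).items()
         if isinstance(v, dict) and not k.startswith("_")}
```

vars(module) is just module.__dict__, so the startswith('_') check still skips the dunder entries (__name__, __doc__, ...) that every module carries.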
django two foreign keys unique record

I have three django models:

class Item(models.Model):
    itemid = models.IntegerField(default=0, unique=True)

class Region(models.Model):
    regionid = models.IntegerField(default=0, unique=True)

class Price(models.Model):
    regionid = models.ForeignKey(Region)
    itemid = models.ForeignKey(Item)

Now my issue is this: I need Price to be unique for the Item and Region combination (e.g. itemid = 1 & regionid = a, therefore there can only be one Price that has foreign keys of itemid = 1 and regionid = a). Is there any way to enforce that relationship?
You should take a look at unique together! It may solve your issue.
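Applied to the models in the question, that would look roughly like this (a sketch, not runnable outside a Django project; on Django 2.2+, Meta.constraints with UniqueConstraint is the preferred spelling of the same rule):

```python
# models.py -- sketch, assuming the question's Region and Item models
class Price(models.Model):
    regionid = models.ForeignKey(Region)
    itemid = models.ForeignKey(Item)

    class Meta:
        # one Price row per (region, item) pair, enforced at the DB level
        unique_together = ('regionid', 'itemid')
```

Attempting to save a second Price with the same pair then raises an IntegrityError from the database, rather than relying on application-level checks.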
looping through nodes and extract attributes in Networkx

I defined in python some shapes with corresponding corner points, like this:

square = [[251, 184], [22, 192], [41, 350], [244, 346]]
triangle = [[250, 181], [133, 43], [21, 188]]
pentagon = [[131, 37], [11, 192], [37, 354], [247, 350], [256, 182]]

Then, I make use of the NetworkX package to create a Graph:

G = nx.DiGraph()

Then, I create a node in the graph for each shape:

G.add_node('square', points = square, center = (139, 265))
G.add_node('triangle', points = triangle, center = (139, 135))
G.add_node('pentagon', points = pentagon, center = (138, 223))

Now the problem: I have to create some edges connecting two nodes if a condition is satisfied. The condition is whether the center of a shape is inside or outside another shape; in that case create an edge like this:

G.add_edge('triangle', 'pentagon', relation = 'inside')
G.add_edge('triangle', 'square', relation = 'outside')

To do so, I have to loop through the nodes, extract the center of a shape, extract the points of the other shapes (NOT themselves, that's useless) and make the pointPolygonTest. I've been trying quite a lot, but didn't come up with any solution.
The closest (not really effective) solution I got is this:

nodes_p = dict([((u), d['points']) for u, d in G.nodes(data=True)])
nodes_c = dict([((u), d['center']) for u, d in G.nodes(data=True)])

for z, c in nodes_c.items():
    print z + ' with center', c
    for z, p in nodes_p.items():
        p_array = np.asarray(p)
        if cv2.pointPolygonTest(p_array, c, False) >= 0:
            print 'inside ' + z
            # create edge
        else:
            print 'outside ' + z
            # create edge

This gives me the following output, which is not optimal because some relations should have been avoided (like triangle inside triangle) and some relations are wrong (like pentagon inside square):

triangle with center (139, 135)
inside triangle
outside square
inside pentagon
square with center (139, 265)
outside triangle
inside square
inside pentagon
pentagon with center (138, 223)
outside triangle
inside square
inside pentagon

How can I solve this problem? Any suggestion is appreciated. Reminder: the main problem is how to loop through the nodes and extract the info. The packages I import for the whole script are:

import numpy as np
import networkx as nx
import cv2
Here is an image of your polygons.

First, there is no need to cast the nodes as dictionaries; we can iterate on them directly. This code is based off of this example:

for u, outer_d in G.nodes(data=True):
    center = outer_d['center']
    print u, "with center", center
    for v, inner_d in G.nodes(data=True):
        # Don't compare self to self
        if u != v:
            # Create a source image
            src = np.zeros((400,400), np.uint8)
            # draw a polygon on image src
            points = np.array(inner_d['points'], np.int0)
            cv2.polylines(src, [points], True, 255, 3)
            contours,_ = cv2.findContours(src, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
            if cv2.pointPolygonTest(contours[0], center, True) <= 0:
                print 'outside', v
            else:
                print 'inside', v

The output is

pentagon with center (138, 223)
inside square
outside triangle
square with center (139, 265)
inside pentagon
outside triangle
triangle with center (139, 135)
inside pentagon
outside square

Since the goal is to determine if one polygon is completely inside the other, we should check that all of the vertices of one polygon are inside another.
Here is a tentative (unfortunately untested) solution:

    def checkPoint(point, poly, r=400):
        '''determine if point is on the interior of poly'''
        # Create a source image
        src = np.zeros((r, r), np.uint8)
        # draw a polygon on image src
        verts = np.array(poly, np.int0)
        cv2.polylines(src, [verts], True, 255, 3)
        contours, _ = cv2.findContours(src, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        return cv2.pointPolygonTest(contours[0], tuple(point), True) > 0

    for u, outer_d in G.nodes(data=True):
        points = outer_d['points']
        center = outer_d['center']
        print u, "with center", center
        for v, inner_d in G.nodes(data=True):
            poly = inner_d['points']
            if u != v:
                if all([checkPoint(point, poly) for point in points]):
                    print 'inside', v
                else:
                    print 'outside', v

The output for this example is as follows and now should be correct:

    pentagon with center (138, 223)
    outside square
    outside triangle
    square with center (139, 265)
    inside pentagon
    outside triangle
    triangle with center (139, 135)
    inside pentagon
    outside square

Note that I have made the assumption that the polygons will be convex. If this is not true, then you could check all the points on the contour instead of just the corner points. You could also build in a convexity check using cv2; see this blog for details.
Python SSL: CERTIFICATE_VERIFY_FAILED I'm getting an error when connecting to www.mydomain.com using Python 2.7.12, on a fairly new machine that uses Windows 8.1. The error is SSL: CERTIFICATE_VERIFY_FAILED on the ssl_sock.connect line of the code below. The code wraps an SSL connection in an context, and specifies I don't want to carry out certificate verification:ssl._create_default_https_context = ssl._create_unverified_contexts_ = socket.socket(socket.AF_INET, socket.SOCK_STREAM)context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)context.verify_mode = ssl.CERT_NONEcontext.check_hostname = Truecontext.load_default_certs()ssl_sock = context.wrap_socket(s_, server_hostname=myurl)ssl_sock.connect((myurl, int(myportno)))I've tried adding the plain text version of the security certificate from the server I'm trying to connect to, to the default certificate file that Python references - that didn't work (in any case, it doesn't make sense that I should need to do this)When I browse to the domain I'm trying to connect to, the browser also doesn't trust the remote server certificate, however I've examined the certificate that's bound to the domain and it's validating fine. What could be causing the mistrust? (I'm currently investigating removal of a Windows security patch from the machine where I'm getting the error, to see if that could be the cause)(this issue has occurred on other computers using the same code, however it seems to resolve after Windows retrieves a full set of updates. The machine where the problem is persisting also has a full set of updates however)
I resolved this issue, which seems to be related to a post Aug 29th 2016 security update for Windows that causes issues with certificate verification when using the TLS 1.0 protocol. Re-installing Windows without the security update at least allows things to work for now. Also I didn't get this issue when running under Windows 10
How do I pull out the first 3 lines of each ppm file? All I have done is opened up the small file and appended the contents to a list called files. Files contains a list of the tiny ppm lines. How do I remove the first three lines of this file from existence? Here's an example of what a .ppm file looks like, it's called, tiny.ppm.P35 5255255 0 0100 150 175100 150 175100 150 175100 150 175100 150 175255 0 0100 150 175100 150 175100 150 175100 150 175100 150 175255 0 0100 150 175100 150 175100 150 175100 150 175100 150 175255 0 0100 150 175100 150 175100 150 175100 150 175100 150 175255 0 0My code is below, however, I want 'files' to eventually contain a list of 9 lists containing 9 different files' information, and remove the first three lines of all of those too. def readFiles(): files = [] files.append(open('tiny.ppm','rU').readlines()) print(files)
If you want something more robust for reading images and performing various operations on them, I recommend the Pillow package in Python.

    from PIL import Image
    from glob import glob

    def readFiles():
        images = []
        for f in glob("*.ppm"):
            image = Image.open(f)
            image_pix = list(image.getdata())  # retrieve raw pixel values as a list
            images.append(image_pix)
        return images  # images is now a list of lists of pixel values

For example, you can crop an image:

    box = (100, 100, 400, 400)
    region = image.crop(box)

More examples and tutorial here: http://pillow.readthedocs.org/en/latest/handbook/tutorial.html
How to get the name of parent directory in Python? I have a program in python that prints out information about the file system... I need to know to save the name of the parent directory to a variable...For example, if a tiny bit of the file system looked like this: ParentDirectoryChildDirectorySubDirectoryIf I was inside of ChildDirectory, I would need to save the name ParentDirectory to a certain string... here is some pseudo code:import osvar = os.path.parent()print varOn a side note, I am using Python 2.7.6, so I need to be compatible with 2.7.6...
You can get the complete path of the file in which the script resides from the `__file__` variable, then use `os.path.abspath()` to get its absolute path. From there, `os.path.join()` with the parent-directory descriptor (`..` on most operating systems, but use `os.pardir` to get it from Python so the code is truly platform independent) followed by another `os.path.abspath()` gives the complete path of the current directory. Applying the same logic again gives its parent directory.

Example code:

    import os, os.path

    curfilePath = os.path.abspath(__file__)

    # this will return the current directory in which the python file resides.
    curDir = os.path.abspath(os.path.join(curfilePath, os.pardir))

    # this will return the parent directory.
    parentDir = os.path.abspath(os.path.join(curDir, os.pardir))
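As an aside, a shorter equivalent of the same walk-up: `os.path.dirname` strips the last path component, so applying it twice to the script's path climbs two levels. This is just a sketch; the path below is hypothetical, for illustration only.

```python
import os.path

# Hypothetical path, standing in for os.path.abspath(__file__).
cur_file = "/home/user/ParentDirectory/ChildDirectory/script.py"

cur_dir = os.path.dirname(cur_file)         # .../ParentDirectory/ChildDirectory
parent_dir = os.path.dirname(cur_dir)       # .../ParentDirectory
parent_name = os.path.basename(parent_dir)  # just the directory's name
print(parent_name)
```

`os.path.dirname` and `os.path.basename` are purely string operations, so they work the same on Python 2.7.6 and 3.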
No such file or directory: 'geckodriver' for a simple Selenium application in Python I'm running a simple example of selenium on Linux:from selenium import webdriverfrom selenium.webdriver.common.keys import Keysdriver = webdriver.Firefox()driver.get("something")and get an error:FileNotFoundError: [Errno 2] No such file or directory: 'geckodriver'How to fix it?$ pythonPython 3.5.2 (default, Jun 28 2016, 08:46:01) [GCC 6.1.1 20160602] on linuxType "help", "copyright", "credits" or "license" for more information.>>> import selenium>>> from selenium.webdriver.common.keys import Keys>>>
Downloading geckodriver

The geckodriver executable can be downloaded here.

Python3 venv

Download the geckodriver executable from the above link and extract it to env/bin/ to make it accessible to only the virtual environment. In your python code, you will now be able to do the following:

    from selenium import webdriver

    browser = webdriver.Firefox()
    browser.get("https://stackoverflow.com/")

Linux

If you would like to make it available system wide, download the geckodriver executable from the above link and extract it to /usr/bin/ (or anything inside of your $PATH).

Windows

Note: this needs a Windows user to test and confirm. Download geckodriver from the above link and extract it to C:\Windows\System32\ (or anything inside your Path environment variable).

Mac OS X

Note: I took this from Vincent van Leeuwen's answer in this very question; putting it here for the sake of lumping everything in one answer. To make geckodriver available system wide, open up your Terminal app and run:

    brew install geckodriver

More Info

More info on selenium can be found here: Selenium requires a driver to interface with the chosen browser. Firefox, for example, requires geckodriver, which needs to be installed before the below examples can be run. Make sure it's in your PATH, e.g., place it in /usr/bin or /usr/local/bin. Failure to observe this step will give you the error selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
Open audio with callback using ALSA Is it possible in Python, using ALSA, to access the audio hardware for playback, with a callback function:def audiocallback(): # create some audio and return a buffer of 1024 samples (~23 ms @ 44.1khz) # that is going to be played on the device return bufferopenaudio(deviceid=1, type=OUTPUT, freq=44100, buffersize=1024, callback = audiocallback)Would this be possible with python-alsaaudio ? Or another module related with ALSA, like SDL? Or by using a specific thread with threading?Note: is there an official mailing-list, repo, forum, etc. for python-alsaaudio ? I didn't find any active development.Note 2: I don't want to use PyAudio, because when using it, I had various problems on Raspberry Pi: as it involves another layer, PortAudio, this probably increases the weight of this solution. I successfully used PyAudio on PC x86 projects, but on Pi, it seems too heavy for various reasons that would be too long to discuss here.
I've succeeded in getting it to work with python-alsaaudio. There is no out-of-the-box method for this purpose, but it's relatively easy to implement.

    import alsaaudio
    import random
    import struct

    BUFFER_SIZE = 1024

    def noise_callback():
        return [random.randint(-20000, 20000) for i in range(BUFFER_SIZE)]

    def openaudio(card_name, freq, buffer_size, callback):
        device = alsaaudio.PCM(card=card_name)
        device.setchannels(1)
        device.setrate(freq)
        device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
        device.setperiodsize(buffer_size)
        while True:
            samples = callback()
            data = struct.pack("h" * len(samples), *samples)
            device.write(data)

    openaudio(card_name="default", freq=44100, buffer_size=BUFFER_SIZE, callback=noise_callback)

This example continuously writes white noise to the ALSA default audio device until halted.

If you want to do something in parallel, you need to launch openaudio in a separate thread. As I understand it, that is the only way, because the documentation says PCM_ASYNC mode hasn't been implemented yet.

Looks like the last commit to the python-alsaaudio repository was in August of 2011, so there can hardly be any activities like forums or mailing lists. But its source code looks simple enough to quickly dig in; it's a plain wrapper. The only information I've used is the documentation and the playwav.py example from the python-alsaaudio package.
Uploading an image to the datastore in GAE I have this little set of Python code on the GAE, trying to upload an image to the datastore:class UploadPage(webapp.RequestHandler): def get(self): self.response.out.write("""<html><body> <form action="/addimg" enctype="multipart/form-data" method="post"> <div><label>Project Name</label></div> <div><textarea name="title" rows="2" columns "60"></textarea></div> <div><label>Despcription:</label></div> <div><textarea name="content" rows="3" cols="60"></textarea></div> <div><label>Image</label></div> <div><input type="file" name="img"/></div> <div><input type="submit" value="Upload" /></div> </form> </body> </html>""")class addimg(webapp.RequestHandler): def post(self): images = ImgUpload() imgtitle = self.request.get('title') imgcontent = self.request.get('content') headpic = self.request.get('img') images.headpic = db.Blob(headpic) images.imgtitle = imgtitle images.imgcontent = imgcontent images.put() self.redirect('/upload')When you go to the site, hit submit, it goes to the addimg and stops and doesn't complete the put or redirect, I am not sure where I may have missed it, any guidance is very appreciative.
I switched addimg to a POST handler under the UploadPage class, and it worked. I'm not sure why it didn't work when coming from its own class, though.
How to pickle a sklearn pipeline for multi label classifier/one vs rest classifier? I am trying to create a multi-label classifier using the one vs rest classifier wrapper. I used a pipeline for TFIDF and the classifier. When fitting the pipeline, I have to loop through my data by category and then fit the pipeline each time to make predictions for each category. Now, I want to export this like how one would usually export a fitted model using pickle or joblib. Example:pickle.dump(clf,'clf.pickle')How can I do this with the pipeline? Even if I pickle the pipeline, do I still need to fit the pipeline every time when I want to predict on a new keyword?Example: pickle.dump(pipeline,'pipeline.pickle')pipeline = pickle.load('pipeline.pickle')for category in categories: pipeline.fit(X_train, y_train[category]) pipeline.predict(['kiwi']) print (predict)If I skip the pipeline.fit(X_train, y_train[category]) after loading the pipeline, I only get a single value array in predict. If I fit the pipeline, I get a three value array.Also, how can I incorporate the grid search into my pipeline for export?raw_datakeyword class1 class2 class3"orange apple" 1 0 1"lime lemon" 1 0 0"banana" 0 1 0categories = ['class1','class2','class3']pipelineSVC_pipeline = Pipeline([ ('tfidf', TfidfVectorizer(stop_words=stop_words)), ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)), ])Gridsearch (dont know how to incorporate this into the pipeline)parameters = {'tfidf__ngram_range': [(1, 1), (1, 2)], 'tfidf__use_idf': (True, False), 'tfidf__max_df': [0.25, 0.5, 0.75, 1.0], 'tfidf__max_features': [10, 50, 100, 250, 500, 1000, None], 'tfidf__stop_words': ('english', None), 'tfidf__smooth_idf': (True, False), 'tfidf__norm': ('l1', 'l2', None), }grid = GridSearchCV(SVC_pipeline, parameters, cv=2, verbose=1)grid.fit(X_train, y_train)Fitting pipelinefor category in categories: print('... 
Processing {}'.format(category))
        SVC_pipeline.fit(X_train, y_train[category])
        # compute the testing accuracy
        prediction = SVC_pipeline.predict(X_test)
        print('Test accuracy is {}'.format(accuracy_score(y_test[category], prediction)))
OneVsRestClassifier internally fits one classifier per class, so you should not be fitting the pipeline once per class like you are doing in

    for category in categories:
        pipeline.fit(X_train, y_train[category])
        pipeline.predict(['kiwi'])
        print (predict)

You should be doing something like this:

    SVC_pipeline = Pipeline([
        ('tfidf', TfidfVectorizer()),  # add your stop_words
        ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
    ])

    SVC_pipeline.fit(["apple", "boy", "cat"], np.array([[0,1,1],[1,1,0],[1,1,1]]))

You can now save the model using

    pickle.dump(SVC_pipeline, open('pipeline.pickle', 'wb'))

Later you can load back the model and make predictions using

    obj = pickle.load(open('pipeline.pickle', 'rb'))
    obj.predict(["apple", "boy", "cat"])

You can binarise your multiclass labels using MultiLabelBinarizer before passing them to the fit method.

Sample:

    from sklearn.preprocessing import MultiLabelBinarizer

    y = [['c1','c2'], ['c3'], ['c1'], ['c1','c3'], ['c1','c2','c3']]
    mb = MultiLabelBinarizer()
    y_encoded = mb.fit_transform(y)
    SVC_pipeline.fit(["apple", "boy", "cat", "dog", "rat"], y_encoded)

Using grid search (sample):

    grid = GridSearchCV(SVC_pipeline, {'tfidf__use_idf': (True, False)}, cv=2, verbose=1)
    grid.fit(["apple", "boy", "cat", "dog", "rat"], y_encoded)

    # Save the pipeline
    pickle.dump(grid, open('grid.pickle', 'wb'))

    # Later load it back and make predictions
    grid_obj = pickle.load(open('grid.pickle', 'rb'))
    grid_obj.predict(["apple", "boy", "cat", "dog", "rat"])
PyQt5 error "PyCapsule_GetPointer called with incorrect name" I've just built PyQt5 in a pyenv virtualenv with python 3.6.3 on OpenSUSE leap, the build went fine, but when I import>>> from PyQt5 import QtCoreTraceback (most recent call last): File "<stdin>", line 1, in <module>ValueError: PyCapsule_GetPointer called with incorrect nameI can import PyQt5, but then I cannot use the modules under it>>> import PyQt5>>> PyQt5.QtCoreTraceback (most recent call last): File "<stdin>", line 1, in <module>AttributeError: module 'PyQt5' has no attribute 'QtCore'I've read here that the cause could be another sip on the system for example of a PyQt4 installation, I tried to uninstall PyQt4 from the package manager but it didn't help.I have no idea what to do, any ideas?If I install the python3-qt5 package and use the system python it worksEdit:I had the same problem with PyQt4 on another machine on OpenSUSE Leap 15, the solution was to configure sip with:python configure.py --sip-module PyQt4.sip --no-dist-info --no-toolsas stated in the PyQt4 doc
OK, so this was pretty easy actually. As stated in the docs (PyQt4, PyQt5), SIP must be configured with the --sip-module option, so for PyQt5 I did:

    python configure.py --sip-module PyQt5.sip --no-tools

and for PyQt4:

    python configure.py --sip-module PyQt4.sip --no-tools

This applies to PyQt4 >= 4.12.2 and PyQt5 >= 5.11.

EDIT: PyQt5 now has the so-called PyQt-builder; see the PyQt5 doc.
Converting Histogram Values to Int Array in Python

Hello everyone. I created a histogram by using an array with a size of 1x1000 whose values range from 0 to 99. I want to store the histogram values as an int array. However, when I run the program I get the following results (the numeric values are different for everyone since it's random):

    (array([ 93., 101., 119., 91., 97., 110., 102., 111., 85., 91.]),
     array([ 0. , 9.9, 19.8, 29.7, 39.6, 49.5, 59.4, 69.3, 79.2, 89.1, 99. ]),
     <a list of 10 Patch objects>)

I want to get a result like this: [ 93 101 119 91 97 110 102 111 85 91]

Can you help? Also, what kind of array am I getting as a result? It doesn't look like anything I have ever seen before.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.random.randint(100, size=1000)
    y = plt.hist(x, 10)
    print(y)
When dealing with random numbers you can use np.random.seed(n) to ensure that others who run your code get the same numbers.

According to the docs you get both the bin values and the bin edges back from plt.hist(). Take a look at the following:

    print(y[0])
    [109. 109.  82. 117.  84. 101.  80. 108. 110. 100.]

    [type(i) for i in y[0]]
    [<class 'numpy.float64'>, <class 'numpy.float64'>, <class 'numpy.float64'>,
     <class 'numpy.float64'>, <class 'numpy.float64'>, <class 'numpy.float64'>,
     <class 'numpy.float64'>, <class 'numpy.float64'>, <class 'numpy.float64'>,
     <class 'numpy.float64'>]

You can assign y[0] to whatever you'd like and you should be good to go.
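To get the plain integer array the question asks for, it should be enough to cast the counts with astype(int). A sketch (using np.histogram here, which computes the same (counts, bin_edges) pair as plt.hist without needing a figure):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the numbers are reproducible
x = np.random.randint(100, size=1000)

# np.histogram returns the same counts and bin edges that plt.hist
# returns as its first two elements, with no plotting involved.
counts, edges = np.histogram(x, 10)

int_counts = counts.astype(int)  # plain integer counts, as requested
print(int_counts)
```

The same cast works on the tuple from plt.hist: `y[0].astype(int)`.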
Django 404 Page Not Found blog.views.post_detail I'm new with django and I use mysql. I followed the tutorial to make blog with django. I made list and detail blog, but when I clicked the detail post of blog, the error comes Page Not Found (404) Raised by: blog.views.post_detail. These are my site urls.py, blog urls.py, models.py and views.py.Site urls.py:from django.conf.urls import include, urlfrom django.contrib import adminurlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^blog/', include('blog.urls', namespace='blog', app_name='blog')),] blog urls.py:from django.conf.urls import url,includefrom . import viewsurlpatterns = [ url(r'^$', views.PostListView.as_view(),name='post_list'), url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\ r'(?P<post>[-\w]+)/$', views.post_detail, name='post_detail'),]models.py:# -*- coding: utf-8 -*-from __future__ import unicode_literalsfrom django.db import modelsfrom django.utils import timezonefrom django.contrib.auth.models import Userfrom django.core.urlresolvers import reverseclass PublishedManager(models.Manager): def get_queryset(self): return super(PublishedManager,self).get_queryset()\ .filter(status='published')class Post(models.Model): STATUS_CHOICES = (('draft', 'Draft'), ('published', 'Published'),) title = models.CharField(max_length=250) slug = models.SlugField(max_length=250,unique_for_date='publish') author = models.ForeignKey(User,related_name='blog_posts') body = models.TextField() publish = models.DateTimeField(default=timezone.now) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) status = models.CharField(max_length=10,choices=STATUS_CHOICES,default='draft') objects = models.Manager() published = PublishedManager() class Meta: ordering=('-publish',) def __str__(self): return self.title def get_absolute_url(self): return reverse('blog:post_detail', args=[self.publish.year, self.publish.strftime('%m'), self.publish.strftime('%d'), 
                                 self.slug])

views.py:

    # -*- coding: utf-8 -*-
    from __future__ import unicode_literals
    from django.shortcuts import render, get_object_or_404
    from .models import Post
    from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
    from django.views.generic import ListView

    def post_list(request):
        object_list = Post.published.all()
        paginator = Paginator(object_list, 3)
        page = request.GET.get('page')
        try:
            posts = paginator.page(page)
        except PageNotAnInteger:
            posts = paginator.page(1)
        except EmptyPage:
            posts = paginator.page(paginator.num_pages)
        return render(request, 'blog/post/list.html', {'page': page, 'posts': posts})

    class PostListView(ListView):
        queryset = Post.published.all()
        context_object_name = 'posts'
        paginate_by = 3
        template_name = 'blog/post/list.html'

    def post_detail(request, year, month, day, post):
        post = get_object_or_404(Post, slug=post,
                                 status='published',
                                 publish__year=year,
                                 publish__month=month,
                                 publish__day=day)
        return render(request, 'blog/post/detail.html', {'post': post})
I think maybe you should use "pk" as an argument instead of the complicated URLs that you have:

    def post_detail(request, pk):
        post = get_object_or_404(Post, pk=pk)
        return render(request, 'blog/post/detail.html', {'post': post})

    urlpatterns = [
        url(r'^$', views.PostListView.as_view(), name='post_list'),
        url(r'^(?P<pk>[0-9]+)/$', views.post_detail, name='post_detail'),
    ]

Just don't complicate things; nobody loves that. And if you still want it your way, make it super simple from the start and then add your stuff to see what really went wrong: begin with just the id, then add the year, then the month, and then the rest. You will find what's wrong.
Why does my loop stop after one iteration?

I'm struggling to see why my loop stops after one iteration. My code:

    import os

    def open_data(fpath):
        counter = 0
        for i in os.listdir(fpath):
            if os.path.isfile(os.path.join(fpath, i)):
                #print counter
                f = open(os.path.join(fpath, i), "r")
                #counter = counter + 1
                return counter, f.readlines()
                #f.close()

    x = open_data("C:/Users/manchester/.ipynb_checkpoints/txt_sentoken/practice_")

Basically I am trying to loop through all files in my directory, which contain movie reviews. I am first aiming to read all files from the directory using a function; then I need to take, say, 70% of the reviews for training, 10% for testing, 10% for validation, and 10% as a hyperparameter sample. But I just can't get over this first hurdle of trying to read all files using a function. I have tried using list and append, but this does not work either.
You are not reading all the files; you are only opening them, each into the same variable, so by the time you call f.readlines(), f is only whatever your last file was. You should read everything into a "buffer" and return it at the end. It should be something like this:

    def open_data(fpath):
        counter = 0
        all_lines = []
        for i in os.listdir(fpath):
            if os.path.isfile(os.path.join(fpath, i)):
                all_lines += open(os.path.join(fpath, i), "r").readlines()
                counter = counter + 1
        return counter, all_lines

Keep in mind that reading lots of potentially big files adds up in memory; you'd be better off using a generator if your code allows it:

    def get_lines(fpath):
        for i in os.listdir(fpath):
            if os.path.isfile(os.path.join(fpath, i)):
                for line in open(os.path.join(fpath, i), "r"):
                    yield line

    # this gives you an iterable over all the lines in all the files, one line at a time

Later edit: I have a folder "x" with 2 files, "f1" and "f2"; "f1" contains the numbers 1, 2, 3, one per line, while "f2" contains the numbers 4, 5, 6.

    >>> print open_data(".\\x")  # gives
    (2, ['1\n', '2\n', '3\n', '4\n', '5\n', '6\n'])

Using the generator, you won't have the list of all the lines but an "iterable" (you could call it a "lazy reader"); in order to use it you have to iterate over it:

    >>> for line in get_lines(".\\x"):
    ...     print line  # will give
    1

    2

    3

    4

    5

    6

The extra line between the numbers is the \n read from the files, printed along with the \n that print adds.
Python: Can't add string at the end of the list I'm trying to make a text based RPG and when I'm trying to shorten every possible input into one variable I can't end a list with string:input_use = ["use ", "use the "]...input_press = ["press ", "press the ", input_use]...input_interact_button = input_press + "button"
If you want to build lists, then concatenate the lists onto existing values:

    input_press = ["press ", "press the "] + input_use
    input_interact_button = input_press + ["button"]

Demo:

    >>> input_use = ["use ", "use the "]
    >>> input_press = ["press ", "press the "] + input_use
    >>> input_interact_button = input_press + ["button"]
    >>> input_interact_button
    ['press ', 'press the ', 'use ', 'use the ', 'button']
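If the eventual goal is matching whole typed phrases rather than carrying prefixes around, one way is to expand every prefix/noun combination up front with itertools.product. A sketch; the phrase lists here are made up for illustration:

```python
import itertools

# Hypothetical prefix and noun lists for one interaction.
prefixes = ["press ", "press the ", "use ", "use the "]
nouns = ["button"]

# Every full phrase a player might type for this interaction.
phrases = [p + n for p, n in itertools.product(prefixes, nouns)]
print(phrases)
# ['press button', 'press the button', 'use button', 'use the button']
```

Checking `player_input in phrases` is then a single membership test instead of stitching lists together at lookup time.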
The len() of this list is not coming out right: Python

I'm trying to count the number of times the word "fizz" appears in my list. This is the code:

    def fizz_count(key):
        for x in key:
            if x != 'fizz':
                key.remove(x)
                return len(key)

    print fizz_count(["fizz", 0, 0, 0, 10])

However, this returns 4 instead of 1. Any help with my code?
As soon as a function returns something, it exits. Hence, when you do return len(key) inside the loop, you return the length of the list right after removing the first 0.

If you want to count how many times something appears in a list, just do

    key.count('fizz')

You should never remove items from a list while iterating over it. Look what would happen if you continued your loop and there were more "fizz"es:

    >>> key = ['fizz', 1, 2, 3, 'fizz', 4]
    >>> for x in key:
    ...     if x != 'fizz':
    ...         key.remove(x)
    ...         print key  # Print the list after an item is removed.
    ...
    ['fizz', 2, 3, 'fizz', 4]
    ['fizz', 2, 'fizz', 4]
    ['fizz', 2, 'fizz']

Notice how it never removed the 2? That's because the for loop never got to it, since the list's length and order changed while iterating.
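Putting the count() suggestion together, the whole function collapses to one line (Python 3 print syntax here):

```python
# count() scans the list without mutating it, so it sidesteps the
# remove-while-iterating pitfall entirely.
def fizz_count(key):
    return key.count('fizz')

print(fizz_count(["fizz", 0, 0, 0, 10]))         # 1
print(fizz_count(['fizz', 1, 2, 3, 'fizz', 4]))  # 2
```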
Removing zero values from a numpy array of arrays Original datas [[ 0.00000000e+00 1.00000000e+00 -6.76207728e+00 -1.63236398e+01] [ 0.00000000e+00 1.00000000e+00 2.51283367e+01 1.13952157e+02] [ 0.00000000e+00 1.00000000e+00 3.11402956e+00 -5.16009612e+02] [ 0.00000000e+00 1.00000000e+00 3.10969787e+01 1.82175649e+02] [ 1.00000000e+00 -2.31269114e+00 -4.13720127e+02 3.55395844e+03] [ 1.00000000e+00 4.54598490e+01 6.19694322e+02 2.61091335e+03] [ 1.00000000e+00 7.36925014e-01 -4.49386738e+02 -1.22392549e+03] [ 1.00000000e+00 3.29511609e+00 -4.43413555e+02 -4.12677155e+03]]I tried to remove zeros with this ode belowdef removeZeroPadding(X): res = [] for poly in enumerate(X): tmp = poly[1] tmp = tmp[tmp != 0] res.append(tmp) return resThis transforms into[array([ 1. , -6.76207728, -16.32363975]), array([ 1. , 25.1283367 , 113.95215706]), array([ 1. , 3.11402956, -516.0096117 ]), array([ 1. , 31.09697873, 182.17564943]), array([ 1.00000000e+00, -2.31269114e+00, -4.13720127e+02, 3.55395844e+03]), array([1.00000000e+00, 4.54598490e+01, 6.19694322e+02, 2.61091335e+03]), array([ 1.00000000e+00, 7.36925014e-01, -4.49386738e+02, -1.22392549e+03]), array([ 1.00000000e+00, 3.29511609e+00, -4.43413555e+02, -4.12677155e+03])]How can I keep the structures of the original data with no zeros? Thanksedit: it should look like this [[ 1.00000000e+00 -6.76207728e+00 -1.63236398e+01] [ 1.00000000e+00 2.51283367e+01 1.13952157e+02] [ 1.00000000e+00 3.11402956e+00 -5.16009612e+02] [ 1.00000000e+00 3.10969787e+01 1.82175649e+02] [ 1.00000000e+00 -2.31269114e+00 -4.13720127e+02 3.55395844e+03] [ 1.00000000e+00 4.54598490e+01 6.19694322e+02 2.61091335e+03] [ 1.00000000e+00 7.36925014e-01 -4.49386738e+02 -1.22392549e+03] [ 1.00000000e+00 3.29511609e+00 -4.43413555e+02 -4.12677155e+03]]
It seems like you do not want a list of arrays as the output but instead want a 2-dimensional array (i.e. [[...]] instead of [array([...]), array([...])]).However, this is not possible as the rows of your array end up with different sizes after you trim them by removing the zeros (i.e. some end up having 3 elements and some have 4). If you want it to be a single array, same as for matrices in mathematics, all columns (and all rows) will need to have the same number of elements.As an alternative, you could assign the elements you want to trim a different value, e.g. None, or you can create an empty array and fill it.
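To illustrate the point, a short sketch (with made-up rows) showing why the trimmed result has to stay a plain list of unequal-length lists rather than one 2-D array:

```python
import numpy as np

# Made-up rows: some start with one zero, some with none, so trimming
# the zeros leaves rows of different lengths.
X = np.array([[0.0, 1.0, -6.76, -16.32],
              [0.0, 1.0, 25.13, 113.95],
              [1.0, -2.31, -413.72, 3553.96]])

# A 2-D ndarray requires equal-length rows, so the trimmed result
# must stay a plain Python list of lists.
trimmed = [row[row != 0].tolist() for row in X]
print(trimmed)
# [[1.0, -6.76, -16.32], [1.0, 25.13, 113.95], [1.0, -2.31, -413.72, 3553.96]]
```

The row lengths come out as 3, 3, and 4, which is exactly the ragged shape numpy cannot hold in a single rectangular array.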
Python subprocess call throws error when writing file I'd like to use SVOX/pico2wave to write a wav-file from Python code. When I execute this line from a terminal the file is written just fine:/usr/bin/pico2wave -w=/tmp/tmp_say.wav "Hello world."I've verified that pico2wave is located in /usr/bin.This is my Python code:from subprocess import callcall('/usr/bin/pico2wave -w=/tmp/tmp_say.wav "Hello world."')... which throws this error:Traceback (most recent call last): File "app/app.py", line 63, in <module> call('/usr/bin/pico2wave -w=/tmp/tmp_say.wav "Hello world."') File "/usr/lib/python2.7/subprocess.py", line 168, in call return Popen(*popenargs, **kwargs).wait() File "/usr/lib/python2.7/subprocess.py", line 390, in __init__ errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1024, in _execute_child raise child_exceptionOSError: [Errno 2] No such file or directory
From the documentation:

    Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names). If passing a single string, either shell must be True (see below) or else the string must simply name the program to be executed without specifying any arguments.

So you might try with

    call(['/usr/bin/pico2wave', '-w=/tmp/tmp_say.wav', 'Hello world.'])

Note that when the arguments are passed as a list, the embedded double quotes around Hello world. are no longer needed; they were only there for the shell, and keeping them would make the literal quote characters part of the argument.
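If you would rather keep writing the command as one string, shlex.split tokenizes it with shell-style quoting rules, so the quoted text stays a single argument. A sketch (nothing here actually runs pico2wave):

```python
import shlex

# shlex.split applies POSIX shell quoting rules, keeping the quoted
# sentence as one argument and dropping the quote characters.
cmd = shlex.split('/usr/bin/pico2wave -w=/tmp/tmp_say.wav "Hello world."')
print(cmd)
# ['/usr/bin/pico2wave', '-w=/tmp/tmp_say.wav', 'Hello world.']

# subprocess.call(cmd) would then run it without needing shell=True.
```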
Button matrix/multiple button + function for each one def create_widget(self): for x in range(11): for y in range(11): self.bttn = Button(self) self.bttn.grid(row=x, column=y) for c in range(len(path)): if [x,y] == path[c]: self.bttn["text"] = numbers[c] break else: self.bttn["text"] = randint(0, 200)def select(self): print(self.bttn["text"])Note:path is the list of coordinates (example: [[0, 0], [0, 1], [1, 1],[2, 1], [3, 1], [3, 2], [4, 2], [5, 2], [6, 2], [6, 3], [7, 3], [8,3], [8, 4], [9, 4], [9, 5], [10, 5], [10, 6], [10, 7], [10, 8], [10, 9], [10, 10]])numbers is a randomly generated array of numbers (example: [15, 21, 27, 33, 39, 45, 51, 57, 63, 69, 75, 81, 87, 93, 99, 105, 111, 117, 123, 129, 135])I have a matrix of buttons and I want a function (e.g. function select(self)) to print the text of a clicked button. Right now it only prints text from the last clicked button.
Each new button you declare overwrites the same attribute, self.bttn, which leaves you with access only to the last button created. The buttons themselves get created alright, but self.bttn points at the last one, so its text is the only one you can reach through self.bttn, and therefore its text is what gets printed every time select is called.
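The same late-binding behavior can be shown without tkinter: every callback created in a loop sees the loop variable's final value unless that value is frozen as a default argument at definition time.

```python
# Each lambda in `late` closes over the shared loop variable i and so
# reports its final value; each lambda in `bound` freezes i as a
# default argument the moment it is defined.
late = [lambda: i for i in range(3)]
bound = [lambda i=i: i for i in range(3)]

print([f() for f in late])   # [2, 2, 2] -- all share the final i
print([f() for f in bound])  # [0, 1, 2] -- each kept its own i
```

In the grid code, the analogous fix (a sketch, not the only way) is to give each button its own command at creation time, e.g. `command=lambda b=self.bttn: print(b["text"])`, so each callback carries a reference to its own widget instead of reading the shared self.bttn.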
Python: list indices must be integers or slices, not str

Hi, I am trying to print a list of strings in Python, but it keeps showing me this error: "list indices must be integers or slices, not str".

Code:

    Features ['entity_number', 'type', 'programs', 'name', 'title', 'addresses']

So here I just want to display the data under 'name'. Can someone help me resolve this problem?
It looks like you are looking for a dictionary {} and not a list []. A dictionary has the added benefit of storing what are known as 'key: value' pairs; if you know your key, you can get your value!

    Features = {
        'entity_number': 'some number',
        'type': 'some type',
        'programs': 'some program',
        'name': 'some name',
        'title': 'some title',
        'addresses': 'some address'
    }

To find a specific value from a key, you can do the following:

    for key, value in Features.items():
        if key == 'name':  # 'name' is the key we wish to get the value from
            print(value)   # print its value

(Use ==, not is, when comparing strings; is checks object identity, not equality.)

This will give you the output:

    some name

I hope this helped.
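As a side note, with a dictionary there is no need to loop at all: direct lookup, or .get when the key might be missing, does the job in one step.

```python
# Hypothetical values, mirroring the Features dict above.
Features = {'name': 'some name', 'title': 'some title'}

print(Features['name'])                # 'some name'
print(Features.get('missing', 'n/a'))  # 'n/a' -- default when key is absent
```

`Features['missing']` would raise a KeyError, while `.get` returns the supplied default instead.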
how to display entry widget on new window in tkinter? from tkinter import *from tkinter import messageboxw = Tk()w.geometry('200x250')def buttons(): r1 = Tk() r1.geometry('200x250') r1.mainloop()t= Label(text = "MIB",font =("Arial", 49))t.pack(side = TOP)e = Label(text = "Email")e1 =Entry()e.pack()e1.pack()p = Label(text = "Password")p.pack()p1 = Entry()p1.pack()b= Button(text = "SIGN UP", command = buttons)b.pack()w.mainloop()How I can display the entry widget after clicking submit button in the new window in tkinter python? I tried defining under button function but it is showing me the window but not the widgets. The widgets are only displayed on the first window after clicking submit button.
Tk() is the root window, so it needs to be called only once. After that, to open another window, you need to use tkinter.Toplevel(). Below is the code; I put labelexample in the new window (any widget, such as an Entry, can be added to r1 the same way):

    from tkinter import *
    from tkinter import messagebox

    w = Tk()
    w.geometry('200x250')

    def buttons():
        r1 = Toplevel(w)
        r1.geometry('200x250')
        labelexample = Label(r1, text='GOOD')
        labelexample.pack()

    t = Label(text="MIB", font=("Arial", 49))
    t.pack(side=TOP)
    e = Label(text="Email")
    e1 = Entry()
    e.pack()
    e1.pack()
    p = Label(text="Password")
    p.pack()
    p1 = Entry()
    p1.pack()
    b = Button(text="SIGN UP", command=buttons)
    b.pack()
    w.mainloop()
Changing name of the file to parent folder name I have a bunch of folders in my directory, and in each of them there is a single file. Regardless of the file extension, I would like that file to have exactly the same name as its parent folder, i.e. for the folder 2023-10-18 I would like the file inside to be named 2023-10-18 instead of occultation....

I tried to rename multiple files using this thread: Renaming multiple files in a directory using Python, and this page: https://pynative.com/python-rename-file/#:~:text=Use%20rename()%20method%20of,function%20to%20rename%20a%20file. But unfortunately, after running code like this:

    import os
    from pathlib import Path

    pth = Path(__file__).parent.absolute()
    files = os.listdir(pth)
    for file in files:
        os.rename(os.pth.join(pth, file), os.pth.join(pth, '' + file + '.kml'))

I get this error:

    AttributeError: module 'os' has no attribute 'pth'

described here: AttributeError: 'module' object has no attribute, which says little to me, as I am a novice in Python.

How can I automatically rename all the files in these directories so that each file gets the same name as its directory? Is it possible?

UPDATE: After the hint below, my code now looks like this:

    import os
    from pathlib import Path

    pth = Path(__file__).parent.absolute()
    files = os.listdir(pth)
    for file in files:
        os.rename(os.path.join(pth, file), os.path.join(pth, '' + file + '.kml'))

but instead of renaming the file inside each folder, all the entries in the given directory (the folders themselves) get renamed to .kml. How can I access the individual files inside the folders?
The reasoning for each line of code is commented! Every answer should use iglob, please read more about it here! The code is also suffix-agnostic (the .kml suffix is not hardcoded) and works in any scenario that requires this utility. Only standard library functions were used.

The most satisfying method: move out of the directory, rename, and delete the directory

    import os
    from shutil import move
    from glob import iglob
    from pathlib import Path
    from concurrent.futures import ThreadPoolExecutor

    # The .py file has to be in the same directory as the folders containing the files!
    root = Path(__file__).parent

    # Using threading in case the operation becomes I/O bound (many files)
    with ThreadPoolExecutor() as executor:
        for file in iglob(str(root / "**" / "*")):
            file = Path(file)
            # The new filename is the name of the directory, plus the suffix(es) of the original file
            new_filename = f"{file.parent.name}{''.join(file.suffixes)}"
            # Move AND rename simultaneously
            executor.submit(move, file, root / new_filename)
            # Delete the directory because it is now empty and has no utility; omit this line if unwanted
            executor.submit(os.rmdir, file.parent)

Less satisfying; OP's request: rename the file (keep it inside its directory)

In case you really want to only rename the files and keep them in their respective directories:

    import os
    from shutil import move
    from glob import iglob
    from pathlib import Path
    from concurrent.futures import ThreadPoolExecutor

    RENAME_ONLY = True

    # The .py file has to be in the same directory as the folders containing the files!
    root = Path(__file__).parent

    # Using threading in case the operation becomes I/O bound
    with ThreadPoolExecutor() as executor:
        for file in iglob(str(root / "**" / "*")):
            file = Path(file)
            # The new filename is the name of the directory, plus the suffix(es) of the original file
            new_filename = f"{file.parent.name}{''.join(file.suffixes)}"
            if RENAME_ONLY:
                executor.submit(os.rename, file, file.parent / new_filename)
            else:
                # Move AND rename simultaneously
                executor.submit(move, file, root / new_filename)
                # Delete the directory because it is now empty; omit this line if unwanted
                executor.submit(os.rmdir, file.parent)

Why ''.join(file.suffixes)?

Some files have multiple periods, like abc.x.yz. We get .yz with file.suffix, and .x.yz with ''.join(file.suffixes); hence my choice to use the latter. It is a matter of sensitivity towards sub-suffixes, which are often important. For instance, with .tar.gz files, file.suffix wouldn't catch .tar, which is detrimental to the file format.
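To try the rename-in-place idea safely without touching real data, here is a minimal synchronous sketch of my own (not the answer's exact code) that builds a throwaway directory tree with tempfile and renames each file after its parent folder, keeping the suffix:

```python
import tempfile
from pathlib import Path

# Build a throwaway tree: <tmp>/2023-10-18/occultation.kml
tmp = Path(tempfile.mkdtemp())
folder = tmp / "2023-10-18"
folder.mkdir()
(folder / "occultation.kml").write_text("dummy")

# Rename every file one level down after its parent directory, keeping suffix(es)
for file in tmp.glob("*/*"):
    new_name = f"{file.parent.name}{''.join(file.suffixes)}"
    file.rename(file.parent / new_name)

print(sorted(p.name for p in folder.iterdir()))  # → ['2023-10-18.kml']
```

The `*/*` glob only matches entries one level inside the folders, which avoids the pitfall from the question's UPDATE, where os.listdir returned the folders themselves and they got renamed instead of the files inside them.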