How to speed up append in Python? My code is something like this:

    def fun() -> list:
        array = ...  # creation of array
        return array

    def fun2():
        array2 = []
        # some code
        for i in range(some_value):
            array2.append(fun())

Note that it isn't known how many values the function fun returns at each step of the algorithm, so it is impossible to preallocate array2 at the beginning. Is there any way to speed this up?
If you take a look at the time complexity table of Python list operations, you can see that appending an element to a list takes amortized constant time. So the cost of appending k elements is proportional to k, regardless of how many elements the list already contains: the append loop itself is not the bottleneck you might expect it to be.
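If fun returns several values per step, one small, commonly cited speedup is to replace per-element appends with list.extend, which adds all returned values in a single call. A minimal sketch (fun here is a made-up stand-in for the question's function):

```python
# Hypothetical fun() standing in for the question's function: it returns
# a list whose length varies from call to call.
def fun(step):
    return list(range(step % 3 + 1))

# Appending one element at a time in a Python-level loop:
array_append = []
for i in range(5):
    for value in fun(i):
        array_append.append(value)

# extend() adds all returned values in one call and is usually a bit
# faster than a loop of individual append() calls:
array_extend = []
for i in range(5):
    array_extend.extend(fun(i))

assert array_append == array_extend
```

Either way the result is the same; extend just pushes the inner loop into C.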
duplicate key value violates unique constraint ... Django error I am trying to make a get() query on a model. Both of the parameters I'm using to query the model are foreign keys. The models look something like this:

    class model_1(models.Model):
        field_1 = models.ForeignKey(model_2)
        field_2 = models.CharField(max_length=512)
        field_3 = models.ForeignKey(model_3)

        class Meta:
            unique_together = ("field_1", "field_3")

and I'm trying to run this query:

    m = model_1.objects.get(field_1='something', field_2='something_1')

But it throws back an error, duplicate key value violates unique constraint..., along with DETAIL: Key (model_1_id, model_3_id)=(1339, 5) already exists. I'm not able to understand why the error is about duplicate keys when I'm only trying to read the entries. It would have made sense to me if I were trying to insert a new record and had conflicting keys. Thanks!
You are trying to create a unique index on (field_1, field_3) over data that already contains duplicates. Please check your data: there must be two rows with the same values of field_1 and field_3, i.e. (1339, 5), which violates the unique constraint.
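To track down the offending rows, a GROUP BY ... HAVING query works on any SQL backend. A self-contained sketch with an in-memory SQLite table standing in for the real one (the table and column names mirror the question; the data is made up):

```python
import sqlite3

# Minimal stand-in for the asker's table: find (field_1, field_3) pairs
# that occur more than once, which is what blocks a unique constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_1 (field_1 INTEGER, field_3 INTEGER)")
conn.executemany("INSERT INTO model_1 VALUES (?, ?)",
                 [(1339, 5), (1339, 5), (1, 2)])

dupes = conn.execute(
    "SELECT field_1, field_3, COUNT(*) FROM model_1 "
    "GROUP BY field_1, field_3 HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [(1339, 5, 2)] -- the duplicated pair and its count
```

Run the same GROUP BY against the real table to see which pairs must be de-duplicated before the constraint can be created.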
How to fetch data from AngularJS into a Python script I am new to Python, so I hope you can help me solve my problem. I want to fetch the Username and Password from AngularJS into a Python script. What mechanism should I follow to achieve this goal?

HTML code:

    <form action="" method="post" id="homeTitle">
    <table width="20%">
    <tbody>
    <tr><td></td><td></td></tr>
    <tr>
      <td><label class='lblUserEmail'>Email</label></td>
      <td align='center'><input id='Email' type='text' style='background-color: rgb(250, 255, 189);' ng-model='Email'></td>
    </tr>
    <tr>
      <td><label class='lblPassword'>Password</label></td>
      <td align='center'><input id='password' type='password' style='background-color: rgb(250, 255, 189);' ng-model='password'></td>
    </tr>
    <tr>
      <td></td>
      <td align='right'><button id='Login' name='Login' class="btn btn-primary btn-lg" ng-click="checkLogin()">Login</button></td>
    </tr>
    </tbody>
    </table>
    </form>

AngularJS code:

    $http.post("/python/index.py/", user_data)
        .success(function(response) {
            $http.defaults.headers.common['Authorization'] = 'Token ' + response.token;
            var toke = response.token;
            $rootScope.$broadcast('event:login-confirmed', toke);
            jQuery("#loginBox").slideUp(function(){ jQuery('#blanket').css({'display':'none'}); });
            jQuery('#homeTitle').css({'display':'none'});
            jQuery(".bottomImages").slideDown();
        });

I am not using the Django Python framework. I am trying to create my own framework using WSGI and Python, so please help me resolve this problem.
It seems you have tagged this post as Django but have not actually used Django. I suggest that you read the Django tutorial; part 4 covers how to create a simple form like yours. In simple terms, your problem is that you have created the frontend to show the form but do not have a backend to act on the form once it is submitted. You can use any server-side web framework, such as Flask, Pyramid, or even Node.js, for this.
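Since the asker wants to stay framework-free, here is a minimal WSGI sketch of the missing backend: it reads the JSON body that the Angular $http.post sends and replies with a token. The handler name and the dummy token scheme are my own illustration, not from the answer:

```python
import json
from io import BytesIO

# Minimal WSGI app: read the POSTed JSON body and pull out the
# credentials the Angular code sends. (Dummy token logic for illustration.)
def application(environ, start_response):
    length = int(environ.get("CONTENT_LENGTH") or 0)
    body = environ["wsgi.input"].read(length)
    data = json.loads(body)
    # A real backend would verify data["userName"] / data["password"] here.
    reply = json.dumps({"token": "dummy-token-for-" + data["userName"]})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [reply.encode("utf-8")]

# Simulate the Angular $http.post without a network:
payload = json.dumps({"userName": "alice", "password": "secret"}).encode()
environ = {"CONTENT_LENGTH": str(len(payload)), "wsgi.input": BytesIO(payload)}
statuses = []
result = application(environ, lambda status, headers: statuses.append(status))
print(result[0])  # b'{"token": "dummy-token-for-alice"}'
```

Serve it with wsgiref.simple_server (or any WSGI server) and point the $http.post URL at it.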
Django ModelAdmin: fieldset within fieldset What I've been asked to create is an admin page with the following layout:

    Fieldset 1 name
        Section 1 name
            field 1
            field 2
    Fieldset 2 name
        Section 2 name
            field 3

and so on. I can create the fieldsets with ModelAdmin.fieldsets, obviously, but it's the inner grouping, or "Sections", that I'm having difficulty with. The fields to display all belong to the same model, so I can't achieve this with inlines (or at least I believe I can't). I'm pretty sure the only way to achieve what I want is to create a custom template and bypass Django's default loveliness, but I'd ideally like to extend the Django admin, because this Fieldset -> Section -> fields layout will be required for several models and I don't want to manually generate forms and templates for each model if I can help it. Could anyone point me in the right direction to achieve the above layout? Thanks
Unfortunately you're out of luck: the Django admin does not support nested fieldsets and has no way to output other structural tags, except by customising the templates. You can have a look at django-betterforms: http://django-betterforms.readthedocs.org/en/latest/basics.html It supports nested fieldsets, so it will help when you are customising your admin templates.
Tkinter PhotoImage issues I am trying to run sample code and I am running into a problem where Sublime says _tkinter.TclError: couldn't recognize data in image file "C:\Users\atave\Dropbox\Python\tkinter Python Tutorial\Labels\prac.png". Can anyone tell me why? I have already set up Sublime as an exception in the firewall; does it have to do with the location of the image?

    import tkinter as tk
    from tkinter import ttk

    # create the root window
    root = tk.Tk()
    root.geometry('300x200')
    root.resizable(False, False)
    root.title('Label Widget Image')

    # display an image label
    photo = tk.PhotoImage(file="C:\\Users\\atave\\Dropbox\\Python\\tkinter Python Tutorial\\Labels\\prac.png")
    image_label = ttk.Label(root, image=photo, padding=5)
    image_label.pack()

    root.mainloop()
tk.PhotoImage doesn't work that well with .png files (older versions of Tk only recognise formats such as GIF and PGM/PPM). I would suggest converting the image to a .gif/.bmp file, or using the PIL (Pillow) module:

    from PIL import ImageTk

    photo = ImageTk.PhotoImage(file="C:\\Users\\atave\\Dropbox\\Python\\tkinter Python Tutorial\\Labels\\prac.png")

ImageTk.PhotoImage() is a drop-in replacement for tk.PhotoImage, so pass its result straight to the Label; don't wrap it in tk.PhotoImage again.
Python - multiplication between variables that belong to the same class How do I make an operation on class instances return another data type (list or int)? I have two variables that belong to the same class, and I want to apply an operator, for example multiplication, to both of them, but it cannot be done because both of them have the class data type. For example:

    class Multip:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __repr__(self):
            return "{} x {}".format(self.x, self.y)

        def __str__(self):
            return "{}".format(self.x * self.y)

        def __mul__(self, other):
            thisclass = self.x * self.y
            otherclass = other
            return thisclass * otherclass

    a = Multip(5, 6)
    b = Multip(7, 5)
    c = a * b
    print(c)

This returns an error:

    TypeError                                 Traceback (most recent call last)
    in ()
         14 a = Multip(5,6)
         15 b = Multip(7,5)
    ---> 16 c = a*b
         17 print(c)

    in __mul__(self, other)
         10         thisclass = self.x*self.y
         11         otherclass = other
    ---> 12         return thisclass * otherclass
         13

    TypeError: unsupported operand type(s) for *: 'int' and 'Multip'
To get this to work, write:

    otherclass = other.x * other.y

instead of:

    otherclass = other

This makes otherclass an int, so the multiplication will work.
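Putting the fix into the question's class gives a runnable version (the __repr__/__str__ methods are omitted here for brevity):

```python
# The question's class with the suggested fix applied: __mul__ multiplies
# the products of both instances instead of the raw Multip object.
class Multip:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __mul__(self, other):
        thisclass = self.x * self.y
        otherclass = other.x * other.y  # the fix: use other's product too
        return thisclass * otherclass

a = Multip(5, 6)
b = Multip(7, 5)
print(a * b)  # 30 * 35 = 1050
```

Because __mul__ now returns a plain int, the result can be used in further arithmetic directly.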
Inner workings of NumPy's logical_and.reduce I'm wondering how np.logical_and.reduce() works. If I look at the logical_and documentation, it presents it as a function with certain parameters. But when it is used with reduce, it doesn't get any arguments. When I look at the reduce documentation, I can see it has ufunc.reduce as its definition. So I'm left wondering: what mechanisms are used when I call np.logical_and.reduce()? What does logical_and as a ufunc represent in that snippet: a function, an object, or something else?
I'm not sure what your question is. Using Python's help, the parameters to reduce are as shown below. reduce acts as a method of the ufunc; it's reduce that takes the arguments at run time.

    In [1]: import numpy as np
            help(np.logical_and.reduce)

    Help on built-in function reduce:
    reduce(...) method of numpy.ufunc instance
        reduce(a, axis=0, dtype=None, out=None, keepdims=False)
        Reduces `a`'s dimension by one, by applying ufunc along one axis.

Playing with this:

    a = np.arange(12.0) - 6.0
    a.shape = 3, 4
    a
    Out[6]:
    array([[-6., -5., -4., -3.],
           [-2., -1.,  0.,  1.],
           [ 2.,  3.,  4.,  5.]])

    np.logical_and.reduce(a, axis=0)
    Out[7]: array([ True,  True, False,  True], dtype=bool)
    # False for the zero in column 2

    np.logical_and.reduce(a, axis=1)
    Out[8]: array([ True, False,  True], dtype=bool)
    # False for the zero in row 1

Perhaps clearer if the dimensions are kept:

    np.logical_and.reduce(a, axis=0, keepdims=True)
    Out[12]: array([[ True,  True, False,  True]], dtype=bool)

    np.logical_and.reduce(a, axis=1, keepdims=True)
    Out[11]:
    array([[ True],
           [False],   # Row 1 contains a zero.
           [ True]], dtype=bool)

The reduction ands each element along the chosen axis with the cumulative result brought forward. This is a Python equivalent; I'm sure NumPy will be more efficient:

    res = a[0] != 0              # the initial value for the result brought forward
    for arr in (a != 0)[1:]:
        print(res, arr)
        res = np.logical_and(res, arr)   # logical-and res with a != 0
    print('\nResult: ', res)

    Out:
    [ True  True  True  True] [ True  True False  True]
    [ True  True False  True] [ True  True  True  True]

    Result:  [ True  True False  True]

Hope this helps, or helps clarify what your question is.

Edit: link to docs and a callable-object example. See the ufunc documentation; the method documentation is about 60% down the page. To understand a callable object with methods, here's a ListUfunc class giving a very basic Python-list analogue of NumPy ufuncs:

    class ListUfunc:
        """ Create 'ufuncs' to process lists. """
        def __init__(self, func, init_reduce=0):
            self._do = func                # _do is the scalar func to apply.
            self.reduce0 = init_reduce     # The initial value for the reduction method.
            # Some reductions start from zero; logical and starts from True.

        def __call__(self, a, b):
            """ Apply the _do method to each pair of a and b elements. """
            res = []
            for a_item, b_item in zip(a, b):
                res.append(self._do(a_item, b_item))
            return res

        def reduce(self, lst):
            bfwd = self.reduce0
            for item in lst:
                bfwd = self._do(bfwd, item)
            return bfwd

    a = range(12)
    b = range(12, 24)

    plus = ListUfunc(lambda a, b: a + b)
    plus(a, b)
    Out[6]: [12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34]
    plus.reduce(a)
    Out[7]: 66
    plus.reduce(b)
    Out[8]: 210

    log_and = ListUfunc(lambda a, b: bool(a and b), True)
    log_and(a, b)
    Out[25]: [False, True, True, True, True, True, True, True, True, True, True, True]
    log_and.reduce(a)
    Out[27]: False   # a contains a zero
    log_and.reduce(b)
    Out[28]: True    # b doesn't contain a zero
How can I print every value of each attribute of a class in Python? I want to print each value of a class instance, but I don't know how to do it, and I don't understand why my method doesn't work.

    class test:
        def __init__(self, a=1, b=2, c=3):
            self.a = a
            self.b = b
            self.c = c

    classobject = test()
    for attr in dir(classobject):
        if not attr.startswith('__'):
            print(str(attr) + ' value= ' + classobject.attr)

I get the error AttributeError: 'test' object has no attribute 'attr'
classobject.attr specifically looks for an attribute named attr in classobject; attr does not get magically replaced at runtime by the appropriate value. What you need is the getattr function:

    print(attr + ' value= ' + str(getattr(classobject, attr)))

(Note the str() call around the value: the attributes in your example are ints, so they need converting before concatenation.)
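A runnable version of the fix, with the question's class (renamed Test here) inlined so the snippet stands alone:

```python
# Hypothetical class mirroring the question's `test` class.
class Test:
    def __init__(self, a=1, b=2, c=3):
        self.a = a
        self.b = b
        self.c = c

obj = Test()
lines = []
for attr in dir(obj):
    if not attr.startswith('__'):
        # getattr looks the attribute up by name at run time; str() is
        # needed because the attribute values here are ints.
        lines.append(attr + ' value= ' + str(getattr(obj, attr)))
print(lines)  # ['a value= 1', 'b value= 2', 'c value= 3']
```

For instance attributes specifically, vars(obj).items() is a common alternative to filtering dir().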
Pyglet OpenGL VBOs I'm new to pyglet. Before this I tried to use PyOpenGL with Pygame, but PyOpenGL kept raising weird NullFunctionErrors, so I moved to pyglet. I tried out this code, and it runs perfectly:

    from pyglet.gl import *

    window = pyglet.window.Window()

    vertices = [
        0, 0,
        window.width, 0,
        window.width, window.height]
    vertices_gl = (GLfloat * len(vertices))(*vertices)

    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(2, GL_FLOAT, 0, vertices_gl)

    @window.event
    def on_draw():
        glClear(GL_COLOR_BUFFER_BIT)
        glLoadIdentity()
        glDrawArrays(GL_TRIANGLES, 0, len(vertices) // 2)

    pyglet.app.run()

I tried to rewrite this to use VBOs, but I get a black window. What's wrong with my code?

    from pyglet.gl import *

    window = pyglet.window.Window()

    vertices = [
        0, 0,
        window.width, 0,
        window.width, window.height]
    vertices_gl = (GLfloat * len(vertices))(*vertices)

    glEnableClientState(GL_VERTEX_ARRAY)

    buffer = (GLuint)(0)
    glGenBuffers(1, buffer)
    glBindBuffer(GL_ARRAY_BUFFER_ARB, buffer)
    glBufferData(GL_ARRAY_BUFFER_ARB, 4*3, vertices_gl, GL_STATIC_DRAW)
    glVertexPointer(2, GL_FLOAT, 0, 0)

    @window.event
    def on_draw():
        glClear(GL_COLOR_BUFFER_BIT)
        glLoadIdentity()
        glDrawArrays(GL_TRIANGLES, 0, len(vertices) // 2)

    @window.event
    def on_resize(width, height):
        glViewport(0, 0, width, height)
        glMatrixMode(gl.GL_PROJECTION)
        glLoadIdentity()
        glOrtho(0, width, 0, height, -1, 1)
        glMatrixMode(gl.GL_MODELVIEW)

    pyglet.app.run()
OK, I received the answer in a comment: the problem was that 12 bytes wasn't enough for six floats. Each float uses 4 bytes, so the buffer needs 4 * 6 = 24 bytes:

    glBufferData(GL_ARRAY_BUFFER_ARB, 4 * len(vertices), vertices_gl, GL_STATIC_DRAW)
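The byte math can be checked without any OpenGL at all, since the vertex array is just a ctypes float array (GLfloat is a C float):

```python
import ctypes

# Six C floats (the question's three 2D vertices) occupy 24 bytes, not 12.
vertices = [0.0, 0.0, 300.0, 0.0, 300.0, 200.0]  # made-up coordinates
vertices_gl = (ctypes.c_float * len(vertices))(*vertices)
print(ctypes.sizeof(vertices_gl))  # 24
```

Using ctypes.sizeof(vertices_gl) directly in the glBufferData call avoids hard-coding the size altogether.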
How to find the _id of a document other than during insertion I have created several documents and inserted them into MongoDB, using Python. Is there a way I can get the _id of a particular record? I know that we get the _id during insertion, but if I need to use it at a later point, is there a way I can get it, say by using the find() command?
You can use a projection to get a particular field from the document, like this:

    db.collection.find({query}, {_id: 1})

This will return only _id. See http://docs.mongodb.org/manual/reference/method/db.collection.find/
How to parse JSON from a Python service with unicode? SyntaxError: Unexpected token u I'm loading JSON from http://pythond3jsmashup.appspot.com/chart using AngularJS:

    $http.get('http://pythond3jsmashup.appspot.com/chart');

I got this service from an example of how to get Google BigQuery data as a service using Python (http://code.tutsplus.com/tutorials/data-visualization-app-using-gae-python-d3js-and-google-bigquery--cms-22175). Angular has a problem parsing the JSON with unicode. Is there any way around this with Angular, or do you have to modify your Python to leave out the unicode character? I'm using Angular 1.3, and I get:

    angular.js:11607 SyntaxError: Unexpected token u

I can see it's failing at JSON.parse. The data looks like this coming back from the Python service:

    {u'kind': u'bigquery#queryResponse', u'rows': [{u'f': [{u'v': u'brave'}]}, {u'f': [{u'v': u'forfeits'}]}, {u'f': [{u'v': u'holding'}]}, {u'f': [{u'v': u'profession'}]}, {u'f': [{u'v': u'Condemn'}]}, {u'f': [{u'v': u"fear'st"}]}, {u'f': [{u'v': u'answered'}]}, {u'f': [{u'v': u'religion'}]}, {u'f': [{u'v': u"You're"}]}, {u'f': [{u'v': u'deputy'}]}, {u'f': [{u'v': u'heed'}]}, {u'f': [{u'v': u'generation'}]}, {u'f': [{u'v': u'Boldly'}]}, {u'f': [{u'v': u"'"}]}, {u'f': [{u'v': u'told'}]}, {u'f': [{u'v': u'answer'}]}, {u'f': [{u'v': u'regard'}]}, {u'f': [{u'v': u'Touching'}]}, {u'f': [{u'v': u'meet'}]}, {u'f': [{u'v': u"o'er"}]}, {u'f': [{u'v': u'dawn'}]}, {u'f': [{u'v': u'authorities'}]}, {u'f': [{u'v': u'Mended'}]}, {u'f': [{u'v': u'quality'}]}, {u'f': [{u'v': u'lusty'}]}, {u'f': [{u'v': u'forbid'}]}, {u'f': [{u'v': u'instruments'}]}, {u'f': [{u'v': u'A'}]}, {u'f': [{u'v': u'dreadfully'}]}, {u'f': [{u'v': u'accordingly'}]}, {u'f': [{u'v':
"Angular has a problem with the JSON parse with unicode." No, the problem is that the service literally returns {u'kind': u'bigquery#queryResponse', ...}, which is not JSON. That u right after the { is invalid (which is what the error tells you). Simple proof:

    > JSON.parse("{u'foo': 'bar'}");
    Uncaught SyntaxError: Unexpected token u

Whatever you do, you are not creating the response properly. Use json.dumps. The fact that the linked tutorial claims the response is JSON is an indicator that it may not be a good tutorial. However, if you continue to follow the tutorial, you will see that they return proper JSON in the third part.
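On the Python side the fix is a one-liner. A sketch with a small made-up response in place of the BigQuery result:

```python
import json

# What the service should do: serialize the dict with json.dumps instead
# of letting Python's repr produce the invalid u'...' tokens.
data = {"kind": "bigquery#queryResponse",
        "rows": [{"f": [{"v": "brave"}]}]}

good = json.dumps(data)  # valid JSON that JSON.parse can handle
print(good)

# Unlike the repr-style output, it round-trips cleanly:
assert json.loads(good) == data
```

Serving the json.dumps output (with a Content-Type of application/json) makes the Angular-side error disappear; nothing needs to change in the frontend.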
'str' object is not callable - Python to SQL database I'm trying to enter details into an SQL database through Python. I have already created the database with fields customerID, firstname, surname, town and telephone. When running this function, I get the error 'str' object is not callable on the line where I am trying to INSERT the values of the variables.

    import sqlite3

    # function used to add a new customer; once confirmed, save the customer
    # in a customer list
    def add_customer():
        # open up the clients database
        new_db = sqlite3.connect('clients.db')
        # create a new cursor object to be able to use the database
        c = clients_db.cursor()
        print("Add customer")

        firstname = ""
        while len(firstname) == 0:
            firstname = input("Enter the customer's first name: ")

        surname = ""
        while len(surname) == 0:
            surname = input("Enter the customer's surname: ")

        town = ""
        while len(town) == 0:
            town = input("Enter the customer's town: ")

        telephone = '1'
        while len(telephone) != 11:
            while telephone[0] != '0':
                telephone = input("Please enter the customer's telephone number: ")
                if telephone[0] != '0':
                    print("telephone numbers must begin with zero")
                elif len(telephone) != 11:
                    print("must have 11 numbers")

        # check that data has been entered
        print(firstname, surname, town, telephone)

        # insert data into the customer table
        c.execute('INSERT INTO Customer VALUES (NULL, ?,?,?,?,)'(firstname, surname, town, telephone))
        print("Customer added successfully ...")
        clients_db.commit()
        clients_db.close()

        if choice == 1:
            another = input("Would you like to add another? yes [y] or no [n] --> ")
            if another == 'y':
                add_customer()
            else:
                main_menu()
You forgot a comma between the SQL statement string and the parameters:

    c.execute('INSERT INTO Customer VALUES (NULL, ?,?,?,?,)'(firstname, surname, town, telephone))
    #                                                      ^

so you effectively do this: "some string"(arg1, arg2, ...), trying to treat the string as a function. Simply insert a comma there:

    c.execute(
        'INSERT INTO Customer VALUES (NULL, ?, ?, ?, ?)',
        (firstname, surname, town, telephone))

(The stray trailing comma before the closing parenthesis in your VALUES (...) list would also be a SQL syntax error, so it is dropped here.)
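A self-contained sketch of the corrected call, with an in-memory database and a schema that is my own guess at the asker's five-column Customer table:

```python
import sqlite3

# Hypothetical schema matching the question's five fields.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (customerID INTEGER PRIMARY KEY, "
             "firstname TEXT, surname TEXT, town TEXT, telephone TEXT)")
c = conn.cursor()

# The corrected execute(): note the comma between the SQL string and the
# parameter tuple, and no trailing comma inside VALUES (...).
c.execute('INSERT INTO Customer VALUES (NULL, ?, ?, ?, ?)',
          ('Ada', 'Lovelace', 'London', '01234567890'))
conn.commit()

print(c.execute("SELECT firstname, town FROM Customer").fetchall())
# [('Ada', 'London')]
```

Passing the values as a tuple like this also lets sqlite3 handle quoting, which avoids SQL injection.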
How to print a list in columns? How do I print the output of these functions in aligned columns?

    item = stock.stock_list(location_name)

    for x in sorted(item):
        """Stock list of given location"""
        print(x)

    for y in sorted(item):
        """Stock price of given location and stock list"""
        print("{0:.2f}".format(stock.stock_price(y)))

    for z in sorted(item):
        """Stock qty of given location name and item"""
        print(stock.stock_quantity(location_name, z))

The output is:

    Elephant foot yam
    Kai-lan
    16.25
    13.96
    90
    18

I want it to be:

    Elephant foot yam    16.25    90
    Kai-lan              13.96    18

The first column has to be left-aligned and 20 wide, the second right-aligned and 8 wide, and the third right-aligned and 6 wide.

Also, another question: how do I print location_id from below in brackets?

    print(toptext, location_name, location_id)

The output is:

    Stock summary for location 123456789

and I want it to be:

    Stock summary for Wellington (123456789)

I tried print(toptext, location_name, "(", location_id, ")"), but there is a space inside the brackets, like so: ( 123456789 ). Thank you in advance
For the second question, try this:

    print("(%s)" % location_id)

If you want to print more variables, you can do it like this:

    print("%s %s (%s)" % (toptext, location_name, location_id))
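For the first question (the aligned columns), the format-spec mini-language handles the widths the asker lists: left-aligned 20 wide, right-aligned 8 wide with two decimals, right-aligned 6 wide. A sketch with made-up data in place of the stock functions:

```python
# Rows standing in for (item, stock_price(item), stock_quantity(...)).
rows = [("Elephant foot yam", 16.25, 90),
        ("Kai-lan", 13.96, 18)]

# {:<20} left-aligns in 20 chars, {:>8.2f} right-aligns a 2-decimal float
# in 8 chars, {:>6} right-aligns in 6 chars.
lines = ["{:<20}{:>8.2f}{:>6}".format(name, price, qty)
         for name, price, qty in rows]
for line in lines:
    print(line)
```

The same format string works inside the asker's single loop over sorted(item), formatting all three values per iteration instead of using three separate loops.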
How to write generators and list comprehensions in Python When I think about a problem, thinking in terms of list comprehensions doesn't come naturally to me. What's the best way to think through this? Regards, Ashish
Here's how I think through list comprehensions.

1) I need to output a list.
2) I'm starting with a list/iterable.
3) I either need to perform an action on all the elements and/or choose specific elements from the original list.

That leads me to the following construction:

    output = [mangle(x) for x in selector(input)]

mangle() is some function that alters an element. For example, I might use x.lower() to make an element lower case. I always use x as the iterator; that keeps everything consistent (and I never use x as an iterator in a plain for loop).

selector() is a function that outputs True or False, usually some sort of if condition. I've mostly used this as a test for existence, especially if I'm mangling the output, for example [x for x in input if x].

List comprehensions can be really great. I think they really improve readability and are way more than a neat trick. But remember, they're nothing more than a for loop inlined. It might be easiest to write the for loop first and then translate it into a list comprehension.
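The loop-to-comprehension translation the answer recommends, made concrete (the word list is made up; mangle is str.lower and the selector is simple truthiness):

```python
words = ["Spam", "", "Eggs", None, "Ham"]

# Loop version: write this first.
out_loop = []
for x in words:
    if x:                          # selector: keep truthy elements
        out_loop.append(x.lower())  # mangle: lower-case them

# Then fold it into one line: the append expression moves to the front,
# the if clause moves to the end.
out_comp = [x.lower() for x in words if x]

assert out_loop == out_comp == ["spam", "eggs", "ham"]
```

Swapping the square brackets for parentheses, (x.lower() for x in words if x), gives the lazy generator-expression form of the same thing.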
Gunicorn + subprocesses raises exception [Errno 10] I've stumbled across a weird exception I haven't been able to resolve... can anyone suggest what is wrong, or a new design? I'm running a Gunicorn/Flask application. In the configuration file, I specify some work to do with an on_starting hook [1]. Inside that hook I have some code like this (nothing fancy):

    # Called before the server is started
    my_thread = package.MyThread()
    my_thread.start()

The package.MyThread class looks like the following. The ls command is unimportant; it can be any command.

    class MyThread(threading.Thread):
        """ Run a command every 60 seconds. """

        def __init__(self):
            threading.Thread.__init__(self)
            self.event = threading.Event()

        def run(self):
            while not self.event.is_set():
                ptest = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
                ptest.communicate()
                self.event.wait(60)

        def stop(self):
            self.event.set()

Upon starting the server, I'm always presented with this exception:

    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
        self.run()
      File "__init__.py", line 195, in run
        ptest.communicate()
      File "/usr/lib64/python2.6/subprocess.py", line 721, in communicate
        self.wait()
      File "/usr/lib64/python2.6/subprocess.py", line 1288, in wait
        pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
      File "/usr/lib64/python2.6/subprocess.py", line 462, in _eintr_retry_call
        return func(*args)
    OSError: [Errno 10] No child processes

Can anyone suggest what is going on here? I haven't tried implementing the changes in [2]; they seemed hacky.

[1] - http://gunicorn.org/configure.html#server-hooks
[2] - Popen.communicate() throws OSError: "[Errno 10] No child processes"
The error turns out to be related to signal handling of SIGCHLD. The Gunicorn arbiter intercepts SIGCHLD, which breaks subprocess.Popen: the subprocess module requires that SIGCHLD not be intercepted (at least, this is true for Python 2.6 and earlier). According to bugs.python.org, this bug has been fixed in Python 2.7.
Question about Django tutorial (official, part 3) template code I have a question about the template code from part 3 of the official Django tutorial. In the "Raising a 404 error" section, the official code uses the following to display the "question_text" of an object called "question":

    {{ question }}

I don't understand why this code works. "question" is not a string but an object; it should be "question.question_text".

views.py:

    def detail(request, question_id):
        try:
            question = Question.objects.get(pk=question_id)
        except Question.DoesNotExist:
            raise Http404("Question does not exist")
        return render(request, 'polls/detail.html', {'question': question})

models.py:

    class Question(models.Model):
        question_text = models.CharField(max_length=200)
        pub_date = models.DateTimeField('date published')

        def __str__(self):
            return self.question_text

        def was_published_recently(self):
            return self.pub_date >= timezone.now() - datetime.timedelta(days=1)

Besides, it also works when I use the code {{ question.question_text }}, so I want to know why those two have the same output.
Because you defined a __str__ for the object:

    class Question(models.Model):
        # ...
        def __str__(self):
            return self.question_text

Django implicitly calls str(..) on the variables. If you had not overridden __str__, it would still render something: the __str__ of the superclass. The same happens for non-model objects (ints, lists, tuples, custom class objects, etc.). Since models by default have a __str__ that looks approximately like Model object (id), if you do not override __str__ (nor some superclass in between), the object will be rendered that way. So without a __str__ of your own, it would look like Question object (123) (with 123 the id of the object). Note that by writing {{ question }} you thus depend on the __str__ function: if you later change __str__, the rendering will change. So if you specifically need the question_text, it is better to ask for it explicitly.
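The same mechanism can be seen in plain Python, with no Django involved: string formatting calls str() on the object, which dispatches to __str__:

```python
# A plain class mirroring the tutorial's model (no Django needed): any
# str()-based rendering uses __str__, which is what the template does too.
class Question:
    def __init__(self, question_text):
        self.question_text = question_text

    def __str__(self):
        return self.question_text

q = Question("What's new?")
print("{}".format(q))  # What's new?  -- same output as q.question_text
assert str(q) == q.question_text
```

Delete the __str__ method and the same format call falls back to object's default, something like <__main__.Question object at 0x...>, which is the plain-Python analogue of Django's Question object (123).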
Working with session cookies in Python I am trying to access the Kerio Connect (mail server) API, which uses JSON-RPC as the standard for its API. The Session.login method works just fine; I get back a SESSION_CONNECT_WEBADMIN cookie that gets saved in the session:

    SESSION_CONNECT_WEBADMIN=2332a56d0203f27972ebbe74c09a7f41262e5b224bc6a05e53e62e5872e9b698; path=/admin/; domain=<server>; Secure; HttpOnly; Expires=Tue, 19 Jan 2038 03:14:07 GMT;

But when I then make my next request with the same session, I get back a message telling me my session has expired:

    {
      "jsonrpc": "2.0",
      "id": 2,
      "error": {
        "code": -32001,
        "message": "Session expired.",
        "data": {
          "messageParameters": {
            "positionalParameters": [],
            "plurality": 1
          }
        }
      }
    }

So here's the Python script leading to that message:

    import json
    import requests

    userName = "username"
    password = "password"
    n = 1

    application = {}
    application["name"] = "Log in"
    application["vendor"] = "My Company"
    application["version"] = "1.0"

    params = {}
    params["userName"] = userName
    params["password"] = password
    params["application"] = application

    payload = {}
    payload["jsonrpc"] = "2.0"
    payload["id"] = n
    n += 1
    payload["method"] = "Session.login"
    payload["params"] = params

    headers = {}
    headers["Content-Type"] = "application/json-rpc"

    json_payload = json.dumps(payload, sort_keys=True, indent=2)
    url = "https://<server>:4040/admin/api/jsonrpc/"

    session = requests.Session()
    response = session.post(url, headers=headers, data=json_payload, verify=False)
    # Results in a token / a cookie with that token

    payload2 = {}
    payload2["jsonrpc"] = "2.0"
    payload2["id"] = n
    n += 1
    payload2["method"] = "Users.get"

    json_payload2 = json.dumps(payload2, sort_keys=True, indent=2)
    response2 = session.post(url, data=json_payload2, verify=False)
    print(response2.text)

What am I missing here, given my lack of experience?

[EDIT]: I just now realise that when I log in with a browser, two cookies are actually created, each with its own token, whereas I only get one cookie back when I access the API with Python. Why is that?

Cookies received with Chrome: TOKEN_CONNECT_WEBADMIN, SESSION_CONNECT_WEBADMIN
Cookie received with Python: SESSION_CONNECT_WEBADMIN
Working example:

    import json
    import urllib.request
    import http.cookiejar
    import ssl

    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    urllib.request.install_opener(opener)

    server = "https://mail.smkh.ru:4040"
    username = "admin"
    password = "pass"
    ssl._create_default_https_context = ssl._create_unverified_context  # disable ssl cert error

    def callMethod(method, params, token=None):
        """
        Remotely calls given method with given params.
        :param: method string with fully qualified method name
        :param: params dict with parameters of remotely called method
        :param: token CSRF token is always required except for the login method.
                Use the method "Session.login" to obtain this token.
        """
        data = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        req = urllib.request.Request(url=server + '/admin/api/jsonrpc/')
        req.add_header('Content-Type', 'application/json')
        if token is not None:
            req.add_header('X-Token', token)
        httpResponse = opener.open(req, json.dumps(data).encode())
        if httpResponse.status == 200:
            body = httpResponse.read().decode()
            return json.loads(body)

    session = callMethod("Session.login",
                         {"userName": username, "password": password,
                          "application": {"vendor": "Kerio",
                                          "name": "Control Api Demo",
                                          "version": "8.4.0"}})
    token = session["result"]["token"]

    users = callMethod("Users.get", {
        "query": {
            "fields": ["id", "loginName", "fullName", "description", "authType",
                       "itemSource", "isEnabled", "isPasswordReversible",
                       "emailAddresses", "emailForwarding", "userGroups", "role",
                       "itemLimit", "diskSizeLimit", "consumedItems", "consumedSize",
                       "hasDomainRestriction", "outMessageLimit", "effectiveRole",
                       "homeServer", "migration", "lastLoginInfo", "accessPolicy"],
            "start": 0,
            "limit": 200,
            "orderBy": [{"columnName": "loginName", "direction": "Asc"}]
        },
        # Example: "domainId": "keriodb://domain/908c1118-94ef-49c0-a229-ca672b81d965"
    }, token)

    try:
        user_names = []
        for user in users["result"]["list"]:
            print(user["fullName"], " (", user["loginName"], ")", sep="")
            user_names.append(user["fullName"])
    except KeyError:
        print('Error: {}'.format(users['error']['message']))
    finally:
        callMethod("Session.logout", {}, token)
What optimization algorithm is used within the .fit() method in scikit-learn? I'm curious what's "under the hood" of a model's fit() method in the scikit-learn library. If it depends on the particular model, then let's say it's linear_model.LinearRegression.fit().
It depends on the particular model. Some methods have analytical solutions, like Linear, Ridge, and Kernel Ridge regression. Some methods, like neural networks, use numerical solvers. In some cases the documentation has details on the exact method used, or there are options to set it yourself.
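As an illustration of what an "analytical solution" means for plain linear regression, here is the closed-form normal equation, w = (XᵀX)⁻¹ Xᵀy, on a tiny made-up dataset. (Note this is a sketch of the math, not scikit-learn's actual code path: LinearRegression reportedly solves the least-squares problem with a more numerically stable SVD-based routine rather than the normal equations.)

```python
import numpy as np

# Design matrix with a column of ones for the intercept; targets follow
# y = 1 + 2*x exactly, so the fit should recover w = [1, 2].
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

# Normal equation: solve (X^T X) w = X^T y.
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)  # approximately [1. 2.]
```

Iterative models (SGDRegressor, neural networks) instead start from some initial w and repeatedly step toward lower loss, which is why they expose solver and learning-rate options while LinearRegression does not.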
No handlers could be found for logger paramiko I am using the paramiko module for an SSH connection, and I am facing the problem below:

    No handlers could be found for logger

I cannot find the reason for this problem. I tried to get a solution from the link below, but with no luck:

    No handlers could be found for logger "paramiko.transport"

I am using the code below:

    1.  ssh = paramiko.SSHClient()
    2.  ssh.set_missing_host_key_policy(
    3.      paramiko.AutoAddPolicy())
    4.  ssh.connect(serverip, username=username,
    5.      password=password, timeout=None)
    6.  transport = ssh.get_transport()
    7.  transport.set_keepalive(30)
    8.  stdin, stdout, stderr = ssh.exec_command(cmd)
    9.  tables = stdout.readlines()
    10. ssh.close()

I think I am getting this problem at line 8. Please advise how I can solve this.
I found the solution on this website. Basically, you just need to add one line:

    paramiko.util.log_to_file("filename.log")

Then all connections will be logged to that file.
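For context, the message itself is Python 2's logging module warning that the "paramiko" logger has no handler attached; log_to_file fixes it by attaching one. An equivalent stdlib-only sketch (my own, not from the answer) configures logging directly:

```python
import logging

# Any basicConfig() call attaches a handler to the root logger, which the
# "paramiko" logger inherits, so the "No handlers could be found" warning
# goes away. Route it to a file to mimic paramiko.util.log_to_file.
logging.basicConfig(level=logging.WARNING)

paramiko_logger = logging.getLogger("paramiko")
print(paramiko_logger.getEffectiveLevel() == logging.WARNING)  # True
```

Setting level=logging.DEBUG here instead would surface paramiko's transport-level chatter, which is handy when debugging connection problems like the one in the question.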
Creating a C++ wrapper in Python using a .dylib I'm basically trying to develop a wrapper in Python that can access a library I have developed in C++. At the minute it is very basic, as this is just for testing purposes. In my .h file I have the following:

    #include <iostream>

    class Foo {
    public:
        void bar() {
            std::cout << "Hello world";
        }
    };

And in my Python file I call it using the following:

    from ctypes import cdll
    lib = cdll.LoadLibary('./libfoo.1.dylib')

    class Foo(object):
        def __init__(self):
            self.obj = lib.Foo_new()
        def bar(self):
            lib.Foo_bar(self.obj)

    f = Foo()
    f.bar()

I have created a .dylib since I don't believe it is possible to create a shared library with GCC on a Mac, but I could be wrong. I'm getting the following errors:

    Traceback (most recent call last):
      File "main.py", line 3, in <module>
        lib = cdll.LoadLibary('./libfoo.1.dylib')
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 423, in __getattr__
        dll = self._dlltype(name)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 353, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: dlopen(LoadLibary, 6): image not found

But it is found, and the library (.dylib) is in fact in the same directory. Where am I going wrong?
The ctypes library doesn't know about C++; you need to write your shared library in C (or expose a C interface from it) if you want to use ctypes. You can look at something like SWIG (http://www.swig.org) instead, which can hook into a shared library written in C++.
How do I put together members of two Python lists that have the same indexes? How do I combine the first member of one list with the first member of a second list, the second member of the first list with the second member of the second list, and so on, until the last members of the lists are reached?
This is a basic zip function. For example:

list_1 = [1, 2, 3]
list_2 = ["s", "u", "p"]
final_list = list(zip(list_1, list_2))
print(final_list)

This will return a list of tuples such that the first value from the first list is paired with the first value of the second list, iterating over each value up until the minimum length between the two lists is satisfied. In the example, this would be the output:

[(1, 's'), (2, 'u'), (3, 'p')]

It's important to note that if one of the lists is longer than the other, the extra values will be excluded. There are a ton of ways to handle this, but it would help to know what you're wanting to accomplish.
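One stdlib way to keep the extra values instead of dropping them is itertools.zip_longest, which pads the shorter list with a fill value:

```python
from itertools import zip_longest

list_1 = [1, 2, 3, 4]
list_2 = ["s", "u", "p"]

# Pairs up to the LONGER list, padding the shorter one with None
padded = list(zip_longest(list_1, list_2, fillvalue=None))
print(padded)  # [(1, 's'), (2, 'u'), (3, 'p'), (4, None)]
```

Pass a different fillvalue (e.g. fillvalue=0) if None is not a convenient placeholder.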
Class variables - missing one required positional argument I have two scripts. The first containing a class, with class variables defined and a function using those class variables. The second script calls the class and function within a function of it's own.This sort of set up works fine for functions inside a class, however adding class variables is causing me the below error. Can anyone explain why, please and what I need to do to fix?Thanksobj1.py:class my_test_class(): def __init__(self): self.test1 = 'test1' self.test2 = 'test2' self.test3 = 'test3' def test_func(self, var): new_var = print(var, self.test1, self.test2, self.test3)obj2.pyfrom obj1 import *def import_test(): target_var = my_test_class.test_func('my test is:') print(target_var)import_test()Error:Traceback (most recent call last): File "G:/Python27/Test/obj2.py", line 9, in <module> import_test() File "G:/Python27/Test/obj2.py", line 6, in import_test target_var = my_test_class.test_func('my test is:')TypeError: test_func() missing 1 required positional argument: 'var'
As the commenters have pointed out, since test_func is an instance method, we need to call it on an instance of the class. Also, print returns None, so doing new_var = print(var, self.test1, self.test2, self.test3) assigns new_var = None. If you want to return the value, assign new_var = ' '.join([var, self.test1, self.test2, self.test3]), which builds a string with a space between the words, and return new_var.

Combining all of this, the code comes out as follows:

class my_test_class():
    def __init__(self):
        self.test1 = 'test1'
        self.test2 = 'test2'
        self.test3 = 'test3'

    def test_func(self, var):
        # Assign value to new_var and return it
        new_var = ' '.join([var, self.test1, self.test2, self.test3])
        return new_var

def import_test():
    # Create an instance of my_test_class
    test_class = my_test_class()
    # Call test_func using the instance
    print(test_class.test_func('my test is:'))

import_test()

The output will be: my test is: test1 test2 test3
How to retrieve model column value dynamically in python Suppose I have a model object. print (dir(table))['...', 'col1', 'col2', '...']# this will output column 1 valueprint (table.col1)I would like to do it dynamically, for example:col = 'col1'table.colThx
You want to use getattr when doing dynamic attribute retrieval in Python:

col = 'col1'
getattr(table, col)
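A short sketch (with a hypothetical object standing in for the model) showing getattr with a default value, plus its write-side counterpart setattr:

```python
class Table:
    col1 = "value1"

table = Table()

col = "col1"
print(getattr(table, col))              # value1

# The third argument is a default, returned instead of raising AttributeError
print(getattr(table, "missing", None))  # None

# setattr assigns an attribute by name, the write-side counterpart
setattr(table, "col2", "value2")
print(table.col2)                       # value2
```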
PCA decomposition with python: feature relevances I'm following this topic: How can I use PCA/SVD in Python for feature selection AND identification? We decompose our data set in Python with the PCA method, using sklearn.decomposition.PCA for this. With the components_ attribute we get all components. Now we have a very similar goal: we want to take only the first several components (this part is not a problem) and see what proportion each input feature has in every PCA component (to know which features are most important for us). How is this possible to do? Another question is: does the Python library have other implementations of Principal Component Analysis?
what proportion each input feature has in every PCA component (to know which features are most important for us). How is this possible to do?

The components_ array has shape (n_components, n_features), so components_[i, j] is already giving you the (signed) weight of the contribution of feature j to component i.

If you want to get the indices of the top 3 features contributing to component i irrespective of the sign, you can do:

numpy.abs(pca.components_[i]).argsort()[::-1][:3]

Note: the [::-1] notation makes it possible to reverse the order of an array:

>>> import numpy as np
>>> np.array([1, 2, 3])[::-1]
array([3, 2, 1])

Another question is: does the Python library have other implementations of Principal Component Analysis?

PCA is just a truncated Singular Value Decomposition of the centered dataset. You can use numpy.linalg.svd directly if you wish. Have a look at the source code of the scikit-learn implementation of PCA for details.
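A runnable sketch of the argsort trick on a made-up components_-style array (no fitted PCA needed; the array below just stands in for pca.components_):

```python
import numpy as np

# Stand-in for pca.components_: 2 components over 4 features
components = np.array([
    [0.1, -0.9,  0.3,  0.2],
    [0.7,  0.1, -0.6,  0.2],
])

# Top-3 features for component 0, ranked by absolute weight, largest first
top3 = np.abs(components[0]).argsort()[::-1][:3]
print(top3)  # [1 2 3]
```

Here feature 1 dominates component 0 (weight -0.9), so it comes out first despite its negative sign.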
How can I locate this image using Selenium (Python script)? I would like to know how to locate this img element with Selenium from a Python script. I have tried:

driver.find_element_by_xpath("//div[@id='welcome-tooltip-dialog']/img[1]").click()
driver.find_element_by_id("tooltip-dialog-list")
driver.find_element_by_css_selector("img[src*='images/close-button.png']")

But none of them work properly.
The tooltip is inside an <iframe> tag, you need to switch to it first:

# switch to the iframe
iframe = driver.find_element_by_tag_name('iframe')
driver.switch_to.frame(iframe)

# close the tooltip
driver.find_element_by_css_selector('#welcome-tooltip-dialog > .close').click()

# switch back
driver.switch_to.default_content()
SQLAlchemy cannot find a class name Simplified, I have the following class structure (in a single file):Base = declarative_base()class Item(Base): __tablename__ = 'item' id = Column(BigInteger, primary_key=True) # ... skip other attrs ... class Auction(Base): __tablename__ = 'auction' id = Column(BigInteger, primary_key=True) # ... skipped ... item_id = Column('item', BigInteger, ForeignKey('item.id')) item = relationship('Item', backref='auctions')I get the following error from this:sqlalchemy.exc.InvalidRequestErrorInvalidRequestError: When initializing mapper Mapper|Auction|auction, expression 'Item' failed to locate a name ("name 'Item' is not defined"). If this is a class name, consider adding this relationship() to the Auction class after both dependent classes have been defined.I'm not sure how Python cannot find the Item class, as even when passing the class, rather than the name as a string, I get the same error. I've been struggling to find examples of how to do simple relationships with SQLAlchemy so if there's something fairly obvious wrong here I apologise.
This all turned out to be because of the way I've set SQLAlchemy up in Pyramid. Essentially you need to follow this section to the letter and make sure you use the same declarative_base instance as the base class for each model.I was also not binding a database engine to my DBSession which doesn't bother you until you try to access table metadata, which happens when you use relationships.
Running two separate (non-nested) for loops within one CSV file I am currently trying to create a config template using CSV rows and columns as variable output into a text file for one of my network configurations. Here is the code:import sysimport osimport csvwith open('VRRP Mapping.csv', 'rb') as f: reader = csv.reader(f) myfile = open('VRRP fix.txt', 'w') next(reader, None) myfile.write('*' * 50 + '\n' + 'VLAN interfaces for Core A\n' + '*' * 50 + '\n\n') for row in reader: #First, for the A side myfile.write('interface vlan ' + str(row[0]) + '\n') myfile.write('no ip vrrp ' + str(row[7]) + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' ' + row[9] + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' adver-int 10\n') myfile.write('ip vrrp ' + str(row[8]) + ' backup-master enable\n') myfile.write('ip vrrp ' + str(row[8]) + ' holddown-timer 60\n') myfile.write('ip vrrp ' + str(row[8]) + ' priority ' + str(row[3]) + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' enable\n') myfile.write('exit\n\n') myfile.write('*' * 50 + '\n' + 'VLAN interfaces for Core B\n' + '*' * 50 + '\n\n') for row in reader: #And then the B side myfile.write('interface vlan ' + str(row[0]) + '\n') myfile.write('no ip vrrp ' + str(row[7]) + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' ' + row[9] + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' adver-int 10\n') myfile.write('ip vrrp ' + str(row[8]) + ' backup-master enable\n') myfile.write('ip vrrp ' + str(row[8]) + ' holddown-timer 60\n') myfile.write('ip vrrp ' + str(row[8]) + ' priority ' + str(row[5]) + '\n') myfile.write('ip vrrp ' + str(row[8]) + ' enable\n') myfile.write('exit\n\n') myfile.close()The problem I am having is after the first for loop. The 'VLAN interfaces for core b' shows up, but everything in the second for loop does not output to the text file at all.Suggestions?
You have consumed all lines in your reader object and need to start from the beginning again. Before the second for loop add:

f.seek(0)
next(reader, None)

The seek(0) brings the pointer back to the beginning of the file so you can loop over it again, and the next(reader, None) skips the header row a second time, just as you did before the first loop; without it the header would be processed as data. http://www.tutorialspoint.com/python/file_seek.htm
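A minimal, self-contained demonstration of the rewind, using an in-memory file via io.StringIO in place of the real CSV (the column names here are made up):

```python
import csv
import io

data = io.StringIO("vlan,prio\n10,100\n20,110\n")

reader = csv.reader(data)
next(reader, None)                        # skip header
first_pass = [row[0] for row in reader]   # consumes the whole file

data.seek(0)                              # rewind the underlying file
next(reader, None)                        # skip header again
second_pass = [row[0] for row in reader]  # same rows a second time

print(first_pass == second_pass)  # True
```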
Python fails to compile a regex I'm trying to detect all set from a cmake file using a python regex, fo the file below:# Library to includeset(LIB_TO_INCLUDE a b c)# comon code (inclusion in source code)set(SHARED_TO_INCLUDE d e f)# Library to includeset(THIRD_PARTY g h)I'd like to retrieve:LIB_TO_INCLUDE a b cSHARED_TO_INCLUDE d e fTHIRD_PARTY g hI tested the regex set\((?s:[^)])*?\) (get all but ) items following set() using regex101.com (see https://regex101.com/r/aB5tX2/1), it apparently does what I want.Now when I try to run re.compile(r'set\((?s:[^)])*?\)') from Python, I get the error: File "private\python_scripts\convert.py", line 34, in create_sde_files pattern = re.compile(r'set\((?s:[^)])*?\)') File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\re.py", line 223, in compile return _compile(pattern, flags) File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\re.py", line 294, in _compile p = sre_compile.compile(pattern, flags) File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\sre_compile.py", line 568, in compile p = sre_parse.parse(p, flags) File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\sre_parse.py", line 760, in parse p = _parse_sub(source, pattern, 0) File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\sre_parse.py", line 370, in _parse_sub itemsappend(_parse(source, state)) File "b:\dev\vobs_ext_2015\tools_ext\python\Python34_light\lib\sre_parse.py", line 721, in _parse raise error("unknown extension") sre_constants.error: unknown extensionIs this kind of regex not supported by Python?
This should do it: set\(([^)]*?)\)

The "single line" modifier is passed as an argument when you compile the regex:

>>> import re
>>> t = """set(LIB_TO_INCLUDE
...  a
...  b
...  c)"""
>>> pattern = r'set\(([^)]*?)\)'
>>> regex = re.compile(pattern, re.S)
>>> result = regex.search(t).groups()[0]
>>> result
'LIB_TO_INCLUDE \n a\n b\n c'

You can then eliminate the extra spacing and new lines:

>>> ' '.join(x.strip() for x in result.split('\n'))
'LIB_TO_INCLUDE a b c'

Note that in your link, if you switch to "python" in the "Flavors" on the left you'll get the errors that your particular format was causing.

EDIT: to get all (3) matches you need to use <regex>.findall(...) instead of search.

>>> tt = """# Library to include
... set(LIB_TO_INCLUDE
...  a
...  b
...  c)
...
... # comon code (inclusion in source code)
... set(SHARED_TO_INCLUDE d e f)
...
... # Library to include
... set(THIRD_PARTY g h)"""
>>> result = regex.findall(tt)
>>> result
['LIB_TO_INCLUDE \n a\n b\n c', 'SHARED_TO_INCLUDE d e f', 'THIRD_PARTY g h']
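Building on the findall result, a short sketch that splits each match into the variable name and its values, which is usually what you want from a CMake set():

```python
import re

tt = """# Library to include
set(LIB_TO_INCLUDE
 a
 b
 c)

set(SHARED_TO_INCLUDE d e f)
set(THIRD_PARTY g h)"""

regex = re.compile(r'set\(([^)]*?)\)', re.S)

# Map each set() name to its list of values; split() eats the newlines too
sets = {}
for match in regex.findall(tt):
    name, *values = match.split()
    sets[name] = values

print(sets)
# {'LIB_TO_INCLUDE': ['a', 'b', 'c'], 'SHARED_TO_INCLUDE': ['d', 'e', 'f'], 'THIRD_PARTY': ['g', 'h']}
```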
Python3 executing Javascript I need to execute a Javascript function from Python 3 (pass a few variables into it's environment and collect the result). I found pyv8 and python-spidermonkey, but neither supports Python 3.Is there a library for that job?
What you can always do:

- Install Node.js binaries on the server side
- Write a script as a standalone .js file
- Pass input as command-line arguments, pipes (stdin) or files
- Execute the script using subprocess
- Collect the result from a named pipe or file output

.js scripts can be executed in a similar fashion to .sh and .py scripts.
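A sketch of the subprocess step, passing variables in as JSON on stdin and collecting a JSON result from stdout. For portability of the example it invokes the Python interpreter as a stand-in; in real use you would swap sys.executable and -c for "node" and -e (assuming Node.js is on PATH):

```python
import json
import subprocess
import sys

# Stand-in script; with Node.js this would be JavaScript source passed to `node -e`
script = "import json,sys; d=json.load(sys.stdin); print(json.dumps({'sum': d['a'] + d['b']}))"

proc = subprocess.run(
    [sys.executable, "-c", script],   # real use: ["node", "-e", js_source]
    input=json.dumps({"a": 2, "b": 3}),
    capture_output=True,
    text=True,
    check=True,
)
result = json.loads(proc.stdout)
print(result)  # {'sum': 5}
```

JSON over stdin/stdout keeps the Python/JavaScript boundary simple: both sides only need their standard JSON support.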
preclassified trained twitter comments for categorization So I have some 1 million lines of twitter comments data in csv format. I need to classify them in certain categories like if somebody is talking about : "product longevity", "cheap/costly", "on sale/discount" etc. As you can see I have multiple classes to classify these tweets data into.The thing is that how do I even generate/create a training data for such a huge data.Silly question but I was wondering whether/not there are already preclassified/tagged comments data to train our model with? If not then what is the best approach to create a training data for multi-class classification of text/comments ?While I have tried and tested NaiveBayes for sentiment classification for a smaller dataset, could you please suggest which classifier shall I use for this problem (multiple categories to classify the comments into).Thanks!!!
The thing is that how do I even generate/create a training data for such a huge data

I would suggest finding a training data set that could help you with the categories you are interested in. So let's say price-related articles: you might want to find a training data set that is all about price-related articles, and then perhaps expand it by using synonyms for keywords like "cheap". And perhaps look into sentence structure to find out whether the structure of the sentence helps your classifier algorithm.

If not then what is the best approach to create a training data for multi-class classification of text/comments?

Keywords, pulling articles that are all about related categories, and then go from there. Lastly, I suggest being very familiar with NLTK's corpus library; this might also help you with retrieving training data.

As for your last question, I'm kinda confused about what you mean by 'multiple categories to classify the comments into'. Do you mean having multiple classifiers that a particular comment can belong to? So a comment can belong to 1 or more classes?
Django forms: how to display media (javascript) for a DateTimeInput widget? Hello (please excuse me for my bad english ;) ),Imagine the classes bellow:models.pyfrom django import modelsclass MyModel(models.Model): content_type = models.ForeignKey(ContentType, verbose_name=_('content type')) object_id = models.PositiveIntegerField(_('object id')) content_object = generic.GenericForeignKey('content_type', 'object_id') published_at = models.DateTimeField()forms.pyfrom django import formsclass MyModelForm(forms.ModelForm): published_at = forms.DateTimeField(required=False, widget=DateTimeInput)admin.pyfrom django.contrib import adminform django.contrib.contenttypes import genericclass MyModelInline(generic.GenericStackedInline): model = MyModel form = MyModelFormclass MyModelAdmin(admin.ModelAdmin): inlines = [MyModelInline]Problem: the <script> tags for javascript from the DateTimeInput widget don't appear in the admin site (adding a new MyModel object). i.e. these two lines :<script type="text/javascript" src="/admin/media/js/calendar.js"></script><script type="text/javascript" src="/admin/media/js/admin/DateTimeShortcuts.js"></script>Please, do you have any idea to fix it ?Thank you very much and have a good day :)
The standard DateTimeWidget doesn't include any javascript. The widget used in the admin is a different one - django.contrib.admin.widgets.AdminSplitDateTime - and this includes the javascript.
deleting old folders with datetime function I am trying to delete old folders and I am asking does anyone know how to set up a variable that allows me to check the variable 'todaystr' which is today's date and minus 7 days of this string and store it another variable. I am wanting to automatically delete old files after a week. Below shows the variable 'todaystr' being set up.todaystr = datetime.date.today().isoformat() I would like to create a variable 'oldfile' that stores the current date minus 7 days so I can delete the file with this date. Thanks for any help.
import datetime
import os
import shutil

threshold = datetime.datetime.now() + datetime.timedelta(days=-7)
file_time = datetime.datetime.fromtimestamp(os.path.getmtime('/folder_name'))
if file_time < threshold:
    shutil.rmtree('/folder_name')
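The snippet above checks one folder; a sketch that sweeps every subfolder of a directory and removes the stale ones (the function name and the idea of returning the deleted names are my own additions for illustration):

```python
import datetime
import os
import shutil

def remove_old_folders(root, days=7):
    """Delete subfolders of `root` whose mtime is older than `days` days."""
    threshold = datetime.datetime.now() - datetime.timedelta(days=days)
    removed = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if not os.path.isdir(path):
            continue  # skip plain files
        mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
        if mtime < threshold:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

For example, remove_old_folders('/var/backups') would delete week-old backup folders and return their names.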
Multiprocessing debug techniques I'm having trouble debugging a multi-process application (specifically using a process pool in python's multiprocessing module). I have an apparent deadlock and I do not know what is causing it. The stack trace is not sufficient to describe the issue, as it only displays code in the multiprocessing module.Are there any python tools, or otherwise general techniques used to debug deadlocks?
Yah, debugging deadlocks is fun. You can set the logging level to be higher -- see the Python documentation for a description of it, but really quickly:

import multiprocessing, logging

logger = multiprocessing.log_to_stderr()
logger.setLevel(multiprocessing.SUBDEBUG)

Also, add logging for anything in your code that deals with a resource or whatnot that might be in contention. Finally, shot in the dark: spawning off child processes during an import might cause a problem.
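A complementary stdlib technique (not part of the answer above, but often useful for deadlocks): the faulthandler module can dump the traceback of every thread in the current process, showing where each one is blocked. The temp-file plumbing below is only there to make the example self-contained; in a hung program you would dump to stderr or register it on a signal with faulthandler.register().

```python
import faulthandler
import tempfile

# Dump all thread stacks to a file we can read back afterwards.
# faulthandler writes straight to the file descriptor, so the target
# must be a real file (it needs .fileno()), not io.StringIO.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    dump = f.read()

print(dump)  # lines like: Current thread 0x... (most recent call first): File "...", line ...
```

Sending the process a signal hooked up via faulthandler.register(signal.SIGUSR1) lets you get this dump on demand from an already-deadlocked process.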
Comparing pixels between image and colourbar I am working on a project where I have to extract the pixels from a thermal image and convert it into temperature to detect the respiratory time, using which I can detect the respiratory rate. I have done upto detecting the object(i.e the nostrils), but I am not sure how to extract and convert the pixel values to temperature.My idea is to create a colourbar, then compare the pixel values from the image to those of the colourbar and get the temperature values.Is this the right way to do this? (I am a newbie to this).Also I am not sure on how to proceed the way I've mentioned over here(like on how to compare the pixels between image and colourbar).Any help will be grateful.
You don't need to effectively draw the color bar; it suffices to have a representation in an array. Then you can indeed match the colors of the pixels to the colors in the color bar and deduce the "position". If your system is correctly calibrated, you should know the corresponding temperatures.

For different reasons, it could arise that the colors in the image do not match any in the bar; just consider the closest match.

Note that a thermal image is in fact a pseudo-colored representation of a scalar field (temperature is a scalar), color being used to please the human eye. But as regards quantitative processing, only the scalar value matters, and it is a kind of waste to have a scalar image mapped to pseudocolors, and then back to grayscale.

So if possible, prefer to fetch the original image, before pseudocoloring (but do not confuse this with the conversion of the pseudocolors to grayscale intensity).
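A sketch of the closest-match lookup with numpy; the colorbar colors and the temperature range below are invented for illustration and would come from your camera's calibration in practice:

```python
import numpy as np

# Hypothetical colorbar: N RGB entries, each mapped to a temperature
colorbar_rgb = np.array([[0, 0, 255],     # coldest
                         [0, 255, 0],
                         [255, 255, 0],
                         [255, 0, 0]])    # hottest
colorbar_temp = np.linspace(30.0, 40.0, len(colorbar_rgb))  # deg C

def pixel_to_temperature(pixel_rgb):
    """Map one RGB pixel to the temperature of the closest colorbar color."""
    dist = np.linalg.norm(colorbar_rgb - np.asarray(pixel_rgb), axis=1)
    return colorbar_temp[dist.argmin()]

print(pixel_to_temperature([250, 10, 5]))  # closest to red -> 40.0
```

For a whole image you would vectorize this (or use scipy.spatial.cKDTree over the colorbar colors) rather than call it per pixel.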
Script to Check if Torque Jobs are Running/Queued Is there a way to programmatically check if there are running and/or queued jobs? I'm looking for a script (can be Bash, Python, or any other typical language) that does that and then take some actions if necessary, e.g., shutdown the server (or in my case, an instance in Google Compute Engine). I'd also like to check if there are other users logged in before taking actions. I know the command qstat, but not sure how to use it in a script. Same thing for the command who. I'm using Torque and Ubuntu Server.Thank you.EDITGiven the "down votes", I'll try to give more information. I'd like to do something like the following in pseudo-code:if "no jobs queued" and "no jobs running" and "no users logged in" then shutdown machineendifObviously, the missing part is how to detect, in a script file, the conditions within quotes. The shutdown part isn't important here. I'd appreciate if anyone could give me some pointers or share some ideas. Thanks.
import subprocess

def runCmd(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break

def hasRQJob():
    jobs = runCmd('qstat')
    for line in jobs:
        columns = line.split()
        if not columns:
            # skip blank lines so columns[-2] does not raise an IndexError
            continue
        if columns[-2] in ('Q', 'R'):
            return True
    return False

The above example shows how to use Python to execute a shell command and to check whether there is a job running or queued. The function runCmd yields the job information line by line, like this:

Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
1773.cluster              CC               Jianhong.Zhou   00:00:00 C MasterStudents
1774.cluster              RDPSO            Wei.Fang        00:00:00 C PRCI

Thus, it is easy to judge the jobs' status.

For checking whether any user is logged in, see "4 Ways to Identify Who is Logged-In on Your Linux System". By the way, for executing shell commands from Python in general, see "Calling an external command in Python".
CPython - Making the date show up using the date on your computer I'm really new to programming and I only just started using Python, so if anyone could edit the code I put up to make it work how I want it to, then please do. I was wondering if I could make the date show up in my Python program, but make it different for different regions: if someone opened the program in the United Kingdom the day would show up first, and in the US it would show the month or year first, etc. This is what I have so far:

import datetime
currentDate = datetime.date.today()
print(currentDate.strftime('Todays date is: %d %B %Y'))

I currently have it set so it shows the day first, then the month, then the year. Is there any way to make it use a different order depending on what country you're in?
Does this work for you?

>>> import datetime
>>> today = datetime.date.today()
>>> print(today.strftime('%x'))
09/10/15

Specifically, you probably should look at the %c, %x, and %X format codes. See 8.1.8. strftime() and strptime() Behavior for more information on how to use the strftime method.
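%x follows the process's locale, which is what makes the region-dependent ordering automatic. A sketch making that explicit; the region names in the comment are examples, and only 'C' is guaranteed to exist on every system:

```python
import datetime
import locale

# 'C' is the portable default locale; real region names look like
# 'en_GB.UTF-8' (UK, day first) or 'en_US.UTF-8' (US, month first).
locale.setlocale(locale.LC_TIME, 'C')

d = datetime.date(2015, 9, 10)
print(d.strftime('%x'))  # 09/10/15 in the C locale

# locale.setlocale(locale.LC_TIME, '')  # would pick up the user's own locale
```

With locale.setlocale(locale.LC_TIME, '') at program start, the same %d/%m vs %m/%d ordering question resolves itself per machine.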
Bottle Framework PUT request This is my Bottle code:

import sqlite3
import json
from bottle import route, run, request

def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d

def db_connect():
    conn = sqlite3.connect('inventory.db')
    conn.row_factory = dict_factory
    return conn, conn.cursor()

@route('/inventory', method='GET')
def get_inventory():
    conn, c = db_connect()
    c.execute("SELECT id, name, category, location, date, amount FROM inventory")
    result = c.fetchall()
    json_result = json.dumps(result)
    return json_result

@route('/inventory/get/:id', method='GET')
def get_item(id):
    conn, c = db_connect()
    c.execute("SELECT id, name, category, location, date, amount FROM inventory WHERE id=?", (id, ))
    result = c.fetchall()
    json_result = json.dumps(result)
    return json_result

@route('/inventory/new', method='POST')
def add_item():
    name = request.POST.get('name')
    category = request.POST.get('category')
    location = request.POST.get('location')
    date = request.POST.get('date')
    amount = request.POST.get('amount')
    conn, c = db_connect()
    c.execute("INSERT INTO inventory (name, category, location, date, amount) VALUES (?,?,?,?,?)",
              (name, category, location, date, amount))
    new_id = c.lastrowid
    conn.commit()
    c.close()
    return '<p>The entry with id %d has been added to the database</p>' % new_id

@route('/inventory/delete/:id', method='DELETE')
def delete_item(id):
    conn, c = db_connect()
    c.execute("DELETE FROM inventory WHERE id =?", (id, ))
    conn.commit()
    c.close()
    return 'The entry with id %s has been deleted from the database' % id

@route('/inventory/edit/:id', method='PUT')
def edit_item(id):
    name = request.PUT.get('name')
    category = request.PUT.get('category')
    amount = request.PUT.get('location')
    location = request.PUT.get('date')
    date = request.PUT.get('amount')
    conn, c = db_connect()
    c.execute("UPDATE Inventory SET name=?, category=?, location=?, date=?, amount=? WHERE id=?",
              (name, category, location, date, amount, id))
    conn.commit()
    c.close()
    return '<p>The entry with id %s has been edited in the database</p>' % id

run(reloader=True)

I am trying to make the edit_item method work. When I call it with curl:

curl -X PUT -d "name=aa&category=bb&amount=23&location=xx&date=21-10-2014" http://localhost:8080/inventory/edit/2

I get a server error which says:

raise AttributeError('Atrribute %r is not defined.' % name)
AttributeError: Attribute 'PUT' not defined'

What should I do?
Instead of this:

name = request.PUT.get('name')

use this:

name = request.params.get('name')
Machine Learning - test set with fewer features than the train set Guys, I was developing an ML model and I have a doubt. Let's assume that my train data has the following data:

ID | Animal | Age | Habitat
0  | Fish   | 2   | Sea
1  | Hawk   | 1   | Mountain
2  | Fish   | 3   | Sea
3  | Snake  | 4   | Forest

If I apply one-hot encoding, it will generate the following matrix:

ID | Animal_Fish | Animal_Hawk | Animal_Snake | Age | ...
0  | 1           | 0           | 0            | 2   | ...
1  | 0           | 1           | 0            | 1   | ...
2  | 1           | 0           | 0            | 3   | ...
3  | 0           | 0           | 1            | 4   | ...

That's beautiful and works in most cases. But what if my test set contains fewer (or more) features than the train set? What if my test set doesn't contain "Fish"? It will generate one less category. Can you guys help me with how I can manage this kind of problem? Thank you
It sounds like you have your train and test sets completely separate. Here's a minimal example of how you might automatically add "missing" features to a given dataset:

import numpy as np
import pandas as pd

# Made-up training dataset
train = pd.DataFrame({'animal': ['cat', 'cat', 'dog', 'dog', 'fish', 'fish', 'bear'],
                      'age': [12, 13, 31, 12, 12, 32, 90]})

# Made-up test dataset (notice how two classes from train are missing entirely)
test = pd.DataFrame({'animal': ['fish', 'fish', 'dog'],
                     'age': [15, 62, 1]})

# Discrete column to be one-hot-encoded
col = 'animal'

# Create dummy variables for each level of `col`
train_animal_dummies = pd.get_dummies(train[col], prefix=col)
train = train.join(train_animal_dummies)

test_animal_dummies = pd.get_dummies(test[col], prefix=col)
test = test.join(test_animal_dummies)

# Find the difference in columns between the two datasets
# This will work in the trivial case, but if you want to limit it to just one feature
# use this: f = lambda c: col in c; feature_difference = set(filter(f, train)) - set(filter(f, test))
feature_difference = set(train) - set(test)

# Create a zero-filled matrix where the number of rows equals the number of rows
# in `test` and the columns are the missing categories (i.e. the set difference
# between the relevant `train` and `test` columns)
feature_difference_df = pd.DataFrame(data=np.zeros((test.shape[0], len(feature_difference))),
                                     columns=list(feature_difference))

# Add "missing" features back to `test`
test = test.join(feature_difference_df)

test goes from this:

   age animal  animal_dog  animal_fish
0   15   fish         0.0          1.0
1   62   fish         0.0          1.0
2    1    dog         1.0          0.0

To this:

   age animal  animal_dog  animal_fish  animal_cat  animal_bear
0   15   fish         0.0          1.0         0.0          0.0
1   62   fish         0.0          1.0         0.0          0.0
2    1    dog         1.0          0.0         0.0          0.0

Assuming each row (each animal) can only be one animal, it's fine for us to add an animal_bear feature (a sort-of "is-a-bear" test/feature) because of the assumption that if there were any bears in test, that information would have been accounted for in the animal column.

As a rule of thumb, it's a good idea to try to account for all possible features (i.e. all possible values of animal, for example) when building/training a model. As mentioned in the comments, some methods are better at handling missing data than others, but if you can do it all from the outset, that's probably a good idea. Now, that would be tough to do if you're accepting free-text input (as the number of possible inputs is never-ending).
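A shorter route to the same alignment (an alternative to the zero-matrix join above, not from the original answer) is pandas' reindex, which adds any missing columns filled with zeros in one call:

```python
import pandas as pd

train = pd.DataFrame({'animal': ['cat', 'dog', 'fish', 'bear']})
test = pd.DataFrame({'animal': ['fish', 'dog']})

train_d = pd.get_dummies(train['animal'], prefix='animal')
test_d = pd.get_dummies(test['animal'], prefix='animal')

# Force the test dummies onto the train columns; missing ones become 0
test_d = test_d.reindex(columns=train_d.columns, fill_value=0)

print(list(test_d.columns))
# ['animal_bear', 'animal_cat', 'animal_dog', 'animal_fish']
```

This also drops any columns that appear only in test, which is usually what you want, since the model was never trained on them.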
np.delete and np.s_. What's so special about np.s_? I don't really understand why regular indexing can't be used for np.delete. What makes np.s_ so special? For example, with this code, used to delete some of the rows of this array:

inlet_names = np.delete(inlet_names, np.s_[1:9], axis=0)

Why can't I simply use regular indexing and do:

inlet_names = np.delete(inlet_names, [1:9], axis=0)

or

inlet_names = np.delete(inlet_names, inlet_names[1:9], axis=0)

From what I can gather, np.s_ is the same as np.index_exp except it doesn't return a tuple, but both can be used anywhere in Python code. Then when I look into the np.delete function, it indicates that you can use something like [1,2,3] to delete those specific indexes along the entire array. So what's preventing me from using something similar to delete certain rows or columns from the array?

I'm simply assuming that this type of indexing is read as something else in np.delete so you need to use np.s_ in order to specify, but I can't get to the bottom of what exactly it would be reading it as, because when I try the second piece of code it simply returns "invalid syntax". Which is weird, because this code works:

inlet_names = np.delete(inlet_names, [1,2,3,4,5,6,7,8,9], axis=0)

So I guess the answer could possibly be that np.delete only accepts a list of the indexes that you would like to delete, and that np.s_ returns a list of the indexes that you specify for the slice.

I could use some clarification and some corrections on anything I just said about the functions that may be wrong, because a lot of this is just my take; the documents don't exactly explain everything that I was trying to understand. I think I'm just overthinking this, but I would like to actually understand it, if someone could explain it.
np.delete is not doing anything unique or special. It just returns a copy of the original array with some items missing. Most of the code just interprets the inputs in preparation to make this copy.

What you are asking about is the obj parameter:

obj : slice, int or array of ints

In simple terms, np.s_ lets you supply a slice using the familiar : syntax. The x:y notation cannot be used as a function parameter.

Let's try your alternatives (you allude to these in results and errors, but they are buried in the text):

In [213]: x = np.arange(10)*2   # some distinctive values

In [214]: np.delete(x, np.s_[3:6])
Out[214]: array([ 0,  2,  4, 12, 14, 16, 18])

So delete with s_ removes a range of values, namely 6 8 10, the 3rd through 5th ones.

In [215]: np.delete(x, [3:6])
  File "<ipython-input-215-0a5bf5cc05ba>", line 1
    np.delete(x, [3:6])
                   ^
SyntaxError: invalid syntax

Why the error? Because [3:6] is an indexing expression and np.delete is a function. Even s_[[3:4]] has problems. np.delete(x, 3:6) is also bad, because Python only accepts the : syntax in an indexing context, where it automatically translates it into a slice object. Note that this is a syntax error, something that the interpreter catches before doing any calculations or function calls.

In [216]: np.delete(x, slice(3,6))
Out[216]: array([ 0,  2,  4, 12, 14, 16, 18])

A slice works instead of s_; in fact that is what s_ produces.

In [233]: np.delete(x, [3,4,5])
Out[233]: array([ 0,  2,  4, 12, 14, 16, 18])

A list also works, though it works in a different way (see below).

In [217]: np.delete(x, x[3:6])
Out[217]: array([ 0,  2,  4,  6,  8, 10, 14, 18])

This works, but produces a different result, because x[3:6] is not the same as range(3,6). Also, np.delete does not work like the list delete. It deletes by index, not by matching value.

np.index_exp fails for the same reason that np.delete(x, (slice(3,6),)) does. 1, [1], (1,) are all valid and remove one item. Even '1', the string, works.
delete parses this argument, and at this level, expects something that can be turned into an integer: obj.astype(intp). (slice(None),) is not a slice, it is a 1-item tuple, so it's handled in a different spot in the delete code. This is a TypeError produced by something that delete calls, very different from the SyntaxError. In theory delete could extract the slice from the tuple and proceed as in the s_ case, but the developers did not choose to consider this variation.

A quick study of the code shows that np.delete uses 2 distinct copying methods - by slice and by boolean mask. If the obj is a slice, as in our example, it does (for a 1d array):

out = np.empty(7)
out[0:3] = x[0:3]
out[3:7] = x[6:10]

But with [3,4,5] (instead of the slice) it does:

keep = np.ones((10,), dtype=bool)
keep[[3,4,5]] = False
return x[keep]

Same result, but with a different construction method. x[np.array([1,1,1,0,0,0,1,1,1,1], bool)] does the same thing.

In fact, boolean indexing or masking like this is more common than np.delete, and generally just as powerful.

From the lib/index_tricks.py source file:

index_exp = IndexExpression(maketuple=True)
s_ = IndexExpression(maketuple=False)

They are slightly different versions of the same thing, and both are just convenience functions.

In [196]: np.s_[1:4]
Out[196]: slice(1, 4, None)

In [197]: np.index_exp[1:4]
Out[197]: (slice(1, 4, None),)

In [198]: np.s_[1:4, 5:10]
Out[198]: (slice(1, 4, None), slice(5, 10, None))

In [199]: np.index_exp[1:4, 5:10]
Out[199]: (slice(1, 4, None), slice(5, 10, None))
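The boolean-mask equivalence described above, as a runnable check:

```python
import numpy as np

x = np.arange(10) * 2

# np.delete with a slice...
a = np.delete(x, np.s_[3:6])

# ...is equivalent to keeping everything a boolean mask allows
keep = np.ones(x.shape[0], dtype=bool)
keep[3:6] = False
b = x[keep]

print(a)                     # [ 0  2  4 12 14 16 18]
print(np.array_equal(a, b))  # True
```

The mask version is often preferable in practice: it composes with other conditions (keep &= x > 0) and avoids the copy-assembly logic inside np.delete.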
SyntaxError: Non-UTF-8 code starting with '\xae'

I am using Python with Selenium to create scripts. When I ran the code below I got a syntax error. I found that the issue is the registered trademark symbol '®' in the title. Please help me out with this.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get('https://advance.lexis.com')
assert 'Lexis Advance® Sign In | LexisNexis' in driver.title
The content of your question is fine: I inspected it to see that StackOverflow provides the ® symbol encoded as UTF-8. Based on the error message in the title, Python is reading the file as UTF-8, but I suspect that your editor is using a different encoding to save the file. Perhaps it is using ISO 8859-1 (aka 'latin1'), or something else. ISO 8859-1 defines the byte 0xAE as the registered trademark symbol. Unicode also defines code point U+00AE as the registered trademark symbol.

You have two solutions:

determine what encoding your editor is using and tell Python by putting # encoding: foo at the top of your file
configure your editor to use UTF-8
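The encoding mismatch the answer describes can be seen directly in Python 3; this quick check (an illustration, not part of the original answer) shows why the byte 0xAE appears:

```python
# The same character maps to different bytes under different codecs:
# ISO 8859-1 uses the single byte 0xAE, UTF-8 uses the pair 0xC2 0xAE.
symbol = '\u00ae'  # the registered trademark sign

latin1_bytes = symbol.encode('latin-1')
utf8_bytes = symbol.encode('utf-8')
print(latin1_bytes, utf8_bytes)   # b'\xae' b'\xc2\xae'

# Decoding latin-1 bytes as UTF-8 fails -- the same mismatch Python hits
# when a source file's actual encoding differs from the one it assumes.
try:
    latin1_bytes.decode('utf-8')
except UnicodeDecodeError as exc:
    print('decode failed:', exc.reason)
```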
Having trouble running Python file out of Terminal. ImportError: no module named requests

I originally asked this question and took up Martijn Pieters' solution and did as he posted. However, I'm still unable to run my file out of Terminal. I should see files being saved into the folder thisdir. I've changed my directory to the directory of the file and I run

./my_file.py --todir thisdir foobar

But I'm still getting

Traceback (most recent call last):
  File "my_file.py", line 13, in <module>
    import requests
ImportError: No module named requests

I don't know if it matters, but I've tried running it with the first line of my file containing

#!/usr/bin/env python

as well as

#!/usr/bin/python

I really have no idea what I'm doing here, could someone please help me through this?

Update: I replaced the shebang in my code, and I no longer get an error, but I also don't get any output (the issue isn't a bug in the code, as I tested it fully). The part of my code that uses requests is below:

for i in xrange(urls_count):
    r = requests.get(urls[i], stream=True)
    with open(save_here + '/file' + str(i), 'wb') as f_in:
        f_in.write(r.content)

I should see files being saved into thisdir and also, in Terminal, a list of the newly created files' names being printed out. So the issue isn't to add printing of the content (as suggested by hd1).

Also, not sure if this is relevant, but I clicked the box for "Install to user's site packages directory (/Users/AlanH/.local)".
Your shebang line should read:

#!/path/to/anaconda/python

instead of /usr/bin/python or /usr/bin/env python. Per your comment, the solution posted on this question may sort you. If not, leave another comment.

You aren't printing anything to stdout from your code. The snippet below should sort that:

for i in xrange(urls_count):
    r = requests.get(urls[i], stream=True)
    with open(save_here + '/file' + str(i), 'wb') as f_in:
        f_in.write(r.content)
    print r.content
Is there anything like Python export?

We use Python's import mechanism all the time to import modules, variables and other stuff... but is there anything that works as export? Like, we import stuff from a module:

from abc import *

so can we export like:

to xyz export *

or

export a, b, c to program.py

I know this question isn't a typical type of question to be asked here, but just out of curiosity... I checked in the Python console and there is nothing that exists as 'export'... maybe it exists with some different name?
First, import the module you want to export stuff into, so you have a reference to it. Then assign the things you want to export as attributes of the module:

# to xyz export a, b, c
import xyz
xyz.a = a
xyz.b = b
xyz.c = c

To do a wildcard export, you can use a loop:

# to xyz export *
exports = [(k, v) for (k, v) in globals().iteritems() if not k.startswith("_")]
import xyz
for k, v in exports:
    setattr(xyz, k, v)

(Note that we gather the list of objects to be exported before importing the module, so that we can avoid exporting a reference to the module we've just imported into itself.)

This is basically a form of monkey-patching. It has its time and place. Of course, for it to work, the module that does the "exporting" must itself be executed; simply importing the module that will be patched won't magically realize that some other code somewhere is going to patch it.
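The attribute-assignment pattern can be tried without writing an xyz.py to disk; a throwaway module object (the name xyz is the answer's own placeholder) behaves the same as a real imported module:

```python
import types

# Stand-in for "import xyz": a synthetic module object supports
# attribute assignment exactly like a real imported module.
xyz = types.ModuleType('xyz')

a, b, c = 1, 2, 3

# "to xyz export a, b, c"
xyz.a = a
xyz.b = b
xyz.c = c

print(xyz.a, xyz.b, xyz.c)   # 1 2 3
```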
How to convert between bytes and strings in Python 3?

This is a Python 101 type question, but it had me baffled for a while when I tried to use a package that seemed to convert my string input into bytes. As you will see below I found the answer for myself, but I felt it was worth recording here because of the time it took me to unearth what was going on. It seems to be generic to Python 3, so I have not referred to the original package I was playing with; it does not seem to be an error (just that the particular package had a .tostring() method that was clearly not producing what I understood as a string...)

My test program goes like this:

import mangler  # spoof package

stringThing = """
<Doc>
 <Greeting>Hello World</Greeting>
 <Greeting>你好</Greeting>
</Doc>
"""

# print out the input
print('This is the string input:')
print(stringThing)

# now make the string into bytes
bytesThing = mangler.tostring(stringThing)  # pseudo-code again

# now print it out
print('\nThis is the bytes output:')
print(bytesThing)

The output from this code gives this:

This is the string input:

<Doc>
 <Greeting>Hello World</Greeting>
 <Greeting>你好</Greeting>
</Doc>

This is the bytes output:
b'\n<Doc>\n <Greeting>Hello World</Greeting>\n <Greeting>\xe4\xbd\xa0\xe5\xa5\xbd</Greeting>\n</Doc>\n'

So, there is a need to be able to convert between bytes and strings, to avoid ending up with non-ascii characters being turned into gobbledegook.
The 'mangler' in the above code sample was doing the equivalent of this:

bytesThing = stringThing.encode(encoding='UTF-8')

There are other ways to write this (notably using bytes(stringThing, encoding='UTF-8')), but the above syntax makes it obvious what is going on, and also what to do to recover the string:

newStringThing = bytesThing.decode(encoding='UTF-8')

When we do this, the original string is recovered.

Note, using str(bytesThing) just transcribes all the gobbledegook without converting it back into Unicode, unless you specifically request UTF-8, viz., str(bytesThing, encoding='UTF-8'). No error is reported if the encoding is not specified.
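A minimal round trip with the non-ASCII part of the string above shows both directions, using only the calls the answer already describes:

```python
# Round trip: str -> bytes -> str, with UTF-8 as the codec.
s = '你好'

b = s.encode(encoding='UTF-8')
print(b)                      # b'\xe4\xbd\xa0\xe5\xa5\xbd'

restored = b.decode(encoding='UTF-8')
print(restored == s)          # True
```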
How to install OpenCV+Python on Mac?

I am trying to install OpenCV+Python on Mac. I am trying to do this in six steps by running commands at the terminal (after step 2):

Step 1: Install Xcode
Step 2: Install Homebrew
Step 3: Install Python 2 and Python 3
1) brew install python python3
2) brew linkapps python
brew linkapps python3
4) which python
which python3
Step 4: Install Python libraries by installing a virtual environment
Step 5: Install OpenCV
Step 6: Symlink OpenCV+Python to virtual environment

The problem is that which python must give the output /usr/local/bin/python and not /usr/bin/python as it gives by default, so that the virtual environment can be installed, in order to then install the Python libraries.

I removed the link by running unlink /usr/bin/python and I created a symlink by running ln -s /usr/local/Cellar/python /usr/bin/python (python and python3 are installed by default at /usr/local/Cellar/).

However, now which python gives me no output even though I have created the symlink. Why is this? How can I change the output of the which command to finally install OpenCV+Python on Mac?

Any better idea to install OpenCV+Python on Mac with most of the useful libraries, virtual environments etc.? (Obviously I know how to do the installation without all these.)

P.S. I followed this link: https://www.learnopencv.com/install-opencv3-on-macos/
The officially recommended Python packaging tool is pipenv. One example of a workflow you could use to make a virtual environment with the exact libraries your project needs, as well as ensuring security, is this:

$ brew install pipenv
$ cd /path/to/project
$ pipenv --three
$ pipenv install opencv-python

And after you write your code in, say, project.py:

$ pipenv run python3 project.py

More info on the pipenv site.
Get rid of script text in HTML using beautifulsoup

I want to analyze all visible text from an HTML page. To get rid of all HTML elements I currently use:

from bs4 import BeautifulSoup
import re

soup = BeautifulSoup(test.content, 'html.parser')
#soup_str = soup.get_text()  # doesn't help
soup_str = str(soup)
pattern = r'''<.*?>'''
clean_str = re.sub(pattern, ' ', soup_str)

This works well, but I still have some script text left at the beginning and end of my string (see below). I also tried other re patterns like r'''<!-.*}''', or methods suggested in other posts like:

for script in soup.find_all('script', src=False):
    script.decompose()

The first method does not work and the second deletes a lot of embedded text in my case.

<!--/email_off--", "validThrough": "2019-09-01", "hiringOrganization" : { "@type" : "Organization", "name" : "NAME"}, "jobLocation":[{"@type":"Place","geo":{"@type":"GeoCoordinates","latitude":"58.1833","longitude":"8.2"},"address":{"@type":"PostalAddress","addressLocality":"Locality","postalCode":"ZIPS","addressCountry":"Country"}}] } } var framefenster = document.getElementsByTagName("iFrame"); var auto_resize_timer = window.setInterval("autoresize_frames()", 400); function autoresize_frames() { for (var i = 0; i < framefenster.length; ++i) { if(framefenster[i].contentWindow.document.body){ var framefenster_size = framefenster[i].contentWindow.document.body.offsetHeight; if(document.all && !window.opera) { framefenster_size = framefenster[i].contentWindow.document.body.scrollHeight; } framefenster[i].style.height = framefenster_size + 20 + 'px'; } } }

Thanks.
Apparently, the page keeps its content in a <script> tag. To get content from it, I used the re module:

import re
import requests
from bs4 import BeautifulSoup

url = 'https://www.pflegejob.de/index.php?section=anzeige&id=1233125'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
txt = soup.select_one('script[type="application/ld+json"]').text
txt = re.findall(r'"description": "(.*?)",\s*$', txt, flags=re.DOTALL|re.M)[0]
soup = BeautifulSoup(txt, 'html.parser')
print(soup.get_text(strip=True, separator='\n'))

Prints:

Die Evangelische Stiftung Tannenhof leistet mit ca. 525 Behandlungsplätzen, fünf Tageskliniken und drei Institutsambulanzen die psychiatrische Pflichtversorgung für mehr als eine halbe Million Einwohner für die Städte Wuppertal, Remscheid und Velbert. Wir verfügen über eine Reihe störungsspezifischer Behandlungsangebote, u. a. Fachstationen für depressive Störungen, Psychotraumatologie, Psychosomatik, Gerontopsychiatrie und Abhängigkeitserkrankungen.
Außerdem verfügt die Evangelische Stiftung Tannenhof im Rahmen der Eingliederungshilfe für chronisch psychisch kranke und behinderte Menschen über ein spezialisiertes Wohn- und Betreuungsangebot (Bereich Integration – Wohnverbund) mit 170 stationären Wohnplätzen, ambulant betreutem Wohnen und vielfältigen tagesstrukturierenden Angeboten.
Wir suchen zum
01.09.2019 oder später
einen
Gesundheits- und Krankenpfleger, Altenpfleger und / oder Heilerziehungspfleger (m/w/d)
Vollzeit oder Teilzeit, unbefristet
Ihre Aufgaben
Intensive, individuelle und ganzheitliche Betreuung, Begleitung und Beratung von Menschen mit psychischer Behinderung
Erstellung und Fortschreibung von Hilfeplänen im Rahmen der individuellen Hilfeplanung (IHP bzw.
BEI_NRW)
Förderung, Wiederherstellung und Erhaltung der Selbstständigkeit der BewohnerInnen/KlientInnen im Rahmen einer aktiven Tagesstruktur und mit Blick auf eine realistische Zukunftsperspektive
Einhaltung von Qualitätsstandards und Dokumentationsanforderungen
Mitwirkung an der Weiterentwicklung unserer Wohnangebote sowie der tagestrukturierenden Angebote
Ihr Profil
abgeschlossene dreijährige Ausbildung in der Gesundheits- und Kranken-, Alten- bzw. Heilerziehungspflege
gerne auch Berufsanfänger (m/w/d)
Verständnis für und Akzeptanz von Menschen mit einer psychischen Behinderung
Engagement für die Unterstützung und Förderung der Selbstbestimmung und Eigenverantwortung der BewohnerInnen
Flexibilität und hohes Verantwortungsbewusstsein
Arbeiten im Schicht- und Bereitschaftsdienst
gute EDV-Kenntnisse
Interesse an Fort- und Weiterbildung
Was wir Ihnen bieten
die Möglichkeit einer Hospitation
eine intensive Einarbeitung
eine interessante, vielseitige und verantwortungsvolle Tätigkeit
motivierte multiprofessionelle Teams und ein angenehmes Betriebsklima
Fort- und Weiterbildungsmöglichkeiten im hauseigenen Diakonischen Bildungszentrum und bei anderen Bildungsträgern
leistungsgerechte Vergütung nach BAT/KF
zusätzliche betriebliche Altersversorgung bei der kirchlichen Zusatzversorgungskasse
möglichst familienfreundliche Dienstplangestaltung
Ihren Kindern steht unsere KITA offen
Wir freuen uns über Ihr Interesse
Für weitere Informationen steht Ihnen
Herr Günter Fuchs,
Leiter Wohnbereich / Integration, gerne telefonisch unter
+49 (0) 2191 12 - 1450 zur Verfügung.
Evangelische Stiftung Tannenhof
Bereich Integration – Wohnverbund
Remscheider Str. 76 | 42899 Remscheid
Jetzt bewerben
<!--/email_off--
How do I detect laser line from 2 images and calculate its center using python opencv?

How can I detect a laser line using 2 images, the first with the laser turned off and the second with it turned on, and then calculate its center? These are my images: img1.jpg, img2.jpg

This is my code:

import cv2
import time

img1 = cv2.imread("img1.jpg")
img2 = cv2.imread("img2.jpg")
img_org = img1
img1 = img1[:,:,2]
img2 = img2[:,:,2]

diff = cv2.absdiff(img1, img2)
diff = cv2.medianBlur(diff, 5)
ret, diff = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imwrite("output1.png", diff)

count = 0
height, width = diff.shape[:2]
start = time.time()  # time start
for y in range(height):
    for x in range(width):
        if diff[y,x] == 255:
            count += 1
        elif not count == 0:
            img_org[y, round(x - count/2)] = [0, 255, 0]
            count = 0
end = time.time()  # time stop
print(end - start)
cv2.imwrite("output2.png", img_org)
cv2.waitKey(0)

This code takes the red channel from both images, compares them to detect the difference, then blurs and thresholds the difference image. This doesn't work well enough because at the top there is some white that shouldn't be there: output1.png (diff)

For detecting the center of the thresholded line I have tried looping through every row and pixel of the threshold image, counting white pixels. It works correctly, but because of slow Python loops and arrays, calculating one 4032x2268 thresholded image takes about 16 seconds. For testing, my code sets the laser line center to green pixels in output2.png: output2.png (img_org)

How can I make laser detection more accurate and the center-of-line calculation much faster? I'm fairly new to opencv.
difference
gaussian blur to suppress noise, and smooth over saturated sections
np.argmax to find maximum for each row

I would also recommend:

some more reduction in exposure
PNG instead of JPEG for real processing. JPEG saves space, okay for viewing on the web.

Gamma curves don't necessarily matter here. Just make sure the environment is darker than the laser. The exact calculation depends on what color space it is exactly, and the 2.2 exponent is a good approximation of the actual curve.

im0 = cv.imread("background.jpeg")
im1 = cv.imread("foreground.jpeg")
(height, width) = im0.shape[:2]

# gamma stuff, make values linear
#im0 = (im0 / np.float32(255)) ** 2.2
#im1 = (im1 / np.float32(255)) ** 2.2

diff = cv.absdiff(im1, im0)
diff = cv.GaussianBlur(diff, ksize=None, sigmaX=3.0)

plane = diff[:,:,2]  # red
indices = np.argmax(plane, axis=1)  # horizontally, for each row

out = diff.copy()  # "drawing" 3 pixels thick
out[np.arange(height), indices-1] = (0,255,0)
out[np.arange(height), indices  ] = (0,255,0)
out[np.arange(height), indices+1] = (0,255,0)

cv.imwrite("out.jpeg", out)
How to convert a rectangular matrix into a stochastic and irreducible matrix?

I have written the following code to convert a matrix into a stochastic and irreducible matrix. I have followed a paper (Deeper Inside PageRank) to write this code. This code works well for square matrices but gives an error for rectangular matrices. How can I modify it to convert rectangular matrices into stochastic and irreducible matrices?

My code:

import numpy as np

P = np.array([[0, 1/2, 1/2, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [1/3, 1/3, 0, 0, 1/3, 0],
              [0, 0, 0, 0, 1/2, 1/2],
              [0, 0, 0, 1/2, 0, 1/2]])  # P is the original matrix containing 0 rows

col_len = len(P[0])
row_len = len(P)

eT = np.ones(shape=(1, col_len))  # row vector of ones to replace a row of zeros
e = eT.transpose()  # it is a column vector e
eT_n = np.array(eT / col_len)  # obtained by dividing the row vector of ones by the order of the matrix

Rsum = 0
for i in range(row_len):
    for j in range(col_len):
        Rsum = Rsum + P[i][j]
    if Rsum == 0:
        P[i] = eT_n
    Rsum = 0

P_bar = P.astype(float)  # P_bar is the stochastic matrix obtained by replacing rows of zeros with eT_n in P

alpha = 0.85
P_dbar = alpha * P_bar + (1 - alpha) * e * (eT_n)  # P_dbar is the irreducible matrix
print("The stochastic and irreducible matrix P_dbar is:\n", P_dbar)

Expected output: a rectangular stochastic and irreducible matrix.

Actual output:

Traceback (most recent call last):
  File "C:/Users/admin/PycharmProjects/Recommender/StochasticMatrix_11Aug19_BSK_v3.py", line 13, in <module>
    P_dbar = alpha * P_bar + (1 - alpha) * e * (eT_n)  # P_dbar is the irreducible matrix
ValueError: operands could not be broadcast together with shapes (5,6) (6,6)
You are trying to multiply two arrays of different shapes. That will not work, since one array has 30 elements, and the other has 36 elements.

You have to make sure the array e * eT_n has the same shape as your input array P. You are not using the row_len value. But if e has the correct number of rows, your code will run.

# e = eT.transpose()  # this will only work when the input array is square
e = np.ones(shape=(row_len, 1))  # this also works with a rectangular P

You can check that the shape is correct:

(e * eT_n).shape == P.shape

You should study the numpy documentation and tutorials to learn how to use the ndarray data structure. It's very powerful, but also quite different from the native python data types. For example, you can replace this verbose and very slow nested python loop with a vectorized array operation. Original code (with fixed indentation):

for i in range(row_len):
    Rsum = 0
    for j in range(col_len):
        Rsum = Rsum + P[i][j]
    if Rsum == 0:
        P[i] = eT_n

Idiomatic numpy code:

P[P.sum(axis=1) == 0] = eT_n

Furthermore, you don't need to create the array eT_n. Since it's just a single value repeated, you can assign the scalar 1/6 directly instead.

# eT = np.ones(shape=(1, col_len))
# eT_n = np.array(eT / col_len)
P[P.sum(axis=1) == 0] = 1 / P.shape[1]
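The vectorized replacement can be sanity-checked on the question's own P (a small verification sketch; assumes numpy is installed):

```python
import numpy as np

P = np.array([[0, 1/2, 1/2, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [1/3, 1/3, 0, 0, 1/3, 0],
              [0, 0, 0, 0, 1/2, 1/2],
              [0, 0, 0, 1/2, 0, 1/2]])

# replace every all-zero row with a uniform distribution in one step
P[P.sum(axis=1) == 0] = 1 / P.shape[1]

# every row of a (row-)stochastic matrix sums to 1
print(np.allclose(P.sum(axis=1), 1.0))   # True
```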
How to read multiple partitioned .gzip files into a Spark Dataframe?

I have the following folder of partitioned data:

my_folder
 |--part-0000.gzip
 |--part-0001.gzip
 |--part-0002.gzip
 |--part-0003.gzip

I try to read this data into a dataframe using:

>>> my_df = spark.read.csv("/path/to/my_folder/*")
>>> my_df.show(5)
+--------------------+
|                 _c0|
+--------------------+
|��[I���...|
|��RUu�[*Ք��g��T...|
|�t��� �qd��8~��...|
|�(���b4�:������I�...|
|���!y�)�PC��ќ\�...|
+--------------------+
only showing top 5 rows

Also tried using this to check the data:

>>> rdd = sc.textFile("/path/to/my_folder/*")
>>> rdd.take(4)
['\x1f�\x08\x00\x00\x00\x00\x00\x00\x00�͎\\ǖ�7�~�\x04�\x16��\'��"b�\x04�AR_<G��"u��\x06��L�*�7�J�N�\'�qa��\x07\x1ey��\x0b\\�\x13\x0f\x0c\x03\x1e�Q��ڏ�\x15Y_Yde��Y$��Q�JY;s�\x1d����[��\x15k}[B\x01��ˀ�PT��\x12\x07-�\x17\x12�\x0c#\t���T۱\x01yf��\x14�S\x0bc)��\x1ex���axAO˓_\'��`+HM҈�\x12�\x17�@']

NOTE: When I do a zcat part-0000.gzip | head -1 to read the file content, it shows the data is tab-separated and in plain readable English.

How do I read these files properly into a dataframe?
For some reason, Spark does not recognize the .gzip file extension. So I had to change the file extensions before reading the partitioned data:

import os

# go to my_folder
os.chdir("/path/to/my_folder")

# renaming all `.gzip` extensions to `.gz` within my_folder
cmd = 'rename "s/gzip/gz/" *.gzip'
result_code = os.system(cmd)

if result_code == 0:
    print("Successfully renamed the file extensions!")
    # finally reading the data into a dataframe
    my_df = spark.read.csv("/path/to/my_folder/*", sep="\t")
else:
    print("Could not rename the file extensions!")
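The rename command the answer shells out to is Linux-specific; the same renaming can be done portably in pure Python with os.rename (a sketch, not from the original answer -- the scratch directory here just stands in for /path/to/my_folder):

```python
import os
import tempfile

# Demo setup: a scratch directory with .gzip parts, like the question's folder.
folder = tempfile.mkdtemp()
for i in range(4):
    open(os.path.join(folder, "part-%04d.gzip" % i), "wb").close()

# Rename every *.gzip file to *.gz, without shelling out.
for name in os.listdir(folder):
    if name.endswith(".gzip"):
        root = name[:-len(".gzip")]
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, root + ".gz"))

print(sorted(os.listdir(folder)))
# ['part-0000.gz', 'part-0001.gz', 'part-0002.gz', 'part-0003.gz']
```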
Print string between two variable substrings

I am looking for the following:

dna = 'atgabcdefghijktgaaa'
start = 'atg'
stop = 'tag' or 'taa' or 'tga'

I would like to get all the characters between start and stop. I have tried:

print (dna.find(start:stop))

but it keeps telling me that the ":" is invalid syntax.
You could use a regular expression to help find any suitable matches as follows:

import re

dna = 'atgabcdefghijktgaaa'
start = 'atg'
stop = ['tag', 'taa', 'tga']

print re.findall(r'{}(.*?)(?:{})'.format(start, '|'.join(stop)), dna)

This would display the following:

['abcdefghijk']
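If you'd rather stay with str.find, as the question originally attempted, the slice can be computed by hand (a sketch; it takes the first stop codon that appears after the start):

```python
dna = 'atgabcdefghijktgaaa'
start = 'atg'
stops = ['tag', 'taa', 'tga']

begin = dna.find(start) + len(start)            # index just past 'atg'
# earliest occurrence of any stop codon after the start, ignoring misses
ends = [i for i in (dna.find(s, begin) for s in stops) if i != -1]

if ends:
    print(dna[begin:min(ends)])   # abcdefghijk
```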
Alphabetically sorting URLs to Download Image

Having an issue with the sorting of urls. The .jpg files end in "xxxx-xxxx.jpg". The second set of keys needs to be sorted in alphabetical order. Thus far I've only been able to sort the first set of characters alphabetically (which is not necessary). For instance:

http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babf-bbac.jpg

is preceding

http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babh-bajc.jpg

when

#!/usr/bin/python
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/

import os
import re
import sys
import requests

"""Logpuzzle exercise

Given an apache logfile, find the puzzle urls and download the images.

Here's what a puzzle url looks like:
10.254.254.28 - - [06/Aug/2007:00:13:48 -0700] "GET /~foo/puzzle-bar-aaab.jpg HTTP/1.0" 302 528 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6"
"""

def url_sort_key(url):
    print url[-8:]

# Extract the puzzle urls from inside a logfile
def read_urls(filename):
    """Returns a list of the puzzle urls from the given log file,
    extracting the hostname from the filename itself.
    Screens out duplicate urls and returns the urls sorted into increasing order."""
    # +++your code here+++
    # Use open function to search for the urls containing "puzzle/p"
    # Use a line split to pick out the 6th section of the filename
    # Sort out all repeated urls, and return sorted list
    with open(filename) as f:
        out = set()
        for line in f:
            if re.search("puzzle/p", line):
                url = "http://code.google.com" + line.split(" ")[6]
                print line.split(" ")
                out.add(url)
        return sorted(list(out))

# Complete the download_images function, which takes a sorted
# list of urls and a directory
def download_images(img_urls, dest_dir):
    """Given the urls already in the correct order, downloads
    each image into the given directory.
    Gives the images local filenames img0, img1, and so on.
    Creates an index.html in the directory
    with an img tag to show each local image file.
    Creates the directory if necessary.
    """
    # ++your code here++
    if not os.path.exists(dest_dir):
        os.makedirs(dest_dir)
    # Create an index
    index = file(os.path.join(dest_dir, 'index.html'), 'w')
    index.write('<html><body>\n')
    i = 0
    for img_url in img_urls:
        i += 1
        local_name = 'img%d' % i
        print "Retrieving...", local_name
        print local_name
        print dest_dir
        print img_url
        response = requests.get(img_url)
        if response.status_code == 200:
            f = open(os.path.join(dest_dir, local_name + ".jpg"), 'wb')
            f.write(response.content)
            f.close()
        index.write('<img src="%s">' % (local_name + ".jpg"))
    index.write('\n</body></html>\n')
    index.close()

def main():
    args = sys.argv[1:]
    print args
    if not args:
        print ('usage: [--todir dir] logfile ')
        sys.exit(1)
    todir = None
    if args[0] == '--todir':
        todir = args[1]
        del args[0:2]
    img_urls = read_urls(args[0])
    if todir:
        download_images(img_urls, todir)
    else:
        print ('\n'.join(img_urls))

if __name__ == '__main__':
    main()

I think the error lies in the return of the read_urls function, but am not positive.
Given the urls end in the format xxxx-yyyy.jpg and you want to sort the urls based on the second key, i.e. yyyy:

def read_urls(filename):
    with open(filename) as f:
        s = {el.rstrip() for el in f if 'puzzle' in el}
    return sorted(s, key=lambda u: u[-8:-4])  # u[-13:-9] if you need to sort on the first key

For example, with an input file containing

http://localhost/p-xxxx-yyyy.jpg
http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babf-bbac.jpg
http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babh-bajc.jpg
http://localhost/p-xxxx-yyyy.jpg

it produces the list

['http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babh-bajc.jpg', 'http://code.google.com/edu/languages/google-python-class/images/puzzle/p-babf-bbac.jpg']

i.e. bajc comes before bbac. See the comment in the code, in case you want to sort by the first key (xxxx).
Python unittest doesn't work

I'm beginning to learn TDD. I have just started with unit tests in Python. When I try to execute:

vagrant@vagrant:~/pruebaTestPython$ python test_python_daily_software.py

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

I have read in other links that I need to rename my own functions with test_ at the beginning. However, this is exactly what I did, but it still doesn't work.

test_python_daily_software.py file:

#!/usr/bin/python
# -*- coding: utf-8 -*-
import unittest
import python_daily

class TestPythonSoftware(unittest.TestCase):
    def test_should_return_python_when_number_is_3(self):
        self.assertEqual('Python', python_daily.get_string(3))

    def test_should_return_daily_when_number_is_5(self):
        self.assertEqual('Daily', python_daily.get_string(5))

    if __name__ == '__main__':
        unittest.main()

and python_daily.py file:

#!/usr/bin/python
# -*- coding: utf-8 -*-
def get_string(number):
    return 'Hello'

What is wrong?
If your python_daily.py Python module is:

#!/usr/bin/python
# -*- coding: utf-8 -*-
def get_string(number):
    return 'Hello'

and your test_python_daily_software.py test module is:

#!/usr/bin/python
# -*- coding: utf-8 -*-
import unittest
import python_daily

class TestPythonSoftware(unittest.TestCase):
    def test_should_return_python_when_number_is_3(self):
        self.assertEqual('Python', python_daily.get_string(3))

    def test_should_return_daily_when_number_is_5(self):
        self.assertEqual('Daily', python_daily.get_string(5))

if __name__ == '__main__':
    unittest.main()

You should have:

$ python test_python_daily_software.py
FF
======================================================================
FAIL: test_should_return_daily_when_number_is_5 (__main__.TestPythonSoftware)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_python_daily_software.py", line 11, in test_should_return_daily_when_number_is_5
    self.assertEqual('Daily', python_daily.get_string(5))
AssertionError: 'Daily' != 'Hello'

======================================================================
FAIL: test_should_return_python_when_number_is_3 (__main__.TestPythonSoftware)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_python_daily_software.py", line 8, in test_should_return_python_when_number_is_3
    self.assertEqual('Python', python_daily.get_string(3))
AssertionError: 'Python' != 'Hello'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

FAILED (failures=2)

Mind your indentation in source code!
Python: Writing to csv-file

I am trying to read a csv-style file and create another one. Here is the (simplified) code:

import os
import csv
import sys

fn = 'C:\mydird\Temp\xyz'
ext = '.txt'
infn = fn + ext
outfn = fn + 'o' + ext
infile = open(infn, newline='')
outfile = open(outfn, 'w', newline='')
try:
    reader = csv.reader(infile, delimiter=',', quotechar='"')  # creates the reader object
    writer = csv.writer(outfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for row in reader:  # iterates the rows of the file in order
        if reader.line_num == 1:
            print("Header")
            writer.writerow(['Date', 'XY'])  # ####### Does not work
        else:
            # Do something
            print(row[0], row[3])  # ####### Works
            writer.writerow([row[0], row[3]])  # ####### Does not work
finally:
    infile.close()  # closing
    sys.exit(0)

Neither of the writerow statements generates output. The file is created, but is empty. The print statement creates the 'expected' output. I have also tried csv.DictWriter, with no success either. I have looked at the various similar questions, but can't see any difference. I am using Python 3.3.3 on a Win 7 machine.

EDIT: the writer got lost by code simplification.
Your code doesn't define a writer. Add writer = csv.writer(outfile) and then close the outfile and it should work. Using the with idiom makes the code cleaner.

import csv

fn = 'C:\mydird\Temp\xyz'
ext = '.txt'
infn = fn + ext
outfn = fn + 'o' + ext

with open(infn) as rf, open(outfn, 'w') as wf:
    reader = csv.reader(rf, delimiter=',', quotechar='"')
    writer = csv.writer(wf)
    for row in reader:  # iterates the rows of the file in order
        if reader.line_num == 1:
            print("Header")
            writer.writerow(['Date', 'XY'])
        else:
            # do something
            print(row[0], row[3])
            writer.writerow([row[0], row[3]])
Reverse a python dictionary ordered pairs

This question has been asked many times, and I searched diligently to no avail. Here is an example of my question:

dict = {"a":"1", "b":"2", "c":"3"}

The output I am looking for is as below:

dict = {"c":"3", "b":"2", "a":"1"}

I am really unsure how to attack this; here is my current code:

def reorder(a):
    clean = {}
    pair = {}
    i = 0
    for k, v in a.iteritems():
        pair = a.popitem()
        # Do stuff here
    return clean

What I am currently doing is grabbing the tuple pair as a key/value, as these need to remain the same. I am not sure how to insert this pair in reverse order though.
Firstly, the dict object in Python is a hash table, so it has no guaranteed order. But if you only want to get a list in order, you can use the sorted(iterable, key=None, reverse=False) method:

def order(dic):
    return sorted(dic.items(), key=lambda x: x[1], reverse=True)
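Going beyond the answer above: in CPython 3.7 and later, the built-in dict does preserve insertion order, so the literal reversal shown in the question can be produced directly (illustrative only; requires Python 3.7+):

```python
d = {"a": "1", "b": "2", "c": "3"}

# reverse the insertion order of the key/value pairs
reversed_d = dict(reversed(list(d.items())))
print(reversed_d)   # {'c': '3', 'b': '2', 'a': '1'}
```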
Storing objects in file instead of in memory

I've created a genetic programming system in Python, but am having trouble related to memory limits. The problem is with storing all of the individuals in my population in memory. Currently, I store all individuals in memory, then reproduce the next generation's population, which then gets stored in memory too. This means that I have two populations' worth of individuals loaded in memory at once. After some testing, I've found that I exceed the default 2GB application memory size for Windows fairly quickly.

Currently, I write out the entire population's individual trees to a file, which I can then load and use to recreate the population if I want. What I have been considering is, instead of having all of the individuals loaded in memory, accessing individual information by pulling the individual from the file and only instantiating that single individual. From my understanding of Python's readline functionality, it should only load a single line from the file at a time, instead of the entire file. If I did this, I think I would be able to keep in memory only the individuals that I was currently manipulating.

My question is, is there an underlying problem with doing this that I'm not seeing right now? I understand that because I am dealing with data on disk instead of in memory my performance is going to take a hit, but for this situation memory is more important than speed. Also I don't want to increase the allotted 2GB of memory given to Python programs.

Thanks!
Given the RAM constraint, I'd change the population model from generational to steady state. The idea is to iteratively breed a new child or two, assess their fitness and then reintroduce them directly into the population itself, killing off some preexisting individuals to make room for them. Steady state uses half the memory of a traditional genetic algorithm because there is only one population at a time. Changing the implementation shouldn't be too hard, but you have to pay attention to premature convergence (i.e. tweak parameters like mutation rate, tournament size...).

The island model is another / additional possibility: the population is broken into separate sub-populations (demes). Demes send individuals to one another to help spread news of newly-discovered fit areas of the space. Usually it's an asynchronous mechanism, but you could use a synchronous algorithm, loading demes one by one, with a great reduction of the required memory resources.

Of course you can write the population to a file and you can load just the needed individuals. If you choose this approach, it's probably a good idea to compute a hash signature of individuals to optimize the identification / loading speed. Anyway, you should consider that, depending on the task your GP system is performing, you could register a massive performance hit.
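A toy sketch of the steady-state loop described above (illustrative only -- the bit-counting fitness function, operators and parameters here are all made up, not taken from the asker's system):

```python
import random

random.seed(0)

GENES, POP_SIZE, STEPS = 20, 30, 500
fitness = sum  # toy objective: maximize the number of 1-bits

# the single, fixed-size population -- no second generation is ever built
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]

for _ in range(STEPS):
    # tournament selection of two parents
    p1 = max(random.sample(pop, 3), key=fitness)
    p2 = max(random.sample(pop, 3), key=fitness)
    # one-point crossover plus a single bit-flip mutation
    cut = random.randrange(1, GENES)
    child = p1[:cut] + p2[cut:]
    i = random.randrange(GENES)
    child[i] ^= 1
    # steady state: the child immediately replaces the current worst
    # individual, so memory use never exceeds one population
    pop.remove(min(pop, key=fitness))
    pop.append(child)

print(max(fitness(ind) for ind in pop))
```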
Python: sorting a list of tuples on alpha case-insensitive order I have a list of tuples ("twoples")

[('aaa',2), ('BBB',7), ('ccc',0)]

I need to print it in that order, but

>>> sorted([('aaa',2), ('BBB',7), ('ccc',0)])

gives

[('BBB', 7), ('aaa', 2), ('ccc', 0)]

list.sort(key=str.tolower)

doesn't work (obviously), because

AttributeError: type object 'str' has no attribute 'tolower'

I don't want to change the strings in the list. Another answer gave

list.sort(key=lambda (a, b): (a.lower(), b))

but that must be a Python 2 thing, because

SyntaxError: invalid syntax

... at the first (. itemgetter() doesn't help, because there's only one 'key' allowed
You're right that this is a Python 2 thing, but the fix is pretty simple:

list.sort(key=lambda a: (a[0].lower(), a[1]))

That doesn't really seem any less clear, because the names a and b don't have any more inherent meaning than a[0] and a[1]. (If they were, say, name and score or something, that might be a different story…)

Python 2 allowed you to unpack function arguments into tuples. This worked (and was sometimes handy) in some simple cases, but had a lot of problems. See PEP 3113 for why it was removed.

The canonical way to deal with this is to just split the value inside the function, which doesn't quite work in a lambda. But is there a reason you can't just define the function out of line?

def twoplekey(ab):
    a, b = ab
    return a.lower(), b

list.sort(key=twoplekey)

As a side note, you really shouldn't call your list list; that hides the list type, so you can't use it anymore (e.g., if you want to convert a tuple to a list by writing list(tup), you'll be trying to call your list, and get a baffling error).
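Applied to the sample data from the question, the key works as expected and keeps the original strings unchanged:

```python
data = [('aaa', 2), ('BBB', 7), ('ccc', 0)]

# lowercase only for comparison; the stored tuples are untouched
result = sorted(data, key=lambda a: (a[0].lower(), a[1]))
print(result)  # [('aaa', 2), ('BBB', 7), ('ccc', 0)]
```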
Python Quicksort Recursion unsupported operand type(s) for -: 'list' and 'int' I have been having issues with a sorting program I am writing; the error

Traceback (most recent call last):
  File "/Users/Shaun/PycharmProjects/Sorts/BubbleSort.py", line 117, in <module>
    sorted = quick_sort(unsorted, 0, len(unsorted - 1))
TypeError: unsupported operand type(s) for -: 'list' and 'int'

occurs at the function call for my quicksort, see below

print("You chose Quick Sort\n")
sorted = []
sorted = quick_sort(unsorted, 0, len(unsorted - 1))

Here is the quicksort function with its input parameters

def quick_sort(list, leftBound, rightBound) -> object:
    leftBound = int(leftBound)
    rightBound = int(rightBound)
    pivot = int((list[math.floor(leftBound + rightBound / 2)]))
    print(pivot)
    while leftBound <= rightBound:
        # while bigger numbers are above pivot and lower are below
        # update bounds left + , right -
        while list[leftBound] < pivot:
            leftBound += 1
        while list[rightBound] > pivot:
            rightBound -= 1
        if (leftBound <= rightBound):
            list[rightBound], list[leftBound] = list[leftBound], list[rightBound]
            leftBound += 1
            rightBound -= 1
    if (leftBound < rightBound):
        quick_sort(list, leftBound, rightBound)
    if (rightBound < leftBound):
        quick_sort(list, leftBound, rightBound)
    print(list)
    return list
It is clearly stated in the error message:

TypeError: unsupported operand type(s) for -: 'list' and 'int'

It means you are applying an operation, "-" (as stated in the error message: "for -"), to operand types that the operation does not support. So you were subtracting an INT type from a LIST type.

You need to change that line from:

len(unsorted - 1)

to

len(unsorted) - 1
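With that fix the call itself runs. For reference, here is a minimal in-place Hoare-style quicksort of the same general shape (this is a generic sketch, not the question's exact algorithm; note the pivot index is parenthesized as `(left + right) // 2`, unlike the question's `leftBound + rightBound / 2`):

```python
def quick_sort(items, left, right):
    if left >= right:
        return
    pivot = items[(left + right) // 2]
    i, j = left, right
    # partition: move bigger values right of the pivot, smaller left
    while i <= j:
        while items[i] < pivot:
            i += 1
        while items[j] > pivot:
            j -= 1
        if i <= j:
            items[i], items[j] = items[j], items[i]
            i += 1
            j -= 1
    # recurse into both halves
    quick_sort(items, left, j)
    quick_sort(items, i, right)

data = [5, 2, 9, 1, 7]
quick_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 5, 7, 9]
```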
Removing comments using lex: why doesn't this work? I'm writing a parser using Python/lex and trying to create an entry to remove C-style comments. My current (faulty) attempt is:

def t_comment_ignore(t):
    r'(\/\*[^*]*\*\/)|(//[^\n]*)'
    pass

This produced a quirk that baffled me. When I parse the string below:

input = """if // else mystery =/*=*/= true /* false */ return"""

The output tokens are:

['IF', 'EQUAL', 'TIMES', 'EQUAL', 'DIVIDE', 'EQUAL', 'TRUE', 'RETURN']

Apparently the comment on line 3 wasn't recognized properly and 3 of the symbols therein were returned as tokens. But if I add a space before the comment in line 3, i.e.:

input = """if // else mystery = /*=*/= true /* false */ return"""

I get:

['IF', 'EQUAL', 'EQUAL', 'TRUE', 'RETURN']

Debugging showed that all 3 comments were recognized correctly when the extra space was added.

Well, I'm utterly baffled by this behavior. Any input is appreciated.

Thanks, Paulo

PS: As some probably noticed, this enchilada is from Problem Set 2 in https://www.udacity.com/wiki/cs262. They give a more elaborate solution using another of lex's features, but I'm wondering if my approach is sound and if my code is fixable.
My guess is that your pattern for EQUALS matches '=.' (an equals sign plus the following character) instead of, or as well as, a plain '='. By the way, the correct comment pattern is /[*][^*]*[*]+([^/*][^*]*[*]+)*/|//[^\n]*.
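A quick check of that comment pattern with plain `re` (outside lex) shows it consumes `/*=*/` even with no leading space, so the comment rule itself is not the problem:

```python
import re

# the full C-style comment pattern from the answer above
comment = re.compile(r'/[*][^*]*[*]+([^/*][^*]*[*]+)*/|//[^\n]*')

print(comment.match('/*=*/= true').group(0))        # /*=*/
print(comment.match('// else').group(0))            # // else
print(re.sub(comment, '', 'mystery =/*=*/= true'))  # mystery == true
```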
Python 3 decimal module calculation reversability I am trying to implement a reversible physics engine, so I have decided to use the decimal module. So this obviously works:

>>> from decimal import *
>>> a = Decimal('1')
>>> b = Decimal('0.82')
>>> a = a*b/b
>>> print(a)
1

However, when this operation is repeated, i.e. "multiply 100 times and then divide 100 times", the result does not come out precisely equal to a again.

>>> for _ in range(100):
...     a = a*b
...
>>> for _ in range(100):
...     a = a/b
...
>>> a
Decimal('0.9999999999999999999999999965')

Am I doing something wrong? Is it possible to do these calculations reversibly, so that I get the initial result?
Decimal doesn't have infinite precision. You can increase its precision if you're finding it too inaccurate.

from decimal import *

getcontext().prec = 50  # the default is 28; pick whatever headroom you need
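Raising the precision doesn't make the arithmetic exactly reversible (each operation still rounds), but it pushes the residual error far below the digits you care about. For example:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # default is 28

a = Decimal('1')
b = Decimal('0.82')
for _ in range(100):
    a = a * b
for _ in range(100):
    a = a / b

# the rounding error now lives ~60 digits out instead of ~28
print(abs(a - 1) < Decimal('1e-50'))  # True
```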
My code is wrong? I can't seem to find the answer. Question is:

Create a text file named team.txt and store 8 football team names and their best player; separate the player from the team name by a comma. Create a program that reads from the text file and displays a random team name and the first letter of the player's first name and the first letter of their surname.

import random

teamList = open("team.txt", "r")
data = teamList.readlines()
randomChoice = random.choice(range(8))

teamName = [["arsenal"], ["tottenham"], ["chelsea"], ["westham"], ["city"], ["united"], ["barcelona"], ["liverpool"]]
player = [["kane"], ["messi"], ["ronaldo"], ["ronaldino"], ["ibrahimovic"], ["neymar"], ["salah"], ["hazard"]]

for lines in data:
    split = lines.split(',')
    teamName.append(split[0])
    player.append(split[1])

teamName = teamName[randomChoice]
letters = player[randomChoice]

print("\nThe team is ", teamName)

splitLetters = letters.append('')

print("And the first letter of the player's firstname and surname is")
for x in range(len(splitLetters)):
    print((splitLetters[x][0]).upper())
The issue is with this line:

splitLetters = letters.append('')

.append() does not return any value, so splitLetters is None and therefore doesn't have a length. .append() mutates the list in place (i.e. letters.append('t') would add the string 't' to letters, but wouldn't return a value).

However, this append is not necessary anyway, since you are only appending an empty string '' to letters. Try removing the line splitLetters = letters.append('') and changing the second last line to:

for x in range(len(letters)):
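A two-line demonstration of the pitfall: the list is modified, but the "result" you capture is None:

```python
letters = ['k']
result = letters.append('x')

print(result)   # None  (append mutates in place, returns nothing)
print(letters)  # ['k', 'x']
```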
List to a json object I have a list of ec2 instanceIDs like below:

['i-111111111111', 'i-22222222222']

And I want to convert it to json.

{
    "instance_id": "i-111111111111",
    "instance_id": "i-222222222222"
}

Can someone please advise how to do it in Python?
li = ['i-111111111111', 'i-22222222222']
di = dict()
for i in li:
    a = {'instance_id': i}
    di.update(a)
print(di)

But note that a dict (and a JSON object) can hold only one value per key, so each update overwrites the previous instance_id; having a single repeated key is not a good approach.
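Since JSON objects can't contain duplicate keys, the usual shape for this kind of data is a list of objects, which json.dumps serializes directly:

```python
import json

ids = ['i-111111111111', 'i-22222222222']
payload = [{'instance_id': i} for i in ids]
print(json.dumps(payload, indent=4))
```

This prints one `{"instance_id": ...}` object per instance, with no key collisions.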
Query SQL Server encrypted columns through python and show decrypted values I am trying to query a table (in an on-premise database) which has some encrypted columns, in Python. I don't have any problem querying this data in Microsoft SQL Server Management Studio. Since I have a certificate, I just use Windows authentication to login, and in Additional Connection Parameters I add "Column Encryption Setting = Enabled". This way I am able to see all the decrypted values.

In Python I am not able to do this. I have tried various parameters in the pyodbc library, but the values always show up encrypted. What I have tried so far:

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=ServerName;'
                      'Database=DBName;'
                      'ColumnEncryption=Enabled;'
                      'Trusted_Connection=yes;'
                      'TrustedServerCertificate=yes;')

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=ServerName;'
                      'Database=DBName;'
                      'Trusted_Connection=yes;'
                      'ColumnEncryption=Enabled;')

The values in the Python pandas dataframe still show up encrypted, something like

'\xee\x91\xghk\x00n\xk9.....
Driver={SQL Server} (SQLSRV32.DLL) is the ancient SQL Server driver that ships with Windows. It dates back to the days of SQL Server 2000 and does not support many of the more modern SQL Server features.Today's applications should use a more current driver like Driver=ODBC Driver 17 for SQL Server (MSODBCSQL17.DLL) or Driver=ODBC Driver 18 for SQL Server (MSODBCSQL18.DLL).
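A sketch of the same connection using the modern driver; the server and database names are placeholders from the question, and the actual connect call (commented out) obviously needs pyodbc installed and a reachable SQL Server:

```python
# ColumnEncryption=Enabled is the Always Encrypted keyword understood by
# msodbcsql17+; the legacy {SQL Server} driver silently ignores it.
conn_str = (
    'Driver={ODBC Driver 17 for SQL Server};'
    'Server=ServerName;'        # placeholder
    'Database=DBName;'          # placeholder
    'ColumnEncryption=Enabled;'
    'Trusted_Connection=yes;'
)
print(conn_str)

# with the driver installed and the server reachable:
# import pyodbc
# conn = pyodbc.connect(conn_str)
```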
Pass arguments to slot function built by QT Designer I'm new to GUI designing and I started working with QT Designer 4.8.6. I'm connecting buttons to functions using the signals-slots mechanism. Here are some connections generated by the pyuic4 util to create the GUI script:

QtCore.QObject.connect(self.btn_add_codes, QtCore.SIGNAL(_fromUtf8("clicked()")), MainWindow.sel_codes_file)
QtCore.QObject.connect(self.btn_add_xls, QtCore.SIGNAL(_fromUtf8("clicked()")), MainWindow.sel_excel_file)

The associated functions in my main python file (path_codes and path_excel are QLineEdit widgets):

class MainDialog(QtGui.QMainWindow, gui_v1.Ui_MainWindow):
    def __init__(self, parent=None):
        super(MainDialog, self).__init__(parent)
        self.setupUi(self)

    def sel_codes_file(self):
        self.path_codes.setText(QtGui.QFileDialog.getOpenFileName())

    def sel_excel_file(self):
        self.path_excel.setText(QtGui.QFileDialog.getOpenFileName())

I want to use a generic function for all the buttons whose action is to search for a file and show the path in a LineEdit widget. I added this function to my MainDialog class:

def select_file(self, textbox):
    self.textbox.setText(QtGui.QFileDialog.getOpenFileName())

I modified the connection to:

QtCore.QObject.connect(self.btn_add_codes, QtCore.SIGNAL(_fromUtf8("clicked()")), MainWindow.select_file(textbox=self.path_codes))

It's not working. The main window does not show with this code and I'm getting this error: AttributeError: 'MainDialog' object has no attribute 'textbox'

Is it possible to pass arguments to slot connection functions? If so, what am I doing wrong? Thanks!
Does this lambda work? At least that's how I do it with Qt when using C++.self.btn_add_codes.clicked.connect(lambda codes=self.path_codes: MainWindow.select_file(codes))
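Note the AttributeError itself comes from `self.textbox` inside select_file: the method should use the `textbox` parameter it receives, not an attribute. With that fixed, `functools.partial` is another common way to bind an argument to a slot. A Qt-free sketch of the pattern (the FakeLineEdit stand-in and the file path are purely illustrative, so the snippet runs without Qt):

```python
from functools import partial

class FakeLineEdit:
    """Stand-in for QLineEdit, only for this sketch."""
    def __init__(self):
        self.text = ''
    def setText(self, value):
        self.text = value

class MainDialog:
    def select_file(self, textbox):
        # use the widget passed in, not self.textbox
        textbox.setText('/tmp/chosen_file.xls')

dialog = MainDialog()
path_codes = FakeLineEdit()

# Qt equivalent would be:
#   self.btn_add_codes.clicked.connect(partial(self.select_file, self.path_codes))
handler = partial(dialog.select_file, path_codes)
handler()

print(path_codes.text)  # /tmp/chosen_file.xls
```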
Convert time strings to integers I am reading in a file that has a column of times in the format of hour, minute, seconds (023456). There are other columns in the file that I am not dealing with at the time; I have ignored the other values.

020746 10 -1
020823 5 -1
020839 6 -1
020812 6 0

My goal is to read the column in as a string, then split the time into the corresponding hour, minute, and second as integers. So far I have this:

f = open(file, 'r')
for line in f:
    line = line.strip()
    columns = line.split()
    time = columns[0]
f.close()

hour = int(time[0:2])
minute = int(time[2:4])
second = int(time[4:6])

If I put a print statement in the for loop, it prints all of the corresponding times as strings. However, when I print hour, minute, or second, it only prints out the values of the final time in the time column. For example it will print

print(hour)
2
print(minute)
8
print(second)
12

Is there a way to print out all of the corresponding hours, minutes, and seconds into lists to get:

print(hour)
[2, 2, 2, 2]
print(minute)
[7, 8, 8, 8]
print(second)
[46, 23, 39, 12]

Any help would be greatly appreciated!
As suggested by pyNoob, create a list and append the parsed values to it inside the loop (note the parsing has to happen inside the loop, and append takes a single argument, so the three values go in as one tuple):

time_list = []
f = open(file, 'r')
for line in f:
    line = line.strip()
    columns = line.split()
    time = columns[0]
    hour = int(time[0:2])
    minute = int(time[2:4])
    second = int(time[4:6])
    time_list.append((hour, minute, second))
f.close()
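If you want three separate lists exactly as in the question's desired output, collect each field into its own list (the sample times are hard-coded here in place of the file read):

```python
hours, minutes, seconds = [], [], []
for t in ['020746', '020823', '020839', '020812']:
    hours.append(int(t[0:2]))
    minutes.append(int(t[2:4]))
    seconds.append(int(t[4:6]))

print(hours)    # [2, 2, 2, 2]
print(minutes)  # [7, 8, 8, 8]
print(seconds)  # [46, 23, 39, 12]
```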
Interrupt python grpc client when receiving stream I am playing a little bit with gRPC and I do not know how to close a connection between a client and the server when receiving a stream. Both the client and the server are written in Python. For example, my server reads messages from a Queue and yields each message. My idea is that the client subscribes to the server and starts receiving those messages. My questions are:

1. I would like to get the client killed whenever CTRL+C is pressed, but it gets stuck with the current code. How can it be done properly?
2. How does the server realize that the client has stopped listening?

My nbi.proto file:

syntax = "proto3";

service Messenger {
    rpc subscribe(Null) returns (stream Message) {}
}

message Null {}

message Message {
    string value = 1;
}

Python client:

import test_pb2_grpc as test_grpc
import test_pb2 as test
import grpc

def run():
    channel = grpc.insecure_channel('localhost:50051')
    stub = test_grpc.MessengerStub(channel)
    stream = stub.subscribe(test.Null())
    try:
        for e in stream:
            print e
    except grpc._channel._Rendezvous as err:
        print err
    except KeyboardInterrupt:
        stub.unsuscribe(test.Null)

Python server:

import test_pb2_grpc as test_grpc
import test_pb2 as test
from Queue import Empty
import grpc

class Messenger(test_grpc.MessengerServicer):
    def __init__(self, queue):
        self.queue = queue

    def subscribe(self, request, context):
        while True:
            try:
                yield self.queue.get(block=True, timeout=0.1)
            except Empty:
                continue
            except Exception as e:
                logger.error(e)
                break
        return
I would like to get the client killed whenever CTRL+C is pressed, but it gets stuck with the current code. How can it be done properly?

The KeyboardInterrupt should be enough to terminate the client application. Probably the process is hanging on the stub.unsubscribe call. If you use the client disconnection callback, perhaps you don't need to unsubscribe explicitly.

How does the server realize that the client has stopped listening?

You can add a callback to the context object that gets passed to your Messenger.subscribe method. The callback is called on client disconnection.

By the way, you can look into using empty.proto in place of your Null type.
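On the server side, another option is to poll `context.is_active()` (a real method on the context object gRPC passes to servicer methods) so the stream ends when the client goes away. The FakeContext below is just a stand-in so the sketch runs without spinning up a server:

```python
from queue import Queue, Empty

class Messenger:
    def __init__(self, queue):
        self.queue = queue

    def subscribe(self, request, context):
        # keep streaming only while the client is still connected
        while context.is_active():
            try:
                yield self.queue.get(block=True, timeout=0.1)
            except Empty:
                continue

class FakeContext:
    """Stand-in for the gRPC servicer context, only for this sketch."""
    def __init__(self):
        self.active = True
    def is_active(self):
        return self.active

q = Queue()
q.put('hello')

ctx = FakeContext()
stream = Messenger(q).subscribe(None, ctx)

first = next(stream)
print(first)          # hello
ctx.active = False    # simulate the client disconnecting
rest = list(stream)
print(rest)           # []  -- the generator stops cleanly
```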
Test the presence of a subexpression involving noncommutative symbols I have the following SymPy expression

expr = b0*d0*u0 - b0*d1*u1 - b1*d0*u1 - b1*d1*u0 + d0*b0*u0 - d0*b1*u1 - d1*b0*u1 - d1*b1*u0

And I want to know if, for example, the product

d0*u0

is in this expression. For this, I use

print(expr.has(d0*u0))

but the result is

False

However, if I replace this subexpression without asking if it is in the expression, SymPy does it without any problem, e.g.

print(expr.subs(d0*u0, x0))
b0*x0 - b0*d1*u1 - b1*d0*u1 - b1*d1*u0 + d0*b0*u0 - d0*b1*u1 - d1*b0*u1 - d1*b1*u0

So, how can I know if the subexpression I want to find is in the expression?
This appears to be currently an issue with noncommutative symbols; otherwise expr.has(d0*u0) returns True. The following works whenever subs can identify a subexpression:

from sympy import Dummy

dummy = Dummy()
print(expr.subs(d0*u0, dummy).has(dummy))

I.e., replace the subexpression with a dummy variable, and test for the presence of that dummy.

However, this workaround will become unnecessary in a future version of SymPy (1.2+), when this bug is fixed.
A single executable file with Py2Exe I have been trying to make a single executable file and I am getting close. Please do not recommend that I use PyInstaller -- I have tried that route, asked on SO here, and have put in tickets. It is close but not quite working. I am now trying py2exe and am also very close. In pyinstaller, I am able to create resource files (which builds the executable with the files included -- I can then access these in the temporary folder). I want to do the same for py2exe. I have a single executable, but five extra folders (maps, mpl-data, data, pics and tcl). I have seen this question but can't seem to understand it, nor get it to work. In my main py file, I am using PersistentDict(filepath) which is where I need the path to the file. My question is two parts: 1. How do I get the files (data files below) packaged into the executable. 2. How do I access these files in my code and return their path (as a string) such as /temp/file1.jpg.Here is my code for my py2exe setup file -- note that I have matplotlib and must include the mpl-data correctly in my executable. 
from distutils.core import setup
import py2exe
import shutil
import glob
import matplotlib, six

opts = {
    'py2exe': {
        "includes": ["matplotlib.backends", "matplotlib.backends.backend_qt4agg",
                     "matplotlib.figure", "numpy", "six",
                     "mpl_toolkits.basemap", "matplotlib.backends.backend_tkagg"],
        'excludes': ['_gtkagg', '_tkagg', '_agg2', '_cairo', '_cocoaagg',
                     '_fltkagg', '_gtk', '_gtkcairo', 'tcl'],
        'dll_excludes': ['libgdk-win32-2.0-0.dll', 'w9xpopen.exe',
                         'libgobject-2.0-0.dll'],
        'bundle_files': 1,
        'dist_dir': "Dist Folder",
        'compressed': True,
    }
}

data_files = [
    (r'mpl-data', glob.glob(r'C:\Python27\Lib\site-packages\matplotlib\mpl-data\*.*')),
    (r'mpl-data', [r'C:\Python27\Lib\site-packages\matplotlib\mpl-data\matplotlibrc']),
    (r'mpl-data\images', glob.glob(r'C:\Python27\Lib\site-packages\matplotlib\mpl-data\images\*.*')),
    (r'mpl-data\fonts', glob.glob(r'C:\Python27\Lib\site-packages\matplotlib\mpl-data\fonts\*.*')),
    (r'mpl-data\data', glob.glob(r'C:\Python27\Lib\site-packages\matplotlib\mpl-data\data\*.*')),
    ('data', ['C:\\Users\\Me\\Documents\\Example_Json_File.json']),
    ('pics', ['C:\\Users\\Me\\Documents\\Example_Icon.ico',
              'C:\\Users\\Me\\Documents\\Example_Jpg.jpg',
              ])
]

setup(windows=[{"script": "MyMainScript.py",
                "data_files": data_files,
                "icon_resources": [(1, 'C:\\Users\\Me\\Documents\\Example_Icon.ico')]},
               ],
      version="1.0",
      options=opts,
      data_files=data_files,
      zipfile=None,
      )

Thanks!
The guy here explains how to package to one file with py2exe. His setup doesn't package resources inside the executable either.

When I package my apps, I don't use the one-executable option

options = {"py2exe": {'bundle_files': 1, 'compressed': True}},

and I haven't even bothered to put them in library.zip via

options = {"py2exe": {"skip_archive": 0}}

I just have a number of pyc's, data files, dlls etc. in one dir. Then I create an installer using NSIS or Inno Setup. As some of my apps have to run as services, Inno was taking care of that.

The biggest plus of that approach: you don't have to deal with "frozen" paths to your files, which are different from your original paths. Otherwise you might need to alter your code to detect frozen paths, e.g. http://www.py2exe.org/index.cgi/WhereAmI
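The frozen-path detection mentioned at the end usually boils down to something like this sketch (`sys.frozen` is the attribute py2exe sets on a bundled executable; the helper name is my own):

```python
import os
import sys

def app_base_dir():
    # in a py2exe (or PyInstaller) bundle, data files sit next to the exe
    if getattr(sys, 'frozen', False):
        return os.path.dirname(os.path.abspath(sys.executable))
    # running from source: resolve relative to the script itself
    return os.path.dirname(os.path.abspath(sys.argv[0]))

print(app_base_dir())
```

Resource paths are then built with `os.path.join(app_base_dir(), 'pics', ...)` so the same code works both frozen and unfrozen.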
how to monitor availability Decathlon's products with python? I have a request for you. I want to scrape the following product: https://www.decathlon.it/p/kit-manubri-e-bilanciere-bodybuilding-93kg/_/R-p-10804?mc=4687932&c=NERO#

The product has two possible statuses:

"ATTUALMENTE INDISPONIBILE" (currently unavailable)
"Disponibile" (available)

In a nutshell, I want to create a script that checks every minute whether the product is available, recording everything in the shell. The output could be the following:

28/03/2021 12:07 - Attualmente Indisponibile
28/03/2021 12:08 - Attualmente Indisponibile
28/03/2021 12:09 - Disponibile

Is it possible with python? Could someone help me write the code? I'm not yet able to use "requests" or other Python web-scraping tools, but I want to learn. I have tried the following code:

import requests
import re

urls = ['p/kit-manubri-e-bilanciere-bodybuilding-93kg/_/R-p-10804.html']

def main(site):
    with requests.Session() as req:
        for url in urls:
            r = req.get(site.format(url))
            match = re.search('availability.+org\/(.*?)"', r.text)
            print("url: {:<70}, status: {}".format(r.url, match.group(1)))

main("https://www.decathlon.it/{}")

but it gives me the following error:

AttributeError: 'NoneType' object has no attribute 'group'
Try this:

import requests
import re
import time

urls = ['p/kit-manubri-e-bilanciere-bodybuilding-93kg/_/R-p-10804.html']
user_agent = {'User-agent': 'Mozilla/5.0'}

def main(site):
    with requests.Session() as req:
        for url in urls:
            r = req.get(site.format(url), headers=user_agent)
            match = re.search('availability.+org\/(.*?)"', r.text)
            print("url: {:<70}, status: {}".format(r.url, match.group(1)))

while True:
    main("https://www.decathlon.it/{}")
    time.sleep(60)

Output:

url: https://www.decathlon.it/p/kit-manubri-e-bilanciere-bodybuilding-93kg/_/R-p-10804, status: OutOfStock
Python Cookies and PHP virtual() I narrowed down the problem:

os.environ.get('HTTP_COOKIE')

This always seems to be None when calling the Python file with that line using PHP's virtual(). Does anyone know why this is? I'm using Python 2.7 because of how much I need the Python Imaging Library.

EDIT: Never mind, it's been fixed. It was because I'm an idiot and didn't know I had to set the cookie's path to /, causing it only to work on the path where the cookie was generated.
Try setting cookies yourself with the apache_setenv function. But if the only thing you need from Python is PIL, then you probably don't need Python at all: PHP has very powerful tools like MagickWand, a frontend to ImageMagick.
Multiple keys vs dictionary in memcache? What is the better approach: having multiple keys, or having a dictionary? In my scenario I want to store songs on a per-country basis and cache them for further access later on. Below I write rough pseudocode without disclosing too many details, to keep it simple. The actual songs will most probably be IDs of songs in a database.

Many keys approach:

cache.set("songs_from_city1", city1_songs)
cache.set("songs_from_city2", city2_songs)
...

Dictionary approach:

cache.set("songs_by_city", {
    'city1': city1_songs,
    'city2': city2_songs,
    ...
})
As mentioned in the comment, it mainly depends on the application requirements. To add another perspective: you can think of it as the problem of 'storing and retrieving one object vs. multiple granular objects'. There is a detailed discussion of this tradeoff in this post. Hope it helps!
How to update the weights of a Tensorflow.js model? I currently have a tensorflow.js convolutional neural network model that detects if certain images are happy or sad (based on facial expression). This is done through the browser with the user uploading an image of a face or using the webcam which the model then determines the outcome for. However, the user also has the option to override this result if the model predicts incorrectly. What I am planning to do is have the model retrain with the user uploaded image if the user decides to override the result. I understand this can be done with model.fit and then model.save functions within the tensorflow.js API.My concern is that the model weights are currently stored in a google cloud storage bucket but I am unsure how to update the files in order to reuse the updated weights the next time a user passes in a face. Is there a certain way I can do it using google cloud or another similar cloud storage without having to change the model.load link every time?I know that this is a rather vague problem but I don't have access to an indexeddb so cloud storage seems to be the best option for storing the weights for the browser to access and update. I am just not sure how to save the model without changing the link from which it should be accessed through model.load at a later time.
You can update the data of the objects within the Cloud Storage bucket by setting up Object Versioning. Only thing is that if you want to access a previous version of the object, you would have to use the generation number, like this:gs://[BUCKET_NAME]/[OBJECT_NAME]#[GENERATION_NUMBER]
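If you manage the bucket with gsutil, the relevant commands would look something like this (bucket name, object name and generation number are placeholders, as in the answer above):

```shell
# turn on object versioning for the bucket
gsutil versioning set on gs://[BUCKET_NAME]

# list all generations of the stored weights
gsutil ls -a gs://[BUCKET_NAME]/model.weights.bin

# fetch one specific generation
gsutil cp "gs://[BUCKET_NAME]/model.weights.bin#[GENERATION_NUMBER]" ./old-weights.bin
```

With versioning on, you can keep overwriting the same object name after each retraining, so the URL passed to the browser's model-loading call never changes, while older generations remain retrievable by number.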
Generating a date in a particular format using python I have python code that looks like this. I am receiving the values of year, month and day in the form of strings, and I will test whether they are not null. If they are not null, I would like to generate a date in the MMddyyyy format from the variables.

from datetime import datetime

year = "2022"
month = "7"
day = "15"

if len(year) and len(month) and len(day):
    print('variables are not empty')
    #prepare = "{month}/{day}/{year}"
    #valueDt = datetime.strptime(prepare,"%m/%d/%Y")
else:
    print('variables are empty')

The solution I have is not working. How can I generate this date?
It should work without calling len as well.

from datetime import datetime, date

year = "2022"
month = "7"
day = "15"

if year and month and day:
    print('variables are not empty')
    prepare = date(int(year), int(month), int(day))
    valueDt = datetime.strftime(prepare, "%m/%d/%Y")
    print(valueDt)
else:
    print('variables are empty')
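If the goal is the literal MMddyyyy shape from the question (no slashes), the same date can be formatted with "%m%d%Y":

```python
from datetime import date

year, month, day = "2022", "7", "15"

# %m and %d are zero-padded, so "7" becomes "07"
value = date(int(year), int(month), int(day)).strftime("%m%d%Y")
print(value)  # 07152022
```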
Scrapy image problems on production server I have a scrapy script to download images from a site. Locally work perfectly, and also seems on production server, but despite not receiving any error, don't save the images.This is the output on production server:2013-07-10 05:12:33+0200 [scrapy] INFO: Scrapy 0.16.5 started (bot: mybot)2013-07-10 05:12:33+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState2013-07-10 05:12:33+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats2013-07-10 05:12:33+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware2013-07-10 05:12:33+0200 [scrapy] DEBUG: Enabled item pipelines: CustomImagesPipeline2013-07-10 05:12:33+0200 [bh] INFO: Spider opened2013-07-10 05:12:33+0200 [bh] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)2013-07-10 05:12:33+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:60232013-07-10 05:12:33+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:60802013-07-10 05:12:34+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/find/brands.jsp> (referer: None)2013-07-10 05:12:37+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/c/browse/BrandName/ci/5732/N/4232860366> (referer: http://www.mysite.com/find/brands.jsp)2013-07-10 05:12:41+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/c/browse/Accessories-for-Camcorders/ci/5766/N/4232860347> (referer: http://www.mysite.com/c/browse/BrandName/ci/5732/N/4232860366)2013-07-10 05:12:44+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/c/buy/CategoryName/ci/5786/N/4232860316> (referer: 
http://www.mysite.com/c/browse/BrandName/ci/5732/N/4232860366)2013-07-10 05:12:46+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/images/images500x500/927001.jpg> (referer: None)2013-07-10 05:12:46+0200 [bh] DEBUG: Image (downloaded): Downloaded image from <GET http://www.mysite.com/images/images500x500/927001.jpg> referred in <None>2013-07-10 05:12:46+0200 [bh] DEBUG: Scraped from <200 http://www.mysite.com/c/buy/CategoryName/ci/5786/N/4232860316> {'code': u'RFE234', 'image_urls': u'http://www.mysite.com/images/images500x500/927001.jpg', 'images': []}2013-07-10 05:12:50+0200 [bh] DEBUG: Crawled (200) <GET http://www.mysite.com/images/images500x500/896290.jpg> (referer: None)2013-07-10 05:12:50+0200 [bh] DEBUG: Image (downloaded): Downloaded image from <GET http://www.mysite.com/images/images500x500/896290.jpg> referred in <None>2013-07-10 05:12:50+0200 [bh] DEBUG: Scraped from <200 http://www.mysite.com/c/buy/CategoryName/ci/5786/N/4232860316> {'code': u'ABCD123', 'image_urls': u'http://www.mysite.com/images/images500x500/896290.jpg', 'images': []}2013-07-10 05:13:18+0200 [bh] INFO: Closing spider (finished)2013-07-10 05:13:18+0200 [bh] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 11107, 'downloader/request_count': 14, 'downloader/request_method_count/GET': 14, 'downloader/response_bytes': 527125, 'downloader/response_count': 14, 'downloader/response_status_count/200': 14, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2013, 7, 10, 3, 13, 18, 673536), 'image_count': 10, 'image_status_count/downloaded': 10, 'item_scraped_count': 10, 'log_count/DEBUG': 40, 'log_count/INFO': 4, 'request_depth_max': 2, 'response_received_count': 14, 'scheduler/dequeued': 4, 'scheduler/dequeued/memory': 4, 'scheduler/enqueued': 4, 'scheduler/enqueued/memory': 4, 'start_time': datetime.datetime(2013, 7, 10, 3, 12, 33, 367609)}2013-07-10 05:13:18+0200 [bh] INFO: Spider closed (finished)The difference that I noticed is the 'images' variable on my Item 
that is an empty list [], whereas locally it normally looks like this:

2013-07-10 00:22:31-0300 [bh] DEBUG: Scraped from <200 http://www.mysite.com/c/buy/CategoryName/ci/5742/N/4232860364> {'code': u'BGT453', 'image_urls': u'http://www.mysite.com/images/images500x500/834569.jpg', 'images': [{'checksum': 'ef2e2e42eeb06591bdfbdee568d29df1', 'path': u'bh/BGT453.jpg', 'url': 'http://www.mysite.com/images/images500x500/834569.jpg'}]}

The main problem is that there is no error in the output, and therefore I do not know what to do to solve the problem. I have PIL updated and the same scrapy version 0.16.5 and python 2.7 on both machines.

UPDATE 1

...
2013-07-10 06:48:50+0200 [scrapy] DEBUG: This is a DEBUG on CustomImagesPipeline !!
...

UPDATE 2

I created the CustomImagesPipeline to save the images with the product code as file name. I copied the code from ImagesPipeline and made only some changes.

from scrapy import log
from twisted.internet import defer, threads
from scrapy.http import Request
from cStringIO import StringIO
from PIL import Image
import time
from scrapy.contrib.pipeline.images import ImagesPipeline

class CustomImagesPipeline(ImagesPipeline):
    def image_key(self, url, image_name):
        path = 'bh/%s.jpg' % image_name
        return path

    def get_media_requests(self, item, info):
        log.msg("This is a DEBUG on CustomImagesPipeline !!", level=log.DEBUG)
        yield Request(item['image_urls'], meta=dict(image_name=item['code']))

    def get_images(self, response, request, info):
        key = self.image_key(request.url, request.meta.get('image_name'))
        orig_image = Image.open(StringIO(response.body))
        width, height = orig_image.size
        if width < self.MIN_WIDTH or height < self.MIN_HEIGHT:
            raise ImageException("Image too small (%dx%d < %dx%d)" %
                                 (width, height, self.MIN_WIDTH, self.MIN_HEIGHT))
        image, buf = self.convert_image(orig_image)
        yield key, image, buf
        for thumb_id, size in self.THUMBS.iteritems():
            thumb_key = self.thumb_key(request.url, thumb_id)
            thumb_image, thumb_buf = self.convert_image(image, size)
            yield thumb_key, thumb_image, thumb_buf

    def media_downloaded(self, response, request, info):
        referer = request.headers.get('Referer')
        if response.status != 200:
            log.msg(format='Image (code: %(status)s): Error downloading image from %(request)s referred in <%(referer)s>',
                    level=log.WARNING, spider=info.spider,
                    status=response.status, request=request, referer=referer)
            raise ImageException('download-error')
        if not response.body:
            log.msg(format='Image (empty-content): Empty image from %(request)s referred in <%(referer)s>: no-content',
                    level=log.WARNING, spider=info.spider,
                    request=request, referer=referer)
            raise ImageException('empty-content')
        status = 'cached' if 'cached' in response.flags else 'downloaded'
        log.msg(format='Image (%(status)s): Downloaded image from %(request)s referred in <%(referer)s>',
                level=log.DEBUG, spider=info.spider,
                status=status, request=request, referer=referer)
        self.inc_stats(info.spider, status)
        try:
            key = self.image_key(request.url, request.meta.get('image_name'))
            checksum = self.image_downloaded(response, request, info)
        except ImageException as exc:
            whyfmt = 'Image (error): Error processing image from %(request)s referred in <%(referer)s>: %(errormsg)s'
            log.msg(format=whyfmt, level=log.WARNING, spider=info.spider,
                    request=request, referer=referer, errormsg=str(exc))
            raise
        except Exception as exc:
            whyfmt = 'Image (unknown-error): Error processing image from %(request)s referred in <%(referer)s>'
            log.err(None, whyfmt % {'request': request, 'referer': referer}, spider=info.spider)
            raise ImageException(str(exc))
        return {'url': request.url, 'path': key, 'checksum': checksum}

    def media_to_download(self, request, info):
        def _onsuccess(result):
            if not result:
                return  # returning None forces download
            last_modified = result.get('last_modified', None)
            if not last_modified:
                return  # returning None forces download
            age_seconds = time.time() - last_modified
            age_days = age_seconds / 60 / 60 / 24
            if age_days > self.EXPIRES:
                return  # returning None forces download
            referer = request.headers.get('Referer')
            log.msg(format='Image (uptodate): Downloaded %(medianame)s from %(request)s referred in <%(referer)s>',
                    level=log.DEBUG, spider=info.spider,
                    medianame=self.MEDIA_NAME, request=request, referer=referer)
            self.inc_stats(info.spider, 'uptodate')
            checksum = result.get('checksum', None)
            return {'url': request.url, 'path': key, 'checksum': checksum}

        key = self.image_key(request.url, request.meta.get('image_name'))
        dfd = defer.maybeDeferred(self.store.stat_image, key, info)
        dfd.addCallbacks(_onsuccess, lambda _: None)
        dfd.addErrback(log.err, self.__class__.__name__ + '.store.stat_image')
        return dfd

Local system: Mac OS X. Production server: Debian GNU/Linux 7 (wheezy).
From the docs: "The images in the list of the images field will retain the same order of the original image_urls field. If some image failed downloading, an error will be logged and the image won't be present in the images field."

It seems logging must be explicitly enabled, for example like this:

    from scrapy import log
    log.msg("This is a warning", level=log.WARNING)

So please enable logging, and edit your question to include the error you're getting.
Is tzinfo=tzutc() the same as +00:00 in Python?

Are both of these time formats equivalent in Python?

    datetime.datetime(2013, 6, 17, 7, 46, 0, 609263, tzinfo=tzutc())
    datetime.datetime(2013, 6, 17, 7, 46, 0, 609263, +00:00)

Also, is there a way to replace tzinfo=tzutc() with +00:00 and vice versa?
If you look into the source of dateutil (which I assume you are using):

    ZERO = datetime.timedelta(0)  # same as 00:00

    class tzutc(datetime.tzinfo):
        def utcoffset(self, dt):
            return ZERO
        def dst(self, dt):
            return ZERO

You'll see that, as far as the datetime object you attach tzutc to is concerned, the two are equivalent, because tzutc will return the following:

    datetime.timedelta(0)

But the class also includes a whole range of functions that you may find useful at some point if you wish to use them. You can replace them easily enough by just using a variable and using that variable in every place you would use either 00:00 or tzutc.
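To see the equivalence concretely, here is a small stdlib-only sketch (no dateutil needed): datetime.timezone.utc plays the same role as tzutc() — its utcoffset() is a zero timedelta, which is exactly what "+00:00" means — and replace() is how you swap a tzinfo in or out.

```python
from datetime import datetime, timezone

# A UTC-aware datetime built with the stdlib; dateutil's tzutc()
# behaves the same way: utcoffset() is a zero timedelta, i.e. "+00:00".
dt = datetime(2013, 6, 17, 7, 46, 0, 609263, tzinfo=timezone.utc)

print(dt.utcoffset())   # 0:00:00 -> the "+00:00" offset
print(dt.isoformat())   # 2013-06-17T07:46:00.609263+00:00

# "Replacing" the tzinfo is done with datetime.replace():
naive = dt.replace(tzinfo=None)            # drop the zone entirely
back = naive.replace(tzinfo=timezone.utc)  # re-attach UTC
print(back == dt)                          # True
```

So there is nothing to convert between the two notations: "+00:00" is just how a zero-offset tzinfo renders in ISO output.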
Find all partitions of n of length less-than-or-equal to L How might I find all the partitions of n that have length less-than-or-equal-to L?
Based on the code given here, we can include an additional argument L (which defaults to n).

We might naively include if len((i,) + p) <= L: before yield (i,) + p. However, since len((i,) + p) = 1 + len(p), any partitions of n-i that are longer than L-1 are discarded, so time is wasted finding them. Instead, we should pass L=L-1 as an argument when finding partitions of n-i. We then need to deal with the L=0 case properly, by not running the main body:

    def partitions(n, L=None, I=1):
        if L is None:
            L = n
        if L:
            yield (n,)
            for i in range(I, n//2 + 1):
                for p in partitions(n-i, L-1, i):
                    yield (i,) + p

Now if L=1, the for i loop will be executed, but none of the for p loops will yield anything, since the recursive partitions calls won't yield anything; we need not execute the for i loop at all in this case, which can save a lot of time:

    def partitions(n, L=None, I=1):
        if L is None:
            L = n
        if L == 1:
            yield (n,)
        elif L > 1:
            yield (n,)
            for i in range(I, n//2 + 1):
                for p in partitions(n-i, L-1, i):
                    yield (i,) + p
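Putting the final version together with a quick check; the expected results below were worked out by hand from the recursion (parts come out in ascending order within each tuple):

```python
def partitions(n, L=None, I=1):
    # All partitions of n with at most L parts, each part >= I,
    # parts listed in ascending order within a tuple.
    if L is None:
        L = n
    if L == 1:
        yield (n,)
    elif L > 1:
        yield (n,)
        for i in range(I, n // 2 + 1):
            for p in partitions(n - i, L - 1, i):
                yield (i,) + p

# Partitions of 5 with at most 2 parts:
print(sorted(partitions(5, 2)))  # [(1, 4), (2, 3), (5,)]
```

For example, partitions(6, 3) yields exactly the seven partitions of 6 with at most three parts: (6,), (1,5), (2,4), (3,3), (1,1,4), (1,2,3) and (2,2,2).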
Creating a new variable with the average of categories of another variable

I have data of houses sold in different locations. There is a variable "zipcode" and a variable "price". I have to predict, for every row, the average price for its zipcode.

    import pandas as pd

    data = {"zipcode": [100, 100, 101, 101], "price": [500, 600, 800, 1000]}
    df = pd.DataFrame(data)
    df

I create a Series with the average price for every zipcode:

    zipcode_mprice = df.groupby(["zipcode"])["price"].mean()
    zipcode_mprice

How can I create a new variable df["pred_price"] that gives me the average price of the corresponding zipcode? I was told to use the function replace(). Thank you!
You can actually merge the result with the dataframe:

    df = df.merge(zipcode_mprice, on="zipcode")
    df.columns = ["zipcode", "price", "mean_zipcode"]
    df
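As an alternative to the merge, groupby(...).transform('mean') broadcasts the per-group average straight back onto every row, with no column-renaming step; a short sketch (assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({"zipcode": [100, 100, 101, 101],
                   "price": [500, 600, 800, 1000]})

# transform() returns a Series aligned to df's index, one value per row,
# so it can be assigned directly as the prediction column:
df["pred_price"] = df.groupby("zipcode")["price"].transform("mean")
print(df)
```

Here every row with zipcode 100 gets 550.0 and every row with zipcode 101 gets 900.0.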
How to fix ERR_TOO_MANY_REDIRECTS?

I'm developing a site on Django, but I got the error ERR_TOO_MANY_REDIRECTS. I think the problem is in the views.py file. Help me figure it out. P.S. I already tried deleting the cookie files; it didn't help.

    from email import message
    from wsgiref.util import request_uri
    from django.shortcuts import redirect, render
    from django.contrib.auth.models import User, auth
    from django.contrib import messages

    # Create your views here.
    def reg(request):
        if request.method == 'POST':
            username = request.POST['username']
            password = request.POST['password']
            cpassword = request.POST['cpassword']
            if password == cpassword:
                if User.objects.filter(username=username):
                    messages.info(request, 'Username taken')
                    return redirect('registration')
                else:
                    user = User.objects.create_user(username=username, password=password)
                    user.save()
                    return redirect('login')
            else:
                messages.info(request, 'Passwords not matching')
                return redirect('registration')
            return redirect('/')
        else:
            return render(request, 'registration.html')

    def login(request):
        if request.method == "POST":
            username = request.POST['username']
            password = request.POST['password']
            user = auth.authenticate(username=username, password=password)
            if user is not None:
                auth.login(request, user)
                return redirect('/')
            else:
                messages.info(request, 'Invalid credentials')
                return redirect('login')
        else:
            return render(request, 'login.html')

    def logout(request):
        auth.logout(request)
        return redirect('/')
The problem is coming from your return redirect('/'). Redirect to one of the views written in your urls.py and your problem will be solved.
How to execute a function on two values of two separate arrays

    for value in distance_moduli_error_array:
        DM_error = (np.log(10)*(10**((distance_moduli_array/5)+1))*(value*0.2))
        list.append(distance_to_galaxies_parsecs_error, DM_error)

distance_moduli_error_array and distance_moduli_array are two arrays, each with 8 values. I'm trying to figure out the best way to execute the calculation stored in the DM_error variable on each value in both arrays. My code above doesn't work because, for each value in distance_moduli_error_array, it does the calculation for every value in distance_moduli_array, whereas I want a 1-to-1 calculation.
Use zip:

    for x, y in zip(distance_moduli_error_array, distance_moduli_array):
        DM_error = (np.log(10)*(10**((y/5)+1))*(x*0.2))
        distance_to_galaxies_parsecs_error.append(DM_error)  # idiomatic form of list.append(lst, item)
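The same pairwise idea can be sketched without NumPy, using math.log and plain lists; the input values below are made-up placeholders, not the asker's data:

```python
import math

# Hypothetical sample values standing in for the two 8-element arrays:
distance_moduli_error_array = [0.1, 0.2]
distance_moduli_array = [30.0, 35.0]

errors = []
for x, y in zip(distance_moduli_error_array, distance_moduli_array):
    # Same formula as in the answer, with math.log in place of np.log;
    # zip pairs up element i of each list, giving the 1-to-1 calculation.
    dm_error = math.log(10) * (10 ** ((y / 5) + 1)) * (x * 0.2)
    errors.append(dm_error)

print(errors)
```

zip stops at the shorter input, so with two equal-length arrays you get exactly one result per pair.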
Where do I find the python source code/documentation for the selenium ActionBuilder class? I would like to rewrite the selenium ActionChains class and noticed that it uses the ActionBuilder class. Browsing the python documentation and the internet I was only able to find a documentation for the ruby and c# implementation of the ActionBuilder class and not one for python. Does it simply not exist? What am I missing here?
You can find the source code in the selenium GitHub repository, but I don't know whether any documentation for the ActionBuilder class exists. Here is the link to the action_builder.py file: https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/common/actions/action_builder.py
Why are tables not being created in flask SQLAlchemy?

This is what I have in my app.py:

    from flask import Flask, render_template, url_for, request, redirect
    from flask_sqlalchemy import SQLAlchemy
    from datetime import datetime

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///D:/Documents/.my projects/flask-website/blog.db'
    db = SQLAlchemy(app)

    class Blogpost(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        title = db.Column(db.String(50))
        subtitle = db.Column(db.String(50))
        author = db.Column(db.String(20))
        date_posted = db.Column(db.DateTime)
        content = db.Column(db.Text)

    if __name__ == '__main__':
        app.run(debug=True)

and in a Python terminal, I try this:

    >>> from app import db
    >>> db.create_all()

Then I check to see if the table has been created using the command prompt:

    > sqlite3 blog.db
    > .tables

Nothing gets returned, which I believe means that no tables are in the database. I'm following the tutorial here, but maybe the tutorial is out of date, so I'm not really sure where to go from here. I am using Python 3.9.
Turns out I had my file path wrong in app.config['SQLALCHEMY_DATABASE_URI']... currently hitting my head on my desk because I have spent more time than I care to admit on this issue.
How can I achieve a 30 FPS frame rate using this Python screen recorder code?

I want a screen recorder, and I thought of making my own. I checked the internet and found: https://www.thepythoncode.com/code/make-screen-recorder-python

The code:

    import cv2
    import numpy as np
    import pyautogui

    # Display screen resolution, get it from your OS settings
    SCREEN_SIZE = (1366, 768)
    # Define the codec
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    # Create the video write object
    out = cv2.VideoWriter("output.avi", fourcc, 30.0, (SCREEN_SIZE))

    while True:
        # make a screenshot
        img = pyautogui.screenshot()
        # img = pyautogui.screenshot(region=(0, 0, 300, 400))
        # convert these pixels to a proper numpy array to work with OpenCV
        frame = np.array(img)
        # convert colors from BGR to RGB
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # write the frame
        out.write(frame)
        # show the frame
        cv2.imshow("screenshot", frame)
        # if the user clicks q, it exits
        if cv2.waitKey(1) == ord("q"):
            break

    # Make sure everything is closed when exited
    cv2.destroyAllWindows()
    out.release()

The problem: when I run this, it works, but the output plays back at a random speed. The FPS is set to 30, but when I record for 1 minute, the video is 5 seconds or 10 minutes long (random). How do I make this recorder produce output at 30 FPS with the correct speed?
Basically, if you want to continue with your same code, you will have to compromise on resolution or frame rate. My suggestion is to try the cv2.VideoCapture() functionality.

I am attaching a link to a webpage with a detailed step-by-step process where the author achieved an FPS rate of 30.75: https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/

The second half of the content at that link has the faster, threaded method of reading video frames with OpenCV:

    # import the necessary packages
    from imutils.video import FileVideoStream
    from imutils.video import FPS
    import numpy as np
    import argparse
    import imutils
    import time
    import cv2

    # construct the argument parse and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", required=True,
                    help="path to input video file")
    args = vars(ap.parse_args())

    # start the file video stream thread and allow the buffer to
    # start to fill
    print("[INFO] starting video file thread...")
    fvs = FileVideoStream(args["video"]).start()
    time.sleep(1.0)

    # start the FPS timer
    fps = FPS().start()

    # loop over frames from the video file stream
    while fvs.more():
        # grab the frame from the threaded video file stream, resize
        # it, and convert it to grayscale (while still retaining 3
        # channels)
        frame = fvs.read()
        frame = imutils.resize(frame, width=450)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame = np.dstack([frame, frame, frame])

        # display the size of the queue on the frame
        cv2.putText(frame, "Queue Size: {}".format(fvs.Q.qsize()),
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

        # show the frame and update the FPS counter
        cv2.imshow("Frame", frame)
        cv2.waitKey(1)
        fps.update()

    # stop the timer and display FPS information
    fps.stop()
    print("[INFO] elasped time: {:.2f}".format(fps.elapsed()))
    print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

    # do a bit of cleanup
    cv2.destroyAllWindows()
    fvs.stop()
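Note that the speed drift in the original recorder comes from writing frames at whatever rate the capture loop happens to run, while the file header promises 30 fps. One way to tame that is to pace the capture loop to the target interval; a minimal stdlib sketch (the grab_frame callback is a stand-in for the screenshot/write step, not part of any library):

```python
import time

TARGET_FPS = 30
FRAME_INTERVAL = 1.0 / TARGET_FPS

def record(n_frames, grab_frame):
    """Call grab_frame() roughly every 1/30 s so the number of frames
    written matches the 30 fps declared to the video encoder."""
    next_deadline = time.perf_counter()
    for _ in range(n_frames):
        grab_frame()
        next_deadline += FRAME_INTERVAL
        delay = next_deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)  # capture was faster than 30 fps: wait
        # if delay <= 0, capture is too slow for 30 fps; the video
        # will still play too fast unless resolution/fps is reduced

# tiny self-check: 10 paced no-op frames should take roughly 10/30 s
start = time.perf_counter()
record(10, lambda: None)
print(time.perf_counter() - start)
```

If the screenshot itself takes longer than 1/30 s, no amount of pacing helps; that is where the resolution/frame-rate compromise above comes in.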
what is the easiest way to find if a specific value exists in a table with multiple rows on a website using selenium in python?

I am trying to make my script perform a specific action based on the existence of a value in a table row on a website. E.g. if x is in row 1 of table 'lab', create an investigation, else move to the next row and check if x is in that row. Sorry, the website I am trying this on is not accessible to those who do not have an account, but please see a simpler version of my code to help figure this out. As of now, I am stuck on the second for loop; the code below goes through each row and prints it out but just hangs. Might just be a break I am needing, but I have tried it all (break, continue, pass).

    #for each patient id in list, find the section in the patients web-table by looking through each row where the covid name and result is stored
    #search for results table within a table in a web page for eicrs
    elems = driver.find_elements_by_xpath('//*[@id="xmlBlock"]/ul[1]/li/table[1]/tbody')
    for idx, elem in enumerate(elems, 1):
        for rownum, lab in enumerate(elem.text.split('\n'), 1):
            #get lab test from first column for each row
            lab_test = text_xpath(f'//*[@id="xmlBlock"]/ul[1]/li[{idx}]/table[1]/tbody/tr[{rownum}]/td[1]')
            #get result from second column for each row
            lab_result = text_xpath(f'//*[@id="xmlBlock"]/ul[1]/li[{idx}]/table[1]/tbody/tr[{rownum}]/td[2]')
            #if lab test is in list of rna tests and positive lab regex, create CONFIRMED investigation
            if re.search(covid_test_names, lab_test.lower()) and re.search(pos_regex, lab_result.lower()):
                print('Log update: created confirmed investigation')
            #else if lab test is in list of antigen tests and positive lab regex, create PROBABLE investigation
            elif re.search(ant_regex, lab_test.lower()) and re.search(antigen_pos_regex, lab_result.lower()):
                print('Log update: created probable investigation')
            else:
                print('Log doc: No lab test matches regex', lab_test, lab_result)
                continue #continue loop through rows
            continue #not sure if needed
        break #break out of topmost for loop and move to next line of code once value has been found to match condition
    print('done with that')
If I understand correctly what you need, you should remove both continues from your code, remove the break at the bottom as well, and instead add a break inside the if and elif blocks, so that once you have found the condition you are looking for and performed the action you need, you break out of the loop. Like this:

    elems = driver.find_elements_by_xpath('//*[@id="xmlBlock"]/ul[1]/li/table[1]/tbody')
    for idx, elem in enumerate(elems, 1):
        for rownum, lab in enumerate(elem.text.split('\n'), 1):
            #get lab test from first column for each row
            lab_test = text_xpath(f'//*[@id="xmlBlock"]/ul[1]/li[{idx}]/table[1]/tbody/tr[{rownum}]/td[1]')
            #get result from second column for each row
            lab_result = text_xpath(f'//*[@id="xmlBlock"]/ul[1]/li[{idx}]/table[1]/tbody/tr[{rownum}]/td[2]')
            #if lab test is in list of rna tests and positive lab regex, create CONFIRMED investigation
            if re.search(covid_test_names, lab_test.lower()) and re.search(pos_regex, lab_result.lower()):
                print('Log update: created confirmed investigation')
                break
            #else if lab test is in list of antigen tests and positive lab regex, create PROBABLE investigation
            elif re.search(ant_regex, lab_test.lower()) and re.search(antigen_pos_regex, lab_result.lower()):
                print('Log update: created probable investigation')
                break
            else:
                print('Log doc: No lab test matches regex', lab_test, lab_result)

    print('done with that')

But maybe I do not understand your logic and the question.
Kivy add widget won't appear on screen

I'm stuck with the following problem: when I run the following code, it seems to work:

    class Board(GridLayout):
        def __init__(self, numLines=8, numCols=8, **kwargs):
            # constructor of the board
            GridLayout.__init__(self, **kwargs)
            self.finish_game = Button()
            # Code that operates on the button
            self.finish_game.text = "You lose"
            self.add_widget(self.finish_game)
            # The rest of the code that doesn't matter for now
            ...

    class TestApp(App):
        def build(self):
            self.title = 'based graphics'
            return Board()

    TestApp().run()

But when I try this, I can see in debugging mode that it goes inside Board(), but it doesn't show anything on screen:

    class StartBoard(Layout):
        def __init__(self):
            Layout.__init__(self)
            # Some code that works and not important
            return Board()

    class Board(GridLayout):
        ....  # As before

    class TestApp(App):
        def build(self):
            self.title = 'based graphics'
            return StartBoard()

    TestApp().run()

I know it's not the full code, but maybe you could explain how TestApp().run() works, and why Board() shows widgets when it is returned from TestApp.build() but not from StartBoard().
Running Board() creates an instance of the Board class, but that doesn't fundamentally draw anything. It will appear on your screen only if you add it to your widget tree somehow.In your code you return Board(), but that return doesn't go anywhere so the Board() is instantiated and immediately discarded.You probably want something like self.add_widget(Board()) instead.
How to import pyinstaller modules/files I have files with functions which I've already compiled with pyinstaller. How would I import these files' functions into a new python file? Is this even possible?(The idea is for them to be somewhat of an equivalent to a Windows dll. Ideally I would like to dynamically import functions from these files.)Thanks in advance!
As far as I know, no. PyInstaller basically creates a compiled .pyc file plus the interpreter. If you want something like a DLL, you may want to look at .pyd files instead.
Python distribute 8 bits into beginnings of 4 x 8 bits, two by two

I have an integer that is 8 bits, and I want to distribute these bits, two by two, into the beginnings of 4 integers (4 x 8 bits). For example:

    bit_8 = 0b_10_11_00_11
    bit_32 = b"\x12\x32\x23\54"  # --> [0b100_10, 0b1100_10, 0b1000_11, 0b1011_00]
    what_i_want = [0b100_10, 0b1100_11, 0b1000_00, 0b1011_11]

For readability I wrote the numbers in a list, but I want them as bytes. I am not very good at bit manipulation and I couldn't find a good way. I will repeat this process many times, so I need a fast solution. I found a way of setting bits one by one here, but I wonder if there is a better way for my problem. The language is not so important — I need an algorithm — but I prefer Python.
You could do it by iterating over bit_32 in reverse, each time taking the last two bits of bit_8 and then shifting it right. This way, you can build a list of the output values in reverse order, which you reorder while converting to bytes.

    bit_8 = 0b_10_11_00_11
    bit_32 = b"\x12\x32\x23\54"  # --> [0b100_10, 0b1100_10, 0b1000_11, 0b1011_00]
    what_i_want = [0b100_10, 0b1100_11, 0b1000_00, 0b1011_11]

    out_lst = []
    for b in reversed(bit_32):
        bits_from_bit_8 = bit_8 & 0b11  # last two bits of bit_8
        bit_8 >>= 2                     # shift it right by two bits
        out_lst.append(b & 0b11111100 | bits_from_bit_8)

    out = bytes(reversed(out_lst))
    print(out)
    # b'\x123 /'

    # Check that this is the expected output:
    print([i for i in out], what_i_want)
    # [18, 51, 32, 47] [18, 51, 32, 47]
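Wrapped in a reusable function, the same loop can be checked against the expected bytes from the question (0x2c below is just the question's \54 octal escape written in hex):

```python
def spread_bits(bit_8, data):
    """Put consecutive 2-bit chunks of bit_8 into the two low bits of
    each byte of `data`, lowest chunk going into the last byte."""
    out = []
    for b in reversed(data):
        out.append(b & 0b11111100 | (bit_8 & 0b11))  # clear low 2 bits, insert chunk
        bit_8 >>= 2
    return bytes(reversed(out))

result = spread_bits(0b10_11_00_11, b"\x12\x32\x23\x2c")
print(list(result))  # [18, 51, 32, 47] -- matches what_i_want
```

Because it only uses shifts, masks and one pass over the bytes, this stays fast even when repeated many times.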
How do I get the July month of all years in a yearly time series? (Jupyter notebook)

I need some help to get my script to plot my SPI values only for July. My script looks like this:

    from pandas import read_csv
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import os
    import cartopy
    %matplotlib inline

    df = pd.read_csv('SPI1_and_rr_for_200011.0.csv', header=0)
    df

and it reads this:

                        time          rr       spi
    0    1985-01-16 00:00:00   42.200000  0.452561
    1    1985-02-14 12:00:00   52.300000  1.383562
    2    1985-03-16 00:00:00   21.900000 -0.562075
    3    1985-04-15 12:00:00   35.600002  0.562016
    4    1985-05-16 00:00:00   22.400000 -0.699583
    ..                   ...         ...       ...
    403  2018-08-16 00:00:00  110.400000  1.094294
    404  2018-09-15 12:00:00   74.400000  0.451431
    405  2018-10-16 00:00:00   44.400000 -0.071395
    406  2018-11-15 12:00:00   26.100000 -1.293115
    407  2018-12-16 00:00:00   51.000000  0.792487

then I plot and get this:

    df.plot(y='spi', x='time')
Make sure df['time'] is of type datetime and use the dt accessor to filter by month.

    # Convert to datetime
    df['time'] = pd.to_datetime(df['time'])

    # Filter by month number (July == 7)
    july_df = df[df['time'].dt.month == 7]
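The same month filter can be sketched with only the standard library, which is handy for checking the idea without pandas; the rows below are made-up samples in the question's timestamp format:

```python
from datetime import datetime

# (timestamp, spi) pairs in the same format as the question's 'time' column
rows = [
    ("1985-06-16 00:00:00", -0.70),
    ("1985-07-16 12:00:00", 0.56),
    ("2018-07-16 00:00:00", 1.09),
    ("2018-12-16 00:00:00", 0.79),
]

# keep only rows whose parsed month is July (month number 7)
july = [(t, v) for t, v in rows
        if datetime.strptime(t, "%Y-%m-%d %H:%M:%S").month == 7]
print(july)
```

This is exactly what the pandas dt.month == 7 mask does, applied row by row.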
How to solve the WeasyPrint gobject-2.0-0 error 0x7e message?

I installed several files based upon https://pbpython.com/pdf-reports.htm to create reports. However, I get the following error message:

    Traceback (most recent call last):
      File "C:\histdata\test02.py", line 10, in <module>
        from weasyprint import HTML
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\__init__.py", line 322, in <module>
        from .css import preprocess_stylesheet  # noqa isort:skip
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\css\__init__.py", line 27, in <module>
        from . import computed_values, counters, media_queries
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\css\computed_values.py", line 16, in <module>
        from ..text.ffi import ffi, pango, units_to_double
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\text\ffi.py", line 380, in <module>
        gobject = _dlopen(
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\text\ffi.py", line 377, in _dlopen
        return ffi.dlopen(names[0])  # pragma: no cover
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 150, in dlopen
        lib, function_cache = _make_ffi_library(self, name, flags)
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 832, in _make_ffi_library
        backendlib = _load_backend_lib(backend, libname, flags)
      File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 827, in _load_backend_lib
        raise OSError(msg)
    OSError: cannot load library 'gobject-2.0-0': error 0x7e. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'

Any suggestions? Thanks in advance.
(Please note that there is a similar issue on github which tells the individual to install GTK3.) Is this correct?
The error means that the gobject-2.0.0 library, which is part of GTK3+, cannot be found. Did you follow the installation instructions (https://doc.courtbouillon.org/weasyprint/stable/first_steps.html), which include installation of GTK3+? If no, do that. If yes, then the problem is, that the GTK3+ DLLs are not where Python is looking for them. For this, you need to add the directory containing the DLLs (e.g. C:\Program Files\GTK3-Runtime Win64\bin on Windows) to your PATH environment variable. That directory contains the relevant libgobject-2.0-0.dll library.For Python 3.8+ and weasyprint 54+ you can manually set the path to your GTK3+ library with the environment variable WEASYPRINT_DLL_DIRECTORIES (documentation).
Beautiful Soup - making a list

I've been at it for several days in the soup, trying to scrape a simple HTML structure into a list to make a dataframe. If it were HTML tables I would have no problem. I am working with a structure like:

    <div class="someTypeofRow">
     <a href="/mainpage/choc.html">
      Chocolate flavor
     </a>
     <span class="yearText">
      (2009)
     </span>
     <br/>
     <a href="/mainpage/van.html">
      Vanilla flavor
     </a>
     <span class="yearText">
      (2004)
     </span>
     <br/>

And I hope to make a list out of it such that it could be put into a dataframe:

    list = [('/mainpage/choc.html', 'Chocolate flavor', '2009'),
            ('/mainpage/van.html', 'Vanilla flavor', '2004')]

I am able to get the href so far:

    firstlist = []
    jims = soup.find(class_='someOtherRow')
    for jim in jims.find_all('a', href=True):
        if jim.text:
            firstlist.append(jim['href'])
    print(firstlist)

I am able to get the text stuff separately:

    car_elems = soup.find(class_='someOtherRow')
    d1 = car_elems.find_all_next(string=True)
    for car_elem in car_elems:
        print(d1)

but I can't seem to put it all together or iterate correctly. Thank you for any suggestions.
You can select all <a> tags whose href= begins with "/mainpage" and then do .find_next() for <span class="yearText">. For example:

    import pandas as pd
    from bs4 import BeautifulSoup

    txt = '''
    <div class="someTypeofRow">
     <a href="/mainpage/choc.html">
      Chocolate flavor
     </a>
     <span class="yearText">
      (2009)
     </span>
     <br/>
     <a href="/mainpage/van.html">
      Vanilla flavor
     </a>
     <span class="yearText">
      (2004)
     </span>
     <br/>
    '''

    soup = BeautifulSoup(txt, 'html.parser')

    all_data = []
    for a in soup.select('a[href^="/mainpage"]'):
        all_data.append((a['href'],
                         a.get_text(strip=True),
                         a.find_next('span', class_='yearText').get_text(strip=True)))

    df = pd.DataFrame(all_data, columns=['URL', 'Flavour', 'Year'])
    print(df)

Prints:

                       URL           Flavour    Year
    0  /mainpage/choc.html  Chocolate flavor  (2009)
    1   /mainpage/van.html    Vanilla flavor  (2004)
Scikit-Learn One-hot-encode before or after train/test split

I am looking at two scenarios building a model using scikit-learn and I can not figure out why one of them is returning a result that is so fundamentally different than the other. The only thing different between the two cases (that I know of) is that in one case I am one-hot-encoding the categorical variables all at once (on the whole data) and then splitting between training and test. In the second case I am splitting between training and test and then one-hot-encoding both sets based off of the training data. The latter case is technically better for judging the generalization error of the process, but this case is returning a normalized gini that is dramatically different (and bad — essentially no model) compared to the first case. I know the first case gini (~0.33) is in line with a model built on this data.

Why is the second case returning such a different gini? FYI, the data set contains a mix of numeric and categorical variables.

Method 1 (one-hot encode entire data and then split). This returns: Validation Sample Score: 0.3454355044 (normalized gini).

    from sklearn.cross_validation import StratifiedKFold, KFold, ShuffleSplit, train_test_split, PredefinedSplit
    from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
    from sklearn.linear_model import LogisticRegression
    import numpy as np
    import pandas as pd
    from sklearn.feature_extraction import DictVectorizer as DV
    from sklearn import metrics
    from sklearn.preprocessing import StandardScaler
    from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
    from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
    from scipy.stats import randint, uniform
    from sklearn.metrics import mean_squared_error
    from sklearn.datasets import load_boston

    def gini(solution, submission):
        df = zip(solution, submission, range(len(solution)))
        df = sorted(df, key=lambda x: (x[1], -x[2]), reverse=True)
        rand = [float(i+1)/float(len(df)) for i in range(len(df))]
        totalPos = float(sum([x[0] for x in df]))
        cumPosFound = [df[0][0]]
        for i in range(1, len(df)):
            cumPosFound.append(cumPosFound[len(cumPosFound)-1] + df[i][0])
        Lorentz = [float(x)/totalPos for x in cumPosFound]
        Gini = [Lorentz[i]-rand[i] for i in range(len(df))]
        return sum(Gini)

    def normalized_gini(solution, submission):
        normalized_gini = gini(solution, submission)/gini(solution, solution)
        return normalized_gini

    # Normalized Gini Scorer
    gini_scorer = metrics.make_scorer(normalized_gini, greater_is_better=True)

    if __name__ == '__main__':
        dat = pd.read_table('/home/jma/Desktop/Data/Kaggle/liberty/train.csv', sep=",")
        y = dat[['Hazard']].values.ravel()
        dat = dat.drop(['Hazard', 'Id'], axis=1)

        folds = train_test_split(range(len(y)), test_size=0.30, random_state=15)  # 30% test

        # First one hot and make a pandas df
        dat_dict = dat.T.to_dict().values()
        vectorizer = DV(sparse=False)
        vectorizer.fit(dat_dict)
        dat = vectorizer.transform(dat_dict)
        dat = pd.DataFrame(dat)

        train_X = dat.iloc[folds[0], :]
        train_y = y[folds[0]]
        test_X = dat.iloc[folds[1], :]
        test_y = y[folds[1]]

        rf = RandomForestRegressor(n_estimators=1000, n_jobs=1, random_state=15)
        rf.fit(train_X, train_y)
        y_submission = rf.predict(test_X)
        print("Validation Sample Score: {:.10f} (normalized gini).".format(normalized_gini(test_y, y_submission)))

Method 2 (first split and then one-hot encode). This returns: Validation Sample Score: 0.0055124452 (normalized gini).

    from sklearn.cross_validation import StratifiedKFold, KFold, ShuffleSplit, train_test_split, PredefinedSplit
    from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
    from sklearn.linear_model import LogisticRegression
    import numpy as np
    import pandas as pd
    from sklearn.feature_extraction import DictVectorizer as DV
    from sklearn import metrics
    from sklearn.preprocessing import StandardScaler
    from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
    from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
    from scipy.stats import randint, uniform
    from sklearn.metrics import mean_squared_error
    from sklearn.datasets import load_boston

    def gini(solution, submission):
        df = zip(solution, submission, range(len(solution)))
        df = sorted(df, key=lambda x: (x[1], -x[2]), reverse=True)
        rand = [float(i+1)/float(len(df)) for i in range(len(df))]
        totalPos = float(sum([x[0] for x in df]))
        cumPosFound = [df[0][0]]
        for i in range(1, len(df)):
            cumPosFound.append(cumPosFound[len(cumPosFound)-1] + df[i][0])
        Lorentz = [float(x)/totalPos for x in cumPosFound]
        Gini = [Lorentz[i]-rand[i] for i in range(len(df))]
        return sum(Gini)

    def normalized_gini(solution, submission):
        normalized_gini = gini(solution, submission)/gini(solution, solution)
        return normalized_gini

    # Normalized Gini Scorer
    gini_scorer = metrics.make_scorer(normalized_gini, greater_is_better=True)

    if __name__ == '__main__':
        dat = pd.read_table('/home/jma/Desktop/Data/Kaggle/liberty/train.csv', sep=",")
        y = dat[['Hazard']].values.ravel()
        dat = dat.drop(['Hazard', 'Id'], axis=1)

        folds = train_test_split(range(len(y)), test_size=0.3, random_state=15)  # 30% test

        # first split
        train_X = dat.iloc[folds[0], :]
        train_y = y[folds[0]]
        test_X = dat.iloc[folds[1], :]
        test_y = y[folds[1]]

        # One hot encode the training X and transform the test X
        dat_dict = train_X.T.to_dict().values()
        vectorizer = DV(sparse=False)
        vectorizer.fit(dat_dict)
        train_X = vectorizer.transform(dat_dict)
        train_X = pd.DataFrame(train_X)

        dat_dict = test_X.T.to_dict().values()
        test_X = vectorizer.transform(dat_dict)
        test_X = pd.DataFrame(test_X)

        rf = RandomForestRegressor(n_estimators=1000, n_jobs=1, random_state=15)
        rf.fit(train_X, train_y)
        y_submission = rf.predict(test_X)
        print("Validation Sample Score: {:.10f} (normalized gini).".format(normalized_gini(test_y, y_submission)))
While the previous comments correctly suggest it is best to map over your entire feature space first, in your case both the Train and Test contain all of the feature values in all of the columns. If you compare the vectorizer.vocabulary_ between the two versions, they are exactly the same, so there is no difference in mapping. Hence, it cannot be causing the problem.

The reason Method 2 fails is because your dat_dict gets re-sorted by the original index when you execute this command:

    dat_dict = train_X.T.to_dict().values()

In other words, train_X has a shuffled index going into this line of code. When you turn it into a dict, the dict order re-sorts into the numerical order of the original index. This causes your Train and Test data to become completely de-correlated with y.

Method 1 doesn't suffer from this problem, because you shuffle the data after the mapping.

You can fix the issue by adding a .reset_index() both times you assign the dat_dict in Method 2, e.g.,

    dat_dict = train_X.reset_index(drop=True).T.to_dict().values()

This ensures the data order is preserved when converting to a dict. When I add that bit of code, I get the following results:

- Method 1: Validation Sample Score: 0.3454355044 (normalized gini)
- Method 2: Validation Sample Score: 0.3438430991 (normalized gini)
Appending HDFStore pandas TypeError

According to https://stackoverflow.com/a/46206376/11578009 I am trying to append to an HDFStore file:

    import pandas as pd

    hdfStore = pd.HDFStore('dataframe.h5')

    # df =
    #      a      b  ...  d                    f
    # 0  125 -6.450  ...  0  2020-04-16T02:30:00
    # 2  124 -6.403  ...  0  2020-04-16T02:30:00
    # 4  128 -6.403  ...  0  2020-04-16T02:30:00
    #
    # [3 rows x 5 columns]

    hdfStore.append('df', df, format='t', data_columns=True)

Trying to append this df to hdfStore throws:

    TypeError: object of type 'int' has no len()
I found the answer while replicating the error, but maybe it will be useful for somebody.

The error does not occur when the dtypes in the DataFrame are:

    df.dtypes
    Out[65]:
    time_diffrences     int64
    temp_diffrences     int64
    label              object
    dtype: object

So the dtypes can't be, for example, 'object' for an integer column.
I have a code where I import another code file, but the other code gets executed before my actual code runs

So I am making a voice assistant, and I have to import another file, but the imported file always gets executed. I know there is a fix using an if check, but none of the answers are elaborate enough.

    #voice_assistant
    import math
    import DayCal2  # DayCal is a code I made to calculate the number of days between a user input day and the current date
    ...

The DayCal code starts running as an individual code!
What you want to do is wrap everything in your second module that runs right away inside a check like this:

```python
if __name__ == "__main__":
    do_things()
```

This way, your `do_things()` function will be called if you open up this file and run it directly, BUT if you import this file, `__name__` will not be `"__main__"`, so your `do_things()` function will not be called in that case.
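A minimal sketch of how the imported `DayCal2`-style file can be structured; the function name and the demo date are assumptions, not taken from the original code:

```python
# DayCal2 sketch: importable logic at the top, demo code guarded at the bottom.
from datetime import date

def days_between(year, month, day):
    """Days from the given date until today (negative if the date is in the future)."""
    return (date.today() - date(year, month, day)).days

if __name__ == "__main__":
    # Runs only when executed directly, e.g. `python DayCal2.py`,
    # never on `import DayCal2`.
    print("days since 2020-01-01:", days_between(2020, 1, 1))
```

After this change, `import DayCal2` in the voice assistant only defines `days_between`; nothing under the guard runs.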
How can I create a list of records grouped by index in Pandas?

I have a CSV of records:

```
name,credits,email
bob,,test1@foo.com
bob,6.0,test@foo.com
bill,3.0,something_else@a.com
bill,4.0,something@a.com
tammy,5.0,hello@gmail.org
```

where `name` is the index. Because there are multiple records with the same name, I'd like to roll the entire row (minus the name) into a list to create JSON of the form:

```json
{
  "bob": [
    { "credits": null, "email": "test1@foo.com" },
    { "credits": 6.0, "email": "test@foo.com" }
  ],
  // ...
}
```

My current solution is a bit kludgey as it seems to use pandas only as a tool for reading CSV, but nonetheless it generates my expected JSONish output:

```python
#!/usr/bin/env python3
import io
import pandas as pd
from pprint import pprint
from collections import defaultdict


def read_data():
    s = """name,credits,email
bob,,test1@foo.com
bob,6.0,test@foo.com
bill,3.0,something_else@a.com
bill,4.0,something@a.com
tammy,5.0,hello@gmail.org"""
    data = io.StringIO(s)
    return pd.read_csv(data)


if __name__ == "__main__":
    df = read_data()
    columns = df.columns
    index_name = "name"
    print(df.head())
    records = defaultdict(list)
    name_index = list(columns.values).index(index_name)
    columns_without_index = [column for i, column in enumerate(columns) if i != name_index]
    for record in df.values:
        name = record[name_index]
        record_without_index = [field for i, field in enumerate(record) if i != name_index]
        remaining_record = {k: v for k, v in zip(columns_without_index, record_without_index)}
        records[name].append(remaining_record)
    pprint(dict(records))
```

Is there a way to do the same thing in native pandas (and numpy)?
Is that what you want?

```python
cols = df.columns.drop('name').tolist()
```

or, as recommended by @jezrael:

```python
cols = df.columns.difference(['name'])
```

and then:

```python
s = df.groupby('name')[cols].apply(lambda x: x.to_dict('r')).to_json()
```

Let's print it nicely:

```python
In [45]: print(json.dumps(json.loads(s), indent=2))
{
  "bill": [
    {
      "credits": 3.0,
      "email": "something_else@a.com"
    },
    {
      "credits": 4.0,
      "email": "something@a.com"
    }
  ],
  "bob": [
    {
      "credits": null,
      "email": "test1@foo.com"
    },
    {
      "credits": 6.0,
      "email": "test@foo.com"
    }
  ],
  "tammy": [
    {
      "credits": 5.0,
      "email": "hello@gmail.org"
    }
  ]
}
```
How to print the processing steps/report from model.fit MXNet Python

I am trying to train my 20x20 image dataset using the MXNet deep learning library; you can see the code below. The question is: when I run it, although it shows no errors, it returns nothing. I mean it does not show any processing like:

```
epoch 0 : ........accuracy:.....
epoch 1 : ........accuracy:.....
```

So how shall I make it print such a processing report, or where might the problem be?

Note: I tried all kinds of the Callback API (http://mxnet.io/api/python/callback.html) and none of them gives any response; the code runs with no errors but no processing steps are shown!

Thanks in advance

```python
X_train = []
training_flatten_rows_mxnet_csv = np.loadtxt("training_set_flatten_rows_mxnet.csv", delimiter=",")
train_data = training_flatten_rows_mxnet_csv
X_train = train_data.reshape((training_counter, 1, 20, 20))
Y_train = np.loadtxt("training_labels.csv", delimiter=",")

X_validate = []
validate_flatten_rows_mxnet_csv = np.loadtxt("validation_set_flatten_rows_mxnet.csv", delimiter=",")
validate_data = validate_flatten_rows_mxnet_csv
X_validate = validate_data.reshape((validate_counter, 1, 20, 20))
Y_validate = np.loadtxt("validate_labels.csv", delimiter=",")

train_iterator = mx.io.NDArrayIter(X_train, Y_train, batch_size=batch_size, shuffle=True)  # , last_batch_handle='discard')
validate_iterator = mx.io.NDArrayIter(X_validate, Y_validate, batch_size=batch_size, shuffle=True)

data = mx.sym.var('data')
conv1 = mx.sym.Convolution(data=data, kernel=(3, 3), num_filter=6)
relu1 = mx.sym.Activation(data=conv1, act_type="relu")
pool1 = mx.sym.Pooling(data=relu1, pool_type="max", kernel=(2, 2), stride=(2, 2))
conv2 = mx.sym.Convolution(data=pool1, kernel=(6, 6), num_filter=12)
relu2 = mx.sym.Activation(data=conv2, act_type="relu")
pool2 = mx.sym.Pooling(data=relu2, pool_type="max", kernel=(2, 2), stride=(2, 2))
flatten = mx.sym.flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=12)
lenet = mx.sym.SoftmaxOutput(data=fc1, name='softmax')

lenet_model = mx.mod.Module(symbol=lenet, context=mx.cpu())
lenet_model.fit(train_iterator,
                eval_data=validate_iterator,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                num_epoch=5)
```
Solved. Add these lines to your code:

```python
import logging
logging.getLogger().setLevel(logging.INFO)
```

For different types of processing report, refer to the "Callback API" in MXNet.
Detect difference in x, y direction between 2 images using OpenCV and ORB detector

I am trying to detect whether there is any shift in the x or y direction between 2 images; one of them is a reference image and the other one is a live image coming from a camera. The idea is to use the ORB detector to extract keypoints in the 2 images and then use a BFMatcher to find good matches. After that, do further analysis by checking whether the good matches have matching keypoint coordinates in image1 and image2; if they match, then we assume there is no shift. If there is an offset of, for example, 3px in the x direction across the whole set of good matches, then the image is shifted by 3px (maybe there is a better way of doing it?).

Up to now I am able to get keypoints between 2 images; however, I am not sure how to check the coordinates of those good matches in image1 and image2.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os.path
import helpers

referenceImage = None
liveImage = None
lowe_ratio = 0.75

orb = cv2.ORB_create()
bfMatcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
cap = cv2.VideoCapture(1)


def compareUsingOrb():
    kp1, des1 = orb.detectAndCompute(liveImage, None)
    print("For Live Image it detecting %d keypoints" % len(kp1))
    matches = bfMatcher.knnMatch(des1, des2, k=2)

    goodMatches = []
    for m, n in matches:
        if m.distance < lowe_ratio * n.distance:
            goodMatches.append([m])

    # Check good matches x, y coordinates
    img4 = cv2.drawKeypoints(referenceImage, kp2, outImage=np.array([]), color=(0, 0, 255))
    cv2.imshow("Keypoints in reference image", img4)
    img3 = cv2.drawMatchesKnn(liveImage, kp1, referenceImage, kp2, goodMatches, None,
                              flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    print("Found %d good matches" % (len(goodMatches)))
    cv2.imshow("Matches", img3)


if helpers.doesFileExist() == False:
    ret, frame = cap.read()
    cv2.imwrite('referenceImage.png', frame)
    referenceImage = cv2.imread('referenceImage.png')
    kp2, des2 = orb.detectAndCompute(referenceImage, None)
    print("For Reference Image it detecting %d keypoints" % len(kp2))
else:
    referenceImage = cv2.imread('referenceImage.png')
    kp2, des2 = orb.detectAndCompute(referenceImage, None)
    print("For Reference Image it detecting %d keypoints" % len(kp2))

while True & helpers.doesFileExist():
    ret, liveImage = cap.read()
    cv2.imshow("LiveImage", liveImage)
    compareUsingOrb()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

The goal is to detect whether there is a shift between 2 images and, if there is, attempt to align the images and do image comparison. Any tips on how to achieve this using OpenCV would be appreciated.
Basically, you want to know how to get pixel coordinates from feature matching in OpenCV Python (there is a question of that name on this site). Then you need some way to filter outliers. If the only difference between your images is a translation (shift) of the live image, this should be straightforward. But I'd suspect your live image might also be affected by rotation, or by a 3D transformation to some extent. If ORB finds enough features, finding the right transformation using OpenCV isn't hard.
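For the pure-translation case, a hedged sketch of the idea: collect the matched coordinates (with OpenCV these would come from `kp1[m.queryIdx].pt` and `kp2[m.trainIdx].pt` for each good match `m`) and take the median displacement, which tolerates a few bad matches. The point arrays below are stand-ins, not real detector output:

```python
# Sketch: estimate a pure (dx, dy) translation from matched keypoint
# coordinates. The last pair is a deliberate outlier (a bad match).
import numpy as np

pts_live = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0], [200.0, 5.0]])
pts_ref  = np.array([[ 7.0, 20.0], [27.0, 40.0], [47.0, 60.0], [ 90.0, 90.0]])

# Median per axis is robust to the outlier in the last pair.
dx, dy = np.median(pts_live - pts_ref, axis=0)
print(dx, dy)  # 3.0 0.0
```

For anything beyond a plain shift, estimating a full transformation (e.g. with RANSAC-based fitting) is the more general route.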
How to filter out data into unique pandas dataframes from a combined csv of multiple datatypes?

Sample csv:

```
time,type,-1,
time,type,0,w
time,type,1,a,12,b,13,c,15,name,apple
time,type,5,r,2,s,43,t,45,u,67,style,blue,font,13
time,type,11,a,12,c,15
time,type,5,r,2,s,43,t,45,u,67,style,green,font,15
time,type,1,a,12,b,13,c,15,name,apple
time,type,11,a,12,c,15
time,type,5,r,2,s,43,t,45,u,67,style,green,font,15
time,type,1,a,12,b,13,c,15,name,apple
time,type,5,r,2,s,43,t,45,u,67,style,yellow,font,9
time,type,19,b,12
type,19,b,42
```

I would like to filter each of `type,1`, `type,5`, `type,11`, and `type,19` into a separate pandas frame for further analysis. What's the best way to do it? (Also, I will be ignoring `type,0` and `type,-1`.)

Sample code:

```python
import pandas as pd

type1_header = ['type', 'a', 'b', 'c', 'name']
type5_header = ['type', 'r', 's', 't', 'u', 'style', 'font']
type11_header = ['type', 'a', 'c']
type19_header = ['type', 'b']

type1_data = pd.read_csv(file_path_to_csv, usecols=[2, 4, 6, 8, 10], names=type1_header)
type5_data = pd.read_csv(file_path_to_csv, usecols=[2, 4, 6, 8, 10, 12, 14], names=type5_header)
```
```python
import pandas as pd

headers = {1: ['a', 'b', 'c', 'name'],
           5: ['r', 's', 't', 'u', 'style', 'font'],
           }
usecols = {1: [4, 6, 8, 10],
           5: [4, 6, 8, 10, 12, 14],
           }

frames = {}
for h in headers:
    frames[h] = pd.DataFrame(columns=headers[h])

count = 0
for line in open('irreg.csv'):
    row = line.split(',')
    count += 1
    ID = int(row[2])
    row_subset = []
    if ID in frames:
        for col in usecols[ID]:
            row_subset.append(row[col])
        frames[ID].loc[len(frames[ID])] = row_subset
    else:
        print('WARNING: line %d: type %s not found' % (count, row[2]))
```

Although, that done, how often do you do this and how often does the data change? For a one-off it's probably easiest to split up the incoming csv file, e.g. by

```
grep type,19 irreg.csv > 19.csv
```

at the command line, and then import each csv according to its headers and usecols.
How to create files which are named by the elements of a column stored in a file?

I have a file which contains two columns: the first one contains names and the second one contains the measurements for the corresponding names. I need to create files named after the first column, each containing the corresponding element of the second column (from the same line).

I thought of the following code:

```python
import numpy as np

arr = np.genfromtxt('file.txt', dtype=(str))
arr_0 = arr[:, 0]
arr_1 = arr[:, 1]
numrows = len(arr)
for i in range(0, 6):
    name = arr_0[i]
    value = arr_1[i]
    np.savetxt(name, int(value))
    i = i + 1
```

My code does not work; can you tell me what should be changed and why? FYI, I tried `np.savetxt(name, value)` and this did not work either!

The file:

```
el02_f125lp_wht_r.fits 520758208.
el02_f140lp_wht_r.fits 758538560.
el02_f150lp_wht_r.fits 1.030201E9
el02_f336w_wht_r.fits 30627980.
el02_f680n_wht_r.fits 18258366.
el02_f775w_wht_r.fits 3536094.
el02_fq508n_wht_r.fits 58293324.
```
This code works:

```python
import numpy as np

arr = np.genfromtxt('file.txt', dtype=str)
for i in range(0, len(arr)):
    np.savetxt(arr[i, 0], [float(arr[i, 1])])  # int -> float
```

Note that `np.savetxt` expects an array-like, hence the list around the value, and that entries such as `1.030201E9` must be parsed with `float`, not `int`.
sklearn custom scorer multiple metrics at once

I have a function which returns an `Observation` object with multiple scores. How can I integrate it into a custom sklearn scorer?

I defined it as:

```python
class Observation():
    def __init__(self):
        self.statValues = {}
        self.modelName = ""

    def setModelName(self, nameOfModel):
        self.modelName = nameOfModel

    def addStatMetric(self, metricName, metricValue):
        self.statValues[metricName] = metricValue
```

A custom score is defined like:

```python
def myAllScore(y_true, y_predicted):
    return Observation

my_scorer = make_scorer(myAllScore)
```

which could look like:

```python
{'AUC_R': 0.6892943119440752,
 'Accuracy': 0.9815382629183745,
 'Error rate': 0.018461737081625407,
 'False negative rate': 0.6211453744493393,
 'False positive rate': 0.0002660016625103907,
 'Lift value': 33.346741089307166,
 'Precision J': 0.9772727272727273,
 'Precision N': 0.9815872808592603,
 'Rate of negative predictions': 0.0293063938288739,
 'Rate of positive predictions': 0.011361068973307943,
 'Sensitivity (true positives rate)': 0.3788546255506608,
 'Specificity (true negatives rate)': 0.9997339983374897,
 'f1_R': 0.9905775376404309,
 'kappa': 0.5384745595159575}
```
In short: you cannot.

Long version: a scorer has to return a single scalar, since it is something that can be used for model selection and, in general, for comparing objects. Since there is no such thing as a complete ordering over vector spaces, you cannot return a vector inside a scorer (or a dictionary, which from a mathematical perspective can be seen as a vector). Furthermore, even other use cases, like cross validation, do not support arbitrary structured objects as a return value, since they call `np.mean` over the list of values, and this operation is not defined for a list of python dictionaries (which your method returns).

The only thing you can do is create a separate scorer for each of the metrics you have, and use them independently.
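A minimal sketch of that last point (the dataset and estimator here are toy stand-ins, not from the question): build one `make_scorer` per metric and pass them as a `scoring` dict to `cross_validate`, which keeps each scalar result separate:

```python
# One scorer per metric; each returns a single scalar, so model-selection
# utilities can consume them.
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, f1_score, make_scorer
from sklearn.model_selection import cross_validate

X = [[0], [1], [0], [1], [0], [1]]
y = [0, 1, 0, 1, 0, 1]

scoring = {
    "acc": make_scorer(accuracy_score),
    "f1": make_scorer(f1_score),
}

results = cross_validate(DummyClassifier(strategy="most_frequent"), X, y,
                         cv=2, scoring=scoring)
# results["test_acc"] and results["test_f1"] each hold one scalar per fold.
```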
can't perform this operation for unregistered loader type

I'm using bokeh for data visualization, and trying to make an executable, but it shows an error message of "can't perform this operation for unregistered loader type".

I have tried the suggested solution of adding an `__init__.py` to the directory (and subdirectory) of my script.py, but it does not work.

PS. Win10, Python 3.6.3, pyinstaller 3.4, bokeh 0.12.13

Code:

```python
from bokeh.plotting import figure, show

p = figure(width=800, height=400, title="Money")
p.title.text_color = "green"
p.title.text_font_size = "18pt"
p.xaxis.axis_label = "Time"
p.xaxis.axis_label_text_color = "violet"
p.yaxis.axis_label = "Money"
p.yaxis.axis_label_text_color = "violet"
dashs = [12, 4]
listx1 = [1, 5, 7, 9, 13, 16]
listy1 = [15, 50, 80, 40, 70, 50]
p.line(listx1, listy1, line_width=4, line_color="red", line_alpha=0.3,
       line_dash=dashs, legend="Idle")
show(p)
```

Thanks in advance for your help
Ran into the same error using pyinstaller.

This should solve your problem, and the problem of not finding jinja2 that will follow. Edit the file `your-python-env\Lib\site-packages\bokeh\core\templates.py` (nb: change `your-python-env` to wherever you've installed python) and change the import statements from:

```python
import json

from jinja2 import Environment, PackageLoader, Markup
```

to the following:

```python
import json
import sys, os
import bokeh.core

from jinja2 import Environment, FileSystemLoader, Markup
```

Next, find the line where it says:

```python
_env = Environment(loader=PackageLoader('bokeh.core', '_templates'))
```

comment this out and replace it with this code:

```python
# _env = Environment(loader=PackageLoader('bokeh.core', '_templates'))
if getattr(sys, 'frozen', False):
    # we are running in a bundle
    templatedir = sys._MEIPASS
else:
    # we are running in a normal Python environment
    templatedir = os.path.dirname(bokeh.core.__file__)
_env = Environment(loader=FileSystemLoader(templatedir + '\\_templates'))
```

(adapted from: https://pythonhosted.org/PyInstaller/runtime-information.html)

What this does is that when the code is frozen, it redirects the jinja2 template lookup to `sys._MEIPASS` (which is the folder where your distribution is). Specifically, it looks for the jinja2 templates at `sys._MEIPASS\_templates`. When frozen, the original code points to the wrong location, hence the problem.

So now, we have to make sure the jinja2 files end up in that `_templates` folder. To do this we edit the pyinstaller .spec file. This works for compiling to one directory or one file. Edit the `datas` in your .spec file to:

```python
a = Analysis(['graphms-gui.py'],
             pathex=['C:\\Users\\choom.000\\Documents\\forcompile270218'],
             binaries=[],
             datas=[(r'your-python-env\Lib\site-packages\bokeh\core\_templates', '_templates'),
                    ],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
```

What this does is get the contents of the `bokeh\core\_templates` folder and copy it to `dist\_templates`, which is exactly where we've pointed templates.py to look for the jinja2 files.

This resolved the problem for me with pyinstaller==3.3.1, bokeh==0.12.9 and jinja2==2.10.
numpy, get maximum of subsets

I have an array of values, say `v` (e.g. `v=[1,2,3,4,5,6,7,8,9,10]`), and an array of indexes, say `g` (e.g. `g=[0,0,0,0,1,1,1,1,2,2]`).

I know, for instance, how to take the first element of each group, in a very numpythonic way, doing:

```python
import numpy as np

v = np.array([1, 2, 3, 4, 74, 73, 72, 71, 9, 10])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2])
mask = np.concatenate(([True], np.diff(g) != 0))
v[mask]
```

returns:

```
array([ 1, 74,  9])
```

Is there any numpythonic way (avoiding explicit loops) to get the maximum of each subset?

Tests: Since I received two good answers, one with the python `map` and one with a numpy routine, and I was searching for the most performant, here are some timing tests:

```python
import numpy as np
import time

N = 10000000
v = np.arange(N)
Nelemes_per_group = 10
Ngroups = N / Nelemes_per_group
s = np.arange(Ngroups)
g = np.repeat(s, Nelemes_per_group)

start1 = time.time()
r = np.maximum.reduceat(v, np.unique(g, return_index=True)[1])
end1 = time.time()
print('END first method, T=', (end1 - start1), 's')

start3 = time.time()
np.array(list(map(np.max, np.split(v, np.where(np.diff(g) != 0)[0] + 1))))
end3 = time.time()
print('END second method, (map returns an iterable) T=', (end3 - start3), 's')
```

As a result I get:

```
END first method, T= 1.6057236194610596 s
END second method, (map returns an iterable) T= 8.346540689468384 s
```

Interestingly, most of the slowdown of the map method is due to the `list()` call. If I do not try to convert my `map` result back to a list (but I have to, because python3.x returns an iterator: https://docs.python.org/3/library/functions.html#map).
You can use `np.maximum.reduceat`:

```python
>>> _, idx = np.unique(g, return_index=True)
>>> np.maximum.reduceat(v, idx)
array([ 4, 74, 10])
```

More about the workings of the ufunc `reduceat` method can be found here.

Remark about performance: `np.maximum.reduceat` is very fast. Generating the indices `idx` is what takes most of the time here.

While `_, idx = np.unique(g, return_index=True)` is an elegant way to get the indices, it is not particularly quick. The reason is that `np.unique` needs to sort the array first, which is O(n log n) in complexity. For large arrays, this is much more expensive than using several O(n) operations to generate `idx`. Therefore, for large arrays it is much faster to use the following instead:

```python
idx = np.concatenate([[0], 1 + np.diff(g).nonzero()[0]])
np.maximum.reduceat(v, idx)
```
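A quick self-contained check, using the arrays from the question, that the two ways of building `idx` agree and that `reduceat` returns the per-group maxima:

```python
import numpy as np

v = np.array([1, 2, 3, 4, 74, 73, 72, 71, 9, 10])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2])

# O(n log n) variant: unique() sorts g internally.
_, idx_unique = np.unique(g, return_index=True)
# O(n) variant: group starts are one past each change point in g.
idx_fast = np.concatenate([[0], 1 + np.diff(g).nonzero()[0]])

assert np.array_equal(idx_unique, idx_fast)
print(np.maximum.reduceat(v, idx_fast))  # [ 4 74 10]
```

Note this relies on `g` being sorted so each group occupies a contiguous run, which is also what `reduceat` itself requires here.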
Django Inline Formset - possible to follow foreign key relationship backwards?

I'm pretty new to django so I apologize if this has an obvious answer. Say you have the following three models:

models.py

```python
class Customer(models.Model):
    name = models.CharField()
    slug = models.SlugField()


class Product(models.Model):
    plu = models.CharField()
    description = models.CharField()


class Template(models.Model):
    customer = models.ForeignKey(Customer)
    product = models.ForeignKey(Product)
    price = models.DecimalField()
```

The inline formset would look something like:

```python
TemplateFormSet = inlineformset_factory(Customer, Template, extra=0,
                                        fk_name='customer', fields=('price',))
```

Is it possible to follow the Template formset's Product foreign key backwards, so you could display the `plu` and `description` fields within the same table? For example, something like this:

```html
<table>
  <tbody>
    {% for obj in customer.template_set.all %}
    <tr>
      <td>{{ obj.product.plu }}</td>
      <td>{{ obj.product.description }}</td>
      <td>{% render_field formset.form.price class="form-control form-control-sm" %}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>
```

The formset's fields appear with the html above, but the data from the bound form instance doesn't appear and I can't save by editing the empty fields.

I've also tried the below, but each formset form is repeated for each object (for x forms there are x*x rows):

```html
<tbody>
  {% for obj in customer.template_set.all %}
    {% for form in formset %}
    <tr>
      <td>{{ obj.product.plu }}</td>
      <td>{{ obj.product.description }}</td>
      <td>{% render_field form.price class="form-control" %}</td>
    </tr>
    {% endfor %}
  {% endfor %}
</tbody>
```

Basically I'm trying to go from the top portion of the image to the bottom.
The formset machinery only renders forms for the model's own fields, but something you can do is create a custom form that includes the `product` field as read-only, like:

```python
from django import forms


class YourForm(forms.ModelForm):
    class Meta:
        model = Template
        fields = ['price', 'product']

    def __init__(self, *args, **kwargs):
        super(YourForm, self).__init__(*args, **kwargs)
        self.fields['product'].widget.attrs['readonly'] = True


TemplateFormSet = inlineformset_factory(Customer, Template, extra=0,
                                        fk_name='customer', form=YourForm)
```

That's my best try. If you want to display both `plu` and `description` for the product, try returning both from your model's `__str__`, something like:

```python
class Product(models.Model):
    plu = models.CharField()
    description = models.CharField()

    def __str__(self):
        return self.plu + ' ' + self.description
```
How can I solve this problem so that it shows me the correct output?

```python
import mysql.connector
from tabulate import tabulate

cnx = mysql.connector.connect(user='root', password='',
                              host='localhost', database='karmand')
c = cnx.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS employee (
                Name text,
                Weight integer,
                Height integer
            )''')
c.execute('SELECT * FROM employee ORDER BY Height DESC')
print(tabulate(c.fetchall()))
cnx.close()
```

Wrong output:

```
--------  --  ---
Mahdi     90  190
Amin      75  180
Mohammad  75  175
Ahmad     60  175
--------  --  ---
```

Desired output:

```
--------  --  ---
Mahdi     90  190
Amin      75  180
Ahmad     60  175
Mohammad  75  175
--------  --  ---
```

How can I write a SQL command that sorts items by height descending and, when heights are equal, by weight ascending?
An `ORDER BY` clause can contain multiple items quite simply. So to order by height descending and then by weight ascending, do this:

```sql
SELECT * FROM employee ORDER BY Height DESC, Weight ASC
```

Live demo: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=7f33f82b79724920be1e1499c4dff9da

Examples of ordering by multiple fields can also be found in many places online, including in the official MySQL documentation.
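A self-contained check of the two-key ordering, using `sqlite3` from the standard library as a stand-in (the `ORDER BY` syntax is the same in MySQL; the data matches the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (Name TEXT, Weight INTEGER, Height INTEGER)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)", [
    ("Mahdi", 90, 190), ("Amin", 75, 180),
    ("Mohammad", 75, 175), ("Ahmad", 60, 175),
])

# Height descending; ties broken by weight ascending.
rows = con.execute(
    "SELECT Name FROM employee ORDER BY Height DESC, Weight ASC"
).fetchall()
print([r[0] for r in rows])  # ['Mahdi', 'Amin', 'Ahmad', 'Mohammad']
```

The two 175 cm rows come back lightest first, which is exactly the desired output in the question.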