How to use fcntl to read the status of a lock file?

I have a simple example:

```python
#!/usr/bin/python
import time
import fcntl

file = open("e", "w")
fcntl.lockf(file.fileno(), fcntl.LOCK_EX)
file.write("foo")
file.close()
```

How can I find out whether the file is locked or not, without waiting until the file is unlocked?
That's what fcntl.LOCK_NB is for. For example:

```python
import warnings
try:
    fcntl.flock(myfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    warnings.warn("can't immediately write-lock the file ($!), blocking ...")
    fcntl.flock(myfile, fcntl.LOCK_EX)
```

From the Python documentation on file access.
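For the asker's original question — checking whether the file is locked without ever blocking — the same LOCK_NB flag can be wrapped in a small helper. A sketch (the helper name `try_lock` is mine, not part of fcntl):

```python
import fcntl
import tempfile

def try_lock(f):
    """Try to take an exclusive lock without blocking.

    Returns True if the lock was acquired, False if another process
    already holds it (lockf raises immediately when LOCK_NB is set).
    """
    try:
        fcntl.lockf(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except (IOError, OSError):
        return False

with tempfile.NamedTemporaryFile(mode="w") as f:
    print(try_lock(f))  # True: no other process holds the lock
```

Note that POSIX record locks are per-process, so re-locking from the same process always succeeds; the False branch only fires when a *different* process holds the lock.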
Trouble with Python 3 imports

This question has been asked lots of times, but none of the solutions seem to help in my case. I have a directory structure like this:

```
my_project/
    main.py
    bootstrap/
        __init__.py
        boot.py
    consumer/
        __init__.py
        main.py
```

Being in the top-level directory (my_project) and executing python3 consumer/main.py throws an error:

```
Traceback (most recent call last):
  File "consumer/main.py", line 7, in <module>
    from bootstrap.boot import MyClass
ImportError: No module named 'bootstrap'
```

The weird thing is that importing that module from the interpreter works as expected. Running the code from PyCharm also works fine.

I've tried importing with the "full path", e.g. from my_project.bootstrap.boot import MyClass, which fails with the same ImportError. I have also tried using relative imports, e.g. from .bootstrap.boot import MyClass, which failed with:

```
SystemError: Parent module '' not loaded, cannot perform relative import
```

One hack that fixes this is adding export PYTHONPATH="/root/my_project" at the bottom of the virtualenv activate script.
You are getting this error because the module search path only includes the script's own directory, not its parents; since your other module is not on the PYTHONPATH, it isn't available to import. You can verify this yourself by printing sys.path in your script.

I created a directory t with the following:

```
$ tree
.
├── a.py
├── bar
│   ├── __init__.py
│   └── world.py
└── foo
    ├── hello.py
    └── __init__.py

2 directories, 5 files
```

Here is the source of hello.py:

```
$ cat foo/hello.py
import sys
print("I am in {}".format(__file__))
for path in sys.path:
    print(path)
from bar.world import var
print(var)
```

Now watch what happens when I execute foo/hello.py and try to import something from bar/world.py:

```
$ python foo/hello.py
I am in foo/hello.py
/home/burhan/t/foo
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/home/burhan/.local/lib/python2.7/site-packages
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Traceback (most recent call last):
  File "foo/hello.py", line 6, in <module>
    from bar.world import var
ImportError: No module named bar.world
```

You can tell from the paths printed that only the system-wide Python library paths and the directory of the script itself are listed. This is why it cannot find bar.world.

To fix this issue, you can adjust the PYTHONPATH or use relative imports; for example:

```
$ PYTHONPATH=../t python foo/hello.py
I am in foo/hello.py
/home/burhan/t/foo
/home/burhan/t
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/home/burhan/.local/lib/python2.7/site-packages
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
42
```

Notice that here I am manually overriding the PYTHONPATH and adding the common parent of the scripts (the 42 comes from bar/world).

To fix this using relative imports, you first have to create a package in the topmost directory, otherwise you'll get the famous "Attempted relative import in non-package" error; for more on this, and details of how Python 3 importing works, have a look at: Relative imports in Python 3
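A common alternative to exporting PYTHONPATH in the activate script is to compute the project root inside the entry-point script itself. A sketch, assuming the layout from the question (consumer/main.py living one level below my_project):

```python
import os
import sys

# Assume this file lives at my_project/consumer/main.py.
# One dirname() gives my_project/consumer, a second gives my_project,
# which is what must be on sys.path for `bootstrap` to be importable.
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# from this point on, `from bootstrap.boot import MyClass` would resolve
```

This keeps the script runnable from any working directory, at the cost of a little path surgery at the top of the file.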
Facing a date error while extracting weekdays in pandas

Hi all, I wrote the code below to get weekday values from a calendar date in a pandas dataframe, but I am getting an error:

```python
codetest['DATE'] = pd.to_datetime(codetest['DATE'], format='%m/%d/%y')
codetest['day_of_week'] = codetest['DATE'].dt.dt.day_name()
```

```
ValueError: unconverted data remains: 12
```
Assuming that your DATE column is a string variable of the form YYYY-MM-DD HH:MM:SS:

Step 1: Convert your "DATE" column to datetime:

```python
codetest['DATE'] = pd.to_datetime(codetest['DATE'])
```

Step 2: Extract all the required fields into new columns using the code below:

```python
codetest['Hour'] = codetest['DATE'].apply(lambda time: time.hour)
codetest['Month'] = codetest['DATE'].apply(lambda time: time.month)
codetest['Day of Week'] = codetest['DATE'].apply(lambda time: time.dayofweek)
codetest['Year'] = codetest['DATE'].apply(lambda t: t.year)
codetest['Date'] = codetest['DATE'].apply(lambda t: t.day)
```

Step 3: If you want the day of week in words, use the map function:

```python
dmap = {0: 'Mon', 1: 'Tue', 2: 'Wed', 3: 'Thu', 4: 'Fri', 5: 'Sat', 6: 'Sun'}
codetest['Day of Week'] = codetest['Day of Week'].map(dmap)
```
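Note that pandas also exposes these fields through the vectorized .dt accessor, which avoids the row-by-row apply() calls entirely (the error in the question came from chaining .dt twice — a single .dt is enough). A small sketch with made-up dates:

```python
import pandas as pd

# two sample timestamps (2023-01-02 was a Monday)
s = pd.to_datetime(pd.Series(["2023-01-02 10:30:00", "2023-01-03 08:00:00"]))

# vectorized field extraction -- no apply() needed
print(s.dt.hour.tolist())        # [10, 8]
print(s.dt.day_name().tolist())  # ['Monday', 'Tuesday']
```

On large frames the .dt accessor is considerably faster than apply(), since the work happens in vectorized code rather than one Python call per row.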
How to emit custom events to the event loop in PyQt

I am trying to emit custom events in PyQt. One widget would emit events and another would listen to them, but the two widgets would not need to be related. In JavaScript, I would achieve this by doing:

```javascript
// Component 1
document.addEventListener('Hello', () => console.log('Got it'))

// Component 2
document.dispatchEvent(new Event("Hello"))
```

Edit: I know about signals and slots, but I only know how to use them between parent and child. How would I use this mechanism (or another mechanism) between arbitrary, unrelated widgets?
In PyQt the following instruction:

```javascript
document.addEventListener('Hello', () => console.log('Got it'))
```

is equivalent to:

```python
document.hello_signal.connect(lambda: print('Got it'))
```

In a similar way:

```javascript
document.dispatchEvent(new Event("Hello"))
```

is equivalent to:

```python
document.hello_signal.emit()
```

But the big difference is the scope of the "document" object, since the connection goes through a global element — and in PyQt that element does not exist. One way to emulate the behavior you describe is to create a global object:

globalobject.py

```python
from PyQt5 import QtCore
import functools


@functools.lru_cache()
class GlobalObject(QtCore.QObject):
    def __init__(self):
        super().__init__()
        self._events = {}

    def addEventListener(self, name, func):
        if name not in self._events:
            self._events[name] = [func]
        else:
            self._events[name].append(func)

    def dispatchEvent(self, name):
        functions = self._events.get(name, [])
        for func in functions:
            QtCore.QTimer.singleShot(0, func)
```

main.py

```python
from PyQt5 import QtCore, QtWidgets

from globalobject import GlobalObject


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        button = QtWidgets.QPushButton(text="Press me", clicked=self.on_clicked)
        self.setCentralWidget(button)

    @QtCore.pyqtSlot()
    def on_clicked(self):
        GlobalObject().dispatchEvent("hello")


class Widget(QtWidgets.QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        GlobalObject().addEventListener("hello", self.foo)
        self._label = QtWidgets.QLabel()
        lay = QtWidgets.QVBoxLayout(self)
        lay.addWidget(self._label)

    @QtCore.pyqtSlot()
    def foo(self):
        self._label.setText("foo")


if __name__ == "__main__":
    import sys

    app = QtWidgets.QApplication(sys.argv)
    w1 = MainWindow()
    w2 = Widget()
    w1.show()
    w2.show()
    sys.exit(app.exec_())
```
How does one declare a sub-class object inside of a base class without causing recursion errors?

I am trying to structure a Python 3 program as follows:

- Base class: Body
- Sub-class: Head

A super-simple code representation is as follows:

```python
class Body:
    def __init__(self):
        self.head_obj = Head()
        # ...set up body...

    def body_actions(self):
        print('Body does something')


class Head(Body):
    def __init__(self):
        super().__init__()
        # ...set up head...

    def head_actions(self):
        print('Head does something')
```

I want to be able to create an instance of Body called body_obj and then access the Head sub-class (and its properties/methods) as follows:

```python
body_obj = Body()
body_obj.body_actions()
body_obj.head_obj.head_actions()
```

However, when I run the code, Python raises a maximum recursion depth RuntimeError.

I need to do some setup with __init__ in both the Body and Head classes, and PyCharm complains when I don't call super() within the Head class's __init__, as skipping it appears to be bad practice.

What is the proper way to set up this kind of structure, where a class's sub-objects all need to be initialized when the base class is instantiated? I have looked into nesting the classes, but this also appears to be poor practice.
A head is not a kind of body, it's a part of the body, AFAIK. So you should be using composition instead of inheritance:

```python
class Head:
    def __init__(self):
        # ...set up head...
        pass

    def head_actions(self):
        print('Head does something')


class Body:
    def __init__(self):
        self.head = Head()
        # ...set up body...

    def body_actions(self):
        print('Body does something')
```

Now you can do:

```python
body = Body()
body.body_actions()
body.head.head_actions()
```

The reason you get infinite recursion is that your Body is the superclass of Head, so when you call super().__init__() you initialize a Body, which in your implementation creates a Head, which calls super().__init__(), and so forth. Composition (if that's what you mean by 'nesting') is not bad practice; it's standard practice and often makes more sense than inheritance.

Edit in response to comment

To access the Body methods from the Head, you can pass a reference to the Body on creation. So redefine the classes as follows:

```python
class Head:
    def __init__(self, body):
        self.body = body

    def head_actions(self):
        print('Head does something')


class Body:
    def __init__(self):
        self.head = Head(self)

    def body_actions(self):
        print('Body does something')
```

Now you can access the head from the body and the body from the head:

```python
body = Body()
head = body.head
head.body.body_actions()
```

Or even:

```python
body.head.body.body_actions()
```
Keras regression prediction is not the same dimension as the output dimension

Hello, I'm trying to do energy disaggregation (predicting the energy use of individual appliances given the total energy consumption of a household).

I have an input dimension of 2 because of 2 mains energy measurements. The output dimension of the Keras Sequential model should be 18, because I have 18 appliances I would like to make a prediction for. I have enough data using the REDD dataset (this is no problem).

I have trained the model and get reasonable loss and accuracy. But when I make a prediction for some test data, the prediction consists of values in a 1-dimensional array, while the outputs are 18-dimensional. How is this possible, or am I trying something that isn't really viable?

Some code:

```python
model = Sequential()
model.add(Dense(HIDDEN_LAYER_NEURONS, input_dim=2))
model.add(Activation('relu'))
model.add(Dense(18))
model.compile(loss=LOSS, optimizer=OPTIMIZER, metrics=['accuracy'])
model.fit(X_train, y_train,
          epochs=EPOCHS,
          batch_size=BATCH_SIZE,
          verbose=1,
          validation_split=VALIDATION_SPLIT)

pred = model.predict(X_test).reshape(-1)
pred.shape  # prints a 1-dimensional shape: (xxxxx,)
```

The ALL_CAPS variables are constants. X_train is 2-dim, y_train is 18-dim. Any help is appreciated!
Well, you are reshaping the predictions and flattening them here:

```python
pred = model.predict(X_test).reshape(-1)
```

The reshape(-1) effectively makes the array one-dimensional. Just take the predictions directly:

```python
pred = model.predict(X_test)
```
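The effect of reshape(-1) on a 2-D prediction array can be seen with plain NumPy (the shapes below are illustrative, standing in for the model's output):

```python
import numpy as np

# a fake batch of predictions: 5 samples, 18 appliance outputs each
pred = np.zeros((5, 18))

flat = pred.reshape(-1)        # collapses to one dimension
print(pred.shape, flat.shape)  # (5, 18) (90,)

# recovering a per-sample view requires knowing the output width again
restored = flat.reshape(-1, 18)
print(restored.shape)          # (5, 18)
```

So nothing is lost by the flatten, but the per-appliance structure of each row disappears, which is why the predictions looked one-dimensional.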
Creating a function to perform grouping and sorting based on columns in a Pandas dataframe, with labeling

```python
import pandas as pd
import numpy as np

df = pd.DataFrame([[100, 'm1', 1, 4], [200, 'm2', 7, 5],
                   [120, 'm1', 4, 4], [240, 'm2', 8, 5],
                   [300, 'm3', 5, 4], [330, 'm3', 2, 4],
                   [350, 'm3', 11, 4], [200, 'm4', 9, 4]],
                  columns=['Col1', 'Col2', 'Col3', 'Col4'])
```

I want to group the data based on Col2. However, the first match in each group should be assigned one value and the rest of the matches should be assigned a different value. Rahlf helped me get a function created:

```python
def my_function(x, val):
    if x.shape[0] == 1:
        if x.iloc[0] > val:
            return 'high'
        else:
            return 'low'
    if x.iloc[0] > val and any(i <= val for i in x.iloc[1:]):
        return 'high'
    elif x.iloc[0] > val:
        return 'med'
    elif x.iloc[0] <= val:
        return 'low'
    else:
        return np.nan
```

and then do:

```python
df['Col5'] = df.sort_values(['Col2', 'Col1']).groupby('Col2')['Col3'].transform(my_function, (4))
```

However, I need two modifications to the function. Instead of val, it should take the corresponding value from Col4, and it should return one value (like 'low') for the first match within a group (based on the sorted Col1) and a derived value (like 'low_red') for the rest of the matches in the group. So my question is: how can I modify the function to do that?

My input:

```
Col1 Col2 Col3 Col4
 100   m1    1    4
 200   m2    7    5
 120   m1    4    4
 240   m2    8    5
 300   m3    5    4
 330   m3    2    4
 350   m3   11    4
 200   m4    9    4
```

Expected output:

```
Col1 Col2 Col3 Col4      Col5
 100   m1    1    4       low
 200   m2    7    5       med
 120   m1    4    4   low_red
 240   m2    8    5   med_red
 300   m3    5    4      high
 330   m3    2    4  high_red
 350   m3   11    4  high_red
 200   m4    9    4      high
```
You can create a higher-level function (let's call it my_function()) that is called by transform(), which then calls a lower-level function (let's call it deeper_logic()) that applies the logic previously outlined in your question, like so:

```python
def my_function(group):
    val = df.iloc[group.index]['Col4']
    value = deeper_logic(group.iloc[0], val.iloc[0], group)
    return [value if i == 0 else value + '_red' for i in range(group.shape[0])]


def deeper_logic(x, val, group):
    if group.shape[0] == 1:
        if x > val:
            return 'high'
        else:
            return 'low'
    if x > val and any(i <= val for i in group.iloc[1:]):
        return 'high'
    elif x > val:
        return 'med'
    elif x <= val:
        return 'low'
    else:
        return np.nan


df['Col5'] = df.sort_values(['Col2', 'Col1']).groupby('Col2')['Col3'].transform(my_function)
```

This yields:

```
   Col1 Col2  Col3  Col4      Col5
0   100   m1     1     4       low
1   200   m2     7     5       med
2   120   m1     4     4   low_red
3   240   m2     8     5   med_red
4   300   m3     5     4      high
5   330   m3     2     4  high_red
6   350   m3    11     4  high_red
7   200   m4     9     4      high
```

Note that transform() operates on series and returns a like-indexed NDFrame, which is the result that we want (i.e. it retains the index of the original dataframe). Therefore, we can call transform() with our Col3 column, and then extract the corresponding Col4 values from the original index using iloc in the function being called from transform().
List inside a list using Python and a database

I wish to create a document like:

```json
{
  "name": "John",
  "DOB": "21-Jun-1999",
  "Sex": "M",
  "Skill": ["C", "PYTHON", "COBOL"],
  "Location": {"lat": 12.3, "Long": "14.6"}
}
```

I am using cx_Oracle to pull data out of my database. Here is a small part of my code:

```python
cur1.execute('select name,dob,sex,skills,long,lat from employee')

# to create the keys
for i in range(len(cur1.description)):
    head = cur1.description[i][0]
    header = header + [head]

for row in cur1.fetchall():
    data_dict = {}
    for i in range(len(row)):
        if i == 3:
            c = {}
            c = row[i]
            c = c.split(',')
            for j in range(len(c)):
                data_dict.setdefault(header[i], []).append(c[j])
        else:
            data_dict.setdefault(header[i], []).append(row[i])

print(data_dict)
```

This is the output:

```json
{"name": "John", "DOB": "21-Jun-1999", "Sex": "M", "Skill": ["C", "PYTHON", "COBOL"]}
```

How can I convert the last two fields from my SQL (long and lat) and create another list?
I don't have cx_Oracle installed, so this isn't tested, but...

From your desired output, it looks like you want the value of your Location field to be a dict, not a list. Given this, and assuming indexes 4 and 5 of row will always represent your lat and long columns respectively, you may do the following:

```python
data_dict['Location'] = {'lat': row[4], 'long': row[5]}
```

Your example output shows that you want longitude to be a string and latitude to be a number. I assume this is a typo, but if not, you can enforce these types by using float(row[4]) and str(row[5]) for lat and long, respectively. Also, if you prefer to use the column names for the keys of your Location dictionary, you may replace 'lat' and 'long' with header[4] and header[5], respectively.

You will also have to change your loop to make sure that you don't add extra fields in data_dict for lat and long. And make sure you do something with data_dict so that it doesn't get clobbered after every loop iteration.

For Python 2.7 and later:

```python
records = []
for row in cur1.fetchall():
    data_dict = {header[i]: row[i] for i in range(len(row)) if i < 3}
    data_dict[header[3]] = row[3].split(',')
    data_dict['Location'] = {header[4]: row[4], header[5]: row[5]}
    records.append(data_dict)
```

For Python 2.6 and before:

```python
records = []
for row in cur1.fetchall():
    data_dict = {
        header[3]: row[3].split(','),
        'Location': {header[4]: row[4], header[5]: row[5]},
    }
    for i in range(len(row)):
        if i < 3:
            data_dict[header[i]] = row[i]
    records.append(data_dict)
```

For what it's worth, I believe cur1 is an iterator, so you can simply do for row in cur1. This will help reduce memory usage when there are many rows.
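Since the row handling is plain Python, the reshaping can be sanity-checked without a database by faking a couple of fetched rows (the column names and sample values below are made up to match the question's document shape):

```python
# stand-ins for cur1.description names and cur1.fetchall() rows
header = ['name', 'DOB', 'Sex', 'Skill', 'lat', 'long']
rows = [('John', '21-Jun-1999', 'M', 'C,PYTHON,COBOL', 12.3, 14.6)]

records = []
for row in rows:
    data_dict = {header[i]: row[i] for i in range(len(row)) if i < 3}
    data_dict[header[3]] = row[3].split(',')   # comma-separated skills -> list
    data_dict['Location'] = {header[4]: row[4], header[5]: row[5]}
    records.append(data_dict)

print(records[0]['Skill'])     # ['C', 'PYTHON', 'COBOL']
print(records[0]['Location'])  # {'lat': 12.3, 'long': 14.6}
```

Swapping the fake rows for the real cursor is then just a matter of replacing `rows` with `cur1` (or `cur1.fetchall()`).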
Returning a list of x and y coordinate tuples

I am trying to return a list of x and y coordinate tuples after reading a text file with numbers in it, for example:

```
68,125
113,69
65,86
108,149
152,53
```

I have got to the point where I return a list of numbers, but not as pairs in a tuple. Here is my code:

```python
def read_numbers(filename):
    numbers = []
    input_file = open(filename, "r")
    content = input_file.readlines()
    numbers = [word.strip() for word in content]
    input_file.close()
    return numbers

def main():
    numbers = read_numbers('output.txt')
    print(numbers)

main()
```
You can read each line, split it on the comma, and convert each piece of the split to an int using map; finally, convert the result into a tuple:

```python
coords = [tuple(map(int, line.split(","))) for line in lines]
```

This gives the output:

```
[(68, 125), (113, 69), (65, 86), (108, 149), (152, 53)]
```

Your full code might look something like this:

```python
with open("output.txt") as f:
    lines = f.readlines()

coords = [tuple(map(int, line.split(","))) for line in lines]
print(coords)
```
Python script not executing under lighttpd

I'm on Debian; I installed Python and lighttpd using apt-get install. Here is my lighttpd conf file:

```
server.modules = (
    "mod_access",
    "mod_alias",
    "mod_compress",
    "mod_redirect",
    "mod_rewrite",
    "mod_cgi"
)

server.document-root        = "/var/www/html"
server.upload-dirs          = ( "/var/cache/lighttpd/uploads" )
server.errorlog             = "/var/log/lighttpd/error.log"
server.pid-file             = "/var/run/lighttpd.pid"
server.username             = "www-data"
server.groupname            = "www-data"
server.port                 = 80

index-file.names            = ( "index.php", "index.html", "index.lighttpd.html" )
url.access-deny             = ( "~", ".inc" )
static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

compress.cache-dir          = "/var/cache/lighttpd/compress/"
compress.filetype           = ( "application/javascript", "text/css", "text/html", "text/plain" )

# default listening port for IPv6 falls back to the IPv4 port
include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
include_shell "/usr/share/lighttpd/create-mime.assign.pl"
include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

# configuration cgi-python
$HTTP["url"] =~ "^cgi-bin/" {
    cgi.assign = ( ".py" => "/usr/bin/python" )
}
```

I have this file under /var/www/html/cgi-bin/:

```
-rwxr-xr-x 1 www-data www-data 245 mai 19 12:09 hello.py
```

hello.py:

```python
#! /usr/bin/python
#print "Content-Type: text/html\n\n"
print '<html><head><meta content="text/html; charset=UTF-8" />'
print '<title>Rapsberry Pi</title><p>'
for count in range(1, 100):
    print 'Hello World...'
print "</p></body></html>"
```

My problem is that the browser does not execute my file; it displays my .py code instead. There are no errors in /var/log/lighttpd/error.log. Does anyone have an idea about what is going wrong?
From the docs here: https://wiki.archlinux.org/index.php/lighttpd#CGI — it appears you also need to set cgi.assign, e.g.:

```
cgi.assign = ( ".pl"  => "/usr/bin/perl",
               ".cgi" => "/usr/bin/perl",
               ".rb"  => "/usr/bin/ruby",
               ".erb" => "/usr/bin/eruby",
               ".py"  => "/usr/bin/python",
               ".php" => "/usr/bin/php-cgi" )
```
How can I make a PyQt widget resizable by dragging? I have a QScrollArea containing a widget with a QVBoxLayout. Inside this layout are several other widgets. I want the user to be able to drag the lower borders of those widgets to resize them in the vertical direction. When they are resized, I don't want them to "steal" size from the other widgets in the scrolling area; instead I want the entire scrolled "page" to change its size. So if you enlarge one of the widgets, it should push the other widgets down (out of the viewport of the scroll area); if you shrink it, it should pull the other widgets up. Dragging the border of one widget should not change the size of any of the other widgets in the vertical scroll; it should just move them.I began by using a QSplitter. If I use that, I can drag to change the size of a widget, but there doesn't seem to be a way to get it to "push/pull" the others as I described above, rather than growing/shrinking them. But I can't find any other way to give a widget a draggable handle that will allow me to change its size. How can I accomplish this?Here is a simple example of what I'm doing. 
(In this example I've commented out the splitter, but if you uncomment it you can see what happens with that version.)

```python
import sys

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.Qsci import QsciScintilla, QsciLexerPython


class SimplePythonEditor(QsciScintilla):
    def __init__(self, parent=None):
        super(SimplePythonEditor, self).__init__(parent)
        self.setMinimumHeight(50)


class Chunk(QFrame):
    def __init__(self, parent=None):
        super(Chunk, self).__init__(parent)
        layout = QVBoxLayout(self)
        sash = QSplitter(self)
        layout.addWidget(sash)
        sash.setOrientation(Qt.Vertical)
        editor = self.editor = SimplePythonEditor()
        output = self.output = SimplePythonEditor()
        output.setReadOnly(True)
        sash.addWidget(editor)
        sash.addWidget(output)
        self.setLayout(layout)
        print(self.sizePolicy())


class Widget(QWidget):
    def __init__(self, parent=None):
        global inout
        super(Widget, self).__init__()

        # Container widget
        widget = QWidget()

        # Layout of container widget
        layout = QVBoxLayout(self)
        #sash = QSplitter(self)
        #layout.addWidget(sash)
        #sash.setOrientation(Qt.Vertical)
        for num in range(5):
            editor = SimplePythonEditor()
            editor.setText("Some stuff {}".format(num))
            layout.addWidget(editor)
            #sash.addWidget(editor)
        widget.setLayout(layout)

        # Scroll area properties
        scroll = QScrollArea()
        scroll.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOn)
        scroll.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
        scroll.setWidgetResizable(True)
        scroll.setWidget(widget)
        scroll.setMaximumHeight(500)

        # Scroll area layer add
        vLayout = QVBoxLayout(self)
        vLayout.addWidget(scroll)
        self.setLayout(vLayout)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    dialog = Widget()
    dialog.show()
    app.exec_()
```
Here is example code for a resizable window; it moves and stretches widgets as you resize the window. The idea is to keep widget coordinates and sizes relative to each other.

```python
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.setWindowTitle("MainWindow")
        MainWindow.resize(500, 500)
        self.centralwidget = QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        MainWindow.setCentralWidget(self.centralwidget)
        QMetaObject.connectSlotsByName(MainWindow)


class Ewindow(QMainWindow, QApplication):
    """docstring for App"""
    resized = pyqtSignal()

    def __init__(self, parent):
        super(Ewindow, self).__init__(parent=parent)
        self.setGeometry(500, 500, 800, 800)
        self.setWindowTitle('Mocker')
        self.setWindowIcon(QIcon('icon.png'))
        self.setAttribute(Qt.WA_DeleteOnClose)

        ui2 = Ui_MainWindow()
        ui2.setupUi(self)

        self.resized.connect(self.readjust)

    def resizeEvent(self, event):
        self.resized.emit()
        return super(Ewindow, self).resizeEvent(event)

    def readjust(self):
        self.examForm.move(self.width() - self.examForm.width(), 0)
        self.btn_skip.move(self.width() - self.btn_skip.width(), self.height() - 100)
        self.btn_next.move(self.btn_showAnswers.x() + self.btn_showAnswers.width(), self.height() - 100)
        self.btn_prev.move(0, self.height() - 100)
        self.btn_showAnswers.move(self.btn_prev.x() + self.btn_prev.width(), self.height() - 100)
        self.btn_home.move(self.width() - 200, self.height() - 150)
        self.lbscreen1.resize(self.width() - self.examForm.width(), self.height() - 200)
        self.examForm.resize(200, self.height() - 150)
        self.btn_skip.resize(self.examForm.width(), 100)
        self.btn_next.resize(self.btn_prev.width(), 100)
        self.btn_prev.resize(self.width() * 0.25, 100)
        self.btn_showAnswers.resize(self.btn_prev.width(), 100)
        self.btn_home.resize(200, 50)
```

I had to make a class Ui_MainWindow and set it up for my window class (ui2.setupUi(self)), and also declare resized = pyqtSignal(), which I use to run the readjust function that resets the sizes and coordinates of the widgets, via self.resized.connect(self.readjust). I hope this helps!
Represent a function by a mathematical function

Is it possible to output a mathematical representation directly from a function's implementation?

```python
class MyFunction:
    def __init__(self, func):
        self.func = func

    def math_representation(self):
        # returns a string representation of self.func
        ...

f = lambda x: 3*x**2
myFunc = MyFunction(f)
print(myFunc.math_representation())  # prints something like 3*x**2
```

Of course, constructing the object with the representation as a parameter is possible, and is a trivial solution. But the idea is to generate this representation. I could also build the function out of objects representing the math operations, but the idea is to do it on a regular (lambda) function. I really don't see a way for this to happen, but I'm curious. Thanks for any help and suggestions.
As I said, you can use SymPy if you want this to be more complex, but for simple functions (and trusted inputs) you could do something like this:

```python
class MathFunction(object):
    def __init__(self, code):
        self.code = code
        self._func = eval("lambda x: " + code)

    def __call__(self, arg):
        return self._func(arg)

    def __repr__(self):
        return "f(x) = " + self.code
```

You can use it like this:

```
>>> sq = MathFunction("x**2")
>>> sq
f(x) = x**2
>>> sq(7)
49
```

This is a bit restricted, of course (only using the variable called "x", and only one parameter), but it can of course be expanded.
python lxml - simply get/check class of HTML element

I use tree.xpath to iterate over all interesting HTML elements, but I need to be able to tell whether the current element belongs to a certain CSS class or not.

```python
from lxml import html

mypage = """
<div class="otherclass exampleclass">some</div>
<div class="otherclass">things</div>
<div class="exampleclass">are</div>
<div class="otherclass">better</div>
<div>left</div>
"""

tree = html.fromstring(mypage)
for item in tree.xpath("//div"):
    print("testing")
    #if "exampleclass" in item.getListOfClasses():
    #    print("foo")
    #else:
    #    print("bar")
```

The overall structure should remain the same. What is a fast way to check whether or not the current div has the exampleclass class?

In the example above, item is of the lxml.html.HtmlElement class, which has the property classes, but I don't understand what this means:

```
classes
    A set-like wrapper around the 'class' attribute.
    Get Method: unreachable.classes(self) - A set-like wrapper around the 'class' attribute.
    Set Method: unreachable.classes(self, classes)
```

It returns an lxml.html.Classes object, which has an __iter__ method, and it turns out iter() works. So I constructed this code:

```python
for item in tree.xpath("//div"):
    match = False
    for classname in iter(item.classes):
        if classname == "exampleclass":
            match = True
    if match:
        print("foo")
    else:
        print("bar")
```

But I'm hoping there is a more elegant method. I tried searching for similar questions, but all I found were variations of "how do I get all elements of 'classname'"; I need all divs in the loop, I just want to treat some of them differently.
There is no need for iter; `if "exampleclass" in item.classes:` does the exact same thing, only more efficiently.

```python
from lxml import html

mypage = """
<div class="otherclass exampleclass">some</div>
<div class="otherclass">things</div>
<div class="exampleclass">are</div>
<div class="otherclass">better</div>
<div>left</div>
"""

tree = html.fromstring(mypage)

for item in tree.xpath("//div"):
    if "exampleclass" in item.classes:
        print("foo")
```

The difference is that calling iter on a set makes the lookup linear, so it is definitely not an efficient way to search a set. There is not much difference here, but in some cases there would be a monumental difference:

```
In [1]: st = set(range(1000000))

In [2]: timeit 100000 in st
10000000 loops, best of 3: 51.4 ns per loop

In [3]: timeit 100000 in iter(st)
100 loops, best of 3: 1.82 ms per loop
```

You can also use CSS selectors with lxml:

```python
for item in tree.cssselect("div.exampleclass"):
    print("foo")
```

Depending on the case, you may also be able to use contains:

```python
for item in tree.xpath("//div[contains(@class, 'exampleclass')]"):
    print("foo")
```
xlwings (0.10.0) automation error while importing UDF

I am using xlwings for the first time. I created a file using the "xlwings quickstart" command and added the following function to the Python file:

```python
@xlw.func
def add(x, y):
    return 2 * (x + y)
```

When I try to import this UDF into the Excel file, I get a "Runtime error 440: Automation error". I am using Python 3.5.2; my Excel is 32-bit Pro Plus.
I solved the same problem like this:

- In Excel, go to File / Options / Trust Center / Trust Center Settings.
- In Macro Settings, make sure "Trust access to the VBA project object model" is checked.
How to document multiple return values using reStructuredText in Python 2?

The Python docs say that "the markup used for the Python documentation is reStructuredText". My question is: how is a docstring supposed to be written to show multiple return values?

```python
def func_returning_one_value():
    """Return just one value.

    :returns: some value
    :rtype: str
    """

def func_returning_three_values():
    """Return three values.

    How do I note in reStructuredText that three values are returned?
    """
```

I've found a tutorial on Python documentation using reStructuredText, but it doesn't have an example of documenting multiple return values. The Sphinx docs on domains talk about :returns: and :rtype:, but don't cover multiple return values.
There is a compromise solution: just write it out as plain list text.

E.g.:

```python
def func(a, b):
    """
    :param int a: first input
    :param int b: second input
    :returns:
        - x - first output
        - y - second output
    """
    return x, y
```

This will generate a rendered document in which the two outputs appear as a bulleted list under "Returns". Almost what we want, right?

The shortcoming of this is that you cannot specify a return type for every element; you would have to write it yourself, such as:

```
:returns: - x (:py:class:`int`) - first output
```
python - possible encoding and decoding values

I'm trying to decode characters which have been encoded in the following way: &#number;

I tried:

```python
s.decode("utf8")
```

and:

```python
s.decode("unicode-escape")
```

but neither seems to work. What encoding should I use to decode this kind of string? And in general, where can I find a list of all valid encodings?
These are HTML character references rather than a character encoding, so you unescape them with an HTML parser.

Python 2:

```python
import HTMLParser
h = HTMLParser.HTMLParser()
print h.unescape('&#163;682m')
# £682m
```

Python 3:

```python
import html.parser
h = html.parser.HTMLParser()
print(h.unescape('&#163;682m'))
# £682m
```

.encode and .decode work a little differently than you expect, I'm afraid. See the following:

```
>>> print '†'.decode('iso-8859-1')
u'\x86'
```

The string was encoded in latin-1 when I typed it into the console, but my endpoint uses iso-8859-1, so I can re-encode it to fit my endpoint's character encoding.

For more info on character encodings: http://en.wikipedia.org/wiki/Character_encoding
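On Python 3.4+, HTMLParser.unescape() is deprecated; the supported spelling is html.unescape() from the stdlib html module. A quick sketch:

```python
import html

# numeric character references, both decimal and hex, plus named entities
print(html.unescape('&#163;682m'))   # £682m
print(html.unescape('&#xA3;682m'))   # £682m
print(html.unescape('&pound;682m'))  # £682m
```

This handles all three forms of `&...;` reference in one call, with no parser object needed.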
Printing child nodes along with their XML tags in Python

I have a file called m.xml with the following content:

```xml
<volume name="sp" type="span" operation="create">
    <driver>HDD1</driver>
    <driver>HDD2</driver>
    <driver>HDD3</driver>
    <driver>HDD4</driver>
</volume>
```

I would like to get a result as follows:

```
<driver>HDD1</driver>
<driver>HDD2</driver>
<driver>HDD3</driver>
<driver>HDD4</driver>
```

I am trying to use the following code:

```python
import xml.etree.ElementTree as ET

root = ET.parse('m.xml')
for nod in root.findall("./driver"):
    print nod.text
```

I am getting the following result:

```
HDD1
HDD2
HDD3
HDD4
```

How do I get the tags as well, and not just the textual values?
Use BeautifulSoup to parse the XML. It's very simple:

```python
from bs4 import BeautifulSoup as Soup

with open("sample.xml", "r") as f:
    target_xml = f.read()

# create a `Soup` object
soup = Soup(target_xml, "xml")

# loop through all <driver> elements returned as a list and print each one
for d in soup.find_all("driver"):
    print(d)
```
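If you'd rather stay with the stdlib ElementTree module the asker already imported, ET.tostring() serializes an element back to markup. A sketch (using an inline copy of the question's XML instead of the m.xml file):

```python
import xml.etree.ElementTree as ET

xml_text = """<volume name="sp" type="span" operation="create">
    <driver>HDD1</driver>
    <driver>HDD2</driver>
</volume>"""

root = ET.fromstring(xml_text)
for nod in root.findall("./driver"):
    # encoding="unicode" returns a str instead of bytes;
    # strip() drops the whitespace "tail" serialized after each element
    print(ET.tostring(nod, encoding="unicode").strip())
```

This prints each child with its tags intact, e.g. `<driver>HDD1</driver>`, with no third-party dependency.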
Pandas: slicing a dataframe into multiple sheets of the same spreadsheet Say I have 3 dictionaries of the same length, which I combine into a unique pandas dataframe. Then I dump said dataframe into an Excel file. Example:import pandas as pdfrom itertools import izip_longestd1={'a':1,'b':2,'c':3,'d':4,'e':5,'f':6}d2={'a':1,'b':2,'c':3,'d':4,'e':5,'f':6}d3={'a':1,'b':2,'c':3,'d':4,'e':5,'f':6}dict_list=[d1,d2,d3]stats_matrix=[ tuple('dict{}'.format(i+1) for i in range(len(dict_list))) ] + list( izip_longest(*([ v for k,v in sorted(d.items())] for d in dict_list)) )stats_matrix.pop(0)mydf=pd.DataFrame(stats_matrix,index=None)mydf.columns = ['d1','d2','d3']writer = pd.ExcelWriter('myfile.xlsx', engine='xlsxwriter')mydf.to_excel(writer, sheet_name='sole') writer.save() This code produces an Excel file with a unique sheet:>Sheet1<d1 d2 d3 1 1 12 2 23 3 34 4 45 5 56 6 6My question: how can I slice this dataframe in such a way that the resulting Excel file has, say, 3 sheets, in which the headers are repeated and there are two rows of values in each sheet? EDITIn the example given here the dicts have 6 elements each. In my real case they have 25000, the index of the dataframe starting from 1. So I want to slice this dataframe into 25 different sub-slices, each of which is dumped into a dedicated Excel sheet of the same main file.Intended result: one Excel file with multiple sheets. Headers are repeated.>Sheet1< >Sheet2< >Sheet3<d1 d2 d3 d1 d2 d3 d1 d2 d3 1 1 1 3 3 3 5 5 52 2 2 4 4 4 6 6 6
First prep your dataframe for writing like this:

prepdf = mydf.groupby(mydf.index // 2).apply(lambda df: df.reset_index(drop=True))
prepdf

You can use this function to reset your index instead:

def multiindex_me(df, how_many_groups=3, group_names=None):
    m = np.arange(len(df))
    reset = lambda df: df.reset_index(drop=True)
    new_df = df.groupby(m % how_many_groups).apply(reset)
    if group_names is not None:
        new_df.index.set_levels(group_names, level=0, inplace=True)
    return new_df

Use it like this:

new_df = multiindex_me(mydf)

Or:

new_df = multiindex_me(mydf, how_many_groups=4, group_names=['One', 'Two', 'Three', 'Four'])

Then write each cross section (here of prepdf; the same works for new_df) to a different sheet like this:

writer = pd.ExcelWriter('myfile.xlsx')
for sheet in prepdf.index.levels[0]:
    sheet_name = 'super_{}'.format(sheet)
    prepdf.xs(sheet).to_excel(writer, sheet_name)
writer.save()
XlsxWriter: add color to cells I try to write dataframe to xlsx and give color to that.I useworksheet.conditional_format('A1:C1', {'type': '3_color_scale'})But it's not give color to cell. And I want to one color to this cells.I saw cell_format.set_font_color('#FF0000')but there is don't specify number of cellssex = pd.concat([df2[["All"]],df3], axis=1)excel_file = 'example.xlsx'sheet_name = 'Sheet1'writer = pd.ExcelWriter(excel_file, engine='xlsxwriter')sex.to_excel(writer, sheet_name=sheet_name, startrow=1)workbook = writer.bookworksheet = writer.sheets[sheet_name]format = workbook.add_format()format.set_pattern(1)format.set_bg_color('gray')worksheet.write('A1:C1', 'Ray', format)writer.save()I need to give color to A1:C1, but I should give name to cell. How can I paint several cells of my df?
The problem is that worksheet.write('A1:C1', 'Ray', format) writes only a single cell. A possible solution to write more cells in a row is to use write_row():

worksheet.write_row("A1:C1", ['Ray','Ray2','Ray3'], format)

Remember that write_row() takes a list of strings to write into the cells. If you use worksheet.write_row("A1:C1", 'Ray', format), you get R in the first cell, a in the second and y in the third.
Python: Can an exception class identify the object that raised it? When a Python program raises an exception, is there a way the exception handler can identify the object in which the exception was raised?If not, I believe I can find out by defining the exception class like this...class FoobarException(Exception) : def __init__(self,message,context) : ......and using it like this: raise FoobarException("Something bad happened!", self)I'd rather not have to pass "self" to every exception, though.
It quickly gets messy if you want the exception itself to figure out where in the stack it is. You can do something like this:import inspectframeinfo = inspect.getframeinfo(inspect.stack()[1][0])caller_name = frameinfo[2]file_name = frameinfo[0]This, however, will only really work if you are looking for the function or method where the exception was raised, not if you are looking for the class that owns it.You are probably better off doing something like this:class MyException(Exception): pass# ... somewhere inside a class raise MyException("Something bad happened in {}".format(self.__class__))That way you don't have to write any handling code for your Exception subclass either.
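A minimal, runnable sketch of the second approach (the class names here are illustrative):

```python
class MyException(Exception):
    pass

class Widget:
    def frob(self):
        # the raising object names itself in the message
        raise MyException("Something bad happened in {}".format(self.__class__))

try:
    Widget().frob()
except MyException as exc:
    print(exc)  # the message includes the originating class name
```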
Creating a 2D array from Nested Dictionaries I am a student working with python dictionaries for the first time and I'm getting stuck on resorting them in to matrix arrays.I have a nested ordered dictionary describing the temperature and humidity week by week.weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))])The overall key for each week indicates average humidity, and the individual values for each day are temperature.I am trying to create a single figure plot in matplotlib of lines of temperature vs. day that will use humidity as a third variable to indicate the color from a colorbar. It seems that LineCollection will do this with a 2D array of day and temperature. But when I try to pull out the 2D array from the nested dictionary, I cannot seem to get it into the necessary Nx2 shape for LineCollection.Any help is greatly appreciated!Here's the code I have so far:plt.figure()x=[]y=[]z=[]ticks=[]for humidity, data_dict in weather.iteritems(): x.append(range(len(data_dict))) y.append(data_dict.values()) z.append(humidity) ticks.append(data_dict.keys())for ii in x,y,z: ii = np.array(ii)lines=np.array(zip(x,y))print lines.shapeAnd this returns that the shape is (3, 2, 4) instead of (3, 2)EDIT:I'm hoping for lines in an output that looks like this, so numpy can recognize it as a 3x2 2D-array: [[(0 1 2 3), (79 85 87 83)], [(0 1 2 3), (65 71 74 68)], [(0 1 2 3), (83 84 82 80)]]
You need to loop through the nested dictionaries appending values to a list. You also should store the day number so as to have something to plot temperature against. The colour for humidity should also be stored for each day. You then need to define the axis label to display the days as strings. The code to do this looks like, from collections import OrderedDictimport matplotlib.pyplot as pltweather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))])Temp = []Humidity = []Day = []Dayno = []for h, v in weather.items(): j = 0 for d, T in v.items(): Temp.append([T]) Humidity.append([h]) Day.append([d]) Dayno.append([j]) j += 1fig,ax = plt.subplots(1,1)cm = ax.scatter(Dayno, Temp, c=Humidity, vmin=90., vmax=100., cmap=plt.cm.RdYlBu_r)ax.set_xticks(Dayno[0:4])ax.set_xticklabels(Day[0:4])plt.colorbar(cm)plt.show()which plots,UPDATE: If you want to use plots, you need to separate the data into an array for each week and then plot these as single line. You can then set the colour for each line and label. 
I've attached a version using numpy and array slicing (although probably not simplest solution),from collections import OrderedDictimport matplotlib.pyplot as pltimport matplotlibimport numpy as npweather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))])Temp = []; Humidity = []Day = []; Dayno = []; weekno = []i = 0for h, v in weather.items(): j = 0 for d, T in v.items(): Temp.append(T) Humidity.append(h) Day.append(d) Dayno.append(j) weekno.append(i) j += 1 i += 1#Swtich to numpy arrays to allow array slicingTemp = np.array(Temp)Humidity = np.array(Humidity)Day = np.array(Day)Dayno = np.array(Dayno)weekno = np.array(weekno)#Plot linesfig,ax = plt.subplots(1,1)vmin=90.; vmax=97.; weeks=3; daysperweek=4colour = ['r', 'g', 'b']for i in range(weeks): ax.plot(Dayno[weekno==i], Temp[weekno==i], c=colour[i], label="Humidity = " + str(Humidity[daysperweek*i]))ax.set_xticks(Dayno[0:4])ax.set_xticklabels(Day[0:4])plt.legend(loc="best")plt.show()Which looks like,
Using struct timeval in Python I have a C program containing a structurestruct S{ int x; struct timeval t; };and a functionint func(struct S s1, struct S s2)I need to call this function from my python program.I am using ctypes.The parallel structure on Pythonimport ctypesfrom ctypes import *class S(Structure): _fields_ = [("x",c_int), ("t", ?)]Now, my question is what will I write in the ? place and any dependencies related to it.Thanks in advance.
Find the definition of struct timeval in your platform's C include files (the Internet suggests sys/time.h), then transcode that into a ctypes structure.On my platform a struct timeval isstruct timeval { long tv_sec; long tv_usec;};(and I suppose this is the standard anyway), soclass timeval(Structure): _fields_ = [("tv_sec", c_long), ("tv_usec", c_long)]class S(Structure): _fields_ = [("x",c_int), ("t", timeval)]would probably fit the bill.
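As a quick sanity check before wiring this up to the C function, the structures can be instantiated and inspected from pure Python:

```python
import ctypes
from ctypes import Structure, c_int, c_long

class timeval(Structure):
    _fields_ = [("tv_sec", c_long), ("tv_usec", c_long)]

class S(Structure):
    _fields_ = [("x", c_int), ("t", timeval)]

s = S(x=1, t=timeval(tv_sec=1600000000, tv_usec=500))
print(s.x, s.t.tv_sec, s.t.tv_usec)  # fields behave like plain attributes
print(ctypes.sizeof(S))              # should match sizeof(struct S) in C
```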
Transform list of strings of commit-details to structured dictionary applying grouping by name and date From the data I have, I want to show in such form where a commit key will have an arrayof commits that are done on the particular date. This is what I am expecting my output to be{ "Dan Ab": [ { "2014-05-2": { "commit_count": "1", "commit": [{ 'commit_hash': {'lines_added': 10, 'lines_removed': 4 }}] }, "2014-05-3": { "commit_count": "2", "commit": [ { 'commit_hash': {'lines_added': 10, 'lines_removed': 4 }}, { 'commit_hash': {'lines_added': 14, 'lines_removed': 0 }}, ] }, } ], "John": [ "2020-10-14": { "commit_count": "1", "commit": [{ 'commit_hash': {'lines_added': 1740, 'lines_removed': 10 }}] } ]}However, the same date are shown multiple times instead of appending commit related information as in above for a particular date, and for particular author.This is how I have done and is not workingimport remerged_result = [ "43f4cc160;Dan Ab;2021-06-17; 1 file changed, 10 insertions(+), 19 deletions(-)", "6cbf2a8b3;Dan Ab;2021-06-15; 1 file changed, 14303 insertions(+)", "c0a77029c;Dan Ab;2021-06-15; 1 file changed, 1 insertion(+), 1 deletion(-)", "f283d7524;Dan Ab;2021-06-15; 1 file changed, 5260 deletions(-)", "03c5314b4;Dan Ab;2021-06-15; 5 files changed, 5265 insertions(+), 12690 deletions(-)", "daf38ecdf;Dan Ab;2020-12-11; 1 file changed, 8 insertions(+)", "b5eabd543;Dan Ab;2020-10-14; 1 file changed, 17 insertions(+)", "6d50a9d09;Dan Ab;2020-10-14; 43 files changed, 15740 insertions(+), 1 deletion(-)", "7d59n9d09;John;2020-10-14; 4 files changed, 1740 insertions(+), 10 deletion(-)"]coding_days = {}total_lines = 0total_lines_added = 0total_lines_removed = 0total_files_changed = 0def getstatsummarycounts(line): """ 1 file changed, 5 insertions(+), 1 deletion(-) - returns ['1', '5', '1'] """ numbers = re.findall("\d+", line) if len(numbers) == 1: # neither insertions nor deletions: may probably only happen # for "0 files changed" numbers.append(0) 
numbers.append(0) elif len(numbers) == 2 and line.find("(+)") != -1: numbers.append(0) # only insertions were printed on line elif len(numbers) == 2 and line.find("(-)") != -1: numbers.insert(1, 0) # only deletions were printed on line return numbersfor result in merged_result: [commit_hash, author, commit_date, logs] = result.split(";") numbers = getstatsummarycounts(logs) if len(numbers) == 3: (files_changed, inserted, deleted) = map(lambda el: int(el), numbers) total_lines += inserted total_lines -= deleted total_lines_added += inserted total_lines_removed += deleted total_files_changed += files_changed if author not in coding_days: coding_days[author] = [] else: if commit_date not in coding_days[author]: coding_days[author].append({commit_date: []}) else: coding_days[author][0][commit_date].append({ commit_hash: { "lines_added": inserted, "lines_deleted": deleted, } }) else: (files_changed, inserted, deleted) = (0, 0, 0)I have created a repl of it as well and here it ishttps://replit.com/@xedikaki/EnragedHotSymbol#main.py
You can parse and restructure your data like so:merged_result = [ "43f4cc160;Dan Ab;2021-06-17; 1 file changed, 10 insertions(+), 19 deletions(-)", "6cbf2a8b3;Dan Ab;2021-06-15; 1 file changed, 14303 insertions(+)", "c0a77029c;Dan Ab;2021-06-15; 1 file changed, 1 insertion(+), 1 deletion(-)", "f283d7524;Dan Ab;2021-06-15; 1 file changed, 5260 deletions(-)", "03c5314b4;Dan Ab;2021-06-15; 5 files changed, 5265 insertions(+), 12690 deletions(-)", "daf38ecdf;Dan Ab;2020-12-11; 1 file changed, 8 insertions(+)", "b5eabd543;Dan Ab;2020-10-14; 1 file changed, 17 insertions(+)", "6d50a9d09;Dan Ab;2020-10-14; 43 files changed, 15740 insertions(+), 1 deletion(-)", "6d50a9d09;Dan Ab;2020-10-14; Steak and Fries, no Salad",]Program:import regrouped = {}pattern = r"(\d+) file[^,]*(?:\, (\d+) ins[^,]+)?(?:\, (\d+) del.+)?$"for line in merged_result: tag, name, date, changes = line.split(";", 3) try: # this will throw "NoneType" has no .groups() if not matched files, inserts, deletes = re.search(pattern, changes).groups() inserts, deletes = inserts or "0", deletes or "0" except AttributeError as a: print("Skipping: '",line, "': cannot match data by regex to get changed/inserted/deleted\n", a) continue nameDict = grouped.setdefault(name, {}) dateDict = nameDict.setdefault(date, {}) dateDict.setdefault("commit_count", 0) dateDict["commit_count"] += 1 commList = dateDict.setdefault("commit", []) commList.append({"commit_hash": {"tag": tag, "files": files, "lines_added": inserts, "lines_removed": deletes}})print(grouped)Output:Skipping: ' 6d50a9d09;Dan Ab;2020-10-14; Steak and Fries, no Salad ': cannot match data by regex to get changed/inserted/deleted{'Dan Ab': {'2021-06-17': {'commit_count': 1, 'commit': [{'commit_hash': {'tag': '43f4cc160', 'files': '1', 'lines_added': '10', 'lines_removed': '19'}}]}, '2021-06-15': {'commit_count': 4, 'commit': [{'commit_hash': {'tag': '6cbf2a8b3', 'files': '1', 'lines_added': '14303', 'lines_removed': '0'}}, {'commit_hash': {'tag': 'c0a77029c', 
'files': '1', 'lines_added': '1', 'lines_removed': '1'}}, {'commit_hash': {'tag': 'f283d7524', 'files': '1', 'lines_added': '0', 'lines_removed': '5260'}}, {'commit_hash': {'tag': '03c5314b4', 'files': '5', 'lines_added': '5265', 'lines_removed': '12690'}}]}, '2020-12-11': {'commit_count': 1, 'commit': [{'commit_hash': {'tag': 'daf38ecdf', 'files': '1', 'lines_added': '8', 'lines_removed': '0'}}]}, '2020-10-14': {'commit_count': 2, 'commit': [{'commit_hash': {'tag': 'b5eabd543', 'files': '1', 'lines_added': '17', 'lines_removed': '0'}}, {'commit_hash': {'tag': '6d50a9d09', 'files': '43', 'lines_added': '15740', 'lines_removed': '1'}}]}}}Resulting dict reformatted (1st hit on google):{ "Dan Ab":{ "2021-06-17":{ "commit_count":1, "commit":[ { "commit_hash":{ "tag":"43f4cc160", "files":"1", "lines_added":"10", "lines_removed":"19" } } ] }, "2021-06-15":{ "commit_count":4, "commit":[ { "commit_hash":{ "tag":"6cbf2a8b3", "files":"1", "lines_added":"14303", "lines_removed":"0" } }, { "commit_hash":{ "tag":"c0a77029c", "files":"1", "lines_added":"1", "lines_removed":"1" } }, { "commit_hash":{ "tag":"f283d7524", "files":"1", "lines_added":"0", "lines_removed":"5260" } }, { "commit_hash":{ "tag":"03c5314b4", "files":"5", "lines_added":"5265", "lines_removed":"12690" } } ] }, "2020-12-11":{ "commit_count":1, "commit":[ { "commit_hash":{ "tag":"daf38ecdf", "files":"1", "lines_added":"8", "lines_removed":"0" } } ] }, "2020-10-14":{ "commit_count":2, "commit":[ { "commit_hash":{ "tag":"b5eabd543", "files":"1", "lines_added":"17", "lines_removed":"0" } }, { "commit_hash":{ "tag":"6d50a9d09", "files":"43", "lines_added":"15740", "lines_removed":"1" } } ] } }}Explanation for r"(\d+) file[^,]*(?:\, (\d+) ins[^,]+)?(?:\, (\d+) del.+)?$":I am using re.search so it is not bound to start at the begin of the string - I am looking for:(\d+) file[^,] a number followed by "file" consuming anything up to (excluding) the next "," capturing the number in a group(?:\, (\d+) ins[^,]+)?(?:\, 
(\d+) del.+)? are similar: 0 to 1 occurrence of ", " followed by a captured number, a space and some text; after "ins" we capture anything up to (excluding) the next ",", after "del" we simply capture anything
$ followed by end of string
The optional groups will result in None if not present, hence converting them to "0" using inserts, deletes = inserts or "0", deletes or "0".
If you need more speed you could use defaultdict(), but dict.setdefault does the trick as well.
How to convert integer months to year in python I have inputs start date and end date. I want output like 2 years 7 monthsfrom dateutil.relativedelta import relativedeltardelta = relativedelta(now, birthdate)print 'years - ', rdelta.yearsprint 'months - ', rdelta.monthsin this method, I got output like>>> years - 2>>> months - 18I prefer output like I want output like 2 years 7 months
This worked for me:

import datetime

from dateutil.relativedelta import relativedelta

def format_date_range(start: datetime.date, end: datetime.date):
    rdelta = relativedelta(end, start)
    return f"{rdelta.years} years, {rdelta.months} months"

relativedelta(end, start) computes whole years plus the leftover months, so rdelta.months stays below 12.
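If what you actually have is a raw total-month count (as the months - 18 output in the question suggests), divmod converts it directly — a sketch; 31 months gives the "2 years 7 months" from the question:

```python
def humanize_months(total_months):
    # divmod splits the count into whole years and leftover months
    years, months = divmod(total_months, 12)
    return f"{years} years {months} months"

print(humanize_months(31))  # 2 years 7 months
```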
Running Django Rest Framework inside Apache I have a web server running Apache, and I need to implement a RESTful API on the same domain, and I'd like to use Django Restful Framework to serve the REST calls.For example: going to http://myawesomedomain.com/ in a browser serves a good old fashioned web page delivered by Apache, but I need requests to http://myawesomedomain.com/api/customers/... to be handled by my Django Restful application.Can someone please point me in the right direction. Is there an apache mod I need to activate to get it to serve Python? Do I have to redirect those requests to another service on the server?Not looking for a comprehensive tutorial. I just don't know where to start.Thanks in advance
I did some more digging and found the answer myself.You use mod_wsgi.Here is a perfect tutorial to get started: https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/modwsgi/
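The relevant piece of the Apache config ends up looking roughly like this — paths and process names are placeholders; mounting the WSGI app at /api leaves the rest of the site to Apache as before:

```apache
# Only /api/... is handed to Django; everything else is served by Apache as usual
WSGIScriptAlias /api /var/www/my_project/my_project/wsgi.py
WSGIDaemonProcess myapi python-path=/var/www/my_project
WSGIProcessGroup myapi

<Directory /var/www/my_project/my_project>
    <Files wsgi.py>
        Require all granted
    </Files>
</Directory>
```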
How to read a tab delimited file into Python with rows of unequal length? I have a text file which is the results of measurements. When the object is not in the correct place to be measured it cannot take the full suite of measurements, which gives rows of unequal length in the text file.How can this be read in Python? Do I have to fill in the spaces in the text file with blanks?What the data looks like:Code I tried:from numpy import loadtxtlines = loadtxt(file_to_read, comments="#", delimiter="\t", unpack=False)But it gave an error:ValueError: could not convert string to float: 'Height\tLength\tVolume\tSpeed\tWeight'Then I tried:file_to_read = ('/Users/path/to/file//dummy_data.txt')file_object = open(file_to_read, 'r')file_object.read()print(file_object)But it returned nothing, I like to see the data to see if it has the correct format.
The error message indicates that you are trying to import the header row. Use the skiprows parameter to loadtxt to skip this row:lines = loadtxt(file_to_read, comments="#", delimiter="\t", skiprows=1, unpack=False)You can read more about the loadtxt function in the manual.
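Note that loadtxt also requires every row to have the same number of columns, so skipping the header alone won't fix ragged rows. pandas.read_csv pads short rows with NaN, which may be closer to what you want — a sketch with inline data standing in for the file:

```python
import io
import pandas as pd

data = "Height\tLength\tVolume\n1.0\t2.0\t3.0\n4.0\t5.0\n"
df = pd.read_csv(io.StringIO(data), sep="\t")
print(df)  # the missing Volume value in the second row becomes NaN
```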
Vercel: Cannot import other functions with Python Serverless API I am trying to import helper functions into my Serverless Flask Api but am unable to do so with Vercel using the vercel dev command.My folder structure is:api _utils/ common.py app.pyHowever, when I try to import my helper function into my app.py file I get an error saying module cannot be found.Below is sample code from my app.pyfrom flask import Flask, Responsefrom _utils.common import helper_functionapp = Flask(__name__)
I moved my _utils file to the root of my project and now in my api/index.py I import as followsfrom _utils.common import helper_functionMy vercel.json file looks like:{ "routes": [ { "src": "/api/(.*)", "dest": "api/index.py" } ]}
Keras merge/concatenate models outputs as a new layers I want to use pretrained models' convolutionnal feature maps as input features for a master model. inputs = layers.Input(shape=(100, 100, 12))sub_models = get_model_ensemble(inputs)sub_models_outputs = [m.layers[-1] for m in sub_models]inputs_augmented = layers.concatenate([inputs] + sub_models_outputs, axis=-1)Here is the key part of what I do in get_model_ensemble():for i in range(len(models)): model = models[i] for lay in model.layers: lay.name = lay.name + "_" + str(i) # Remove the last classification layer to rather get the underlying convolutional embeddings model.layers.pop() # while "conv2d" not in model.layers[-1].name.lower(): # model.layers.pop() model.layers[0] = new_input_layerreturn modelsAll this gives: Traceback (most recent call last): File "model_ensemble.py", line 151, in <module> model = get_mini_ensemble_net() File "model_ensemble.py", line 116, in get_mini_ensemble_net inputs_augmented = layers.concatenate([inputs] + sub_models_outputs, axis=-1) File "/usr/local/lib/python3.4/dist-packages/keras/layers/merge.py", line 508, in concatenate return Concatenate(axis=axis, **kwargs)(inputs) File "/usr/local/lib/python3.4/dist-packages/keras/engine/topology.py", line 549, in __call__ input_shapes.append(K.int_shape(x_elem)) File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 451, in int_shape shape = x.get_shape()AttributeError: 'BatchNormalization' object has no attribute 'get_shape'Here is type info: print(type(inputs))print(type(sub_models[0]))print(type(sub_models_outputs[0]))<class 'tensorflow.python.framework.ops.Tensor'><class 'keras.engine.training.Model'><class 'keras.layers.normalization.BatchNormalization'>Note: the models I get from get_model_ensemble() have got their compile() function already called. So, how should I concatenate my models properly? Why wont it work? 
I guess that maybe that has something to do with how would the inputs be fed to the sub-models and how I hot-swapped their input layers. Thanks for the help!
The thing works if we do: sub_models_outputs = [m(inputs) for m in sub_models]rather than:sub_models_outputs = [m.layers[-1] for m in sub_models]TLDR: models needs to be called as a layer.
Several functions without global variable I want to know how I can make this code without global variables.I have tried myself but it seems like it involves return, but then It won't go back to the "menu" (main_list). The point of this code is to always return to the menu except when pressing "3" (exit program).Sorry for the big (and bad) code, I appreciate all the help I can get.import sysword = []desc = []def main_list(): print "\nMenu for list \n" print "1: Insert" print "2: Lookup" print "3: Exit program" choice = raw_input() print "Choose alternative: ", choice if choice.isdigit(): choice = int(choice) if choice == 1: insert() elif choice == 2: look() elif choice == 3: sys.exit() else: print "Error: Not a valid choice \n", main_list() else: print "Error: Not a valid choice \n", main_list()def insert(): ins = raw_input("Word to insert: ") if ins not in word: word.append (ins) else: print "Error: Word already exist \n", main_list() desc.append(raw_input ("Description of word: ")) main_list()def look(): up = raw_input("Word to lookup: ") if up not in word: print "Error: Word not found \n", main_list() i = 0 while up != word[i]: i += 1 if up == word[i]: print "Description of word: ", desc[i] main_list()
As Xeno said, you need a while loop to continually loop over the input. For your case, I would suggest a do-while loop, but Python does not have a built-in do-while, so you will need to emulate one, possibly something like this:while True: # do stuff if condition: breakTo get rid of the global variables, you will need to pass variables into your methods and return out of them.def insert(word, desc): # do stuffNow, I noticed you call main_list() at the end of insert() and look(). Do not do this. You do not need a new instance every time, you need to return back to the current instance. So, set up something like this:def main_list(): # do stuff while True: # do more stuff if condition: break # do more stuffdef insert(): # do stuff - return any new value; otherwise, just let it auto-returndef look(): # do stuff - return any new value; otherwise, just let it auto-return
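A sketch of what that refactor can look like: insert and look take the lists as parameters and return a result instead of touching globals or re-calling the menu (names mirror the question; the interactive menu would call these inside its while True loop):

```python
def insert(word, desc, new_word, description):
    """Return True if the word was added, False if it already exists."""
    if new_word in word:
        return False
    word.append(new_word)
    desc.append(description)
    return True

def look(word, desc, target):
    """Return the description for target, or None if it is unknown."""
    if target not in word:
        return None
    return desc[word.index(target)]

word, desc = [], []
insert(word, desc, "python", "a programming language")
print(look(word, desc, "python"))  # a programming language
print(look(word, desc, "java"))    # None
```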
How to extend SQLite with Python functions in Django? It's possible to define new SQL functions for SQLite in Python. How can I do this in Django so that the functions are available everywhere?An example use case is a query which uses the GREATEST() and LEAST() PostgreSQL functions, which are not available in SQLite. My test suite runs this query, and I'd like to be able to use SQLite as the database backend when running tests.
Here's a Django code example that extends SQLite with GREATEST() and LEAST() methods by calling Python's built-in max() and min():from django.db.backends.signals import connection_createdfrom django.dispatch import receiver@receiver(connection_created)def extend_sqlite(connection=None, **kwargs): connection.connection.create_function("least", 2, min) connection.connection.create_function("greatest", 2, max)I only needed this in the tests, so I put this in my test_settings.py. If you have it elsewhere in your code, you may need to test that connection.vendor == "sqlite".
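Outside Django, the same mechanism can be exercised directly with the sqlite3 module, which is what the signal handler above is driving under the hood:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# arguments: SQL function name, number of arguments, Python callable
conn.create_function("greatest", 2, max)
conn.create_function("least", 2, min)

row = conn.execute("SELECT greatest(3, 7), least(3, 7)").fetchone()
print(row)  # (7, 3)
```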
Python/Regex - How to extract date from filename using regular expression? I need to use python to extract the date from filenames. The date is in the following format:month-day-year.somefileextensionExamples:10-12-2011.zipsomedatabase-10-04-2011.sql.tar.gzThe best way to extract this would be using regular expressions?I have some code:import rem = re.search('(?<=-)\w+', 'derer-10-12-2001.zip')print m.group(0)The code will print '10'. Some clue on how to print the date?Best Regards,
Assuming the date is always in the format: [MM]-[DD]-[YYYY].re.search("([0-9]{2}\-[0-9]{2}\-[0-9]{4})", fileName)
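Putting it together for the filenames in the question:

```python
import re

for name in ["10-12-2011.zip", "somedatabase-10-04-2011.sql.tar.gz"]:
    m = re.search(r"(\d{2}-\d{2}-\d{4})", name)
    if m:
        print(m.group(1))
# 10-12-2011
# 10-04-2011
```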
Increasing the counter in a list every two elements I have a list of elements:list = ['elem1', 'elem2', 'elem3', 'elem4', 'elem5']which I use in pairs in this way:for x in range(0, len(list)-1): print(list[x], list[x+1])This works and returns:('elem1', 'elem2')('elem2', 'elem3')('elem3', 'elem4')('elem4', 'elem5')I would like to increase a counter every time I print a row. How can I do this?
First of all don't use list as a variable name; it shadows the built-in list type and can cause some serious problems.

Second, the simple way is just to initialise a counter and increment it inside the loop.

li = ['elem1', 'elem2', 'elem3', 'elem4', 'elem5']
cnt = 0
for index in range(0, len(li)-1):
    cnt += 1
    print(li[index], li[index+1])
print cnt

Another elegant way to create the required output is:

li = ['elem1', 'elem2', 'elem3', 'elem4', 'elem5']
for cnt, (v, w) in enumerate(zip(li[:-1], li[1:]), 1):
    print [v, w]
print cnt

Here we separate it into two lists: the first one is ['elem1', 'elem2', 'elem3', 'elem4'], the second is ['elem2', 'elem3', 'elem4', 'elem5']. Each iteration we take one element from the first list and one from the second, and enumerate(..., 1) starts the count at 1 so the final cnt equals the number of rows printed.
SyntaxError: invalid syntax [in python code] code is:cat_list = [k for k, v in cat_counter.()[:50]]Error is as follows: File "", line 1 cat_list = [k for k, v in cat_counter.()[:50]] SyntaxError: invalid syntax
The cat_counter function would be defined like this:def cat_counter(): # Make function Thus, simply remove the dot to properly call the function:cat_list = [k for k, v in cat_counter()[:50]]
Find a substring in cells across multiple columns in a Pandas dataframe I have a large DataFrame with 50+ columns which I'm simplifying here below:students = [('Samurai', 34, '777.0', 'usa--->jp', 'usd--->yen') , ('Jack', 31, '555.5','usa','usd') , ('Mojo', 16,'488.1','n/a','n/a') , ('Jojo', 32,'119.11','uk--->usa','pound--->usd')]# Create a DataFrame objectdf = pd.DataFrame(students, columns=['Name', 'Age', 'Balance', 'Country','Currency'])I'm trying to finda) whether there are any instances of '--->' in any of the cells across the DataFrame?b) if so where? (optional)So far I've tried 2 approachesboolDf = df.isin(['--->']).any().any()this only works for strings not substringscolumns = list(df)for col in columns: df[col].str.find('--->', 0).any()I get:AttributeError: Can only use .str accessor with string values!(I believe this may only work for columns with string types)Would appreciate any help. Open to other approaches as well.
You can use .applymap() to test each individual value in a dataframe.>>> df Name Age Balance Country Currency0 Samurai 34 777.0 usa--->jp usd--->yen1 Jack 31 555.5 usa usd2 Mojo 16 488.1 n/a n/a3 Jojo 32 119.11 uk--->usa pound--->usd>>> df.applymap(lambda x: isinstance(x, str) and '--->' in x) Name Age Balance Country Currency0 False False False True True1 False False False False False2 False False False False False3 False False False True TrueTo use the .str accessor you can:>>> df.select_dtypes(object).apply(lambda col: col.str.contains('--->')) Name Balance Country Currency0 False False True True1 False False False False2 False False False False3 False False True TrueThe output differs a little - note the Age column is not there.
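To answer both parts of the question from the boolean mask — whether there is any match at all, and at which row/column — you can, for example, do the following (note that in newer pandas versions applymap has been renamed to DataFrame.map):

```python
import pandas as pd

students = [('Samurai', 34, '777.0', 'usa--->jp', 'usd--->yen'),
            ('Jack', 31, '555.5', 'usa', 'usd')]
df = pd.DataFrame(students, columns=['Name', 'Age', 'Balance', 'Country', 'Currency'])

mask = df.applymap(lambda x: isinstance(x, str) and '--->' in x)
print(bool(mask.any().any()))       # True: at least one hit somewhere

rows, cols = mask.values.nonzero()  # coordinates of every hit
for r, c in zip(rows, cols):
    print(df.index[r], df.columns[c])
```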
hadoop-streaming: reducer doesn't seem to be running when mapred.reduce.tasks=1 I am running a basic Map Reduce program via hadoop-streamingThe Map looks like import sysindex = int(sys.argv[1])max = 0for line in sys.stdin: fields = line.strip().split(",") if fields[index].isdigit(): val = int(fields[index]) if val > max: max = valelse: print maxI run it as hadoop jar /usr/local/Cellar/hadoop/1.0.3/libexec/contrib/streaming/hadoop-streaming-1.0.3.jar -D mapred.reduce.tasks=1 -input input -output output -mapper '/Users/hhimanshu/code/p/java/hadoop-programs/hadoop-programs/src/main/python_scripts/AttributeMax.py 8' -file /Users/me/code/p/java/hadoop-programs/hadoop-programs/src/main/python_scripts/AttributeMax.pyI read in Hadoop in Action, mapred.reduce.tasks=1 is As we haven’t specified any particular reducer, it will use the default IdentityReducer. As its name implies, IdentityReducer passes its input straight to output.When I see my console, I see 12/07/30 16:01:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable12/07/30 16:01:33 WARN snappy.LoadSnappy: Snappy native library not loaded12/07/30 16:01:33 INFO mapred.FileInputFormat: Total input paths to process : 112/07/30 16:01:34 INFO streaming.StreamJob: getLocalDirs(): [/Users/me/app/hadoop/tmp/mapred/local]12/07/30 16:01:34 INFO streaming.StreamJob: Running job: job_201207291003_003712/07/30 16:01:34 INFO streaming.StreamJob: To kill this job, run:12/07/30 16:01:34 INFO streaming.StreamJob: /usr/local/Cellar/hadoop/1.0.3/libexec/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201207291003_003712/07/30 16:01:34 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201207291003_003712/07/30 16:01:35 INFO streaming.StreamJob: map 0% reduce 0%12/07/30 16:01:51 INFO streaming.StreamJob: map 100% reduce 0%It does't make any progress, just keeps on running. 
It seems it is not working, how do I fix this?UPDATEwhen D mapred.reduce.tasks=0I see two files part-00000 and part-00001 both of the files has one line 0when D mapred.reduce.tasks=1 and -reduce 'cat'the behavior is same as if reduce is not doing anything when I run cat file | python AttibuteMax.py 8I get 868which means D mapred.reduce.tasks=0 and cat file | python AttributeMax.py 8 are also not producing the same output(but they should , right?)What would be causing the difference in the behavior when input data is also same?UPDATE 1when D mapred.reduce.tasks=0I see 4 files part-00000, part-00001, part-00002 and part-00002 with single line 268, 706, 348, 868 respectivelyand when I run $ cat ~/Downloads/hadoop/input/apat63_99.txt | python ../../../src/main/python_scripts/AttributeMax.py 8 | cat I do see desired output as 868
Do you get the expected output when you set mapred.reduce.tasks=0? What if you specify -reducer 'cat' with mapred.reduce.tasks=1? One of the neat things about streaming is that you can test it pretty effectively from the command line using pipes:

cat input | python mapper.py | sort | python reducer.py

but it seems like your app is not producing any output.
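For reference, one likely culprit is the mapper itself: as posted, the `print max` sits in an `else` branch inside the loop, so it fires on non-digit lines and nothing is ever emitted after the data is consumed. A minimal sketch of the intended logic, with the emit moved to after the loop (function names here are illustrative, not from the question):

```python
def max_field(lines, index):
    """Return the largest integer seen in the given comma-separated field,
    checking it once per line and emitting nothing along the way. The caller
    prints the result a single time *after* the loop, not in an else branch."""
    best = 0
    for line in lines:
        fields = line.strip().split(",")
        if index < len(fields) and fields[index].isdigit():
            best = max(best, int(fields[index]))
    return best

# In the streaming mapper the script would end with something like:
#     import sys
#     print(max_field(sys.stdin, int(sys.argv[1])))
```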
I want to get product of 2 list without using for loop. As with for loop it is taking a lot of time I want to get product of 2 list without using for loop. As with for loop it is taking a lot of time.from itertools import productfrom string import ascii_lowercase,ascii_uppercasekeywords = [a+b+c for a,b,c in product(ascii_lowercase, repeat=3)]keywords1 = [a+b for a,b in product(ascii_uppercase, repeat=2)]for i in keywords: for j in keywords1: print(i+j)
If you are looking for the product of the two lists:from itertools import productfrom string import ascii_lowercase,ascii_uppercasekeywords = [a+b+c for a,b,c in product(ascii_lowercase, repeat=3)]keywords1 = [a+b for a,b in product(ascii_uppercase, repeat=2)]def fast_list(): return [a+b for a,b in product(keywords,keywords1)]def med_list(): return [a+b for a in keywords for b in keywords1]def slow_list(): li = [] for i in keywords: for j in keywords1: li.append(i+j) return liI get that the 'quickest' is fast_list, followed by med_list followed by slow_list:>>> %timeit fast_list()1.98 s ± 68.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)>>> %timeit med_list()2.12 s ± 333 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)>>> %timeit slow_list()2.28 s ± 66.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)However, they are all too close to be definitive -- I would say that they are all about the same time wise. My personal preference is med_list, as it uses fewer characters.
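If the goal is just to print or stream the combinations rather than hold them, a generator sketch (not a drop-in replacement for the timed list builders above) avoids materializing all 26^3 * 26^2, roughly 11.9 million, strings at once:

```python
from itertools import islice, product
from string import ascii_lowercase, ascii_uppercase

def keyword_pairs():
    # Lazily yields each aaa..zzz + AA..ZZ combination; the full
    # ~11.9 million element list is never held in memory at once.
    for lower in product(ascii_lowercase, repeat=3):
        prefix = "".join(lower)
        for upper in product(ascii_uppercase, repeat=2):
            yield prefix + "".join(upper)

first_three = list(islice(keyword_pairs(), 3))
```

The memory saving matters far more than the per-item speed here, since printing dominates the runtime anyway.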
How do you multiply each digit by different numbers in python? I want to multiply my 1st digit by 3 then my 2nd digit by 1 then my 3rd digit by 3 then my 4th digit by 1 then my 5th digit by 3 then my 6th digit by 1 then my 7th digit by 1. Im stuck on how to do this
If I understand your question correctly, you want to do something like this:

number = 7568934
multiplier = [3, 1, 3, 1, 3, 1, 1]
for idx, digit in enumerate(str(number)):
    print('Res: ' + str(int(digit) * multiplier[idx]))
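The same pairing can also be written with zip, which removes the manual indexing and stops automatically at the shorter of the two sequences:

```python
number = 7568934
multiplier = [3, 1, 3, 1, 3, 1, 1]

# zip pairs each digit character with its multiplier positionally.
products = [int(d) * m for d, m in zip(str(number), multiplier)]
```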
object has no attributes. New to classes in python import prawimport timeclass getPms(): r = praw.Reddit(user_agent="Test Bot By /u/TheC4T") r.login(username='*************', password='***************') cache = [] inboxMessage = [] file = 'cache.txt' def __init__(self): cache = self.cacheRead(self, self.file) self.bot_run(self) self.cacheSave(self, self.file) time.sleep(5) return self.inboxMessage def getPms(self): def bot_run(): inbox = self.r.get_inbox(limit=25) print(self.cache) # print(r.get_friends())#this works for message in inbox: if message.id not in self.cache: # print(message.id) print(message.body) # print(message.subject) self.cache.append(message.id) self.inboxMessage.append(message.body) # else: # print("no messages") def cacheSave(self, file): with open(file, 'w') as f: for s in self.cache: f.write(s + '\n') def cacheRead(self, file): with open(file, 'r') as f: cache1 = [line.rstrip('\n') for line in f] return cache1 # while True: #threading is needed in order to run this as a loop. Probably gonna do this in the main method though # def getInbox(self): # return self.inboxMessageThe exception is: cache = self.cacheRead(self, self.file)AttributeError: 'getPms' object has no attribute 'cacheRead'I am new to working with classes in python and need help with what I am doing wrong with this if you need any more information I can add some. It worked when it was all functions but now that I attempted to switch it to a class it has stopped working.
Your cacheRead function (as well as bot_run and cacheSave) is indented too far, so it's defined in the body of your other function getPms. Thus it is only accessible inside of getPms. But you're trying to call it from __init__.I'm not sure what you're trying to achieve here because getPms doesn't have anything else in it but three function definitions. As far as I can tell you should just take out the def getPms line and unindent the three functions it contains so they line up with the __init__ method.
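A minimal sketch of how the methods might be laid out once the indentation is fixed (the praw calls are omitted, names are adapted from the question, and FileNotFoundError handling is added so a missing cache file does not crash):

```python
class GetPms:
    """Methods sit directly on the class; Python supplies `self`
    automatically, so never pass it explicitly at the call site."""

    def __init__(self, cache_file="cache.txt"):
        self.cache_file = cache_file
        self.cache = self.cache_read()   # not self.cache_read(self, ...)

    def cache_read(self):
        try:
            with open(self.cache_file) as f:
                return [line.rstrip("\n") for line in f]
        except FileNotFoundError:
            return []

    def cache_save(self):
        with open(self.cache_file, "w") as f:
            for s in self.cache:
                f.write(s + "\n")
```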
key error when no dictionary is required I have a function which sets the shutter on a camera and takes a float as input:def changeShutter(value): global camera, shutter shutter['abs_value']+=value try: camera.set_property(**shutter) except: print "could not set shutter"where shutter is a dictionary containing all the properties required for the shutter, and abs_value is the key whose value needs to be changed then set. I can call this easily enough in a jupyter notebook I use for development with changeShutter(0.05) and it works just fine.I then created a simple html button on a web page which sends a message to a flask-socket server containing the changeShutter function and, depending on the button pressed and the message therefore sent, it parses 0.05 or -0.05 like below:@socketio.on('shutter request', namespace='/test')def changeShutter(message): request = message['data'] print 'Shutter request received: %s' %request if str(request) == "shutter increase": changeShutter(0.05) elif str(request) == "shutter decrease": changeShutter(-0.05)I clearly receive one of the 2 possible options and correctly enter the correct if statement (I have tried debugging with extra print statements), but it throws a key error: 0.05 at me for some reason.When the function does not require a dictionary input, why do I get a key error?
As mentioned in the comments above, this was a simple mistake: I had two different functions with the same name. The second def changeShutter(message) silently replaced the original changeShutter(value), so the call inside the socket handler invoked the handler itself instead of the function taking a float. Renaming one of the two functions fixed it.
Fast sorting of large nested lists I am looking to find out the likelihood of parameter combinations using Monte Carlo Simulation.I've got 4 parameters and each can have about 250 values.I have randomly generated 250,000 scenarios for each of those parameters using some probability distribution function.I now want to find out which parameter combinations are the most likely to occur.To achieve this I have started by filtering out any duplicates from my 250,000 randomly generated samples in order to reduce the length of the list.I then iterated through this reduced list and checked how many times each scenario occurs in the original 250,000 long list.I have a large list of 250,000 items which contains lists, as such :a = [[1,2,5,8],[1,2,5,8],[3,4,5,6],[3,4,5,7],....,[3,4,5,7]]# len(a) is equal to 250,000I want to find a fast and efficient way of having each list within my list only occurring once.The end goal is to count the occurrences of each list within list a.so far I've got:'''Removing duplicates from list a and storing this as a new list temp'''b_set = set(tuple(x) for x in a)temp = [ list(x) for x in b_set ]temp.sort(key = lambda x: a.index(x) ) ''' I then iterate through each of my possible lists (i.e. temp) and count how many times they occur in a'''most_likely_dict = {}for scenario in temp: freq = list(scenario_list).count(scenario) most_likely_dict[str(scenario)] = freq at the moment it takes a good 15 minutes to perform ... Any suggestion on how to turn that into a few seconds would be greatly appreciated !!
You can take out the sorting part, as the final result is a dictionary which will be unordered in any case, then use a dict comprehension:>>> a = [[1,2],[1,2],[3,4,5],[3,4,5], [3,4,5]]>>> a_tupled = [tuple(i) for i in a]>>> b_set = set(a_tupled)>>> {repr(i): a_tupled.count(i) for i in b_set}{'(1, 2)': 2, '(3, 4, 5)': 3}calling list on your tuples will add more overhead, but you can if you want to>>> {repr(list(i)): a_tupled.count(i) for i in b_set}{'[3, 4, 5]': 3, '[1, 2]': 2}Or just use a Counter:>>> from collections import Counter>>> Counter(tuple(i) for i in a)
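Completing the Counter approach with the small example from above: the whole dedup-and-count becomes a single O(n) pass, which is where the 15-minute runtime (dominated by the repeated list.count calls) collapses to seconds.

```python
from collections import Counter

a = [[1, 2, 5, 8], [1, 2, 5, 8], [3, 4, 5, 6], [3, 4, 5, 7], [3, 4, 5, 7]]

# Lists are unhashable, so convert each scenario to a tuple first.
counts = Counter(tuple(x) for x in a)
```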
Python Urllib2 SSL error Python 2.7.9 is now much more strict about SSL certificate verification. Awesome!I'm not surprised that programs that were working before are now getting CERTIFICATE_VERIFY_FAILED errors. But I can't seem to get them working (without disabling certificate verification entirely).One program was using urllib2 to connect to Amazon S3 over https.I download the root CA certificate into a file called "verisign.pem" and try this:import urllib2, sslcontext = ssl.create_default_context()context.load_verify_locations(cafile = "./verisign.pem")print context.get_ca_certs()urllib2.urlopen("https://bucket.s3.amazonaws.com/", context=context)and I still get CERTIFICATE_VERIFY_FAILED errors, even though the root CA is printed out correctly in line 4.openssl can connect to this server fine. In fact, here is the command I used to get the CA cert:openssl s_client -showcerts -connect bucket.s3.amazonaws.com:443 < /dev/nullI took the last cert in the chain and put it in a PEM file, which openssl reads fine. It's a Verisign certificate with:Serial number: 35:97:31:87:f3:87:3a:07:32:7e:ce:58:0c:9b:7e:daSubject key identifier: 7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33SHA1 fingerprint: F4:A8:0A:0C:D1:E6:CF:19:0B:8C:BC:6F:BC:99:17:11:D4:82:C9:D0Any ideas how to get this working with validation enabled?
To summarize the comments about the cause of the problem and explain the real problem in more detail:If you check the trust chain for the OpenSSL client you get the following: [0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com [1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3 [2] F4:A8:0A:0C:D1:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5[OT] A1:DB:63:93:91:... /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification AuthorityThe first certificate [0] is the leaf certificate sent by the server. The following certifcates [1] and [2] are chain certificates sent by the server. The last certificate [OT] is the trusted root certificate, which is not sent by the server but is in the local storage of trusted CA. Each certificate in the chain is signed by the next one and the last certificate [OT] is trusted, so the trust chain is complete.If you check the trust chain instead by a browser (e.g. Google Chrome using the NSS library) you get the following chain: [0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com [1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3[NT] 4E:B6:D5:78:49:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5Here [0] and [1] are again sent by the server, but [NT] is the trusted root certificate. While this looks from the subject exactly like the chain certificate [2] the fingerprint says that the certificates are different. If you would take a closer looks at the certificates [2] and [NT] you would see, that the public key inside the certificate is the same and thus both [2] and [NT] can be used to verify the signature for [1] and thus can be used to build the trust chain.This means, that while the server sends the same certificate chain in all cases there are multiple ways to verify the chain up to a trusted root certificate. 
How this is done depends on the SSL library and on the known trusted root certificates: [0] (*.s3.amazonaws.com) | [1] (Verisign G3) --------------------------\ | | /------------------ [2] (Verisign G5 F4:A8:0A:0C:D1...) | | | | certificates sent by server | .....|...............................................................|................ | locally trusted root certificates | | | [OT] Public Primary Certification Authority [NT] Verisign G5 4E:B6:D5:78:49 OpenSSL library Google Chrome (NSS library)But the question remains, why your verification was unsuccessful.What you did was to take the trusted root certificate used by the browser (Verisign G5 4E:B6:D5:78:49) together with OpenSSL. But the verification in browser (NSS) and OpenSSL work slightly different:NSS: build trust chain from certificates send by the server. Stop building the chain when we got a certificate signed by any of the locally trusted root certificates.OpenSSL_ build trust chain from the certificates sent by the server. After this is done check if we have a trusted root certificate signing the latest certificate in the chain.Because of this subtle difference OpenSSL is not able to verify the chain [0],[1],[2] against root certificate [NT], because this certificate does not sign the latest element in chain [2] but instead [1]. If the server would instead only sent a chain of [0],[1] then the verification would succeed.This is a long known bug and there exist patches and hopefully the issue if finally addressed in OpenSSL 1.0.2 with the introduction of the X509_V_FLAG_TRUSTED_FIRST option.
Removing the .0's off data when Python reads data from a csv file I have successfully gotten the volumes to add up correctly, but it is returning the volume as a decimal. All volumes in the CSV file are whole numbers. I would like to have them without the decimal part.Code is below.import pandas as pddatagrid = pd.read_csv("Daily Receipts.csv")daily_vols = datagrid.groupby("Txn")["Scan Volume"].sum()print(daily_vols)
When you sum with pandas, the result can be converted to float (for example when the column contains missing values). Use astype(int) (see the astype docs) to cast it back:

import pandas as pd

datagrid = pd.read_csv("Daily Receipts.csv")
daily_vols = datagrid.groupby("Txn")["Scan Volume"].sum().astype(int)
print(daily_vols)
Python Argparse: Use an empty flag I'm trying to write a python script using argparse which sets a value to True if -d has been set. Here is what I'm trying:parser.add_argument("-d", "--dynamic", required=False)dynamic = Falseif args.dynamic is not None: dynamic = TrueI get the following error: usage: psd.py [-h] -f FILE [-d DYNAMIC] psd.py: error: argument -d/--dynamic: expected one argumentHow do I set the flag to expect 0 arguments?
Use the store_true action:

parser.add_argument("-d", "--dynamic", action='store_true')

You may drop the "required" kwarg.
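A complete example: with store_true the attribute defaults to False, so the manual `dynamic = False` bookkeeping from the question disappears entirely.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--dynamic", action='store_true')

args = parser.parse_args(["-d"])
print(args.dynamic)   # True when the flag is present

args = parser.parse_args([])
print(args.dynamic)   # False by default, no extra code needed
```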
Creating python scrabble function I'm trying to create a scrabble function that takes a string of letters and returns the score based on the letters.This is my code so far: def scrabble_score(rack): count = 0 for letter in rack: if letter == "EAIONRTLSU": count += 1 return count elif letter == "DG": count += 2 return count elif letter == "BCMP": count += 3 return count elif letter == "FHVWY": count += 4 return count elif letter == "K": count += 5 return count elif letter == "JX": count += 8 return count else: letter == "QZ" count += 10 return countHowever when I tried to call the function scrabble_score("AABBWOL")it returns 10 when it should return 14?
The code has two issues:

1. The return statement should be executed only once, after the for iteration ends, so it can sum all the letter values.
2. Each if statement should check if the letter is in the string, not if it is equal to the list of letters with a specific value.

Then it would look like:

def scrabble_score(rack):
    count = 0
    for letter in rack:
        print letter
        if letter in "EAIONRTLSU":
            count += 1
        elif letter in "DG":
            count += 2
        elif letter in "BCMP":
            count += 3
        elif letter in "FHVWY":
            count += 4
        elif letter in "K":
            count += 5
        elif letter in "JX":
            count += 8
        else:
            count += 10
    return count

returning: 14
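As a variation, the letter values can live in a dictionary, which replaces the whole chain of membership tests with a single lookup:

```python
# Build the letter -> score table once, up front.
SCORES = {}
for letters, value in [("EAIONRTLSU", 1), ("DG", 2), ("BCMP", 3),
                       ("FHVWY", 4), ("K", 5), ("JX", 8), ("QZ", 10)]:
    for letter in letters:
        SCORES[letter] = value

def scrabble_score(rack):
    # Unlike the if/elif version, characters outside the table score 0
    # here instead of falling through to the 10-point branch.
    return sum(SCORES.get(letter, 0) for letter in rack)
```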
Facebook messenger bot not sending messages (Python/Django) I've followed this tutorial to implement a Facebook Messenger bot that simply echoes what you type. It hooks up ok with Facebook, but I can't make it work beyond that and I can't find the problem. Can you please help me? This is the code so far (with minor modifications compared with the code in the tutorial).class BotsView(generic.View): def get(self, request, *args, **kwargs): if self.request.GET.get('hub.verify_token') == '1111111111': return HttpResponse(self.request.GET.get('hub.challenge')) else: return HttpResponse('Error, invalid token') def post_facebook_message(fbid, recevied_message): post_message_url = 'https://graph.facebook.com/v2.6/me/messages?access_token=<access-token>' response_msg = json.dumps({"recipient":{"id":fbid}, "message":{"text":recevied_message}}) requests.post(post_message_url, headers={"Content-Type": "application/json"},data=response_msg) return HttpResponse() @method_decorator(csrf_exempt) def dispatch(self, request, *args, **kwargs): return generic.View.dispatch(self, request, *args, **kwargs) def post(self, request, *args, **kwargs): # Converts the text payload into a python dictionary incoming_message = json.loads(self.request.body) # Facebook recommends going through every entry since they might send # multiple messages in a single call during high load for entry in incoming_message['entry']: for message in entry['messaging']: # Check to make sure the received call is a message call # This might be delivery, optin, postback for other events if message.has_key('message'): # Print the message to the terminal # pprint(message) # Assuming the sender only sends text. Non-text messages like stickers, audio, pictures # are sent as attachments and must be handled accordingly. post_facebook_message(message['sender']['id'], message['message']['text']) return HttpResponse()
Try placing the entire function

def post_facebook_message(fbid, recevied_message):
    ....

outside of the BotsView class. If you keep it within the class, it must take in "self" as its first parameter, and it must be accessed within the class as self.post_facebook_message(.....)

However, it might not be the best thing to put this function in the Django view class.

p.s. Thanks for this, I will update the tutorial to make this point explicit.
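Moving the payload construction into a plain module-level helper (as suggested above) also makes it unit-testable with no HTTP call involved. A sketch based on the response_msg shape from the question:

```python
import json

def build_message_payload(fbid, text):
    """Shape the JSON body that the Send API POST in the view sends,
    mirroring the response_msg built inside post_facebook_message."""
    return json.dumps({"recipient": {"id": fbid},
                       "message": {"text": text}})
```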
How do I make an infinite for loop in Python (without using a while loop)? Is there way to write an infinite for loop in Python?for t in range(0,10): if(t == 9): t= 0 # will this set t to 0 and launch infinite loop? No! print(t)Generally speaking, is there way to write infinite pythonic for loop like in java without using a while loop?
The itertools.repeat function will return an object endlessly, so you could loop over that:

import itertools

for x in itertools.repeat(1):
    pass
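itertools.count is the closest analogue to the question's counter-style loop: it yields an unbounded sequence of increasing integers, and break (or return/raise) is the only way out.

```python
from itertools import count

collected = []
for t in count():        # yields 0, 1, 2, ... without end
    if t == 5:
        break            # without this, the loop never terminates
    collected.append(t)
```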
update ListProperty variable after for loop kivy I am using a for loop to cycle the months of the year and then append them to a list rather than manually type out each month. The variable self.mylist updates mylist perfectly fine.When for i in range(1,13): is run it updates self.mylist perfectly fine.But because self.mylist isn't called again it doesn't update mylist after the for loop. Or so i believe is my issue.I think this method is necessary because ListProperty cannot be appended to but can be assigned to?So my question is after the for loop is run how can i update mylist again with self.mylistThe kv file includes only the relevant part of the problem. Which functions as intended, to grab a value from the list and display it with text:.py fileclass Telldate(AnchorLayout): todayday= ObjectProperty('') mylist=ListProperty(['','','','','','','']) print(mylist) def __init__(self,*args, **kwargs): super().__init__(*args, **kwargs) self.todayday=strftime('%A') self.mylist= ['this', 'does','work', 'but'] print(self.mylist) for i in range(1,13): self.mylist.append(calendar.month_name[i]) print(self.mylist)class PlannerApp(App): # def updater(self): # Clock.schedule_interval(self.monthcyclewithglobal, 0.5) def build(self): return Telldate()if __name__ == '__main__': PlannerApp().run().kv file <Telldate>: --------- ----- -- text:root.mylist[3]Things I've tried but don't seem to work.so I could define another function and use a return statement. def monthcycle(self): self.mylist= ['this', 'does','work', 'but'] print(self.mylist) for i in range(1,13): self.mylist.append(calendar.month_name[i]) print(self.mylist) return self.mylistOr I could use global variables which doesn't seem to encouraged def monthcyclewithglobal(): global mylist mylist= ['this', 'does','work', 'but'] print(mylist) for i in range(1,13): mylist.append(calendar.month_name[i]) print(mylist) monthcyclewithglobal() #I am aware this bit is probably terrible codeHard coding the months in works fine. 
But I would like automation.Like soself.mylist= [ 'January', 'February', 'March', 'April', 'May', 'June', 'July', \ 'August', 'September', 'October', 'November', 'December']kivy V1.10.0 python V3.6.2 using IDLE V3.6.2Thanks for your patience! Edit1:For clarification this does not work. mylist=ListProperty(['','','','','','','']) for i in range(1,13): mylist.append(calendar.month_name[i])asAttributeError: 'kivy.properties.ListProperty' object has no attribute 'append'
So I solved the issue with a list comprehension. Instead of running a loop outside the variable I want to manipulate, the list comprehension cut down the length of the code and made updating my variables a non-issue. I realised this while reading the docs.

class Telldate(AnchorLayout):
    mylist = ListProperty(['', '', '', '', '', '', '', '', '', '', '', '', ''])
    print(mylist)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mylist = [calendar.month_name[i][0:3] for i in range(1, 13)]

Hope this helps someone someday!
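Outside of Kivy the same comprehension is easy to check on its own; note that calendar.month_abbr also exists if the standard three-letter abbreviations are wanted directly:

```python
import calendar

# month_name[0] is the empty string, so indexing starts at 1.
months = [calendar.month_name[i] for i in range(1, 13)]
abbrs = [calendar.month_name[i][0:3] for i in range(1, 13)]
```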
numpy: finding all pairs of numbers in a matrix that suffice on neighboring condition Suppose you have a matrix: import numpy as npmat = np.array([[0, 0, 1], [2, 0, 1], [1, 0, 3]])and you want to retrieve all pairs of numbers in this matrix that are neighboring each other, not equal and ignoring zero. In this case this would be 3 & 1 and 2 & 1, but I want to be able to apply this to a very large matrix. Any help greatly appreciated, thanks!
This should do the trick, though admittedly, it's not the most elegant; I tested it on a 1000x1000 matrix of random integers, and it was pretty fast (just over a second). I'm not sure how you are thinking about the output, so I put it into a list called res.import numpy as np# To test on larger arraymat = np.array(np.random.random_integers(0, 9, 1000 * 1000)).reshape(1000, 1000)res = []for a in mat: # Take out the zeros no_zeros = a[a!=0] if len(no_zeros) > 1: for i in range(len(no_zeros) - 1): # Only append pairs of non-equal neighbours if no_zeros[i] != no_zeros[i+1]: res.append((no_zeros[i], no_zeros[i+1]))
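One caveat: the loop above removes the zeros before pairing, which changes which elements count as neighbours (and it only scans within rows). If "neighboring" means literally adjacent cells, a fully vectorized sketch with shifted views avoids the Python-level loop entirely; on the question's matrix it recovers exactly the 2 & 1 and 1 & 3 pairs:

```python
import numpy as np

mat = np.array([[0, 0, 1],
                [2, 0, 1],
                [1, 0, 3]])

def neighbour_pairs(m):
    """Collect (left, right) and (upper, lower) pairs of adjacent,
    non-zero, non-equal values, without any per-row Python loop."""
    pairs = set()
    for a, b in [(m[:, :-1], m[:, 1:]),    # horizontal neighbours
                 (m[:-1, :], m[1:, :])]:   # vertical neighbours
        mask = (a != 0) & (b != 0) & (a != b)
        pairs.update(zip(a[mask].tolist(), b[mask].tolist()))
    return pairs
```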
Python CIM_DataFile search for file by full path So, I am trying to write a script that will be able to connect to remote systems and query the CIM_DataFile among other things.For the sake of testing, I wrote the following code to test on my local machine. I have two files (ns.txt and dns.txt) in the root of my C: drive, however, the queries are not working correctly for Name= (which is the full path).import wmiwmiService = wmi.WMI()for f in wmiService.CIM_DataFile(Name="c:\ns.txt"): print "NAME '" + f.Name + "'"for f in wmiService.CIM_DataFile(Name="c:\dns.txt"): print "NAME '" + f.Name + "'"for f in wmiService.CIM_DataFile(FileName="ns", Extension="txt", Drive="c:"): print "FILENAME '" + f.Name + "'"for f in wmiService.CIM_DataFile(FileName="dns", Extension="txt", Drive="c:"): print "FILENAME '" + f.Name + "'"The output of the above code is:NAME 'c:\ns.txt'FILENAME 'c:\ns.txt'FILENAME 'c:\dns.txt'Why is it not showing c:\dns.txt for the Name= query? I have also tested on other files located in different places on my system and most of them do not show up for the Name= query.
The cause turned out to be the file wmi.py inside Python27\Lib\site-packages. I changed this file and my problem was resolved. In other words, the problem was with the installed library itself, not with the query.
python, replace if/elif with dictionary Folks, How would you rewrite the if/elif in the 'checkme' function with a dictionary?def dosomething(queue): ...def checkme(queue): """ Consume Message """ if queue == 'foo': username = 'foo' password = 'vlTTdhML' elif queue == 'bar': username = 'bar' password = 'xneoYb2c' elif queue == 'baz': username = 'baz' password = 'wnkyVsBI' ... dosomething(queue)def main(): checkme('foo') checkme('bar') checkme('baz')
You could do something like this:

CHECK_ME = {'foo': 'vlTTdhML', 'bar': 'xneoYb2c', 'baz': 'wnkyVsBI'}

def checkme(queue):
    username, password = queue, CHECK_ME.get(queue)
    # Maybe some more checks here, like:
    if not password:
        print 'password is none'
    # Or do something more relevant here
    # rest of the code.
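Since the original also sets a username per queue (and it happens to equal the queue name only by coincidence), storing (username, password) tuples keeps both, and an explicit failure replaces the silent fall-through for unknown queues:

```python
CREDENTIALS = {
    'foo': ('foo', 'vlTTdhML'),
    'bar': ('bar', 'xneoYb2c'),
    'baz': ('baz', 'wnkyVsBI'),
}

def checkme(queue):
    try:
        username, password = CREDENTIALS[queue]
    except KeyError:
        raise ValueError('unknown queue: %r' % queue)
    # ... dosomething(queue) would go here ...
    return username, password
```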
Get current server ip or domain in Django I have a util method in Python Django project:def getUserInfo(request): user = request.user user_dict = model_to_dict(user) user_dict.pop("password") user_dict.pop("is_superuser") user_dict["head_img"] = user.head_img.url # there is `/media/images/users/head_img/blob_NOawLs1`I want to add my server domain or ip in the front of it, like:http://www.example.com:8000/media/images/users/head_img/blob_NOawLs1How to get current server ip( or domain )? EDITI am not going to get the remote ip, I just want to get the server ip. I mean, I write the Django as backend server, when it is running, how can I get the server ip? or domain.
You can get the hostname from the request like this (docs):

request.get_host()

and the remote IP of the client like this (docs):

request.META['REMOTE_ADDR']

To get the server IP is a bit tricky, as shown in this SO answer, which gives this solution:

import socket

# one or both of the following will work depending on your scenario
socket.gethostbyname(socket.gethostname())
socket.gethostbyname(socket.getfqdn())
Extracting data from specific columns of numpy array for each row I am trying to obtain the value corresponding to column b[i] for each row i in ACan I do this without using the for loop?A = np.array([[35, 2, 23, 22], [44, 21, 15, 4], [44, 21, 15, 4], [37, 4, 17, 41], [33, 4, 4, 18], [35, 2, 23, 22]])b = np.array([0,1,1,2,3,0])C = zeros(len(b),1)for i in range(6): C[i] = A[i][b[i]]
Since you want to sequentially index the rows of A, you can index with np.arange(len(A)) in addition to b to get your desired output:

A[np.arange(len(A)), b]
# array([35, 21, 21, 17, 18, 35])

Showing how this works:

# A                          np.arange(len(A))   b
array([[35,  2, 23, 22],     [0,                 0]
       [44, 21, 15,  4],     [1,                 1]
       [44, 21, 15,  4],     [2,                 1]
       [37,  4, 17, 41],     [3,                 2]
       [33,  4,  4, 18],     [4,                 3]
       [35,  2, 23, 22]])    [5,                 0]
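An equivalent spelling (NumPy >= 1.15) is np.take_along_axis, which makes the pick-one-value-per-row intent explicit:

```python
import numpy as np

A = np.array([[35, 2, 23, 22],
              [44, 21, 15, 4],
              [44, 21, 15, 4],
              [37, 4, 17, 41],
              [33, 4, 4, 18],
              [35, 2, 23, 22]])
b = np.array([0, 1, 1, 2, 3, 0])

# b[:, None] lifts b to shape (6, 1) so exactly one column is taken per row.
C = np.take_along_axis(A, b[:, None], axis=1).ravel()
```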
How to find all occurrence of a key in nested dict, but also keep track of the outer dict key value? I've searched over stackoverflow and found the following code that allow me to search for a key values in nested dict recursively. However, I also want to keep track of the outer dict's key value. How should I do that?from Alfe's answer in the below link, I can use the code below get all the values of the key in nested dict.Find all occurrences of a key in nested python dictionaries and listsdata = {'item1': { 'name': 'dummy', 'type': 'dummy1'},'item2': { 'name': 'dummy', 'type': 'dummy1', 'label':'label2'},'item3': { 'name': 'dummy', 'type': 'dummy1', 'label':'label3'},'item4': { 'name': 'dummy', 'type': 'dummy1'}} def find(key, dictionary): for k, v in dictionary.items(): if k == key: yield v elif isinstance(v, dict): for result in find(key, v): yield result elif isinstance(v, list): for d in v: for result in find(key, d): yield resultIn[1]:list(find('label', data))Out[1]: ['label2', 'label3']However, I also need to keep record of the outer dict key as below. How should I do this? Also my data can potentially have more than one layer.{'item2':'label2','item3':'label3'}I also find the recursive_lookup in this link very neatly written. However, it's returning None when I tried to run it.Find keys in nested dictionarydef recursive_lookup(k, d): if k in d: return d[k] for v in d.values(): if isinstance(v, dict): return recursive_lookup(k, v) return NoneIt's returning None when I call recursive_lookup('label', data).If anyone can point out for me why the above code is not working that would be great too!
functions returns the path as well as value as a list of tuple. def dict_key_lookup(_dict, key, path=[]): results = [] if isinstance(_dict, dict): if key in _dict: results.append((path+[key], _dict[key])) else: for k, v in _dict.items(): results.extend(dict_key_lookup(v, key, path= path+[k])) elif isinstance(_dict, list): for index, item in enumerate(_dict): results.extend(dict_key_lookup(item, key, path= path+[index])) return resultsHope this helps.
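Applying it to the question's data (function restated so the snippet runs on its own): the outer key is the first element of each returned path, so the desired mapping falls out of a small dict comprehension.

```python
def dict_key_lookup(_dict, key, path=[]):
    results = []
    if isinstance(_dict, dict):
        if key in _dict:
            results.append((path + [key], _dict[key]))
        else:
            for k, v in _dict.items():
                results.extend(dict_key_lookup(v, key, path=path + [k]))
    elif isinstance(_dict, list):
        for index, item in enumerate(_dict):
            results.extend(dict_key_lookup(item, key, path=path + [index]))
    return results

data = {'item1': {'name': 'dummy', 'type': 'dummy1'},
        'item2': {'name': 'dummy', 'type': 'dummy1', 'label': 'label2'},
        'item3': {'name': 'dummy', 'type': 'dummy1', 'label': 'label3'},
        'item4': {'name': 'dummy', 'type': 'dummy1'}}

# path[0] is the outer dict key, e.g. ['item2', 'label'] -> 'item2'.
found = {path[0]: value for path, value in dict_key_lookup(data, 'label')}
```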
Python project: Create a program that keeps track of the items that a wizard can carry show doesn't work and it won't show any of my itemsIn the first file of my code I have the following content:items.py:list(inventory_list):inventory = ["a wooden staff", "a wizard hat", "a cloak of invisibility","some elven bread", "an unknown potion", "a scroll of uncursing","a scroll of invisibility", "a crossbow", "a wizard's cloak"]item = inventory.pop()item = inventory.pop(1)item = inventory.pop(2)item = inventory.pop(3)item = inventory.pop(4)item = invnetory.pop(5)item = inventory.pop(6)item = inventory.pop(7)item = inventory.pop(8)In my other file which which is the main.py files looks like this. import randomimport items as iinventory_list = 0def list(inventory_list): inventory = ["a wooden staff", "a wizard hat", "a cloak of invisibility", "some elven bread", "an unknown potion", "a scroll of uncursing", "a scroll of invisibility", "a crossbow", "a wizard's cloak"] item = inventory.pop() item = inventory.pop(1) item = inventory.pop(2) item = inventory.pop(3) item = inventory.pop(4)def display_menu(inventory_list): random.shuffle(inventory_list) print("The Wizard Inventory Program") print() print("COMMAND MENU") print("show - Show all items") print("grab - Grab an item") print("edit - Edit an item") print("drop - Drop an item") print("exit - Exit program")def show(inventory_list): i = 1 for item in inventory_list: print(str(i) + ". 
" + item) i += 1 print()def grab(inventory_list): item = input("Name: ") inventory_list.append(item) print(item + " was added.\n")def drop(inventory_list): number = int(input("Number: ")) if number < 1 or number > len(inventory_list): print("Invalid item number.\n") else: number = inventory_list.pop(number-1) print(item + " was deleted.\n")def edit(inventory_list): number = int(input("Number: ")) if number < 1 or number > len(inventory_list): print ("Invalid item number.\n") else: number = inventory_list.pop(input()) print( item + "was edited to.\n")def main(): inventory_list = ["a wooden staff", "a wizard hat", "a cloak of invisibility", "some elven bread", "an unknown potion", "a scroll of uncursing", "a scroll of invisibility", "a crossbow", "a wizard's cloak"] display_menu(inventory_list) while True: command = input("Command: ") if command.lower() == "show": list(inventory_list) elif command.lower() == "grab": grab(inventory_list) elif command.lower() == "drop": drop(inventory_list) elif command.lower() == "exit": break else: print("Not a valid command. Please try again.\n") print("Bye!")if __name__== "__main__": main()Another question is do I need to have separate files or can I put it all in one?
I think this is more along the lines of what you're looking for. Also, I will help you out because I can see from your code a few places you are struggling, but please review the rules regarding posting questions to SO, because as mentioned this does not fit the profile. And also examine what I'm doing differently and try to grasp why.import randominventory_list = []def display_menu(inventory_list): random.shuffle(inventory_list) print("The Wizard Inventory Program") print() print("COMMAND MENU") print("show - Show all items") print("grab - Grab an item") print("edit - Edit an item") print("drop - Drop an item") print("exit - Exit program")def invalid_number(num): try: x = inventory_list[num] return False except IndexError: return Truedef show(inventory_list): for i, item in enumerate(inventory_list): print("{}. {}".format(i, item)) print()def grab(inventory_list): item = input("Name: ") inventory_list.append(item) print(item + " was added.\n")def drop(inventory_list): number = int(input("Number: ")) if invalid_number(number): print("Invalid item number.\n") else: orig_inp = inventory_list[number] del inventory_list[number] print("'{}' was deleted.\n".format(orig_inp))def edit(inventory_list): number = int(input("Number: ")) if invalid_number(number): print ("Invalid item number.\n") else: orig_inp = inventory_list[number] new_inp = input("What would you like to call '{}' instead? 
".format(orig_inp)) inventory_list[number] = new_inp print("'{}' was edited to '{}'.\n".format(orig_inp, new_inp))def main(): inventory_list = ["a wooden staff", "a wizard hat", "a cloak of invisibility", "some elven bread", "an unknown potion", "a scroll of uncursing", "a scroll of invisibility", "a crossbow", "a wizard's cloak"] display_menu(inventory_list) while True: command = input("Command: ").lower() if command == "show": show(inventory_list) elif command == "grab": grab(inventory_list) elif command == "drop": drop(inventory_list) elif command == "edit": edit(inventory_list) elif command == "exit": break else: print("Not a valid command. Please try again.\n") print("Bye!")if __name__== "__main__": main()
Distribute executable with python pip

I am trying to distribute a CLI tool for public use. My code contains an executable (written in golang) and a helper python script (used by the executable).

My initial approach was to call the executable from python using this, where main is the entrypoint of the cli command:

import os
import subprocess
import sys

def main():
    dst = os.path.dirname(os.path.realpath(__file__)) + '/golangexec'
    arg_list = [dst, "myclitool"]
    cmd_args = sys.argv[1:]
    args = arg_list + cmd_args
    subprocess.call(args)
    return

My package is this:

project
│   setup.py
│
└───myclitool
    │   golangexec
    │   __init__.py
    │   pyhelper.py
    │   run.py

With setup.py being:

from setuptools import setup

setup(name='mypkg',
      packages=['myclitool'],
      version='0.1',
      entry_points='''
          [console_scripts]
          mycli=myclitool.run:main
      ''')

However, this doesn't install my executable at the same location with the rest of the files. I have tried to place everything inside package data, but then I face a permission denied error when running the exe using subprocess.

What am I doing wrong?
Not a pythonic solution, but for anyone struggling with the same problem: npm allows a bin param in the package.json file, where you can directly link up your executable.

{
  "name": "myclipkg",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "author": "",
  "license": "ISC",
  "bin": {
    "myclitool": "./golangexec"
  },
  "homepage": "https://gitlab.com/myclipkg/cli#README"
}
Should a python file always include a class? I have been coding in python for a couple months now, and something has always been on my mind. I know you can have classes in your .py file, but you don't have to. My question is, is it good practice to always have your code in a class, or is it not necessary? FYI: I have been coding in Java for a few years, so I'm used to always having a "main" class and a main method that runs everything.
It depends on what your file is. In theory everything (saying this with some hesitation) can be written as a class, but it is a bit overkill to do that just for the sake of being "correct", and it will probably make your code look strange rather than clear. In general I would make the following distinctions between cases:

If it is the source for a big project which makes sense to organize in an object-oriented fashion, then you would have a class which defines exactly that. This is great because then you can inherit the class for variants or child projects.

If you are creating a list of utility functions to use across all your projects, such as array manipulations or little tools that are always handy, then a function-only file is the way to go.

If you are writing a script designed to execute a specific task, then I would define the task-specific functions in a .py file and put the code related to execution under the statement

if __name__ == '__main__':
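To make the script case concrete, here is a minimal sketch of that layout (the function names are made up for illustration) — note the guard is spelled with double underscores, __name__ and '__main__':

```python
# Task-specific functions live at module level; execution is guarded so
# importing this file from elsewhere does not run the task.
def build_greeting(name):
    return "Hello, {}!".format(name)

def main():
    print(build_greeting("world"))

if __name__ == "__main__":
    main()
```

Another module can now do from myscript import build_greeting without triggering main().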
Simple batch request for Graph API returning Unsupported Post Request error

I'm trying to get public engagement metrics via the Graph API for a list of links. Since there are a lot of them, a batch request is necessary to avoid hitting the rate limits. Using the engagement endpoint for links and the batch API guide provided by Facebook, I formatted the batch request as a list of dictionaries and submitted via POST instead of GET. But when I run my code (see below) I get an Unsupported Post Request error.

I'm behind and exhausted and any help would be greatly appreciated.

Here's my code:

import requests
import json
from fbauth import fbtoken

link1 = 'https://www.nytimes.com/2018/07/02/world/europe/angela-merkel-migration-coalition.html'
link2 = 'https://www.nytimes.com/2018/07/02/world/europe/trump-nato.html'

# input dictionary for request
batch = [{"method": "GET", "relative_url": '/v3.0/url/?id={0}'.format(link1)},
         {"method": "GET", "relative_url": '/v3.0/url/?id={0}'.format(link2)}]

url = 'https://graph.facebook.com'
payload = json.dumps(batch)
headers = {'access_token': fbtoken}
response = requests.post(url, data=payload)
Robj = response.json()
print(Robj)

And here's the error:

{'error': {'message': 'Unsupported post request. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api', 'type': 'GraphMethodException', 'code': 100, 'error_subcode': 33, 'fbtrace_id': 'AcxF9FGKcV/'}}
You need to pass the batch requests using the batch parameter, like so:

payload = {'batch': json.dumps(batch)}
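A sketch of how the corrected payload fits together. The POST itself is commented out here, and the token is a placeholder; to my knowledge Graph API batch calls also expect the access token as a request parameter alongside batch, not as a header:

```python
import json

links = [
    "https://www.nytimes.com/2018/07/02/world/europe/angela-merkel-migration-coalition.html",
    "https://www.nytimes.com/2018/07/02/world/europe/trump-nato.html",
]
batch = [{"method": "GET", "relative_url": "/v3.0/url/?id={0}".format(link)}
         for link in links]

# One form field named "batch" holding the JSON-encoded list of requests;
# requests will form-encode this dict for us when passed as data=.
payload = {"batch": json.dumps(batch), "access_token": "YOUR_TOKEN"}
# response = requests.post("https://graph.facebook.com", data=payload)

print(json.loads(payload["batch"])[0]["method"])  # GET
```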
extract inner values of a nested dictionary

I have a nested dictionary A and am trying to collect all the inner values, which are basically float numbers.

A = {0: {1: 2.3, 2: 4.3, 6: 2.1},
     1: {3: 2.6, 4: 4.1, 6: 8.1},
     3: {0: 2.2, 2: 9.3, 4: 3.1},
     5: {1: 2.8, 2: 5.3, 6: 2.1}}

I am using

col = [A[key][values] for values in A[key]]

but this only gives the inner values of key=0. Do you know why this happens, and what approach would get me all the inner values?
Try this:

[x for y in A.values() for x in y.values()]

Output:

[2.3, 4.3, 2.1, 2.6, 4.1, 8.1, 2.2, 9.3, 3.1, 2.8, 5.3, 2.1]
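Equivalently, itertools.chain.from_iterable from the standard library does the same flattening, which can read more clearly for nested data:

```python
from itertools import chain

A = {0: {1: 2.3, 2: 4.3, 6: 2.1},
     1: {3: 2.6, 4: 4.1, 6: 8.1},
     3: {0: 2.2, 2: 9.3, 4: 3.1},
     5: {1: 2.8, 2: 5.3, 6: 2.1}}

# Chain the inner dicts' value views into one flat list.
col = list(chain.from_iterable(inner.values() for inner in A.values()))
print(col)  # [2.3, 4.3, 2.1, 2.6, 4.1, 8.1, 2.2, 9.3, 3.1, 2.8, 5.3, 2.1]
```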
how to get "if suite" along with my "else suite"

I have written these simple lines of code to get the current day name and then, using an if/else statement, check whether it is Sunday. Since today is in fact Sunday, I expect the if suite to run, but the terminal gives me the else suite. I was wondering what is wrong with my code:

import datetime

now = datetime.datetime.now()
weeks = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday"]

print("Today is {}, So:".format(now.strftime("%A")))
if now in weeks:
    print("Gossh, You are now in your school days..")
else:
    print("rest and recover!")

and the result is:

Today is Sunday, So:
rest and recover!
You are testing if now (a datetime object) is in weeks (string objects). now is not in weeks — no datetime object is in weeks. The formatted day name (now.strftime("%A")) might be, but you're not testing that.

from datetime import datetime

now = datetime.now()
dayname = now.strftime("%A")
weekdays = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday"]

print(f"Today is {dayname}, so:")
if dayname in weekdays:
    print("Gossh, You are now in your school days...")
else:
    print("rest and recover!")

That being said, both weeks and weekdays are bad names for a variable that contains this particular list of day names. Find a better variable name.
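You can see the difference directly with a fixed date instead of datetime.now() (2018-07-01 fell on a Sunday), which makes the two membership tests deterministic:

```python
from datetime import datetime

fixed = datetime(2018, 7, 1)   # 2018-07-01 was a Sunday
dayname = fixed.strftime("%A")
weekdays = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday"]

print(dayname in weekdays)   # True: the string "Sunday" is in the list
print(fixed in weekdays)     # False: a datetime object never equals a string
```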
How to send file from Nodejs to Flask Python?

Hope you are doing well. I'm trying to send pdf files from Nodejs to Flask using Axios. I read the files from a directory (in the form of a buffer array) and add them into formData (an npm package), then send an Axios request.

const existingFile = fs.readFileSync(path)
console.log(existingFile)
const formData = new nodeFormData()
formData.append("file", existingFile)
formData.append("fileName", documentData.docuName)

try {
  const getFile = await axios.post("http://127.0.0.1:5000/pdf-slicer", formData, {
    headers: { ...formData.getHeaders() }
  })
  console.log(getFile)
} catch (e) {
  console.log(e, "getFileError")
}

On the Flask side, I'm trying to get data from the request:

print(request.files)
if request.method == "POST":
    file = request.form["file"]
    if file:
        print(file)

In request.files, I'm getting ImmutableMultiDict([]), but in request.form["file"] I'm getting what looks like raw binary data. How can I handle this file format, or how can I convert it to a Python file object?
I solved this issue by updating my Nodejs code. We need to convert the formData file into octet/stream format, so I made a minor change in my formData code.

Before:

formData.append("file", existingFile)

After:

formData.append("file", fs.createReadStream(existingFile))

Note: fs.createReadStream only accepts a string path or a uint8array without null bytes; we cannot pass the buffer array.
How to shuffle data at each epoch using tf.data API in TensorFlow 2.0?

I am getting my hands dirty using TensorFlow 2.0 to train my model. The new iteration feature in the tf.data API is pretty awesome. However, when I executed the following code, I found that, unlike the iteration feature in torch.utils.data.DataLoader, it did not shuffle data automatically at each epoch. How do I achieve that using TF 2.0?

import numpy as np
import tensorflow as tf

def sample_data():
    ...

data = sample_data()
NUM_EPOCHS = 10
BATCH_SIZE = 128

# Subsample the data
mask = range(int(data.shape[0]*0.8), data.shape[0])
data_val = data[mask]
mask = range(int(data.shape[0]*0.8))
data_train = data[mask]

train_dset = tf.data.Dataset.from_tensor_slices(data_train).\
    shuffle(buffer_size=10000).\
    repeat(1).batch(BATCH_SIZE)
val_dset = tf.data.Dataset.from_tensor_slices(data_val).\
    batch(BATCH_SIZE)

loss_metric = tf.keras.metrics.Mean(name='train_loss')
optimizer = tf.keras.optimizers.Adam(0.001)

@tf.function
def train_step(inputs):
    ...

for epoch in range(NUM_EPOCHS):
    # Reset the metrics
    loss_metric.reset_states()

    for inputs in train_dset:
        train_step(inputs)
    ...
The batch needs to be reshuffled — apply shuffle after batching so the batches themselves are shuffled (shuffle reshuffles every time the dataset is iterated, since reshuffle_each_iteration defaults to True):

train_dset = tf.data.Dataset.from_tensor_slices(data_train).\
    repeat(1).batch(BATCH_SIZE)
train_dset = train_dset.shuffle(buffer_size=buffer_size)
Does "df['var'].map(df2)" and "df.var.map(df2)" always produce the same result?

I have a dataframe df with a column var, and another dataframe df2 with columns var and var2. The var columns in the two dataframes are exactly the same.

In my example, df['var'].map(df2) and df.var.map(df2) yield the same result. I would like to ask if this is just a coincidence in my particular dataset, or if it always holds. Thank you so much!

Update: In my example, the code below also produces the same result:

df.groupby('parent_id')['parent_id'].transform('count').tolist()

and

df.groupby('parent_id').parent_id.transform('count').tolist()

This gives me the feeling that df.groupby('parent_id')['parent_id'] and df.groupby('parent_id').parent_id produce the same result.
Yes (as long as the column exists in your data). It's syntactic sugar called attribute access. See the pandas documentation here.
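A quick sketch with made-up data showing the equivalence, using parent_id from the update since that name doesn't clash with anything:

```python
import pandas as pd

df = pd.DataFrame({"parent_id": [1, 1, 2], "other": [10, 20, 30]})

# Attribute access returns the same column as item access...
print(df["parent_id"].equals(df.parent_id))  # True

# ...so chained operations agree too.
counts_item = df.groupby("parent_id")["parent_id"].transform("count")
counts_attr = df.groupby("parent_id").parent_id.transform("count")
print(counts_item.equals(counts_attr))  # True
```

One caveat: attribute access only works when the column name is a valid Python identifier and doesn't shadow an existing DataFrame attribute. A column literally named var, for instance, collides with the built-in DataFrame.var method, so df["var"] is the safer spelling in general.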
How to adapt tensorflow recommender system tutorial to own data? Issues with Dataset and MapDataset

I am working on a recommender system in tensorflow. What I am trying to do is something similar to tensorflow's quickstart example. However I cannot seem to understand how to replace the Dataset structure(s) with my own data correctly, as doing so raises errors either in the dataset mapping phase or in the model fitting phase. I am running Python 3.7.13 on Google Colab and Tensorflow 2.8.0.

So, let's say this is a music recommender. Note that my data is all integer IDs. In order to follow the tutorial, I limit my data in a similar manner. I figured that I can actually load my data with tf.data.Dataset.from_tensor_slices():

rating = tf.data.Dataset.from_tensor_slices(df[['song_id', 'user_id']].values)
songs = tf.data.Dataset.from_tensor_slices(df[['song_id']].values)

This works, so I go on to map the dataset:

rating = rating.map(lambda x: {'song_id': x['song_id'], 'user_id': x['user_id']})
songs = songs.map(lambda x: x['song_id'])

However, this raises the following:

TypeError: Only integers, slices (:), ellipsis (...), tf.newaxis (None) and scalar tf.int32/tf.int64 tensors are valid indices, got 'song_id'

I am not sure as to why I need to map the dataset in the first place... I assume it's something tied to the default data structure used in the examples? So let's say I don't map. I go on using IntegerLookup() instead of StringLookup(mask_token=None) to preprocess my data, since all I have is integers:

user_id_vocabulary = tf.keras.layers.IntegerLookup()
user_id_vocabulary.adapt(rating)

songs_vocabulary = tf.keras.layers.IntegerLookup()
songs_vocabulary.adapt(songs)

Then I build the model class following the tutorial, just changing variable names, and define the users model, the songs model and the retrieval task:

class MyModel(tfrs.Model):

    def __init__(self,
                 user_model: tf.keras.Model,
                 song_model: tf.keras.Model,
                 task: tfrs.tasks.Retrieval):
        super().__init__()

        # Set up user and song representations.
        self.user_model = user_model
        self.song_model = song_model

        # Set up a retrieval task.
        self.task = task

    def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
        # Define how the loss is computed.
        user_embeddings = self.user_model(features["user_id"])
        song_embeddings = self.song_model(features["song_id"])
        return self.task(user_embeddings, song_embeddings)

users_model = tf.keras.Sequential([user_id_vocabulary,
                                   tf.keras.layers.Embedding(user_id_vocabulary.vocabulary_size(), 64)])
songs_model = tf.keras.Sequential([songs_vocabulary,
                                   tf.keras.layers.Embedding(songs_vocabulary.vocabulary_size(), 64)])
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
    rooms.batch(128).map(room_model)))

Lastly, I compile and fit the model:

model = MyModel(users_model, songs_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
model.fit(rating.batch(4096), epochs=3)

But this still raises the following on the .fit line:

TypeError: Only integers, slices (:), ellipsis (...), tf.newaxis (None) and scalar tf.int32/tf.int64 tensors are valid indices, got 'song_id'

What am I missing? Thanks in advance!
This reply is a little late, but if you take a look at the value of rating after the line of code

rating = tf.data.Dataset.from_tensor_slices(df[['song_id', 'user_id']].values)

you will notice there are no keys in the result — each element is a plain tensor, not a dictionary, so the structure has changed. So the map statement will not work, as there are no keys:

rating = rating.map(lambda x: {'song_id': x['song_id'], 'user_id': x['user_id']})

You can change this line in your code to the line below, which will produce a two-dimensional TensorSliceDataset:

rating = rating.map(lambda x: {'song_id': x[0], 'user_id': x[1]})

The map is used to transform the dataset into elements of type dictionary. Other structures like tuples and named tuples are supported too — take a look at the API docs for tf.data.Dataset.
Pyspark String to Decimal Conversion along with precision and format like Java decimal formatter

I am trying to convert String to decimal. I may receive decimal data like 1234.6789- (with the - at the end). In Java I can specify a format like

DecimalFormat dfmt = new DecimalFormat("0000.0000;0000.0000-")

to parse the above, so that I get the decimal value -1234.6789. Do we have an equivalent in Python or Pyspark?

I have created a UDF:

def getDecimalVal(myString):
    return Decimal(myString)

ConvertToDec = udf(getDecimalVal, DecimalType(4))

I am invoking this in my code below:

Employee = Row("firstName", "lastName", "email", "salary", "salaryday")
employee1 = Employee('steve', 'mill', 'bash@elean.co', "0012.7590", "2020-04-30")
employee2 = Employee('jack', 'neil', 'daniel@ssl.edu', "0013.2461", "2020-04-30")
employees = [employee1, employee2]

dframe = spark.createDataFrame(employees)
dframe = dframe.withColumn('decimalval', ConvertToDec(col('salary')))
dframe.show()

Below is the output:

+---------+--------+--------------+---------+----------+---------+----------+
|firstName|lastName|         email|   salary| salaryday|finalname|decimalval|
+---------+--------+--------------+---------+----------+---------+----------+
|      len|armbrust| bash@learn.co|  0012.75|2020-04-30|      len|        13|
|      dem|    meng|daniel@uda.edu|0013.2461|2020-04-30|      dem|        13|
+---------+--------+--------------+---------+----------+---------+----------+

I have two problems:

1) The decimal values, instead of being 12.7590 and 13.2461, are being rounded off to 13.
2) If I change the precision in the UDF to DecimalType(4, 4), I get the error below:

Py4JJavaError: An error occurred while calling o2598.showString.
java.lang.IllegalArgumentException: requirement failed: Decimal precision 6 exceeds max precision 4

How do I retain the precision?
You could regexp_replace first to move the - sign in front and then cast to DecimalType. That way you avoid having to use a UDF. Something like this should work:

from pyspark.sql.functions import regexp_replace
...
dframe = dframe.withColumn(
    'decimalval',
    regexp_replace('salary', r'([0-9\.]+)\-', '-$1').cast("DECIMAL(8,4)"))

Note that given you have 8 digits in your decimal number you should use DecimalType(8, 4) and not DecimalType(4, 4). From the pyspark doc:

precision – the maximum total number of digits (default: 10)
scale – the number of digits on the right side of the dot (default: 0)
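On the "do we have an equivalent in Python" part of the question: there is no direct counterpart to Java's "0000.0000;0000.0000-" pattern, but a small helper with the standard decimal module handles trailing-sign strings — a plain-Python sketch, separate from the Spark solution above:

```python
from decimal import Decimal

def parse_trailing_sign(s):
    """Parse strings like '1234.6789-' where the minus sign trails the digits."""
    s = s.strip()
    if s.endswith("-"):
        return -Decimal(s[:-1])
    return Decimal(s)

print(parse_trailing_sign("1234.6789-"))  # -1234.6789
print(parse_trailing_sign("0013.2461"))   # 13.2461
```

Decimal keeps the four fractional digits exactly, so precision is preserved without any rounding.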
Should a custom keras true positive metric always return an integer?

I'm working with a non-standard dataset, where my y_true is (batch x 5 x 1), and y_pred is (batch x 5 x 1). A batch sample i is "true" if any value of y_true[i] > 0., and it is predicted "true" if any y_pred[i] >= b where b is a threshold between 0 and 1.

I've defined this custom keras metric to calculate the number of true positives in a batch:

def TP(threshold=0.0):
    def TP_(Y_true, Y_pred):
        Y_true = tf.where(Y_true > 0., tf.ones(tf.shape(Y_true)), tf.zeros(tf.shape(Y_true)))
        Y_pred_true = tf.where(Y_pred >= threshold, tf.ones(tf.shape(Y_pred)), tf.zeros(tf.shape(Y_pred)))
        Y_true = K.sum(Y_true, axis=1)
        Y_pred_true = K.sum(Y_pred_true, axis=1)
        Y_true = tf.where(Y_true > 0., tf.ones(tf.shape(Y_true)), tf.zeros(tf.shape(Y_true)))
        Y_pred_true = tf.where(Y_pred_true > 0., tf.ones(tf.shape(Y_pred_true)), tf.zeros(tf.shape(Y_pred_true)))
        Y = tf.math.add(Y_true, Y_pred_true)
        tp = tf.where(Y == 2, tf.ones(tf.shape(Y)), tf.zeros(tf.shape(Y)))
        tp = K.sum(tp)
        return tp
    return TP_

When training, I sometimes get non-integer values. Is this because keras is averaging the values from all batches?

I have similar custom metrics for true negatives, false positives, and false negatives. Should the sum of all four of these values during training be an integer?
A two part answer:

Yes, the metrics are averaged over the batches. You will see the same behavior with the built-in metrics, e.g. tensorflow.keras.metrics.TruePositives, but at the end of each epoch it will be an integer.

However, you are not persisting state for your metric, so TensorFlow just takes the mean of your returned metric. Consider subclassing tf.keras.metrics.Metric like so:

class TP(tf.keras.metrics.Metric):
    def __init__(self, threshold=0.5, **kwargs):
        super().__init__(**kwargs)
        self.threshold = threshold
        self.true_positives = self.add_weight(name='true_positives',
                                              initializer='zeros',
                                              dtype=tf.int32)

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.where(y_true > self.threshold, tf.ones(tf.shape(y_true)), tf.zeros(tf.shape(y_true)))
        y_pred_true = tf.where(y_pred >= self.threshold, tf.ones(tf.shape(y_pred)), tf.zeros(tf.shape(y_pred)))
        y_true = K.sum(y_true, axis=1)
        y_pred_true = K.sum(y_pred_true, axis=1)
        y_true = tf.where(y_true > self.threshold, tf.ones(tf.shape(y_true)), tf.zeros(tf.shape(y_true)))
        y_pred_true = tf.where(y_pred_true > self.threshold, tf.ones(tf.shape(y_pred_true)), tf.zeros(tf.shape(y_pred_true)))
        Y = tf.math.add(y_true, y_pred_true)
        tp = tf.where(Y == 2, tf.ones(tf.shape(Y), dtype=tf.int32), tf.zeros(tf.shape(Y), dtype=tf.int32))
        tp = K.sum(tp)
        self.true_positives.assign_add(tp)

    def result(self):
        return self.true_positives

    def get_config(self):
        return {'threshold': self.threshold}
Unpacking SequenceMatcher loop results

What is the best way to unpack SequenceMatcher loop results in Python so that values can be easily accessed and processed?

from difflib import *

orig = "1234567890"
commented = "123435456353453578901343154"
diff = SequenceMatcher(None, orig, commented)

match_id = []
for block in diff.get_matching_blocks():
    match_id.append(block)
print(match_id)

String integers represent Chinese Characters. The current iteration code stores match results in a list like this:

match_id
[Match(a=0, b=0, size=4), Match(a=4, b=7, size=2), Match(a=6, b=16, size=4), Match(a=10, b=27, size=0)]

I'd eventually like to mark out the comments with "{{" and "}}" like so:

"1234{{354}}56{{3534535}}7890{{1343154}}"

Which means I am interested in unpacking the above SequenceMatcher results and doing some calculations on specific b and size values to yield this sequence:

rslt = [[0+4,7],[7+2,16],[16+4,27]]

which is a repetition of [b[i]+size[i], b[i+1]].
1. Unpacking SequenceMatcher results to yield a sequence

You can unzip match_id and then use a list comprehension with your expression.

a, b, size = zip(*match_id)
# a = (0, 4, 6, 10)
# b = (0, 7, 16, 27)
# size = (4, 2, 4, 0)

rslt = [[b[i] + size[i], b[i+1]] for i in range(len(match_id)-1)]
# rslt = [[4, 7], [9, 16], [20, 27]]

Reference for zip, a Python built-in function: https://docs.python.org/3/library/functions.html#zip

2. Marking out the comments with "{{" and "}}"

You can loop through rslt and then nicely append the match-so-far and mark out the comments.

rslt_str = ""
prev_end = 0
for start, end in rslt:
    rslt_str += commented[prev_end:start]
    if start != end:
        rslt_str += "{{%s}}" % commented[start:end]
    prev_end = end

# rslt_str = "1234{{354}}56{{3534535}}7890{{1343154}}"
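Putting the two steps together on the question's data reproduces the desired marked-up string end to end:

```python
from difflib import SequenceMatcher

orig = "1234567890"
commented = "123435456353453578901343154"

blocks = SequenceMatcher(None, orig, commented).get_matching_blocks()
a, b, size = zip(*blocks)

# Each comment runs from the end of one matching block to the start of the next.
rslt = [[b[i] + size[i], b[i + 1]] for i in range(len(blocks) - 1)]

out, prev_end = "", 0
for start, end in rslt:
    out += commented[prev_end:start]          # matched text, kept as-is
    if start != end:
        out += "{{%s}}" % commented[start:end]  # inserted text, marked out
    prev_end = end

print(out)  # 1234{{354}}56{{3534535}}7890{{1343154}}
```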
Predicting the Trajectories of Planets Using Polyfit

I'm simulating the three body problem and graphed the trajectories in 3D. I'm trying to figure out how I can predict the trajectories of these planets by extending the plot lines using np.polyfit. I have experience in doing this with dataframes and on 2D plots, but not in 3D and without using any sort of dataframe. I provided the whole entire code and the extension attempts are below the graph, including the error message. I'm looking for any suggestions on how to modify my current code, particularly the portion of code that extends the plots, to make this work.

Code:

from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline

# Universal Gravitational Const.
G = 6.674e-11

# Defining Mass
m1 = 0.9
m2 = 3.5
m3 = 1.6

# Init positions in graph (array)
pos1 = [-5,3,1]
pos2 = [5,12,10]
pos3 = [-7,1,27]

p01 = np.array(pos1)
p02 = np.array(pos2)
p03 = np.array(pos3)

# Init velocities (array)
vi1 = [10,-2,3]
vi2 = [-1,3,2]
vi3 = [3,-1,-6]

v01 = np.array(vi1)
v02 = np.array(vi2)
v03 = np.array(vi3)

# Function
def derivs_func(y,t,G,m1,m2,m3):
    d1 = np.array([y[0],y[1],y[2]])    # Unpacking the variables
    d2 = np.array([y[3],y[4],y[5]])
    d3 = np.array([y[6],y[7],y[8]])
    v1 = np.array([y[9],y[10],y[11]])
    v2 = np.array([y[12],y[13],y[14]])
    v3 = np.array([y[15],y[16],y[17]])

    # Distance between objects
    dist12 = np.sqrt((pos2[0]-pos1[0])**2 + (pos2[1]-pos1[1])**2 + (pos2[2]-pos1[2])**2)
    dist13 = np.sqrt((pos3[0]-pos1[0])**2 + (pos3[1]-pos1[1])**2 + (pos3[2]-pos1[2])**2)
    dist23 = np.sqrt((pos3[0]-pos2[0])**2 + (pos3[1]-pos2[1])**2 + (pos3[2]-pos2[2])**2)

    # Derivative equations: change in velocity and position
    dv1dt = m2 * (d2-d1)/dist12**3 + m3 * (d3-d1)/dist13**3
    dv2dt = m1 * (d1-d2)/dist12**3 + m3 * (d3-d2)/dist23**3
    dv3dt = m1 * (d1-d3)/dist13**3 + m2 * (d2-d3)/dist23**3
    dd1dt = v1
    dd2dt = v2
    dd3dt = v3

    derivs = np.array([dd1dt,dd2dt,dd3dt,dv1dt,dv2dt,dv3dt])    # Adding derivatives into an array
    derivs3 = derivs.flatten()    # Turning the array into a 1D array
    return derivs3    # Returning the flattened array

yo = np.array([p01, p02, p03, v01, v02, v03])    # Initial conditions for position and velocity
y0 = yo.flatten()    # Turning the array into a 1D array
time = np.linspace(0,500,500)    # Defining time

sol = odeint(derivs_func, y0, time, args = (G,m1,m2,m3))    # Calling the odeint function

x1 = sol[:,:3]
x2 = sol[:,3:6]
x3 = sol[:,6:9]

fig = plt.figure(figsize = (15,15))    # Creating a 3D plot
ax = plt.axes(projection = '3d')

ax.plot(x1[:,0],x1[:,1],x1[:,2],color = 'b')    # Plotting the paths each planet takes
ax.plot(x2[:,0],x2[:,1],x2[:,2],color = 'r')
ax.plot(x3[:,0],x3[:,1],x3[:,2],color = 'g')

ax.scatter(x1[-1,0],x1[-1,1],x1[-1,2],color = 'b', marker = 'o', s=45, label = 'Mass 1')
ax.scatter(x2[-1,0],x2[-1,1],x2[-1,2],color = 'r', marker = 'o', s=200, label = 'Mass 2')
ax.scatter(x3[-1,0],x3[-1,1],x3[-1,2],color = 'g', marker = 'o', s=100, label = 'Mass 3')
ax.legend()

fig = plt.figure(figsize = (15,15))
ax = plt.axes(projection = '3d')

fit1 = np.poly1d(np.polyfit(x1[:,0],x1[:,1],7))
fit12 = np.poly1d(np.polyfit(x1[:,0],x1[:,2],7))
fit2 = np.poly1d(np.polyfit(x2[:,0],x2[:,1],7))
fit22 = np.poly1d(np.polyfit(x2[:,0],x2[:,2],7))
fit3 = np.poly1d(np.polyfit(x3[:,0],x3[:,1],7))
fit32 = np.poly1d(np.polyfit(x3[:,0],x3[:,2],7))

y1 = fit1(x1[:,0])
y12 = fit12(x1[:,0])
y2 = fit2(x2[:,0])
y22 = fit22(x2[:,0])
y3 = fit3(x3[:,0])
y32 = fit32(x3[:,0])

extended1 = np.linspace(x1[-1,0], x1[-1,0] + 300, 1)
extended2 = np.linspace(x2[-1,0], x2[-1,0] + 300, 1)
extended3 = np.linspace(x3[-1,0], x3[-1,0] + 300, 1)

yex1 = fit1(extended1)
yex12 = fit12(extended1)
yex2 = fit2(extended2)
yex22 = fit22(extended2)
yex3 = fit3(extended3)
yex32 = fit32(extended3)

ax.plot(x1[:,0],x1[:,1],x1[:,2])
ax.plot(x1[:,0],yex1,yex12)
ax.plot(x2[:,0],x2[:,1],x2[:,2])
ax.plot(x2[:,0],yex2,yex22)
ax.plot(x3[:,0],x3[:,1],x3[:,2])
ax.plot(x3[:,0],yex3,yex32)

ERROR MESSAGE:

Traceback (most recent call last)
<ipython-input-98-a55893800c7b> in <module>
     28
     29 ax.plot(x1[:,0],x1[:,1],x1[:,2])
---> 30 ax.plot(x1[:,0],yex1,yex12)
     31 ax.plot(x2[:,0],x2[:,1],x2[:,2])
     32 ax.plot(x2[:,0],yex2,yex22)

~\Downloads\Anaconda\lib\site-packages\mpl_toolkits\mplot3d\axes3d.py in plot(self, xs, ys, zdir, *args, **kwargs)
   1530             zs = np.broadcast_to(zs, len(xs))
   1531
-> 1532         lines = super().plot(xs, ys, *args, **kwargs)
   1533         for line in lines:
   1534             art3d.line_2d_to_3d(line, zs=zs, zdir=zdir)

~\Downloads\Anaconda\lib\site-packages\matplotlib\axes\_axes.py in plot(self, scalex, scaley, data, *args, **kwargs)
   1664         """
   1665         kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D._alias_map)
-> 1666         lines = [*self._get_lines(*args, data=data, **kwargs)]
   1667         for line in lines:
   1668             self.add_line(line)

~\Downloads\Anaconda\lib\site-packages\matplotlib\axes\_base.py in __call__(self, *args, **kwargs)
    223                 this += args[0],
    224                 args = args[1:]
--> 225             yield from self._plot_args(this, kwargs)
    226
    227     def get_next_color(self):

~\Downloads\Anaconda\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
    389             x, y = index_of(tup[-1])
    390
--> 391         x, y = self._xy_from_xy(x, y)
    392
    393         if self.command == 'plot':

~\Downloads\Anaconda\lib\site-packages\matplotlib\axes\_base.py in _xy_from_xy(self, x, y)
    268         if x.shape[0] != y.shape[0]:
    269             raise ValueError("x and y must have same first dimension, but "
--> 270                              "have shapes {} and {}".format(x.shape, y.shape))
    271         if x.ndim > 2 or y.ndim > 2:
    272             raise ValueError("x and y can be no greater than 2-D, but have "

ValueError: x and y must have same first dimension, but have shapes (500,) and (1,)
np.polyfit returns an array of coefficients:

>>> np.polyfit(np.arange(4), np.arange(4), 1)
array([1.00000000e+00, 1.12255857e-16])

To turn this into a callable polynomial, use np.poly1d on the result:

>>> p = np.poly1d(np.polyfit(np.arange(4), np.arange(4), 1))
>>> p(1)
1.0000000000000002

So in your project, change these lines:

fit1 = np.polyfit(x1[:,0],x1[:,1],7)
# etc.

to

fit1 = np.poly1d(np.polyfit(x1[:,0],x1[:,1],7))
# etc.

Edit: Your new error seems to stem from the fact that your extended axes have 2 dimensions each:

extended1 = np.linspace(x1[-1,:], x1[-1,:] + 300, 1)    # extended1.ndim == 2 !
extended2 = np.linspace(x2[-1,:], x2[-1,:] + 300, 1)
extended3 = np.linspace(x3[-1,:], x3[-1,:] + 300, 1)

If I understand your code correctly, this is what you want to do instead:

extended1 = np.arange(x1[-1, 0], x1[-1, 0] + 300)
extended2 = np.arange(x2[-1, 0], x2[-1, 0] + 300)
extended3 = np.arange(x3[-1, 0], x3[-1, 0] + 300)

And further below:

ax.plot(x1[:,0],x1[:,1],x1[:,2])
ax.plot(extended1,yex1,yex12)
ax.plot(x2[:,0],x2[:,1],x2[:,2])
ax.plot(extended2,yex2,yex22)
ax.plot(x3[:,0],x3[:,1],x3[:,2])
ax.plot(extended3,yex3,yex32)
Merge 2 dataframes using the first column as the index

df1:

   Condition Currency  Total Hours
0       Used      USD          100
1       Used      USD           75
2       Used      USD           13
3       Used      USD          NaN

df2:

   Condition Currency  Total Hours
1       Used      USD           99
3        New      USD         1000

Desired Result:

   Condition Currency  Total Hours
0       Used      USD          100
1       Used      USD           99
2       Used      USD           13
3        New      USD         1000

How would I merge the two dataframes using the first column as the index and overwrite the values of df1 with those of df2?

I have tried a variety of variations and nothing seems to work. A few examples I tried:

pd.merge(df, df1)      # result is an empty dataframe
df.combine_first(df1)  # the result is a dataframe but with the same values as df1
Try update:

df.update(df2)
print(df)

Output:

  Condition Currency  Total Hours
0      Used      USD        100.0
1      Used      USD         99.0
2      Used      USD         13.0
3       New      USD       1000.0
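Two details worth knowing about update, sketched below with made-up numbers: it modifies df in place, aligned on the index, and it skips NaN values coming from the other frame (so it never blanks out existing data), while NaN in the target does get filled:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"hours": [100, 75, 13, np.nan]})
df2 = pd.DataFrame({"hours": [99, np.nan, 1000]}, index=[1, 2, 3])

df.update(df2)  # in place; the NaN at index 2 in df2 is ignored
print(df["hours"].tolist())  # [100.0, 99.0, 13.0, 1000.0]
```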
problems with using string.join() operator

I tested out the string.join() method on a few lines of code:

a = 1
b = 1
c = 0
superpower = []

if a == 1:
    superpower.append("flying")
if b == 1:
    superpower.append("soaring")
if c == 1:
    superpower.append("high")

", ".join(superpower)
print superpower

but the result always comes back as just a regular list, not a string. How can I fix this? I'm new to python, and would appreciate the help.
", ".join(superpower) returns a string, it doesn't convert the input iterable into a string. You aren't doing anything with that return value:superpower_str = ', '.join(superpower)print(superpower_str)is probably what you want.
Sampled softmax loss eval code works but function call results in ValueError

I am implementing the skip-gram model in a federated learning setup. I get the inputs and label in the following way:

train_inputs_embed = tf.nn.embedding_lookup(variables.weights, batch['target_id'])
train_labels = tf.reshape(batch['context_id'], [-1, 1])

When I define the loss as follows

loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=variables.nce_weights,
                                                 biases=variables.bias,
                                                 inputs=train_inputs_embed,
                                                 labels=train_labels,
                                                 num_sampled=5,
                                                 num_true=1,
                                                 num_classes=vocab_size))

I get the following error:

ValueError: Shape must be rank 2 but is rank 3 for 'sampled_softmax_loss/concat_4' (op: 'ConcatV2') with input shapes: [?,1], [?,?,5], [].

But the following code (taken from the eval section of the sampled_softmax_loss function) works for the same inputs and labels!

logits = tf.matmul(train_inputs_embed, tf.transpose(variables.nce_weights))
logits = tf.nn.bias_add(logits, variables.bias)
labels_one_hot = tf.one_hot(train_labels, vocab_size)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels_one_hot, logits=logits))

How do I resolve this issue?
Reshaping train_inputs_embed resolved the error:

train_inputs_embed = tf.reshape(
    tf.nn.embedding_lookup(variables.weights, batch['target_id']),
    [-1, embedding_size])
Django model default value in response

I want to include a default value in my Django model query response.

Sample model query:

myModel.objects.filter().values("username", "user_gender")

I want the response to include a constant value, equivalent to this SQL:

SELECT username, user_gender, 'delhi' AS country FROM mytable

Please let me know if you have any suggestions.
You can add an additional value to the queryset using the Value expression:

from django.db.models import CharField, Value

myModel.objects.filter().values("username", "user_gender",
                                default_city=Value('delhi', output_field=CharField()))

You can find more details about the values() method here.
Group strings in a pandas dataframe column which share a common parent three or more times

I'm trying to find and group strings in a dataframe column which share a common parent three or more times.

I have two columns taken from a google search: one containing the keyword used in the search, and another containing the domain returned.

If a keyword shares the same domain three or more times with another keyword, I'd like to group them in a separate column named 'Cluster' and tag them sequentially for each cluster (Cluster 1, Cluster 2 and so on).

In the example below, news and weather would be clustered together because they share the same url three or more times (www.bbc.co.uk).

Example Dataframe

url                         keyword
www.bbc.co.uk               news
www.bbc.co.uk               news
www.bbc.co.uk               news
www.bbc.co.uk               news
www.ccn.com                 news
www.dailymail.com           news
www.googlenews.com          news
www.guardian.co.uk          news
www.thesun.com              news
www.dailymail.com           news
www.weatherchannel.com      weather forecast
www.bbc.co.uk               weather forecast
www.bbc.co.uk               weather forecast
www.bbc.co.uk               weather forecast
www.bbc.co.uk               weather forecast
www.weatheronline.com       weather forecast
www.youtube.com             weather forecast
www.youtube.com             weather forecast
www.weatheronline.com       weather forecast
www.reddit.com/r/weather    weather forecast
www.stopwatchonline         count down time
www.countdownonline.com     count down time
www.youtube.com             count down time
www.clock.com               count down time

Desired Output

Keyword             Cluster
news                Cluster 1
weather forecast    Cluster 1

Minimum Reproducible Example

import pandas as pd

d = {
    'url': ["www.bbc.co.uk", "www.bbc.co.uk", "www.bbc.co.uk", "www.bbc.co.uk",
            "www.ccn.com", "www.dailymail.com", "www.googlenews.com", "www.guardian.co.uk",
            "www.thesun.com", "www.dailymail.com", "www.weatherchannel.com", "www.bbc.co.uk",
            "www.bbc.co.uk", "www.bbc.co.uk", "www.bbc.co.uk", "www.weatheronline.com",
            "www.youtube.com", "www.youtube.com", "www.weatheronline.com", "www.reddit.com/r/weather",
            "www.stopwatchonline", "www.countdownonline.com", "www.youtube.com", "www.clock.com",
            "www.whatisthetimer.com", "www.youtube.com", "www.timerit.net", "www.whatisthetimer.com"],
    'keyword': ["news", "news", "news", "news", "news", "news", "news", "news", "news", "news",
                "weather forecast", "weather forecast", "weather forecast", "weather forecast",
                "weather forecast", "weather forecast", "weather forecast", "weather forecast",
                "weather forecast", "weather forecast",
                "count down timer", "count down timer", "count down timer", "count down timer",
                "count down timer", "count down timer", "count down timer", "count down timer"]
}

# Create the DataFrame
df = pd.DataFrame(d)
print(df)

I've looked at a lot of solutions, including python, pandas, How to find connections between each group, but I'm really stuck!
Here is a solution that works for your example. Count the number of occurrences of each url/keyword pair, keep counts of 3+, and then keep only urls which have at least 2 keywords meeting that criterion. We can then turn each url into a cluster number using cat.codes.

import pandas as pd

d = {
    'url': [
        "www.bbc.co.uk", "www.bbc.co.uk", "www.bbc.co.uk", "www.bbc.co.uk",
        "www.ccn.com", "www.dailymail.com", "www.googlenews.com",
        "www.guardian.co.uk", "www.thesun.com", "www.dailymail.com",
        "www.weatherchannel.com", "www.bbc.co.uk", "www.bbc.co.uk",
        "www.bbc.co.uk", "www.bbc.co.uk", "www.weatheronline.com",
        "www.youtube.com", "www.youtube.com", "www.weatheronline.com",
        "www.reddit.com/r/weather", "www.stopwatchonline",
        "www.countdownonline.com", "www.youtube.com", "www.clock.com",
        "www.whatisthetimer.com", "www.youtube.com", "www.timerit.net",
        "www.whatisthetimer.com"
    ],
    'keyword': [
        "news", "news", "news", "news", "news", "news", "news", "news",
        "news", "news",
        "weather forecast", "weather forecast", "weather forecast",
        "weather forecast", "weather forecast", "weather forecast",
        "weather forecast", "weather forecast", "weather forecast",
        "weather forecast",
        "count down timer", "count down timer", "count down timer",
        "count down timer", "count down timer", "count down timer",
        "count down timer", "count down timer"
    ]
}

# Create the DataFrame
df = pd.DataFrame(d)

df = df.groupby(['url', 'keyword']).size().reset_index(name='count')
df = df.loc[df['count'].ge(3)].groupby('url').filter(lambda x: len(x) >= 2)
df['cluster'] = 'Cluster ' + (df['url'].astype('category').cat.codes + 1).astype(str)
print(df[['keyword', 'cluster']])

Output

            keyword    cluster
0              news  Cluster 1
1  weather forecast  Cluster 1
Python to fit a linear-plateau curve

I have a curve where Y initially increases linearly with X, then reaches a plateau at point C. In other words, the curve can be defined as:

if X < C:
    Y = k * X + b
else:
    Y = k * C + b

The training data is a list of X ~ Y values. I need to determine k, b and C through a machine learning approach (or similar), since the data is noisy and the breakpoint C changes over time. I want something more robust than estimating C by observing the current sample data. How can I do it using sklearn or maybe scipy?
WLOG you can say the second equation is simply Y = const (the plateau value k * C + b). So you really have two sub-problems: a linear regression to fit the line, and a change-point detection to find the constant. You know that at high values of X (X > C) you are already at the constant, so you can walk back down the values of X for as long as you keep getting (approximately) the same constant; that gives you C. Then do a linear regression on the points with X <= C to find the line.
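To make the idea concrete, here is a minimal pure-Python sketch (variable names are my own). For each candidate breakpoint C taken from the data, the model y = k*min(x, C) + b is an ordinary least-squares line in t = min(x, C), so it can be fitted in closed form and the C with the smallest squared error wins. In practice, scipy.optimize.curve_fit with a piecewise model function is the more robust route; this only illustrates the mechanics.

```python
def fit_linear_plateau(xs, ys):
    """Fit y = k*min(x, C) + b by scanning candidate breakpoints C
    taken from the observed x values; returns (k, b, C)."""
    best = None  # (sse, k, b, C)
    for c in xs:
        # Substituting t = min(x, c) turns the piecewise model into a line.
        ts = [min(x, c) for x in xs]
        n = len(xs)
        mt = sum(ts) / n
        my = sum(ys) / n
        denom = sum((t - mt) ** 2 for t in ts)
        if denom == 0:  # all points on the plateau: slope undefined, skip
            continue
        k = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / denom
        b = my - k * mt
        sse = sum((k * t + b - y) ** 2 for t, y in zip(ts, ys))
        if best is None or sse < best[0]:
            best = (sse, k, b, c)
    return best[1], best[2], best[3]

# Example on noiseless data generated with k=2, b=1, C=5:
xs = list(range(11))
ys = [2 * min(x, 5) + 1 for x in xs]
k, b, C = fit_linear_plateau(xs, ys)
```

On noisy data the recovered C will only be approximate, but the same scan-and-regress structure carries over directly to a scipy-based implementation.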
How do I ignore blank space in a spreadsheet while making a DataFrame? I have an Excel file that looks like this-I want to ignore all blank rows INCLUDING the BSE_IDA.INTV_R (Temporary Table) part and use the column headers beneath that and all the values as the rows in the DataFrame. How do I do that?
The pandas read_excel function has a skiprows parameter; it lets you specify rows to skip at the beginning of your file. From the docs:

skiprows : list-like
    Rows to skip at the beginning (0-indexed)

Try:

import pandas as pd

pd.read_excel('/your/path', skiprows=20)
How to loop with Python sockets?

I was planning to do a hostname-to-IP script using socket.gethostbyname(), but it seems like it's not functioning. I've tried to read the host list from a .txt file and loop through it; it worked only when there is no more than one host in the list, which makes it useless. I tried combining the hosts into one Python list, but the gethostbyname() function cannot accept it.

f = open("host.txt", "r")
contents = f.readlines()
for content in contents:
    print(content)  # Debugging
    try:
        output = socket.gethostbyname(content)
        print(output)
    except:
        print("ERROR!")
f.close()

When the text file has one host it works fine, but when I add, for example:

youtube.com
google.com

the output is:

Traceback (most recent call last):
  File "sockets.py", line 16, in <module>
    output = socket.gethostbyname(content)
socket.gaierror: [Errno 11001] getaddrinfo failed
The result of readlines() include the newline characters (\n or \r\n) of every line in the file, so you are actually passing 'youtube.com\n' to gethostbyname().Using strip() on the parameter will remove any trailing whitespace:output = socket.gethostbyname(content.strip())
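A quick illustration of what readlines() actually returns, using io.StringIO to stand in for the real host file:

```python
import io

# Simulate the host file: iteration/readlines() keeps the trailing newline
# on every line, which is exactly what breaks gethostbyname().
fake_file = io.StringIO("youtube.com\ngoogle.com\n")
raw = fake_file.readlines()

# strip() removes the trailing newline (and any surrounding whitespace).
hosts = [line.strip() for line in raw]
```

With a single-host file that has no final newline, the raw line happens to be clean already, which is why the original script only worked in that one case.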
How to debug skflow code (tensorflow) gmm_ops.py? Hi I am new to tensorflow. I want to debug Tensorflow (skflow) gmm_ops.py (Gaussian Mixture Model). I am getting ERROR:tensorflow:Model diverged with loss = NaN.How should I do it ? Is there any example? raise NanLossDuringTrainingErrortensorflow.python.training.basic_session_run_hooks.NanLossDuringTrainingError: NaN loss during training.
Usually a NanLoss means something overflowed or underflowed during training. Things such as normalizing the examples or processing a subset of the data tend to help debugging what could have caused this.
List all S3 keys between start date (inclusive) and end date (exclusive)

Is there a way to list all S3 files between specified dates? The start date can be passed as a prefix. I have a confusion as to how to pass the end date. Please can anyone help.

import boto3

def get_matching_s3_objects(bucket, prefix='', suffix=''):
    """
    Generate objects in an S3 bucket.

    :param bucket: Name of the S3 bucket.
    :param prefix: Only fetch objects whose key starts with this prefix
    """
    s3 = boto3.client('s3')
    kwargs = {'Bucket': bucket}

    if isinstance(prefix, str):
        kwargs['Prefix'] = prefix

    while True:
        # The S3 API response is a large blob of metadata.
        # 'Contents' contains information about the listed objects.
        resp = s3.list_objects_v2(**kwargs)

        try:
            contents = resp['Contents']
        except KeyError:
            return

        for obj in contents:
            key = obj['Key']
            if key.startswith(prefix) and key.endswith(suffix):
                yield obj

        # The S3 API is paginated, returning up to 1000 keys at a time.
        # Pass the continuation token into the next response, until we
        # reach the final page (when this field is missing).
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break

def get_matching_s3_keys(bucket, prefix='', suffix=''):
    """
    Generate the keys in an S3 bucket.

    :param bucket: Name of the S3 bucket.
    :param prefix: Only fetch keys that start with this prefix (optional).
    :param suffix: Only fetch keys that end with this suffix (optional).
    """
    for obj in get_matching_s3_objects(bucket, prefix, suffix):
        yield obj['Key']
AFAIK there is no direct way to filter by date using boto3; the only filters available are Bucket, Delimiter, EncodingType, Marker, MaxKeys, Prefix and RequestPayer. So you need to loop over the keys/objects and compare your start/end date to the object's last_modified datetime value. To get all objects in a specific bucket between a week ago (included) and today (excluded), I'll do something like:

from datetime import datetime, timedelta

import boto3
from pytz import UTC as utc

# NOTE: We need timezone aware objects, because the s3 object one will be.
today = utc.localize(datetime.utcnow())
since = today - timedelta(weeks=1)

# WARNINGS:
# - You may need to provide proper credentials when calling boto3.resource...
# - Error management will need to be added, in case the bucket doesn't exist.
keys = [
    o for o in boto3.resource('s3').Bucket(name='some_bucket').objects.all()
    if o.last_modified < today and o.last_modified >= since
]
Trying to load a page and cycle through proxies each time

I'm currently trying to learn Python by doing small little silly projects to try and get my head around certain bits, but I have hit a bit of a brick wall. I want to make something that will visit a page using a proxy list I have in a .txt file. I want it to load the web page with the first proxy in the file, then load the page with the second proxy, and so on. However, I keep getting this error:

Traceback (most recent call last):
  File "c:\Users\Admin\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\ptvsd_launcher.py", line 43, in <module>
    main(ptvsdArgs)
  File "c:\Users\Admin\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\lib\python\ptvsd\__main__.py", line 434, in main
    run()
  File "c:\Users\Admin\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\lib\python\ptvsd\__main__.py", line 312, in run_file
    runpy.run_path(target, run_name='__main__')
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\Users\Admin\Documents\PythonScripts\ebay-traffic.py", line 10, in <module>
    r = requests.get(url, proxies = line)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 524, in request
    prep.url, proxies, stream, verify, cert
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 699, in merge_environment_settings
    no_proxy = proxies.get('no_proxy') if proxies is not None else None
AttributeError: 'str' object has no attribute 'get'

The proxy file contains one proxy per line. I've tried various stupid things like putting the proxy file in int(), but that obviously doesn't work (I was trying a lot of silly things).

import requests

proxyList = 'proxies.txt'
file = open(proxyList, "r")

url = input('Website: ')

for line in file:
    print(line, end="")
    r = requests.get(url, proxies = line)

print('Finished.')
input()

I expect it to print each line of the proxy file as it loads up the page through each proxy.
You need to pass proxies as a dict:

import requests

proxyList = 'proxies.txt'
file = open(proxyList, "r")

url = input('Website: ')

for line in file:
    print(line, end="")
    proxies = {'http': line.strip(), 'https': line.strip()}
    r = requests.get(url, proxies=proxies)

print('Finished.')
input()
six: cannot import name python_2_unicode_compatible

With six 1.10.0 installed under Python and pip 2.6, an old Django 1.0.4 app is not able to import python_2_unicode_compatible, even though it finds six 1.10.0 just fine:

>>> import six
>>> six.__version__
'1.10.0'
>>> from six import python_2_unicode_compatible
>>>

I've confirmed with Python code within the app that it does indeed have access to six:

['appdirs==1.4.3', 'argparse==1.4.0', 'astkit==0.5.4', 'beautifulsoup==3.2.1',
 'coverage==4.3.4', 'django-cms==2.0.0a0', 'django==1.0.4', 'dnspython==1.12.0',
 'flup==1.0.2', 'importlib==1.0.4', 'iniparse==0.3.1', 'instrumental==0.5.3',
 'mako==1.0.6', 'markupsafe==1.0', 'minimock==1.2.8', 'mysql-python==1.2.5',
 'nose==1.3.7', 'packaging==16.8', 'pillow==3.4.2', 'pip==9.0.1', 'pluggy==0.4.0',
 'py==1.4.33', 'pyparsing==2.2.0', 'python-dateutil==2.6.0', 'pyzor==1.0.0',
 'setuptools==35.0.1', 'six==1.10.0', 'sorl-thumbnail==12.3', 'tox==2.7.0',
 'uwsgi==2.0.15', 'virtualenv==15.1.0', 'wheel==0.29.0']

I am tasked to move a very old site that was running Django 1.0.4 (you read that right, 1.0.4) and django_cms 2.0.0 Alpha to a new server. The old server croaked, so all I have is the backup of the main website files and dependencies that were installed long ago. I am Dockerizing it to help document and deploy this in the future.

Ubuntu 14.04
Python 2.6 (same results with 2.7)
Django 1.0.4 (installed via local zip)
django_cms 2.0.0a0 (installed via local zip)

I have tried Apache mod_wsgi, gunicorn (pip2.6 installed) and am currently using uwsgi (preferred, pip2.6 installed) to load the app. Nginx is running in another Docker container with proxy_pass, and will be the frontend proxy and TLS. uwsgi starts the site up with the custom wsgi. Upon loading the / index page, I had many import errors. Slowly, I am resolving each and every one of them (mostly related to the Django "MIDDLEWARE_CLASSES", whose definitions I have yet to find). I am currently stuck on the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 230, in __call__
    self.load_middleware()
  File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 41, in load_middleware
    raise exceptions.ImproperlyConfigured, 'Error importing middleware %s: "%s"' % (mw_module, e)
django.core.exceptions.ImproperlyConfigured: Error importing middleware cms.middleware.user: "cannot import name python_2_unicode_compatible"

uwsgi starts up with the specified python2.6 just fine:

web_1 | [uWSGI] getting INI configuration from uwsgi.ini
web_1 | *** Starting uWSGI 2.0.15 (64bit) on [Wed Apr 26 16:27:43 2017] ***
web_1 | Python version: 2.6.9 (default, Oct 22 2014, 19:53:49) [GCC 4.8.2]
web_1 | Python main interpreter initialized at 0xef1050
web_1 | python threads support enabled

Also, python2.7 was originally configured and had the exact same error. I thought I read somewhere that python_2_unicode_compatible was deprecated in 2.7 or something, so I went back to the original version the site was running under.

Do I need to install virtualenv? I don't usually do it under Docker, and just install everything globally. I can't see how that would make a difference. If six was not found, wouldn't I get an error that it could not import six, instead of python_2_unicode_compatible?
The python_2_unicode_compatible method was originally in Django, then added to six in 1.9.One of your installed packages may be trying to import python_2_unicode_compatible from django.utils.encoding, rather than from the six package.
How to print loss value which was set in keras backend function

I am new to keras. The below code snippet is for a policy gradient loss function. I tried to print the loss value to see if the loss value could be negative for policy gradient, but I couldn't. Is there any way to print it? I found some ways, but they use keras history, and it seems you can only get history from the model.fit function. The code below does not use model.fit.

from keras import backend as K

model = Sequential()
model.add(Dense(24, input_dim=self.state_size, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(self.action_size, activation='softmax'))
model.summary()

------------------------------------------------

action_prob = K.sum(action * self.model.output, axis=1)
cross_entropy = K.log(action_prob) * discounted_rewards
loss = -K.sum(cross_entropy)

optimizer = Adam(lr=self.learning_rate)
updates = optimizer.get_updates(self.model.trainable_weights, [], loss)
train = K.function([self.model.input, action, discounted_rewards], [], updates=updates)
Your train backend function has an empty output list, so there is nothing to read back. The simplest fix is to return the loss as an output of the same function, then print it in your training loop:

train = K.function([self.model.input, action, discounted_rewards], [loss], updates=updates)

for epoch in range(epochs):
    loss_value = train([states, actions, rewards])[0]
    print('Epoch {} training loss = {}'.format(epoch, loss_value))

(Here states, actions and rewards stand for whatever arrays you feed in.) Alternatively, K.print_tensor(loss, message='loss = ') returns a tensor that prints its value whenever it is evaluated inside the graph.
getting socket id of a client in flask socket.io Is there a way to get socket id of current client? I have seen it is possible to get socket id in node.js. But it seems flask's socket.io extension is a little bit different than node's socketio.
From Flask-SocketIO documentation: The request object defines request.namespace as the name of the namespace being handled, and adds request.sid, defined as the unique session ID for the client connection
Python Logical Error in Loop while reading Dictionary I am new to python and OOPS.I am expecting my module add_book to increment if book is already present in dictionary. Please help me .Not sure why for loop is not working as expected.https://github.com/amitsuneja/Bookstore/commit/4aefb378171ac326aacb35f355051bc0b057d3be
You should not append to the list while you are still iterating it. Also, your code will append the new item for each item already in the list that has a different name. Instead, you should use a for/else loop. Here, the else case will only be triggered if you do not break from the loop.

for recordlist in self.mybooksinventory:
    if self.name == recordlist['name']:
        recordlist['quantity'] += 1
        break  # break from the loop
else:  # for/else, not if/else!
    self.mybooksinventory.append({'name': self.name, 'stuclass': self.stuclass,
                                  'subject': self.subject, 'quantity': 1})
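A self-contained sketch of the same for/else pattern, with the record trimmed down to just name and quantity for illustration:

```python
def add_book(inventory, name):
    """Increment the quantity if the book exists; otherwise append a new record."""
    for record in inventory:
        if record['name'] == name:
            record['quantity'] += 1
            break
    else:  # runs only when the loop finished without hitting break
        inventory.append({'name': name, 'quantity': 1})

inventory = []
add_book(inventory, 'Python 101')
add_book(inventory, 'Python 101')  # existing book: quantity bumped, no new record
add_book(inventory, 'Pandas 101')  # new book: appended once
```

After these three calls the inventory holds two records, with 'Python 101' at quantity 2, which is exactly the behavior the question was after.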
Use a list in a prepared statement

I want to use a Python list (or a set, actually) in my execute, but I don't quite get it.

BRANDS = {
    'toyota',
    'ford',
    'dodge',
    'spyker'
}

cur = connection.cursor()
cur.execute("SELECT model FROM cars WHERE brand IN (%s)", (list(BRANDS),))

How can I use a set or list in an IN clause in psycopg2?
psycopg2 converts lists to arrays, and (%s) means a single value inside a tuple, so that's obviously not correct. What you want to do is either:

let postgres convert a tuple to a tuple:

cur.execute("SELECT model FROM cars WHERE brand IN %s", (tuple(BRANDS),))

use array operators with an array:

cur.execute("SELECT model FROM cars WHERE brand = any(%s)", (list(BRANDS),))

The performance should be equivalent. I usually recommend =any() because the typing makes more sense and it works even if the parameter is empty: postgres does not like empty tuples, so brand in () generates an error, while brand = any('{}') works fine.

Oh, and psycopg2's execute is completely happy with a list of parameters. I find that much more readable and less error prone, so I'd recommend it:

cur.execute(
    "SELECT model FROM cars WHERE brand = any(%s)",
    [list(BRANDS)]
)
How do i watch python source code files and restart when i save? When I save a python source code file, I want to re-run the script. Is there a command that works like this (sort of like nodemon for node)?
While there are probably ways to do this within the Python ecosystem, such as watchdog/watchmedo ( https://github.com/gorakhargosh/watchdog ), and maybe even Linux scripting options with inotifywait ( https://linux.die.net/man/1/inotifywait ), for me the easiest solution by far was... to just use nodemon! What I didn't know is that although the GitHub tagline of nodemon is "Monitor for any changes in your node.js application and automatically restart the server - perfect for development", nodemon is actually a deliciously generic tool and knows, for example, that .py files should be executed with python. Here's where I think the magic happens: https://github.com/remy/nodemon/blob/c1211876113732cbff78eb1ae10483eaaf77e5cf/lib/config/defaults.js

End result is that the command line below totally works. Yay!

$ nodemon hello.py
[nodemon] starting `python hello.py`
how to update the new module in openerp 7 in ubuntu 12.0? Done,all the possible ways for updating the new module in openerp 7 in ubuntu 12.0.Is there any other way to update the new module in openerp 7 in ubuntu 12.0 ? can anyone help me..
Put your module under the addons/ directory and restart your server. Then:

1. Go to OpenERP Menu Setting -> Modules -> Update Modules List, and update.
2. Go to OpenERP Menu Setting -> Modules -> Installed Modules, and search for your module name.

Hope this will help you to find your module. If you still do not find your module:

1. Left click on the module directory and go to Properties.
2. Give permissions: fill in access Read and Write.
3. Apply the permission to enclosed files.
Pandas: plot multiple columns to same x value

Followup to a previous question regarding data analysis with pandas. I now want to plot my data, which looks like this:

PrEST ID   Gene    Sequence              Ratio1    Ratio2    Ratio3
HPRR12     ATF1    TTPSAXXXXXXXXXTTTK    6.3222    4.0558    4.958
HPRR23     CREB1   KIXXXXXXXXPGVPR       NaN       NaN       NaN
HPRR23     CREB1   ILNXXXXXXXXGVPR       0.22691   2.077     NaN
HPRR15     ELK4    IEGDCEXXXXXXXGGK      1.177     NaN       12.073
HPRR15     ELK4    SPXXXXXXXXXXXSVIK     8.66      14.755    NaN
HPRR15     ELK4    IEGDCXXXXXXXVSSSSK    15.745    7.9122    9.5966

... except there are a bunch more rows, and I don't actually want to plot the ratios but some other calculated values derived from them, but it doesn't matter for my plotting problem. I have a dataframe that looks more or less like the data above, and what I want is this:

- Each row (3 ratios) should be plotted against the row's ID, as points
- All rows with the same ID should be plotted to the same x value / ID, but with another colour
- The x ticks should be the IDs, and (if possible) the corresponding gene as well (so some genes will appear on several x ticks, as they have multiple IDs mapping to them)

Below is an image that my previous, non-pandas version of this script produces... where the red triangles indicate values outside of a cutoff value used for setting the y-axis maximum value. The IDs are blacked-out, but you should be able to see what I'm after. Copy number is essentially the ratios with a calculation on top of them, so they're just another number rather than the ones I show in the data above.

I have tried to find similar questions and solutions in the documentation, but found none. Most people seem to need to do this with dates, for which there seem to be ready-made plotting functions, which doesn't help me (I think). Any help greatly appreciated!
Skipping some of the finer points of plotting, to get:

- Each row (3 ratios) should be plotted against the row's ID, as points
- All rows with the same ID should be plotted to the same x value / ID, but with another colour
- The x ticks should be the IDs, and (if possible) the corresponding gene as well (so some genes will appear on several x ticks, as they have multiple IDs mapping to them)

I suggest you try using matplotlib to handle the plotting, and manually cycle the colors. You can use something like:

import itertools

import matplotlib.pyplot as plt
import pandas as pd

# data
df = pd.DataFrame(
    {'id': [1, 2, 3, 3],
     'labels': ['HPRR1234', 'HPRR4321', 'HPRR2345', 'HPRR2345'],
     'g': ['KRAS', 'KRAS', 'ELK4', 'ELK4'],
     'r1': [15, 9, 15, 1],
     'r2': [14, 8, 7, 0],
     'r3': [14, 16, 9, 12]})

# extra setup
plt.rcParams['xtick.major.pad'] = 8

# plotting style(s)
marker = itertools.cycle((',', '+', '.', 'o', '*'))
color = itertools.cycle(('b', 'g', 'r', 'c', 'm', 'y', 'k'))

# plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(df['id'], df['r1'], ls='', ms=10, mew=2, marker=marker.next(), color=color.next())
ax.plot(df['id'], df['r2'], ls='', ms=10, mew=2, marker=marker.next(), color=color.next())
ax.plot(df['id'], df['r3'], ls='', ms=10, mew=2, marker=marker.next(), color=color.next())

# set the tick labels
ax.xaxis.set_ticks(df['id'])
ax.xaxis.set_ticklabels(df['labels'])
plt.setp(ax.get_xticklabels(), rotation='vertical', fontsize=12)
plt.tight_layout()

fig.savefig("example.pdf")

If you have many rows, you will probably want more colors, but this shows at least the concept.
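One detail if you are on Python 3: marker.next() is Python 2 syntax; use the built-in next(marker) instead. A tiny illustration of how the itertools.cycle objects hand out a fresh marker/colour pair per plotted series, wrapping around when exhausted:

```python
import itertools

# Same styles as in the answer above; each call to next() yields the
# following element, cycling back to the start after the last one.
marker = itertools.cycle((',', '+', '.', 'o', '*'))
color = itertools.cycle(('b', 'g', 'r', 'c', 'm', 'y', 'k'))

# Three series -> three distinct (marker, colour) combinations.
styles = [(next(marker), next(color)) for _ in range(3)]
```

Because the two cycles have different lengths (5 and 7), the combinations only repeat after 35 series, which is plenty for most plots.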
Is random.randint() in Python truly random? So I was using the random module in Python with some loops and was printing out a batch of numbers to check what they looked like. I noticed that when I input: random.randint(0,100000)most of the numbers would be six figure numbers with a few at five figures and fewer at 4. There were barely any single figures at all. It makes me question how random rand.int really is.
Between 0 and 100000, 90% of the numbers have 5 figures! Only 0.01% have 1 figure. So the behavior is what I'd expect.EDIT: And note what ignacio says. The numbers are definitely not "truly" random as that would require some sort of quantum event. They are "pseudo" random numbers.
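You don't even need sampling to see this: the digit-length distribution over the whole range(0, 100001) can be counted exactly:

```python
from collections import Counter

# Count how many integers in [0, 100000] have each decimal length.
counts = Counter(len(str(n)) for n in range(0, 100001))

# 1-digit numbers: 0..9        -> 10     (~0.01%)
# 5-digit numbers: 10000..99999 -> 90000 (~90%)
# 6-digit numbers: 100000 only  -> 1
```

So a uniform draw lands on a 5-digit number about 90% of the time, matching the behavior the asker observed.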
Django: How to sort a QuerySet based on a function outside of the class

I am trying to sort my Post objects based on a function that computes the score of every object. My score function is:

def score(up, down):
    return up - down

z = log(max(abs(score), 1), 10)

And I was trying to get the QuerySet by using:

Post.objects.all().annotate(score=score('up', 'down')).order_by('-score')

However, this does not seem to be working, and I get various errors. I researched and it seems you cannot use any functions other than the ones provided by Django's database functions. What would be the most efficient way to do this, and how can I get the queryset sorted based on each object's score?

EDIT: I am trying to write functions that use more advanced mathematical calculations which are not possible with F; that's why I wanted to know how I can apply a Python function to a queryset.
Considering your up and down are two different integer fields (meaning two separate columns in your table), you can do this with F objects:

from django.db.models import F

scores = Post.objects.annotate(score=(F("up") - F("down"))).order_by('-score')
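If the scoring math genuinely cannot be expressed with F() or database functions (as in the asker's EDIT), a fallback is to sort in Python with sorted() and a key function, at the cost of evaluating the whole queryset in memory. A sketch with hypothetical in-memory records standing in for Post objects, mirroring the asker's log-of-absolute-score formula:

```python
from math import log10

# Hypothetical records; with a real queryset you would iterate it the same way.
posts = [
    {'id': 1, 'up': 120, 'down': 20},
    {'id': 2, 'up': 5, 'down': 50},
    {'id': 3, 'up': 30, 'down': 29},
]

def score(post):
    # z = log10(max(abs(up - down), 1)), as in the question.
    s = post['up'] - post['down']
    return log10(max(abs(s), 1))

ranked = sorted(posts, key=score, reverse=True)
```

This returns a plain list rather than a queryset, so it belongs at the end of the chain, after any .filter() calls that can still run in the database.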
How to remove elements from a list under some conditions?

I want to implement the binary search algorithm in Python, considering the following list:

Fibonacci_Seq = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

So, I wrote a function to do the calculation, but when I came down to this block of code, I didn't know what to do:

min = Fibonacci_Seq[0]    # is 1
max = Fibonacci_Seq[-1]   # is 89
goal = Fibonacci_Seq[4]   # Equals 5
guess = (min + max) // 2  # Equals 45

if guess > goal:  # Is true
    (guess - 1) // 2  # Equals 22
    del Fibonacci_Seq[the elements that are less than 22]

Instead of "the elements that are less than 22", what can I write to eliminate the numbers less than 22, since putting basic '<', '=', '>' signs doesn't work?
Do a modified Binary Search to find the index of that first element which is greater than or equal to 22 and then just slice the existing list. In your case the index is 8 such that Fibonacci_Seq[8]=34 and then just slice your list as Fibonacci_Seq=Fibonacci_Seq[8:].
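Since the list is sorted, the standard library's bisect module can compute that index directly instead of a hand-written binary search:

```python
import bisect

Fibonacci_Seq = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

# Index of the first element >= 22 (bisect requires the list to be sorted).
i = bisect.bisect_left(Fibonacci_Seq, 22)

# Drop everything below 22 by slicing from that index.
Fibonacci_Seq = Fibonacci_Seq[i:]
```

Here i comes out as 8, leaving [34, 55, 89], exactly the slice described in the answer above.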
Pandas drop rare entries

I'm new to Pandas. To simplify, I have a data frame with two columns: product_id and rating. Each entry is a new review for the given product. Now I want to get a new data frame in which lines corresponding to products that received fewer than 20 reviews (i.e. appear fewer than 20 times in the original data frame) are removed.

I can count the number of occurrences with:

a = data.groupby('product_id').count()
b = a.loc[a['rating'] > 20]

but that gives me back a 1D data frame. When displayed, each product_id has its count, but I'm unable to access the actual product_ids to use them to filter the original table. For instance,

b.values

gives back a 1D array of the counts, but not the product_ids.
You want to filter:a = data.groupby('product_id').filter(lambda x: len(x) > 20)
Django ORM for select max(field1) from table group by (field2)

I have the following model:

class Transition(models.Model):
    id = models.AutoField(primary_key=True)
    transition_type = models.IntegerField(db_column='transition_typeid')
    instance = models.IntegerField(db_column='instanceid')
    ts = models.DateTimeField()

    class Meta:
        managed = False
        db_table = 'transitions'

I'd like to issue the following query using Django's ORM:

select max(id) from transitions group by (instanceid);

I know I can use a raw query as follows:

Transition.objects.raw('select max(id) from transitions group by (instanceid)')

However, the downside is that the query seems to get executed instantly and doesn't lend itself to further filtering. For example, I'd like to get a queryset to which a further filter, say on timestamp, can be applied. Is there a way to use a purely ORM approach to issue the select statement above without using Django's raw queries?
This query should work for you:

from django.db.models import Max

Transition.objects.values('instance').annotate(Max('id'))

docs: https://docs.djangoproject.com/en/1.8/topics/db/aggregation/
How to add extra sign to already existing x-ticks label matplotlib? Currently, my histogram plot's x-tick labels are [200. 400. 600. 800. 1000. 1200. 1400.]. It represents the rate in dollars. I want to add $ in prefix of these ticks like [$200 $400 $600 $800 $1000 $1200 $1400].I tried this axs.xaxis.get_majorticklocs() to fetch x-tick labels and axs.xaxis.set_ticks([f"${amount:.0f}" for amount in axs[0, 1].xaxis.get_majorticklocs()]) to set the updated one. I understand that It will throw error as x-tick labels are now string.Need help! kindly assist how can I do this.
You can use the automatic StrMethodFormatter like this:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(100 * np.random.rand(20))

# Use automatic StrMethodFormatter
ax.xaxis.set_major_formatter('${x:1.2f}')

plt.show()

Note that passing a format string directly to set_major_formatter requires Matplotlib 3.3+; on older versions, wrap it explicitly with matplotlib.ticker.StrMethodFormatter('${x:1.2f}').
Connect to raspberry pi from a web server I have a Raspberry pi on my home network. This is set up on my router, so it has a 192.168.x.x IP address. I have a python server running on my pi that is listening for incoming connections on a fixed port (48000).I would like to connect to this raspberry pi from a machine that is on my work network (IP address 10.x.x.x.) My work PC can connect to the internet, but when I am on my work PC I don't know the external IP address of my home router.Any ideas on how I can do this without having to set up a static IP address and port forwarding on my home router?I'm not en expert, but I have some python code that can connect to the Pi when I am on same local network as the pi, but it doesn't work when I am on a network that is not the same as which my raspberry pi is on.Any ideas on what approach I can take?I initially thought about setting up a service on the pi that will post it's local IP address by email if the IP address changes, but this is useless since the local IP address is not routable.
You should register with a free DNS service, such as no-ip (https://www.noip.com/managed-dns) and configure dynamic dns with your router (given it is able to do so). Then your router is always available at a given hostname. A potential domain for you could be e.g. user3308997.no-ip.orgPort forwarding or NAT must be setup in your router so, that e.g. the url http://user3308997.no-ip.org:8001 could be forwarded to your PI server.
Python regex insert

I have a string s:

s = "x01777"

Now I want to insert a - into s at this position:

s = "x01-777"

I tried to do this with re.sub(), but I can't figure out how to insert the - without deleting my regex (I need this complex structure of regex because the string I want to work with is much longer). Actually, it looks something like this:

re.sub('\w\d\d\d\d\d', 'here comes my replacement', s)

How do I have to set up my insertion?
Capture the first three characters into a group and then the next three into another group. In the replacement part, just add - after the first captured group, followed by the second captured group.

>>> import re
>>> s = "x01777"
>>> m = re.sub(r'(\w\d\d)(\d\d\d)', r'\1-\2', s)
>>> m
'x01-777'
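An alternative sketch that inserts the dash without consuming any characters at all: fixed-width lookbehind and lookahead match the empty position between the two halves, and re.sub replaces that zero-width match with the dash.

```python
import re

s = "x01777"

# (?<=\w\d\d) : the three characters before this position match \w\d\d
# (?=\d\d\d)  : the three characters after it match \d\d\d
# The pattern itself matches an empty string, so nothing is deleted.
result = re.sub(r'(?<=\w\d\d)(?=\d\d\d)', '-', s)
```

This avoids having to restate the matched text via backreferences, which can be convenient when the surrounding pattern is long.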
CSV Load Error with Pandas

Can someone help me figure out what this error is telling me? I don't understand why this CSV won't load.

Code:

import pandas as pd
import numpy as np

energy = pd.read_csv('Energy Indicators.csv')
GDP = pd.read_csv('world_bank_new.csv')
ScimEn = pd.read_csv('scimagojr-3.csv')

Error:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-2-65661166aab4> in <module>()
     10
     11
---> 12 answer_one()

<ipython-input-2-65661166aab4> in answer_one()
      4     energy = pd.read_csv('Energy Indicators.csv')
      5     GDP = pd.read_csv('world_bank_new.csv')
----> 6     ScimEn = pd.read_csv('scimagojr-3.csv')
      7
      8
The read_csv function takes an encoding option; you need to tell pandas what the file encoding is. Try encoding = "ISO-8859-1":

ScimEn = pd.read_csv('scimagojr-3.csv', encoding="ISO-8859-1")
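To see why the encoding argument matters: the same bytes can be valid in one codec and invalid in another. The byte 0xE9 is 'é' in ISO-8859-1, but it is not a valid start of a UTF-8 sequence, which is exactly the class of failure read_csv hits when it assumes UTF-8:

```python
# Bytes as they would appear in an ISO-8859-1 encoded file.
data = "café".encode("iso-8859-1")  # b'caf\xe9'

# Decoding with the wrong codec raises, just like pandas' default does.
try:
    data.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

# Decoding with the right codec round-trips cleanly.
latin = data.decode("iso-8859-1")
```

Passing encoding="ISO-8859-1" to read_csv simply tells pandas to do the second decode instead of the first.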
GridLayout Not Scrolling in Kivy

Please tell me why this doesn't work. The whole program works fine and uses a function inside the main program to get its text, but it won't scroll, so the user can't view the entire output.

<AnswerScreen@Screen>:
    input_textb: input_textb
    ScrollView:
        size_hint: (1, None)
        do_scroll_y: True
        do_scroll_x: False
        bar_width: 4
        GridLayout:
            padding: root.width * 0.02, root.height * 0.02
            cols: 1
            size_hint_y: None
            size_hint_x: 1
            height: self.minimum_height
            Label:
                id: input_textb
                text: ""
                font_size: root.height / 25
                text_size: self.width, None

Edit: I had already tried the same approach as many previous answers; with the particular one mentioned in the comments, I got an error saying "NoneType" has no attribute "bind". I removed the size hint; it still doesn't work, but thanks anyway. The text is definitely long enough.
I believe the label's size is not set, which I agree can be confusing at first. A Label has a widget size (size, as all widgets do) and a texture_size, which is set to the actual size of the displayed text. Kivy doesn't relate these two in any particular way at first; it's up to you to decide how one influences the other. You did half of the work in setting text_size to (width, None), which forces the texture to have the width of the widget, but you are missing the other half of the deal: you also want the widget to be as tall as the generated texture. For this height to be effective, you also have to disable size_hint_y for the Label, since it's in a GridLayout.

Label:
    id: input_textb
    text: ""
    font_size: root.height / 25
    text_size: self.width, None
    height: self.texture_size[1]
    size_hint_y: None

and you should be all set.
Stream Analytics deserialising JSON from Python via Event Hub I have set up an Azure Event Hub and I am sending AMQP messages in JSON format from a Python script, and am attempting to stream those messages to Power BI using Stream Analytics. The messages are very simple device activity from an IoT device.

The Python snippet is:

msg = json.dumps({
    "Hub": MAC,
    "DeviceID": id,
    "DeviceUID": ouid,
    "Signal": text,
    "Timestamp": dtz
}, ensure_ascii=False, encoding='utf8')

message.body = msg
messenger.put(message)
messenger.send()

I have used the example C# message reader in the MS tutorial to read the data back from the event hub with no problem; the output is:

Message received. Partition: '2', Data: '??{"DeviceUID": "z_70b3d515200002e7_0", "Signal": "/on?1", "DeviceID": "1", "Hub": "91754623489", "Timestamp": "2016-07-15T07:56:50.277440Z"}'

But when I try to test the Stream Analytics input from the Event Hub, I get an error:

Diagnostics: Could not deserialize the input event as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format

I'm not sure what "Malformed events" means; I have assumed that Stream Analytics can cope with data sent to an Event Hub via AMQP. I can't see anything wrong with the JSON as received by the C# app, unless the BOM symbol is causing a problem. This is my first attempt at all this, and I have searched for any similar posts to no avail, so I'd really appreciate it if someone could point me in the right direction.

Cheers
Rob
This is caused by client API incompatibility. Python uses Proton to send the JSON string in the body of an AMQP Value message. The body is encoded as an AMQP string (AMQP type-encoding bytes + the UTF-8 encoded bytes of the string). Stream Analytics uses the Service Bus .NET SDK, which exposes an AMQP message as EventData whose body is always a byte array. For an AMQP Value message, that array includes the AMQP type-encoding bytes, as without them it is not possible to decode the following value. These extra bytes at the beginning cause JSON deserialization to fail.

To achieve interoperability on the message body, the application should ensure the publisher and consumer agree on its type and encoding. In this case the publisher should send raw bytes in an AMQP Data message. With the Proton Python API, you can try this:

message.body = msg.encode('utf-8')

The other workaround is to send simple types (e.g. string) in application properties.

Other people also ran into this issue: https://github.com/Azure/amqpnetlite/issues/117
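To illustrate why those leading type-encoding bytes matter, here is a rough sketch of the framing; the 0xA1 type code and single length byte only approximate the AMQP string encoding and are not a full codec:

```python
import json

payload = json.dumps({"Signal": "/on?1"}).encode("utf-8")

# sketch of an AMQP Value-message body: a type code, a length byte,
# then the UTF-8 payload (assumed framing for illustration)
framed = b"\xa1" + bytes([len(payload)]) + payload

# a consumer treating the raw body as JSON hits the framing bytes first and fails
try:
    json.loads(framed)
    verdict = "parsed"
except (ValueError, UnicodeDecodeError):
    verdict = "malformed"
print(verdict)

# raw UTF-8 bytes (an AMQP Data-message body) round-trip cleanly
print(json.loads(payload)["Signal"])
```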
Python - appending to multiple arrays If I wanted to perform something like Levene's test of equal variances via scipy.stats, which produces two outputs (the test statistic and p-value) for all the data in a dictionary, how would I append the outputs for each test to two different lists? I tried the code below:

test_stat = []
p_value = []
for i in range(0, n_data):
    for j in range(1, n_name):
        test_stat[i], p_value[i] = scipy.stats.levene(data[i][name[j-1]], data[i][name[j]], center='median')

But this clearly isn't the way to go about it, as I keep getting an IndexError because the list assignment index is out of range. Any suggestions would be greatly appreciated. Thanks!
Not everything needs to be in a single line... This should work fine:

test_stats = []
p_values = []
for i in range(0, n_data):
    for j in range(1, n_name):
        test_stat, p_value = scipy.stats.levene(data[i][name[j-1]], data[i][name[j]], center='median')
        test_stats.append(test_stat)
        p_values.append(p_value)

Though of course this will append n_data * (n_name - 1) entries to each list.
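The same append pattern, sketched with a stand-in two-output function so it runs without scipy (fake_levene and the sample data are made up for illustration):

```python
# stand-in for a function that, like scipy.stats.levene, returns two values
def fake_levene(a, b):
    return max(a) - max(b), 0.05  # hypothetical (statistic, p-value)

data = [[1, 2], [3, 4], [5, 6]]
test_stats, p_values = [], []
for a, b in zip(data, data[1:]):  # consecutive pairs, like name[j-1], name[j]
    stat, p = fake_levene(a, b)
    test_stats.append(stat)
    p_values.append(p)

print(test_stats)  # [-2, -2]
print(p_values)    # [0.05, 0.05]
```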
Anaconda OpenCV Arch Linux libselinux.so error I have installed Anaconda 64-bit on a relatively fresh install of Arch. I followed the instructions here to set up a virtual environment for opencv:

conda create -n opencv numpy scipy scikit-learn matplotlib python=3
source activate opencv
conda install -c https://conda.binstar.org/menpo opencv3

When I run "import cv2" in the activated virtual environment I get:

ImportError: libselinux.so.1: cannot open shared object file: No such file or directory

I have no clue how to fix this - do I need to make kernel changes? Thanks for any help.
Fixed by installing the libselinux package from the AUR:

yaourt -S libselinux

I then had another problem:

ImportError: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: FcWeightToOpenType

Solved as in issue 368:

conda install -c asmeurer pango
How to go back to a line of code in python if a condition is met? Basically, I am trying to create a game. It's going well so far! However, I am trying to figure out how to make code go back. I am a beginner at python, so I think it's something to do with def start(): and start() - or something along those lines - but I'm not sure. If you do not know what I am talking about, I am basically saying I want something like:

(1) if (variable) == (whatever)
(2)     print ("Cool you win")
(4)     (code to go back and repeat line 1)
(5) else:
(6)     print ("You loose!")
(7)     (code to go back and repeat line 1)

So, basically like a loop: How can I do this?
You want to use a while loop:

while <condition>:
    if variable == whatever:
        print ("Cool you win")
        # nothing needed here - the loop repeats on its own
    else:
        print ("You loose!")
        # nothing needed here either

If you don't have a particular condition, you can just loop forever with while True:
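A runnable sketch of that loop; the secret value and the canned list of guesses are made-up stand-ins for real input() calls:

```python
secret = "7"
results = []
guesses = iter(["3", "7", "q"])  # stands in for repeated input() calls

while True:  # control "goes back" to this line after each pass
    guess = next(guesses)
    if guess == "q":  # break gives a way out of the infinite loop
        break
    if guess == secret:
        results.append("Cool you win")
    else:
        results.append("Try again!")

print(results)  # ['Try again!', 'Cool you win']
```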