**Q: Sum value in a row based on the head of the columns**

I have a dataset like this *(image)*. I want to calculate the sum of `apple_*_C`, `apple_*_Cr`, and `apple_*_Cu` in each row, respectively, with the following code:

```python
for test in ['apple']:
    df[f'{test}_C'] = df.filter(regex=f'^{test}_\d_C').sum(1)
    df[f'{test}_C'] = df.filter(regex=f'^{test}_\d_Cr').sum(1)
    df[f'{test}_C'] = df.filter(regex=f'^{test}_\d_Cu').sum(1)
```

However, `df.filter(regex=f'^{test}_\d_C').sum(1)` calculates the row sum over the `apple_*_C`, `apple_*_Cr`, and `apple_*_Cu` columns together, rather than only `apple_*_C`. Please advise how I should modify it.

**A:**

The pattern `^apple_\d_C` also matches column names that merely *start* with `_C`, such as `apple_1_Cr` and `apple_1_Cu`. Anchor the pattern with `$` so it only matches names that *end* in `_C`:

```python
import pandas as pd

data = {
    "Apple_1_C": [1, 2],
    "Apple_2_C": [2, 3],
    "Apple_3_C": [3, 4],
    "Apple_1_Cr": [4, 5],
    "Apple_2_Cr": [5, 6],
    "Apple_1_Cu": [6, 7],
    "Apple_2_Cu": [7, 8],
}
df = pd.DataFrame(data)

for test in ['Apple']:
    df[f'{test}_C_sum'] = df.filter(regex=rf'^{test}_\d_C$').sum(1)
    df[f'{test}_Cr_sum'] = df.filter(regex=rf'^{test}_\d_Cr').sum(1)
    df[f'{test}_Cu_sum'] = df.filter(regex=rf'^{test}_\d_Cu').sum(1)

df
```
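The anchoring behaviour can be seen with Python's own `re` module alone (`DataFrame.filter(regex=...)` uses the same pattern semantics; the column names below mirror the sample data):

```python
import re

columns = ["Apple_1_C", "Apple_2_C", "Apple_1_Cr", "Apple_1_Cu"]

# Without the trailing anchor, "_C" also matches the start of "_Cr" / "_Cu"
loose = [c for c in columns if re.match(r"^Apple_\d_C", c)]

# With "$", only names that end exactly in "_C" match
strict = [c for c in columns if re.match(r"^Apple_\d_C$", c)]

print(loose)   # ['Apple_1_C', 'Apple_2_C', 'Apple_1_Cr', 'Apple_1_Cu']
print(strict)  # ['Apple_1_C', 'Apple_2_C']
```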
**Q: Insert weekend dates into dataframe while keeping prices in index location**

I have a pandas DataFrame with dates, open and close prices of USD that looks something like this:

```
Date        Open     Close
2021-12-08  0.88707  0.88680
2021-12-07  0.88617  0.88600
2021-12-06  0.88475  0.88458
2021-12-03  0.88442  0.88447
2021-12-02  0.88342  0.88343
2021-12-01  0.88261  0.88259
```

I want to insert the weekend dates, keep my open and close values at the same dates, and fill the empty Open and Close values with NaN, something like this:

```
Date        Open     Close
2021-12-08  0.88707  0.88680
2021-12-07  0.88617  0.88600
2021-12-06  0.88475  0.88458
2021-12-05  NaN      NaN
2021-12-04  NaN      NaN
2021-12-03  0.88442  0.88447
2021-12-02  0.88342  0.88343
2021-12-01  0.88261  0.88259
```

I have tried various techniques, such as creating a new DataFrame with my open and close values and then reindexing by the date:

```python
df_USD_new = {'Open': open_USD, 'Close': close_USD}
df = pd.DataFrame(df_USD_new)
date_index = pd.date_range('2016-12-01', '2021-12-08', freq='D')
df.reindex(date_index)
```

But this gives me a table with *all* dates filled as NaN, such as:

```
2021-12-04  NaN  NaN
2021-12-05  NaN  NaN
2021-12-06  NaN  NaN
2021-12-07  NaN  NaN
2021-12-08  NaN  NaN
```

Am I doing something wrong or missing a step? Would appreciate the help.

**A:**

Your `reindex` produced all-NaN rows because the frame still had its default integer index, so none of the new datetime labels aligned with any existing row. Set the `Date` column as the index first, then reindex over the full date range:

```python
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
df = (df.reindex(pd.date_range(df.index.min(), df.index.max()))
        .sort_index(ascending=False)
        .reset_index()
        .rename(columns={'index': 'Date'}))
```

Output:

```
        Date     Open    Close
0 2021-12-08  0.88707  0.88680
1 2021-12-07  0.88617  0.88600
2 2021-12-06  0.88475  0.88458
3 2021-12-05      NaN      NaN
4 2021-12-04      NaN      NaN
5 2021-12-03  0.88442  0.88447
6 2021-12-02  0.88342  0.88343
7 2021-12-01  0.88261  0.88259
```

Or you can also use `asfreq`:

```python
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
df = (df.asfreq('d')
        .sort_index(ascending=False)
        .reset_index()
        .rename(columns={'index': 'Date'}))
```
**Q: open_sharded_output_tfrecords causes FailedPreconditionError: Writer is not open**

I am trying to implement the TensorFlow Detection API, mainly following the tutorial, and I am running into an issue when trying to generate the TFRecord files.

I have gotten to the point where I generate the `tf.Example`s and want to write them to a list of TFRecord files. I have seen a couple of examples using the `open_sharded_output_tfrecords` function like this:

```python
with contextlib2.ExitStack() as tf_record_close_stack:
    output_records = tf_record_creation_util.open_sharded_output_tfrecords(
        tf_record_close_stack, FLAGS.output_file, FLAGS.num_shards)
```

This returns a list of TFRecord writers which can later be used like this:

```python
output_records[shard_index].write(tf_example)
```

where `shard_index` is an integer and `tf_example` is the `tf.Example`.

When I try to implement it I get an error (see full report at the bottom):

```
FailedPreconditionError: Writer is closed.
```

It does create the files. Any idea or hint what I might be doing wrong with `open_sharded_output_tfrecords` and how to correct it? Thank you very much in advance for any help.

This is my code:

```python
def convert_to_tfrecord_error(df, output_folder, num_shards):
    import contextlib2
    from object_detection.dataset_tools import tf_record_creation_util

    # Step 1: Initialize utils for sharded output
    with contextlib2.ExitStack() as tf_record_close_stack:
        output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
            tf_record_close_stack, output_folder_test, num_shards)

    image_nr = 0
    # Step 2: Write record to shard
    for index, _ in df.iterrows():
        # generate the example
        tf_example = generate_tf_example(df, index)
        # get the shard
        shard_index = image_nr % num_shards
        # write to shard
        output_tfrecords[shard_index].write(tf_example)
        # update image number
        image_nr = image_nr + 1
        # notify after 100 images
        if image_nr % 100 == 0:
            print(f"{image_nr} images written")
```

Full report:

**A:**

The problem is that every writer is closed as soon as the `with` statement finishes:

```python
with contextlib2.ExitStack() as tf_record_close_stack:
    output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
        tf_record_close_stack, output_folder_test, num_shards)
```

Your `write` calls happen outside that block, after the exit stack has already closed the writers. Either move the whole writing loop inside the `with` statement, or replace the `with` by an explicit `tf_record_close_stack = contextlib2.ExitStack()` and close it after the `for` loop finishes with `tf_record_close_stack.close()`.
**Q: Pandas: access the first entry of a series of matrices**

I have a pandas df that looks like this:

```
     beta
0    matrix([[1], [2], [3]])
1    matrix([[2], [3], [4]])
2    matrix([[0], [0], [0]])
...
999  matrix([[2], [1], [3]])
```

And I want to access the first entry of each matrix in `df['beta']`, ideally as a list or a NumPy array that looks like:

```
[1, 2, 0, ..., 2]
```

What I have tried so far: convert the series to a list of lists:

```python
b_t = list(df['beta_t'].apply(lambda x: x.flatten().tolist()[0]))
b_t = [row[0] for row in b_t]
```

This works, but it takes a while on large datasets. I also tried:

```python
b_t = list(df['beta_t'].apply(lambda x: np.array(x.flatten()[0])))
```

but this results in a series of series and I don't know how to continue from here. Any suggestions on how I should improve my code? Thanks in advance!

**A:**

You can index each matrix directly with `[row, column]`, which returns the scalar instead of another 1x1 matrix:

```python
df['beta_t'].apply(lambda x: x[0, 0])
```

Or:

```python
[x[0, 0] for x in df['beta_t']]
```
**Q: How do I return the sum of values from a file?**

I am trying to convert a string of plus and minus signs from a file to +1 and -1 respectively, and then print the resulting sum of all of the plus and minus 1's.

Here is my attempt at the problem: *problem statement and attempted code (image)*.

I think the issue is in my return statements, but I haven't been able to figure it out.

**A:**

You are returning the first operator you come across, before iterating through all of them. You should remove both of the `return` statements inside the `for` loop, i.e. `return num_plus` and `return num_minus`, and return the totals only after the loop finishes.
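Since the original attempt was only posted as an image, here is a minimal sketch of the fixed approach; the function and variable names are made up for illustration:

```python
def sum_signs(signs):
    """Map '+' to +1 and '-' to -1 and return the total.

    The return happens after the loop, not inside it, so every
    character is counted before the function exits.
    """
    total = 0
    for ch in signs:
        if ch == '+':
            total += 1
        elif ch == '-':
            total -= 1
    return total

# Reading the string from a file would look like:
# with open('signs.txt') as f:
#     print(sum_signs(f.read()))

print(sum_signs('++-+-'))  # 1  (three +1s and two -1s)
```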
Call pyqt widget from Qt widgets application I am trying to extend my Qt application with python scripts plugins.It works fine if I call any not pyqt script.It works fine too if I call any pyqt script from c++ function, but I am outside a Qt Widget Application. Something like that:#include "/usr/include/python3.5m/Python.h"int CargaPlugins(const char* ruta, const char* nombremodulo, const char* nombrefuncion);int main(int argc, char** argv){ std::string path = "PYTHONPATH="; path.append(argv[1]); putenv ((char*)path.c_str()); Py_Initialize(); CargaPlugins(argv[1],"plugin_loader","iniciar"); Py_Finalize(); return 0;}int CargaPlugins(const char* ruta, const char* nombremodulo, const char* nombrefuncion){ PyObject *pName, *pModule, *pDict, *pFunc; PyObject *pArgs, *pValue; pName = PyUnicode_DecodeFSDefault(nombremodulo); /* Error checking of pName left out */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, nombrefuncion); /* pFunc is a new reference */ if (pFunc && PyCallable_Check(pFunc)) { pArgs = PyTuple_New(1); pValue = PyUnicode_FromString(ruta); if (!pValue) { Py_DECREF(pArgs); Py_DECREF(pModule); fprintf(stderr, "Cannot convert argument\n"); return 1; } /* pValue reference stolen here: */ PyTuple_SetItem(pArgs, 0, pValue); pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pValue != NULL) { printf("Result of call: %ld\n", PyLong_AsLong(pValue)); Py_DECREF(pValue); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); fprintf(stderr,"Call failed\n"); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); fprintf(stderr, "Cannot find function \"%s\"\n", nombrefuncion); } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); fprintf(stderr, "Failed to load \"%s\"\n", nombremodulo); return 1; }}And the pyqt5 module:plugin_loader.py#!/usr/bin/python# -*- coding: utf-8 -*-import impimport osimport sysfrom PyQt5 import QtCore, QtGui, QtWidgetsfrom DialogoImprimir 
import DialogoImprimirdef iniciar(ruta): import sys if not hasattr(sys,'argv'): sys.argv = [] app = QtWidgets.QApplication(sys.argv) myapp = DialogoImprimir(getPlugins(ruta)) myapp.show() sys.exit(app.exec_())DialogoImprimir.pyclass DialogoImprimir(QtWidgets.QDialog): def __init__(self, datos): QtWidgets.QDialog.__init__(self) self.datos = datos self.GeneraUI(datos) -------------------Well, my problem is that if I insert the C++ int CargaPlugins(const char* ruta, const char* nombremodulo, const char* nombrefuncion) into my Qt Widgwets application like a function, I get this error when called: QCoreApplication::exec: The event loop is already runningI think that the solution would be to pass a pointer of the current QApplication to python script, or if there are any way to do it, get the current QApplication when the script python is running and use it, but I don't know how could do that.Edit:The snippet of the code when I call the function inside Qt:mainwindow.cppvoid MainWindow::ActionImprimir(){ Imprimir impresor("/home/user/pathofpythonmodules/","plugin_loader","iniciar");imprimir.cppImprimir::Imprimir(const char* ruta, const char* nombremodulo, const char* nombrefuncion){ std::string path = "PYTHONPATH="; path.append(ruta); putenv ((char*)path.c_str()); Py_Initialize(); pFuncion = CargarPlugins(ruta,nombremodulo,nombrefuncion); if (pFuncion) { //more things }} (and CargarPlugins() is the same function that before) | Since you have a QApplication it is not necessary to create another one so the solution is:#!/usr/bin/python# -*- coding: utf-8 -*-import importlibimport osimport sysfrom PyQt5 import QtCore, QtGui, QtWidgetsfrom DialogoImprimir import DialogoImprimirdef iniciar(ruta): app = QtWidgets.QApplication.instance() if app is None: app = QtWidgets.QApplication([]) myapp = DialogoImprimir(getPlugins(ruta)) return myapp.exec_()A complete example you find here |
**Q: How to convert the string '100+20*50/10-9' to a float value using Python**

Hi friends, how can I convert a string expression to a float value? For example, `s1 = '100+20*50/10-9'`. Based on arithmetic operator precedence it should give 191, but the string expression does not convert to a float. I used `float('100+20*50/10-9')` and it raises an error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: could not convert string to float: '100+20*50/10-9'
```

Thanks!

**A:**

If you are trying to parse a calculation string to a value, you have to evaluate the string first, like this:

```python
eval('100+20*50/10-9')
```

Then you may convert it to a float like this: `float(eval('100+20*50/10-9'))`.

However, be aware of integer division. Under Python 2, `eval('101+20*50/3')` performs integer division and returns `434`, not `434.3333...`; you would need to write the values as floats, `eval('101.0+20.0*50.0/3.0')`, to get `434.333333333`. In Python 3, `/` is always true division, so `eval('101+20*50/3')` already returns the float result.
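One caveat worth noting: `eval` runs arbitrary code, so if the expression comes from untrusted input, a restricted evaluator built on the standard-library `ast` module is safer. The sketch below is illustrative (the function name and the set of supported operators are assumptions, not part of the original answer):

```python
import ast
import operator

# Map AST operator nodes to plain arithmetic functions
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate a string containing only numbers and +, -, *, /."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return float(_eval(ast.parse(expr, mode='eval')))

print(safe_eval('100+20*50/10-9'))  # 191.0
```

Anything outside that whitelist, such as a function call, raises `ValueError` instead of executing.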
**Q: Django: form with multiple inputs is invalid**

My template HTML has the following (multiple) inputs:

```html
<form method="post" enctype="multipart/form-data">{% csrf_token %}
<input name="image_field" type="file">
<input name="image_field" type="file">
```

My view is:

```python
def add_listing(request):
    if request.method == 'POST':
        image_form = ImageForm(request.FILES)
        files = request.FILES.getlist('image_field')
        if image_form.is_valid():
            # all images should relate to this object, i.e. this object is the foreign key
            object = Object.create()
            for f in files:
                # add images
                Image.objects.create(pk=None, object=object, image=f)
            object.save()
    return render(request, 'dashboard/add_listing.html', {'image_form': image_form})
```

forms.py:

```python
class ImageForm(forms.ModelForm):
    image = forms.ImageField(widget=forms.ClearableFileInput(attrs={'multiple': True}))

    class Meta:
        model = ListingImage
        fields = ['image', ]
```

Some debugging output:

```
(Pdb) image_form.is_valid()
False
(Pdb) files
[<InMemoryUploadedFile: WhatsApp Image 2018-04-08 at 20.11.17.jpeg (image/jpeg)>]
```

My goal is to process the form, which can take n inputs with the same name, and for each file in the input, validate it and create an object.

**A:**

I needed to replace `image_form = ImageForm(request.FILES)` with `image_form = ImageForm(request.POST, request.FILES)`.

This is because `Form.__init__` takes `data` as its first positional argument and `files` as its second, so `ImageForm(request.FILES)` was passing the uploaded files as form data and leaving `files` empty, which made the image field fail validation.
Trying to send form data from React.js form to Flask API, yet it doesn't allow to submit to MongoDB I am trying to make a simple create-User form in order to understand the functions of the API using react.js, python(Flask) and mongodb. However, I keep getting error that none of the input are getting sent to the Flask backend. Any way I can resolve the issue?This is the identity.py where the post get's handle using Flask_Restfulclass NewUser(Resource):def post(self): name = request.form.get("name") email = request.form.get("email") password = request.form.get("password") user = User(name=name, email=email, password=password) if not name: return {'Error': "Name Not Included"}, 403 if not email: return {'Error': "Email Not Included"}, 404 if not password: return {'Error': "Password Not Included"}, 405 user.hash_password() user.save() id = user.id return {'id': str(id)}, 200Over Here is the app.js from React_app being connected with proxy. Proxy works because it is able to send the title from the backend without any error. 
import React, {Component} from 'react';class App extends Component { constructor(props) { super(props); this.state = { name: '', email: '', password: '', titleData: [], userData: [] }; this.handleChange = this.handleChange.bind(this) this.handleSubmit = this.handleSubmit.bind(this) } handleChange(event) { this.setState({ name: event.target.value, email: event.target.value, password: event.target.value }) } ///Find a way to submit POST form to python_flask async handleSubmit(event) { event.preventDefault() console.log('Submit') await fetch('/user/join', { method: 'POST' }) .then(res => res.json()) .then(json => { this.setState({ userData: json }) }) .catch(() => { console.log("Error in Data") }) } async getData() { await fetch('/get') .then(response => response.json()) .then(json => { this.setState({ titleData: json }) }) } async componentDidMount() { this.getData() } render() { return ( <div className="App"> <header> <h1>{this.state.titleData.title}</h1> </header> <div> <p>heeheehe</p> <form onSubmit={this.handleSubmit}> <h3>New User? Sign in here!</h3> <div> <input type="text" name="name" placeholder="Name" value={this.props.value} onChange={this.props.handleChange} /> </div> <div> <input name="email" placeholder="Email" value={this.props.value} onChange={this.props.handleChange} /> </div> <div> <input name="password" placeholder="Password" value={this.props.value} onChange={this.props.handleChange} /> </div> <button>Press Me</button> </form> </div> </div> ) }}export default App; | `await fetch('/user/join', { method: 'POST'})`You are only making a post request and not sending anything to it. That's why nothing is being received on the other end.Refer to this post to know how to send a fetch post request in react. How to post object using fetch with form-data in React? |
How to convert a string to a multi-level JSON? I have an HTML file structured this way:Section 1.11.1.1 random paragraph1.1.1.1 random paragraphSection 1.21.2.1 random paragraph1.2.1.1 random paragraph...Section 11.4 ...11.4.12 random paragraph11.2.12.1 random paragraphHTML example:<p> <span class="c1" >Section 1.1.<span class="c7">&nbsp;&nbsp;</span>Organization and Application</span ></p><p> <span class="c1" >1.1.1.<span class="c7">&nbsp;&nbsp;</span>Organization of this Code</span ></p><p align="justify"> <span class="c1">1.1.1.1.&nbsp;&nbsp;Scope of Division A</span></p><p align="justify"> <span ><b>(1)&nbsp;&nbsp;</b>Division A contains compliance and application provisions and the <i>objectives</i> and <i>functional statements</i> of this Code.</span ></p><p align="justify"> <span class="c1">1.1.1.2.&nbsp;&nbsp;Scope of Division B</span></p><p align="justify"> <span ><b>(1)&nbsp;&nbsp;</b>Division B contains the <i>acceptable solutions</i> of this Code.</span ></p><p align="justify"> <span class="c1">1.1.1.3.&nbsp;&nbsp;Scope of Division C</span></p>I have figured out the regEx to find each Section, SubSection, and so on.sect1 = re.compile(r"class=\"c1\">(Section )?[0-9]+\.[0-9]+\.[^0-9]")sect2 = re.compile(r"class=\"c1\">[0-9]+\.[0-9]+\.[0-9]+\.[^0-9]")sect3 = re.compile(r"class=\"c1\">[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\.[^0-9]")sect4 = re.compile(r"class=\"c1\">[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\.[^0-9]")I can create a first-level "key-value" pair list for the Sections and the HTML contained in them:def stringToList(string, devider): takes in a string and a regEx ;returns list [[ name, resultHtml],[ name, resultHtml]]def stringToList(string, devider): matches = re.finditer(devider, string) matchArr= [] for m in matches : try: lastMatch except NameError: x=True else: start = lastMatch.start() end = m.start() resultHtml = page[start:end] # html string starting with last match, ending with current match name = 
lastMatch.group().replace('class="c1">','').replace('<','') # match group from last match minus the regEx Tags matchArr.append([ name, resultHtml]) lastMatch= m return matchArr #returns list[[ name, resultHtml],[ name, resultHtml]]This returns a list of section names and the HTML associated with those sections.How do I further sort the list to create a structure like:main: { { 1: { { 1.1: { 1.1.1:html, 1.1.2:html, } }, }The final goal is to have a list of nested links to each html.Is this the best approach to reach the goal? Any input or advice is welcome. | Your HTML doesn't have a proper format, but it's still possible to achieve the result your after. It does require a bit more work:html = """<p> <span class="c1">Section 1.1.<span class="c7">&nbsp;&nbsp;</span>Organization and Application</span></p><p> <span class="c1">1.1.1.<span class="c7">&nbsp;&nbsp;</span>Organization of this Code</span></p><p align="justify"> <span class="c1">1.1.1.1.&nbsp;&nbsp;Scope of Division A</span></p><p align="justify"> <span><b>(1)&nbsp;&nbsp;</b>Division A contains compliance and application provisions and the <i>objectives</i> and <i>functional statements</i> of this Code.</span></p><p align="justify"> <span class="c1">1.1.1.2.&nbsp;&nbsp;Scope of Division B</span></p><p align="justify"> <span><b>(1)&nbsp;&nbsp;</b>Division B contains the <i>acceptable solutions</i> of this Code.</span></p><p align="justify"> <span class="c1">1.1.1.3.&nbsp;&nbsp;Scope of Division C</span></p>"""import reimport jsonfrom bs4 import BeautifulSoup# Parse the HTML so we can search itsoup = BeautifulSoup(html)# Create a placeholder for the outputoutput = {}# Keep track of how deep we are nestedlast_section_number = "1"def clean_text(text: str) -> str: """ Helper method to clean texts. 
Args: text (str): The input text Returns: str: The clean output """ text = text.replace("\u00a0", " ") text = " ".join([line.strip() for line in text.split("\n")]) return text# Every part of the document seems to start with a <p> tagfor part in soup.find_all("p"): # If the part is a section title (or subsection title) it has a span with class="c1" section_title = part.find("span", "c1") if section_title is not None: # Extract the section number with regex section_number = re.search(r"(\d+\.)+", section_title.text).group(0).strip(".") def _set_nested(section_number: str, subsection: dict) -> dict: """ Method to traverse down into a dictionary based on a section number, formatted as a string. Args: section_number (str): The section number to traverse to (e.g. "1.1") subsection (dict): The subsection of the current section Returns: dict: The updated section """ # Split the section number, keep the first part as the main main, *rest = section_number.split(".") # If there is no "rest" there are no deeper levels if len(rest) <= 0: subsection[main] = { "title": clean_text(section_title.text), } return subsection # Recombine the "rest" into a new section number rest = ".".join(rest) # Use the "rest" to traverse down into the output dictionary subsection[main] = _set_nested(rest, subsection.get(main, {})) # Return the final output return subsection # Use the section number to set a part of a dictionary output = _set_nested(section_number=section_number, subsection=output) # Store the last section number processed last_section_number = section_number # If this part has no title, it's a piece of content else: def _set_nested(section_number: str, subsection: dict) -> dict: # Split the section number, keep the first part as the main main, *rest = section_number.split(".") # If there is no "rest" there are no deeper levels if len(rest) <= 0: subsection[main]["content"] = clean_text(part.text) return subsection # Recombine the "rest" into a new section number rest = 
".".join(rest) # Use the "rest" to traverse down into the output dictionary subsection[main] = _set_nested(rest, subsection[main]) # Return the final output return subsection # Use the last section number to set a part of a dictionary output = _set_nested(last_section_number, output)print(json.dumps(output, indent=2))Output:{ "1": { "1": { "title": "Section 1.1. Organization and Application", "1": { "title": "1.1.1. Organization of this Code", "1": { "title": "1.1.1.1. Scope of Division A", "content": " (1) Division A contains compliance and application provisions and the objectives and functional statements of this Code. " }, "2": { "title": "1.1.1.2. Scope of Division B", "content": " (1) Division B contains the acceptable solutions of this Code. " }, "3": { "title": "1.1.1.3. Scope of Division C" } } } }}Previous answer:Don't use regex, use BeautifulSoup (https://www.crummy.com/software/BeautifulSoup/bs4/doc/)!soup = BeautifulSoup("<YOUR HTML AS STRING>")for section in soup.find_all("h1", "c1"): ... nested here ... |
**Q: How to print the first half in ascending and the second half in descending order in Python?**

I have a Python list `[1, 2, 3, 4, 5]` and I have to print `[1, 2, 3, 4, 5, 5, 4, 3, 2, 1]`. Please suggest how to do it in a loop (`while` or `for`).

**A:**

As you say, it's a sorted list (right?), so do:

```python
print(l + l[::-1])
```

Or:

```python
print(l + list(reversed(l)))
```

(Note that `reversed(l)` returns an iterator, so it has to be wrapped in `list()` before concatenation.) In both cases, the output is:

```
[1, 2, 3, 4, 5, 5, 4, 3, 2, 1]
```
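The question explicitly asks for a loop-based version; a straightforward sketch is to append the elements once in order and once in reverse:

```python
l = [1, 2, 3, 4, 5]

result = []
for x in l:          # first half: ascending (the list is already sorted)
    result.append(x)
for x in l[::-1]:    # second half: the same values, descending
    result.append(x)

print(result)  # [1, 2, 3, 4, 5, 5, 4, 3, 2, 1]
```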
**Q: How to run an existing Django application on an AWS EC2 instance?**

I am trying to run a Django application on an AWS EC2 instance; I've chosen Ubuntu as my platform. After cloning the git repository and creating a virtual environment, I installed all the apps in my `requirements.txt`. When I run

```shell
python3 manage.py migrate ; python3 manage.py check ; python3 manage.py runserver
```

the following error comes up:

```
django.db.utils.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: password authentication failed for user "columbus_db"
connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: password authentication failed for user "columbus_db"
```

My settings.py file looks like this:

```python
DATABASES = {
    # 'default': {
    #     'ENGINE': 'django.db.backends.sqlite3',
    #     'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    # }
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # PostgreSQL database engine
        'NAME': 'columbus_db',       # database name
        'USER': 'columbus_db',       # database user
        'PASSWORD': 'columbus',      # database connection password
        'HOST': "localhost",         # IP address for localhost
    }
}
```

What can I change in settings.py or the EC2 instance settings to start the application and see it at the EC2 IP address?

**A:**

You are missing a running database; the app code expects it to be PostgreSQL. You have multiple choices:

- Install and run a local PostgreSQL instance directly on your EC2 instance
- Use Amazon's managed database service, RDS
- Use SQLite, which is simple to set up and doesn't require more configuration, though your app might require specific PostgreSQL features
**Q: How to prevent undetected chromedriver from closing the window after the last line of code**

I am using undetected chromedriver with Python Selenium; my problem is that it always closes the window when the program ends. For example, with a line of code like:

```python
driver.get('www.google.com')
```

it opens Google but then immediately closes the window. When I use my own chromedriver, the window stays open and I can still surf in that window even after the program ends. Any solutions?

**A:**

I simply add a `time.sleep(100)` call, or kill the kernel.
One-to-many relationships SQLAlchemy that depend on each other I'm trying to get the following models working together. Firstly the scenario is as follows: A user can have many email addresses, but each email address can only be associated with one user;Each user can only have one primary email address (think of it like their current email address).An email address is a user's id, so they must always have one, but when they change it, I want to keep track of other ones they've used in the past. So far the setup is to have a helper table user_emails that holds a tie between an email and a user, which I hear is not supposed to be setup as a class in using the declarative SQLAlchemy approach (though I don't know why). Also, am I right in thinking that I need to use use_alter=True because the users table won't know the foreign key email_id until it's inserted?models.py looks like this:"""models.py"""user_emails = Table('user_emails', Base.metadata, Column('user_id', Integer, ForeignKey('users.id'), primary_key=True), Column('email', String(50), ForeignKey('emails.address'), primary_key=True))class User(Base): __tablename__ = 'users' id = Column(Integer, Sequence('usr_id_seq', start=100, increment=1), primary_key=True) email_id = Column(String(50), ForeignKey('emails.address', use_alter=True, name='fk_email_id'), unique=True, nullable=False) first = Column(String(25), unique=True, nullable=False) last = Column(String(25), unique=True, nullable=False) def __init__(self, first, last): self.first = first self.last = lastclass Email(Base): __tablename__ = 'emails' address = Column(String(50), unique=True, primary_key=True) user = relationship(User, secondary=user_emails, backref='emails') added = Column(DateTime, nullable=False) verified = Column(Boolean, nullable=False) def __init__(self, address, added, verified=False): self.address = address self.added = added self.verified = verifiedEverything seems OK until I try and commit to the DB:>>> user = models.User("first", 
"last")>>> addy = models.Email("example@example.com", datetime.datetime.utcnow())>>> addy<Email 'example@example.com' (verified: False)>>>> user>>> <User None (active: True)>>>>>>> user.email_id = addy>>> user>>> <User <Email 'example@example.com' (verified: False)> (active: True)>>>> Session.add_all([user, addy])>>> Session.commit()>>> ...>>> sqlalchemy.exc.ProgrammingError: (ProgrammingError) can't adapt type 'Email' "INSERT INTO users (id, email_id, first, last, active) VALUES (nextval('usr_id_seq'), %(email_id)s, %(first)s, %(last)s, %(active)s) RETURNING users.id" {'last': 'last', 'email_id': <Email 'example@example.com' (verified: False)>, 'active': True, 'first': 'first'}So, I figure I'm doing something wrong/stupid, but I'm new to SQLAlchemy so I'm not sure what I need to do to setup the models correctly.Finally, assuming I get the right models setup, is it possible to add a relationship so that by loading an arbitrary email object I'll be able to access the user who owns it, from an attribute in the Email object?Thanks! | You have already got a pretty good solution, and a small fix will make your code work. Find below the quick feedback on your code below:Do you need the use_alter=True? No, you actually do not need that. If the primary_key for the Email table was computed on the database level (as with autoincrement-based primary keys), then you might need it when you have two tables with foreign_keys to each other. In your case, you even do not have that because you have a third table, so for any relationship combination the SA (sqlalchemy) will figure it out by inserting new Emails, then Users, then relationships.What is wrong with your code?: Well, you are assigning an instance of Email to User.email_id which is supposed to get the email value only. There are two ways how you can fix it:Assign the email directly. 
so change the line user.email_id = addy to user.email_id = addy.addressCreate a relationship and then make the assignment (see code below).Personally, I prefer the option-2.Other things: your current model does not check that the User.email_id is actually one of the User.emails. This might be by design, but else just add a ForeignKey from [users.id, users.email_id] to [user_emails.user_id, user_emails.email]Sample code for version-2:""" models.py """class User(Base): __tablename__ = 'users' # ... email_id = Column(String(50), ForeignKey('emails.address', use_alter=True, name='fk_email_id'), unique=True, nullable=False) default_email = relationship("Email", backref="default_for_user")""" script """# ... (all that you have below until next line)# user.email_id = addy.addressuser.default_email = addy |
**Q: Why does subprocess.Popen want a list instead of a string?**

I'd like to understand what is going on under the hood in terms of why `Popen` wants a list instead of a string. Example:

```python
import subprocess

cmd_desired = 'func -a arg1 arg2'
subprocess.Popen(cmd_desired)              # Doesn't work

list_cmd = cmd_desired.split()
subprocess.Popen(list_cmd)                 # Works

subprocess.Popen(cmd_desired, shell=True)  # Also works
```

What's going on?

**A:**

`list_cmd` is a list of strings, for example `['func', '-a', 'arg1', 'arg2']`, and that shape is exactly what the operating system wants. Using a list rather than a string is usually preferable, because it maps more or less directly onto how the underlying system call (`execve`) works, which is used to actually execute the command in the forked subprocess. Here is the prototype for `execve` in C:

```c
int execve(const char *pathname, char *const argv[], char *const envp[]);
```

The second argument, `argv`, is an array of `char *`, which corresponds closely to a list of strings in Python.

A space-separated string cannot be passed directly to `execve`. It can be used in combination with `shell=True`, as the question indicates, and then a shell is invoked as an intermediate process. The shell then interprets spaces as argument separators and uses them to split the string into an array of arguments that can be passed to `execve`. The shell will also interpret various other characters in the command string (for example `>` for output redirection).

Whether the use of a shell is desirable depends on the application, but for example, arguments containing spaces need to be protected from being split into separate arguments if a shell is used. This is not a concern when using the list version.
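When you do start from a single command string, the standard library's `shlex.split` handles quoting the way a shell would, unlike a plain `str.split` (a small illustration; `func` here is just a placeholder command name):

```python
import shlex

cmd = 'func -a "hello world" arg2'

print(cmd.split())
# ['func', '-a', '"hello', 'world"', 'arg2']  -- quoted argument broken apart

print(shlex.split(cmd))
# ['func', '-a', 'hello world', 'arg2']       -- quotes respected, ready for Popen
```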
Why does the order of function arguments matter? I am having a problem with a function I am trying to fit to some data. I have a model, given by the equation inside the function, which I am using to find a value for v. However, the order in which I write the variables in the function definition greatly affects the value the fit gives for v. If, as in the code block below, I have def MAR_fit(v,x) where x is the independent variable, the fit gives a value for v hugely different from if I have the definition def MAR_fit(x,v). I haven't had a huge amount of experience with the curve_fit function in the scipy package and the docs still left me wondering. Any help would be great!

def MAR_fit(v,x):
    return (3.*((2.-1.)**2.)*0.05*v)/(2.*(2.-1.)*(60.415**2.)) * (((3.*x*((2.-1.)**2.)*v)/(60.415**2.))+1.)**(-((5./2.)-1.)/(2.-1.))

x = newCD10_AVB1_AMIN01['time_phys'][1:]
y = (newCD10_AVB1_AMIN01['MAR'][1:])
popt_tf, pcov = curve_fit(MAR_fit, x, y)

| Have a look at the documentation again: the callable that you pass to curve_fit (the function you are trying to fit) must take the independent variable as its first argument, with the parameters you are trying to fit as the remaining arguments. With def MAR_fit(v,x), curve_fit treats v as the independent variable and fits x instead. You must use MAR_fit(x,v) because that is what curve_fit expects.
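A minimal sketch of that convention (using a toy linear model, not the MAR model from the question) shows that curve_fit feeds the x data into the callable's first parameter and fits whatever comes after it:

```python
import numpy as np
from scipy.optimize import curve_fit

# Independent variable first, fit parameter second.
def model(x, v):
    return v * x

x = np.linspace(0.0, 10.0, 50)
y = model(x, 2.5)  # noiseless data generated with v = 2.5

popt, pcov = curve_fit(model, x, y)
print(round(float(popt[0]), 3))  # 2.5
```

Swapping the parameters to `def model(v, x)` would make curve_fit fit the wrong symbol, which is exactly the wildly different result described in the question.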
Plotting multiple countplots using seaborn I have categorical variables in my data set; most of them are binary 0,1 but some are multi-class. I used countplot to plot the distributions.

f, axes = plt.subplots(4, 3, figsize=(17, 13), sharex=True)
for i, feature in enumerate(cat_var_list):
    sns.countplot(df[feature], ax=axes[i % 4, i // 4])

cat_var_list has 12 variables. However, I found that the scale is 0,1, and variables that have multi-class outcomes 0,1,2 do not show properly. For example, the plot looks like this: However, for the variable Intro Election Status, the plot should look like this: How can I make the plot show up properly in the multi-plot grid format? | I see your code works as expected with this sample data (note that the loop variable is named col here so it does not shadow the figure handle returned by plt.subplots):

np.random.seed(1)
df = pd.DataFrame(np.random.choice([0, 1], (100, 11)), columns=list('abcdefABCDE'))
df['F'] = np.random.choice([0, 1, 2], 100)
cat_var_list = 'abcdefABCDEF'

fig, axes = plt.subplots(4, 3, figsize=(17, 13), sharex=True)
for col, ax in zip(cat_var_list, axes.ravel()):
    sns.countplot(df[col], ax=ax)
    ax.set_title(col)

Output:
How to sort a dictionary of sets by the length of the sets? The key is a number and the value of each key is a set; I want to sort them according to the length of the sets.

Ans = {}
for i in range(N):
    x = set(x for x in range(1, N+1))
    Ans[i+1] = x

In later stages of the code this dictionary will have sets of varying length as values, and I want to sort the dictionary according to the length of the set stored under each key! | If you want to sort the dictionary, which is an int->set mapping, this piece of code should be enough:

Ans = {item[0]: item[1] for item in sorted(Ans.items(), key=lambda x: len(x[1]))}

It sorts the dictionary's (key, value) pairs by the length of each value set, then rebuilds the dictionary from them. There could be better ways, but this was the first thing that came to mind!
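A self-contained version of that idea, with hypothetical data in place of the question's Ans (dicts preserve insertion order in Python 3.7+, so the rebuilt dict comes out length-ordered):

```python
# Hypothetical data: keys mapping to sets of varying length.
ans = {1: {1, 2, 3}, 2: {1}, 3: {1, 2}}

# Sort the (key, set) pairs by set length, then rebuild the dict.
ans_sorted = {k: v for k, v in sorted(ans.items(), key=lambda kv: len(kv[1]))}
print(list(ans_sorted))  # [2, 3, 1]
```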
building a prefect pipeline to run tasks forever I am having trouble building a prefect pipeline. Suppose I have two files, call them streamA.py and streamB.py. The purpose of these two files is to stream data continuously 24/7 and push data into a redis stream once every 500 records streamed. I created another file called redis_to_postgres.py that grabs all the data in the redis streams asynchronously, pushes the data to postgresql, and clears from redis the stream entries whose IDs it just pushed. This is done via async. I want this timed every 15 minutes once the previous pipeline starts. What would be the most practical way of doing this? Would I create 3 separate pipelines in this case? One for streamA, one for streamB, and a third one to read from redis, push to postgresql, and finally clean the data? Or would I create one pipeline to stream data in a parallel manner and another to just read and push to postgres? Thanks | An interesting use case! Are you asking this for Prefect ≤ 1.0 or for Orion? For Orion, there is a blog post that discusses the problem in more detail and shows an example flow. But I'll assume you're asking for Prefect ≤ 1.0. In order to read the data from Redis and load it to Postgres, say every 10 seconds, you could use a loop within your Prefect task:

for iteration in range(1, 7):
    logger.info("iteration nr %s", iteration)
    read_from_redis_and_load_to_postgres()  # your logic here
    if iteration < 6:
        logger.info("Sleeping for 10 seconds...")
        time.sleep(10)

And this flow could be scheduled to run every minute.
This would give you retries, observability, and all the Prefect features, and loading data into Postgres every 10 seconds shouldn't overwhelm your database. But for the part where you get real-time data and continuously load it into a Redis stream, you may want to run it as a separate service rather than a Prefect flow, since Prefect 1.0 flows are designed more for batch processing and are expected to end at some point in order to tell whether the flow run was successful or not. If you had a Prefect flow that never ends, it could lose its flow heartbeat and get killed by a Zombie killer process. So it may be easier to run this part, e.g., as a separate containerized service running 24/7. You could deploy it as a separate Kubernetes deployment or ECS service. It also depends on many factors, incl. what this code is doing, how reliable this API is (does the source system from which you extract the data have some rate limiting? why 500 records? at what frequency do those 500 records fill up, and how frequently do you end up writing to Redis?). Having said that, I would be curious to see if you could implement it in Orion, similarly to what the blog post example does. We are currently collecting feedback about streaming use cases for Orion, so we would be interested to hear your feedback on this if you implement it in Orion.
How to do a loop over a map in Python? I want to do a loop over a map. My code is like this:

for (abcID, bbbID) in map(abcIdList, bbbIdList):
    buildXml(abcID, bbbID)

How should I make this work? | Uh, I think you want zip() instead...

>>> list(zip((1, 2), ('a', 'b')))
[(1, 'a'), (2, 'b')]

(In Python 3, zip() returns a lazy iterator, hence the list() call to display it at the prompt; a for loop can iterate over it directly.)
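Applied to the question's loop, with hypothetical stand-ins for abcIdList and bbbIdList (and collecting pairs in place of the undefined buildXml call):

```python
abc_ids = [101, 102]   # hypothetical stand-in for abcIdList
bbb_ids = ['x', 'y']   # hypothetical stand-in for bbbIdList

pairs = []
for abc_id, bbb_id in zip(abc_ids, bbb_ids):
    # buildXml(abc_id, bbb_id) would go here; we just record the pair
    pairs.append((abc_id, bbb_id))
print(pairs)  # [(101, 'x'), (102, 'y')]
```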
store openid user in cookie google appengine I am using OpenID as the login system for a Google App Engine website, and right now, on every page, I am just fetching the user info with user = users.get_current_user(). Would using a cookie to do this be more efficient? (I know it would be easier than putting that in every single page.) Is there any special way to do it with Google App Engine? I already have a cookie counting visits, but I imagine this would be a little different. Update: Could I do self.user = users.get_current_user() as a global variable and then pass in user=self.user on every page to have access to that variable? Thanks! | users.get_current_user() is actually reading the cookies, so you don't need to do anything more to optimize it (you can easily verify this by deleting your cookies and then refreshing the page). That is, unless you want to store more information and have access to it without hitting the datastore on every request.
how to read an Azure blob file with an Azure Function in Python? I am new to the Azure cloud. I want to create a workflow: upload an audio file to the blob --> the blob trigger is invoked --> the deployed Python function reads the uploaded audio file and extracts harmonics --> the harmonics are output as a JSON file and saved in another container. Below is my code, but it doesn't work:

import logging
import azure.functions as func
import audioread

def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
    audio_info = audioread.audio_open(myblob.read())
    logging.info(f"{audio_info}")

It returns this error:

Exception: UnicodeDecodeError: "utf-8" codec can't decode byte 0x80 in position 40: invalid start byte.

my function.json is:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "examplecontainer/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

| The input binding allows you to read blob storage data as input to an Azure Function. For more details refer to this document: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-input?tabs=python And make sure that you have set the proper content type and encoding while uploading the audio file as a blob to Azure Storage. For more details refer to this document.
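One likely culprit here is that audioread.audio_open() expects a file path, not the raw bytes returned by myblob.read(). A hedged workaround (a sketch, not Azure-verified) is to spill the blob stream into a temporary file and hand that path to audioread; the file round-trip itself is plain stdlib:

```python
import os
import tempfile

def save_stream_to_tempfile(data: bytes, suffix: str = ".wav") -> str:
    """Write raw bytes to a temp file and return its path, so that
    path-based libraries (e.g. audioread.audio_open) can open it."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as fh:
        fh.write(data)
    return path

# Inside the Azure function one would call (assumption, untested there):
#   path = save_stream_to_tempfile(myblob.read())
#   audio_info = audioread.audio_open(path)
path = save_stream_to_tempfile(b"\x00\x01\x02")
with open(path, "rb") as fh:
    print(fh.read() == b"\x00\x01\x02")  # True
os.remove(path)
```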
Make elements in Pandas rows I have a pandas table df:

ID  course
1   physics101
1   astronomy
2   maths
2   another

I'd like to derive a table that has the following result:

    physics101  astronomy  maths  another
ID
1   True        True       False  False
2   False       False      True   True

What kind of operation is it? (The elements of df come from a defined set of classes.) | You can use crosstab():

import pandas as pd
from io import StringIO

data = StringIO("""ID course
1 physics101
1 astronomy
2 maths
2 another
""")
df = pd.read_csv(data, delim_whitespace=True)
pd.crosstab(df.ID, df.course) > 0

output:

course  another  astronomy  maths  physics101
ID
1       False    True       False  True
2       True     False      True   False
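The same idea with inline data instead of a CSV buffer, so the boolean membership table can be probed directly:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2, 2],
    "course": ["physics101", "astronomy", "maths", "another"],
})

# crosstab counts (ID, course) occurrences; "> 0" turns counts
# into the True/False membership table from the question.
taken = pd.crosstab(df.ID, df.course) > 0
print(bool(taken.loc[1, "physics101"]), bool(taken.loc[1, "maths"]))  # True False
```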
ModuleNotFoundError: No module named 'requests' even after pip installed requests in Pycharm Previously "requests" library was working, but I wanted to do a project and install a few libraries and "requests==2.22.0" was included among the libraries. After installing the libraries, I found out I get an error on PyCharm:Traceback (most recent call last): File "C:/Users/User/Desktop/python/projects/youtube downloader.py", line 5, in <module> import requestsModuleNotFoundError: No module named 'requests'When I tried to install requests again using pip, I keep getting these errors,I also tried to uninstall requests and then re-install:C:\Users\User>pip3 install requestsRequirement already satisfied: requests in c:\users\muffin\anaconda3\lib\site-packages (2.21.0)Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (3.0.4)Requirement already satisfied: idna<2.9,>=2.5 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (2.8)Requirement already satisfied: urllib3<1.25,>=1.21.1 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (1.24.1)Requirement already satisfied: certifi>=2017.4.17 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (2018.11.29)C:\Users\User>pip install requestsRequirement already satisfied: requests in c:\users\muffin\anaconda3\lib\site-packages (2.21.0)Requirement already satisfied: idna<2.9,>=2.5 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (2.8)Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (3.0.4)Requirement already satisfied: urllib3<1.25,>=1.21.1 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (1.24.1)Requirement already satisfied: certifi>=2017.4.17 in c:\users\User\appdata\roaming\python\python37\site-packages (from requests) (2018.11.29)I 
noticed I seem to have two Python installations on my computer, but I never learned what I should do with two Pythons (I use Anaconda for Jupyter Notebook and PyCharm), and I'm assuming this could be the reason why pip is not working as I expected... Or not. I am very confused. Update: I found out that in Jupyter Notebook I can import the requests library without any issues, so I think I need to install the library into the other Python installation? Can anyone help? Thank you so much in advance. | You have not configured a Python interpreter for PyCharm. Follow this tutorial and you should be fine. I'd recommend using Anaconda and creating a virtual environment, either manually or inside PyCharm, and installing requests into that environment.
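When multiple Python installations coexist, the pip on your PATH may belong to a different interpreter than the one PyCharm runs, so packages land in the wrong place. A hedged way to see which interpreter a script is actually using, and to install into exactly that one (by running "<that path> -m pip install requests"), is:

```python
import sys

# The interpreter running this script; installing with
#   <this path> -m pip install requests
# guarantees the package lands in this same installation.
print(sys.executable)
print(sys.version_info.major)  # 3
```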
FileNotFoundError: No such file or directory ERROR

football_players = []

while True:
    print("""
    *******************
    CHOOSE OPERATION:
    1. ADD FOOTBALLER (NAME SURNAME, FOOTBALL TEAM)
    2. SHOW ME PLAYERS OF FENERBAHÇE TEAM
    3. SHOW ME PLAYERS OF GALATASARAY TEAM
    ENTER 'q' to quit...
    *******************
    """)
    operation = input("Operation:")
    if (operation == "q"):
        break
    elif (operation == "1"):
        player = list()
        players_numbers = int(input("How many players will you add?"))
        for i in range(players_numbers):
            player.append(input("Name Surname, Team:").split(","))
        with open("players.txt", "w", encoding="utf-8") as file:
            for i in player:
                file.write("Name Surname:{} Team:{}\n".format(i[0], i[1]))
                if (i[1] == "Fenerbahçe"):
                    with open("fenerbahçe_players.txt", "a", encoding="utf-8") as file2:
                        file2.write("Name Surname:{} Team:{}\n".format(i[0], i[1]))
                elif (i[1] == "Galatasaray"):
                    with open("galatasaray_players.txt", "a", encoding="utf-8") as file3:
                        file3.write("Name Surname:{} Team:{}\n".format(i[0], i[1]))
    elif (operation == "2"):
        with open("fenerbahçe_players.txt", "r", encoding="utf-8") as file2:
            file2.readlines()
    elif (operation == "3"):
        with open("galatasaray_players.txt", "r", encoding="utf-8") as file3:
            file3.readlines()

I get the error below and I can't find the solution. I need to take player names from the user and write them into players.txt. After that, I need to write two .txt files, one per team. Can you help me, please?

FileNotFoundError: [Errno 2] No such file or directory: 'fenerbahçe_players.txt'

| You are trying to open a file with open("fenerbahçe_players.txt", "r", encoding="utf-8"), but that file does not exist yet. Opening a file in "r" mode raises FileNotFoundError when the file is missing; "fenerbahçe_players.txt" is only created once option 1 has added a Fenerbahçe player (mode "a" creates the file if needed). Either add players first, or guard the read, e.g. with os.path.exists() or a try/except FileNotFoundError. Note also that file2.readlines() returns the lines but your code discards the result; print it to actually show the players.
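A hedged sketch of that guard, wrapped in a small helper (the "add a player first" message is an invention for illustration):

```python
import os

def read_players(filename: str) -> str:
    """Return the file's contents, or a notice if it does not exist yet."""
    if os.path.exists(filename):
        with open(filename, "r", encoding="utf-8") as fh:
            return fh.read()
    return f"{filename} does not exist yet - add a player first."

print(read_players("fenerbahçe_players.txt"))
```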
How to resize an image with cv2 in the Colab editor I am trying to resize an image with cv2 in the Colab editor, but I am getting the error below. Can anyone help me debug this error?

My code:

img = cv2.imread("/content/drive/My Drive/DL_DATAset/Autotag/Test Image/image100.jpg")
height = 220
width = 220
dim = (width, height)
res = cv2.resize(img, dim, interpolation=cv2.INTER_LINEAR)

Error:

error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

| The error simply means that the image cannot be loaded/read: cv2.imread() does not raise on a bad path, it silently returns None, and resizing None then fails with this assertion. It's coming from this line here:

img = cv2.imread("/content/drive/My Drive/DL_DATAset/Autotag/Test Image/image100.jpg")

Path: the image should be in the working directory, or a full path to the image should be given. Are you sure that you are providing the right path to the imread() method? Check for None before resizing:

img = cv2.imread("/content/drive/My Drive/DL_DATAset/Autotag/Test Image/image100.jpg")
if img is None:
    raise FileNotFoundError("image not found or unreadable")
When I save a PySpark DataFrame with saveAsTable in AWS EMR Studio, where does it get saved? I can save a dataframe using df.write.saveAsTable('tableName') and read the resulting table with spark.table('tableName'), but I'm not sure where the table is actually getting saved. | It is stored under the default location of your database, i.e. the warehouse directory configured by spark.sql.warehouse.dir (a spark-warehouse directory by default, or the Hive warehouse location when a Hive metastore such as AWS Glue is configured). You can get the exact location by running the following Spark SQL query:

spark.sql("DESCRIBE TABLE EXTENDED tableName")

You can find the Location under the # Detailed Table Information section.
workbook save failing, not sure why I apologize for the length of this. I am a relative neophyte to Excel VBA and even more junior with Python. I have run into an issue with an error that occasionally occurs in python using openpyxl (just trying that for the first time).

Background: I have a series of python scripts (12) running and querying an API to gather data and populate 12 different, though similar, workbooks. Separately, I have an equal number of Excel instances periodically looking for that data and doing near-real-time analysis and reporting. Another python script looks for key information to be reported from the spreadsheets and will text it to me when identified. The problem seems to occur between the data gathering python scripts and a copy command in the data analysis workbooks.

The way the python data gathering scripts "talk" to the analysis workbooks is via the sheets they build in their workbooks. The existing VBA in the analysis workbooks will copy the data workbooks to another directory (so that they can be opened and manipulated without impacting their use by the python scripts) and then interpret and copy the data into the Excel analysis workbook. Although I recently tested a method to read the data directly from those python-created workbooks without opening them, the VBA will require some major surgery to convert to that method and is likely not going to happen soon.

TL;DR: There are data workbooks and analysis workbooks. Python builds the data workbooks, and the analysis workbooks use VBA to copy the data workbooks to another directory and load specific data from the copied data workbooks. There is a one-to-one correspondence between the data and analysis workbooks.

Based on the above, I believe that the only "interference" that occurs with the data workbooks is when the macro in the analysis workbook copies the workbook.
I thought this would be a relatively safe level of interference, but it apparently is not. The copy is done in VBA with this set of commands (the actual VBA sub is about 500 lines):

fso.CopyFile strFromFilePath, strFilePath, True

where fso is set thusly:

Set fso = CreateObject("Scripting.FileSystemObject")

and strFromFilePath and strFilePath both include a fully qualified file name (with their respective paths). This has not generated any errors on the VBA side. The data is copied about once a minute (though it varies from 40 seconds to about 5 minutes) and seems to work fine from a VBA perspective.

What fails is the python side about 1% of the time (which is probably 12 or fewer times daily). While that seems small, the associated data capture process halts until I notice and restart it. This means anywhere from 1 to all 12 of the data capture processes will fail at some point each day. Here is what a failure looks like:

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    monitor('DLD',1,13,0)
  File "<string>", line 794, in monitor
  File "C:\Users\abcd\AppData\Local\Programs\Python\Python39\lib\site-packages\openpyxl\workbook\workbook.py", line 407, in save
    save_workbook(self, filename)
  File "C:\Users\abcd\AppData\Local\Programs\Python\Python39\lib\site-packages\openpyxl\writer\excel.py", line 291, in save_workbook
    archive = ZipFile(filename, 'w', ZIP_DEFLATED, allowZip64=True)
  File "C:\Users\abcd\AppData\Local\Programs\Python\Python39\lib\zipfile.py", line 1239, in __init__
    self.fp = io.open(file, filemode)
PermissionError: [Errno 13] Permission denied: 'DLD20210819.xlsx'

and I believe it occurs as a result of the following lines of python code (which come after a while statement with various if conditions to populate the worksheets). The python script itself is about 200 lines long:

    time.sleep(1)  # no idea why wb.save sometimes fails; trying a delay
    wb.save(FileName)

Notice, I left in one of the attempts to correct this.
I have tried waiting as much as 3 seconds with no noticeable difference. I admit I have no idea how to detect errors thrown by openpyxl and am quite unskilled at python error handling, but I had tried this code yesterday:

    retries = 1
    success = False
    while not success and retries < 3:
        try:
            wb.save
            success = True
        except PermissionError as saveerror:
            print('>>> Save Error: ', saveerror)
            wait = 3
            print('=== Waiting %s secs and re-trying... ===' % wait)
            # sys.stdout.flush()
            time.sleep(wait)
            retries += 1

My review of the output tells me that the except code never executed while testing the data capture routine over 3000 times. However, the "save" also never happened, so the analysis spreadsheets did not receive any information until later, when the python code saved the workbook and closed it. I also tried adding a wb.close after setting the success variable to true, but got the same results.

I am considering either rewriting the VBA to try to grab the data directly from the unopened data workbooks without first copying them (which actually sounds more dangerous) or using an external syncing tool to copy them outside of VBA (which could potentially cause exactly the same problem).

Does anyone have an idea of what may be happening and how to address it? It works nearly all the time but just fails several times a day. Can someone help me to better understand how to trap the error thrown by openpyxl so that I can have it retry rather than just abending? Any suggestions are appreciated. Thank you for reading. | Not sure if this is the best way, but the comment from simpleApp gave me an idea: I may want to use a technique I used elsewhere in the VBA. Since I am new to these tools, perhaps someone can suggest a cleaner approach, but I am going to try using a semaphore file to signal when I am copying the file, to alert the python script that it should avoid saving. In the below I am separating out the directory, the prefix, and the suffix.
The prefix would be different for each of the 12 or more instances I am running, and I have not figured out where I want to put these files nor what suffix I should use, so I made them variables. For example, in the VBA I will have something like this to create a file saying currently available:

    Dim strSemaphoreFolder As String
    Dim strFilePrefix As String
    Dim strFileDeletePath As String
    Dim strFileInUseName As String
    Dim strFileAvailableName As String
    Dim strSemaphoreFileSuffix As String
    Dim fso As Scripting.FileSystemObject
    Dim fileTemp As TextStream

    Set fso = CreateObject("Scripting.FileSystemObject")
    strSemaphoreFileSuffix = ".txt"
    strSemaphoreFolder = "c:\temp\monitor\"
    strFilePrefix = "RJD"
    strFileDeletePath = strSemaphoreFolder & strFilePrefix & "*" & strSemaphoreFileSuffix

    ' Clean up remnants from prior activities
    If Len(Dir(strFileDeletePath)) > 0 Then
        Kill strFileDeletePath
    End If
    ' files should be gone

    ' Set the In-use and Available Names
    strFileInUseName = strFilePrefix & "InUse" & strSemaphoreFileSuffix
    strFileAvailableName = strFilePrefix & "Available" & strSemaphoreFileSuffix

    ' Create an available file
    Set fileTemp = fso.CreateTextFile(strSemaphoreFolder & strFileAvailableName, True)
    fileTemp.Close
    ' available file should be there

Then, when I am about to copy the file, I will briefly rename the semaphore file to indicate that the file is in use, perform the potentially problematic copy, and then change it back, with something like this:

    ' Temporarily name the semaphore file to "In Use"
    Name strSemaphoreFolder & strFileAvailableName As strSemaphoreFolder & strFileInUseName

    fso.CopyFile strFromFilePath, strFilePath, True

    ' After copying the file name it back to "Available"
    Name strSemaphoreFolder & strFileInUseName As strSemaphoreFolder & strFileAvailableName

Over in the Python script, before I do the wb.save command, I will insert a check to see whether the file indicates that it is available or in use, with something like this:

    prefix = 'RJD'
    directory = 'c:\\temp\\monitor\\'
    suffix = '.txt'
    filepathname = directory + prefix + 'Available' + suffix

    while not (os.path.isfile(directory + prefix + 'Available' + suffix)):
        time.sleep(1)
    wb.save

Does this seem like it would work? I am thinking that it should avoid the failure, if I have properly identified it as an attempt to save the file in the Python script while the VBA script is telling the operating system to copy it. Thoughts?

Afterthoughts: Using the technique I described, I probably need to create the "Available" semaphore file in the Python script and simply assume it will be there in the VBA script, since the Python script is collecting the data and may be doing so before the VBA is even started. A better alternative may be to simply check for the existence of the "In Use" file, which will never be there unless the VBA wants it there, like this:

    while (os.path.isfile(directory + prefix + 'InUse' + suffix)):
        time.sleep(1)
    wb.save

|
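On the "how do I trap the error and retry" part of the question: note that wb.save without parentheses is a no-op attribute reference, not a call, which would explain why neither the save nor the except branch ever ran in the earlier retry attempt; it needs to be wb.save(FileName). A hedged sketch of such a retry, written as a generic helper so it can be tested without Excel (the flaky_save stub below simulates the intermittent PermissionError):

```python
import time

def save_with_retries(save, retries=3, wait=3.0, sleep=time.sleep):
    """Call save(); on PermissionError wait and retry, up to `retries` attempts."""
    for attempt in range(1, retries + 1):
        try:
            save()                 # e.g. lambda: wb.save(FileName)
            return True
        except PermissionError:
            if attempt == retries:
                raise              # out of attempts: re-raise the last error
            sleep(wait)
    return False

# Self-test with a stub that fails twice, then succeeds.
calls = {"n": 0}
def flaky_save():
    calls["n"] += 1
    if calls["n"] < 3:
        raise PermissionError("file locked")

print(save_with_retries(flaky_save, retries=5, wait=0, sleep=lambda s: None))  # True
```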
NoSuchElementException: no such element: Unable to locate element: css selector, but I'm using find_element I used the Selenium IDE to trace my UI activity. I got the following code from the IDE, and I inspected the element in the UI as well, but while using find_element with By.ID I'm getting a css selector error.

driver.find_element(By.ID, "button-1034-btnIconEl").click()

The error is:

raise exception_class(message, screen, stacktrace)
NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"[id="button-1034-btnIconEl"]"} (Session info: chrome=78.0.3904.108)

Please help me to debug this. | The id seems to be a dynamic one, so you cannot use a static id in the selector. You need to use a dynamic XPath for this. You can use the below XPath:

driver.find_element(By.XPATH, "//span[contains(@id,'btnIconEl')]").click()

Or you can find the element using its text in the XPath as well:

driver.find_element(By.XPATH, "//span[contains(text(),'Add Order')]").click()
How do I make .tmp files (of a given size) with batch and Python I am trying to make 20 .tmp files in batch or Python. I have been looking through everything and can't find a solution. I want the .tmp files to be stored in C:\Users\%USERNAME%\AppData\Local\Temp. Here is the code.

Python:

import tempfile

# This makes a .tmp file, but only 0 KB; I want something like 37 KB
for i in range(20):
    new_file, filename = tempfile.mkstemp('.tmp')
    print(filename)

This makes a .tmp file, but I want to increase the size/storage it takes. I don't have a way for batch, so please help me; all support is appreciated. | Whilst the question has already been answered, and a python solution accepted, the OP did also include the batch-file tag. For that reason, here is a very basic example of how to generate a dummy file of size 37 KB filled with _ characters, from a batch file.

@(For /L %%G In (1,1,37) Do @For /L %%H In (1,1,1024) Do @Set /P "=_" 0<NUL) 1>"%LocalAppData%\Temp\dummy.tmp"

Just change the number of kilobytes, currently 37 (just before the first closing parenthesis), and the fill character, _ (just after the = character), as needed. Please note that the above example will be relatively slow, so it would be unsuitable for large files. There are better ways of creating larger dummy files, one of which is often ignored because historically it required 'Run as administrator'. However, certainly in Windows 10, that restriction is no longer in place for the required task, and standard users can benefit from its use.
That method involves the built-in utility fsutil.exe, which has the ability to very quickly create a file of defined size, including larger ones.

Example:

fsutil file createNew <filename> <length>

For a 37 KB file named dummy.tmp in %SystemDrive%\Users\%UserName%\AppData\Local\Temp (which is usually, by default, %TEMP% and/or %TMP%), from a batch file:

@Set "sizeKB=37"
@Set /A sizeB = sizeKB * 1024
@%SystemRoot%\System32\fsutil.exe file createNew "%LocalAppData%\Temp\dummy.tmp" %sizeB%

The downside of this is that you do not have an opportunity to select the content or its format. You could of course leverage other built-in scripting languages from a batch file (VBScript, JScript and PowerShell), but as the OP already has python.exe installed, including such things under this question would be wasted effort.
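For completeness, the Python side of the question can be solved the same way: create the temp file with mkstemp and then write the desired number of bytes into it. A hedged sketch (creating 3 files instead of 20 to keep it short, filled with NUL bytes):

```python
import os
import tempfile

size_bytes = 37 * 1024  # 37 KB, as in the question

# Create temp files of the requested size in the default temp dir
# (C:\Users\<user>\AppData\Local\Temp on Windows).
paths = []
for _ in range(3):
    fd, path = tempfile.mkstemp(suffix=".tmp")
    with os.fdopen(fd, "wb") as fh:
        fh.write(b"\0" * size_bytes)   # fill with NUL bytes
    paths.append(path)

print(all(os.path.getsize(p) == size_bytes for p in paths))  # True
for p in paths:
    os.remove(p)
```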
Create CSV from irregular list of dictionaries in python I have a list of dictionaries like

[
    {'AB Code': 'Test AB Code', 'Created': '2020-08-04 13:20:55.196500+00:00'},
    {'AB Code': 'Test AB Code', 'Created': '2020-08-04 13:20:11.315136+00:00', 'Name': 'John Doe', 'Email': 'john@example.com', 'Phone No.': '1234567890', 'Age': '31'}
]

The two dictionary objects have different sets of keys. I want to create a header for each distinct key, with the values beneath it. The resulting CSV should be

AB Code, Created, Name, Email, Phone No., Age
Test AB Code, 2020-08-04 13:20:55.196500+00:00, '', '', '', ''
Test AB Code, 2020-08-04 13:20:11.315136+00:00, John Doe, john@example.com, 1234567890, 31

What I'm doing is

# header
d_ = []
# values
for index, item in enumerate(data):
    if index == 0:
        d_.append(list(item.keys()))
    d_.append(list(item.values()))

# Add to CSV
buffer = io.StringIO()
wr = csv.writer(buffer, quoting=csv.QUOTE_ALL)
wr.writerows(d_)

Which generates this CSV (the header only has the first dictionary's keys):

AB Code, Created
Test AB Code, 2020-08-04 13:20:55.196500+00:00
Test AB Code, 2020-08-04 13:20:11.315136+00:00, John Doe, john@example.com, 1234567890, 31

| There is an answer using pandas provided by @bigbounty in the comments to the question. Here is a solution using just the standard library (ChainMap yields the union of all keys, and DictWriter fills missing fields with ''):

import csv
from collections import ChainMap

data = [
    {'AB Code': 'Test AB Code', 'Created': '2020-08-04 13:20:55.196500+00:00'},
    {'AB Code': 'Test AB Code', 'Created': '2020-08-04 13:20:11.315136+00:00', 'Name': 'John Doe', 'Email': 'john@example.com', 'Phone No.': '1234567890', 'Age': '31'}
]

keys = list(ChainMap(*data))

with open('spam.csv', 'w', newline='') as f:
    wrtr = csv.DictWriter(f, fieldnames=keys, quoting=csv.QUOTE_ALL)
    wrtr.writeheader()
    wrtr.writerows(data)

Also there is an extended discussion on merging dicts, which may be somewhat relevant.
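The same approach writing into an in-memory buffer instead of a file, with shortened placeholder timestamps (t1, t2) so the rows stay readable; note how the row missing Name/Email gets empty fields:

```python
import csv
import io
from collections import ChainMap

data = [
    {"AB Code": "Test AB Code", "Created": "t1"},
    {"AB Code": "Test AB Code", "Created": "t2",
     "Name": "John Doe", "Email": "john@example.com"},
]

# Union of all keys across the dicts; iterating a ChainMap visits the
# last map's keys first, which here matches the desired header order.
keys = list(ChainMap(*data))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=keys)  # missing keys -> ''
writer.writeheader()
writer.writerows(data)
print(buf.getvalue().splitlines()[0])  # AB Code,Created,Name,Email
```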
I can't install streamlit with pip I am running python 3.8.2 and pip 20.1.1 on Windows 10. When I try to install streamlit with pip install streamlit I get a long list of errors that appear in the console. Some of the errors seem to say that it needs to be installed on a python version < 3.7, however streamlit is supposed to work for 3.8.2. Does anyone know why this is happening?I can't include the entire error because it goes over the max character limit, but if anyone needs the rest of it, I can give it. Here is as much of the error as I can fit: ERROR: Command errored out with exit status 1: command: 'c:\users\lori\appdata\local\programs\python\python38-32\python.exe' 'c:\users\lori\appdata\local\programs\python\python38-32\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\LORI\AppData\Local\Temp\pip-build-env-l8t3vf24\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'cython >= 0.29' 'numpy==1.14.5; python_version<'"'"'3.7'"'"'' 'numpy==1.16.0; python_version>='"'"'3.7'"'"'' setuptools setuptools_scm wheel cwd: None Complete output (579 lines): Ignoring numpy: markers 'python_version < "3.7"' don't match your environment Collecting cython>=0.29 Using cached Cython-0.29.21-cp38-cp38-win32.whl (1.6 MB) Collecting numpy==1.16.0 Using cached numpy-1.16.0.zip (5.1 MB) Collecting setuptools Using cached setuptools-49.2.0-py3-none-any.whl (789 kB) Collecting setuptools_scm Using cached setuptools_scm-4.1.2-py2.py3-none-any.whl (27 kB) Collecting wheel Using cached wheel-0.34.2-py2.py3-none-any.whl (26 kB) Building wheels for collected packages: numpy Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: 'c:\users\lori\appdata\local\programs\python\python38-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = 
'"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"'; __file__='"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\LORI\AppData\Local\Temp\pip-wheel-ga8cnrkz' cwd: C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\ Complete output (264 lines): Running from numpy source directory. C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\misc_util.py:476: SyntaxWarning: "is" with a literal. Did you mean "=="? return is_string(s) and ('*' in s or '?' is s) blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not 
locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in 
['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\system_info.py:625: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. self.calc_info() blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\system_info.py:625: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() NOT AVAILABLE 'svnversion' is not recognized as an internal or external command, operable program or batch file. 
non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE openblas_lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE openblas_clapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas,lapack not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from 
distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in 
c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in 
c:\users\lori\appdata\local\programs\python\python38-32\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in c:\users\lori\appdata\local\programs\python\python38-32\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. self.calc_info() lapack_src_info: NOT AVAILABLE C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\numpy\distutils\system_info.py:625: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
self.calc_info() NOT AVAILABLE c:\users\lori\appdata\local\programs\python\python38-32\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.win32-3.8 creating build\src.win32-3.8\numpy creating build\src.win32-3.8\numpy\distutils building library "npymath" sources No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for numpy Running setup.py clean for numpy ERROR: Command errored out with exit status 1: command: 'c:\users\lori\appdata\local\programs\python\python38-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"'; __file__='"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all cwd: C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy Complete output (10 lines): Running from numpy source directory. `setup.py clean` is not supported, use one of the following instead: - `git clean -xdf` (cleans all files) - `git clean -Xdf` (cleans all versioned files, doesn't touch files that aren't checked into the git repo) Add `--force` to your command to use it anyway if you must (unsupported). 
---------------------------------------- ERROR: Failed cleaning build dir for numpy Failed to build numpy ERROR: opencv-python 4.3.0.36 has requirement numpy>=1.17.3, but you'll have numpy 1.16.0 which is incompatible. Installing collected packages: cython, numpy, setuptools, setuptools-scm, wheel Running setup.py install for numpy: started Running setup.py install for numpy: finished with status 'error' ERROR: Command errored out with exit status 1: command: 'c:\users\lori\appdata\local\programs\python\python38-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"'; __file__='"'"'C:\\Users\\LORI\\AppData\\Local\\Temp\\pip-install-a17oetog\\numpy\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\LORI\AppData\Local\Temp\pip-record-8idiuyk5\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\LORI\AppData\Local\Temp\pip-build-env-l8t3vf24\overlay' --compile --install-headers 'C:\Users\LORI\AppData\Local\Temp\pip-build-env-l8t3vf24\overlay\Include\numpy' cwd: C:\Users\LORI\AppData\Local\Temp\pip-install-a17oetog\numpy\ Complete output (267 lines): Running from numpy source directory. 
Note: if you need reliable uninstall behavior, then install with pip instead of using `setup.py install`: - `pip install .` (from a git repo or downloaded source release) - `pip install numpy` (last NumPy release on PyPi) blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize 
IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\lib', 'C:\\', 'c:\\users\\lori\\appdata\\local\\programs\\python\\python38-32\\libs'] NOT AVAILABLE | After reading about a similar issue to yours, they recommended Python 3.7, since the dependency builds were not quite stable on 3.8 at the time. Another solution one user provided: I was able to fix it by installing two Visual Studio dist packages. Download the Visual C++ Build Tools installer and install it from here: http://go.microsoft.com/fwlink/?LinkId=691126&fixForIE=.exe Then go to this link, download the setup, and install the Visual C++ Redistributable for Visual Studio 2015: https://www.microsoft.com/en-in/download/details.aspx?id=48145 I wouldn't consider this a "proper" answer, but still, if it solves your errors, I'd be happy.
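As a first diagnostic, it can help to confirm which interpreter pip is building for. The log above shows a 32-bit Python 3.8 (`python38-32`, `build\src.win32-3.8`); when no prebuilt numpy wheel matches that interpreter, pip falls back to compiling from source, which is what triggers the "Microsoft Visual C++ 14.0 is required" error. A quick check, not specific to streamlit:

```python
import struct
import sys

# Pointer size reveals whether this is a 32- or 64-bit Python build.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor}, {bits}-bit")
```

If this reports a Python/architecture combination for which the pinned dependency versions ship no wheels, pip has no choice but to build them from source.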
RNN, Keras, Python: Min Max Scaler Data normalization ValueError: Found array with dim 3. Estimator expected I prepared a simple dataset and labels set. I want to learn how can I implement simple RNN in Keras. I prepared my data. When I do not use normalization (MinMaxScaler) everything compiles without errors. However, when I try to use the scaler, I got ValueError: Found array with dim 3. Estimator expected <= 2.. This is the code:# -*- coding: utf-8 -*- #!/usr/bin/env python3import tensorflow as tffrom tensorflow import kerasfrom keras.models import Sequentialfrom keras.layers import Dense, SimpleRNN from keras.callbacks import ModelCheckpointfrom keras import backend as Kfrom sklearn.preprocessing import MinMaxScalerfrom sklearn.model_selection import train_test_splitimport numpy import matplotlib.pyplot as pltdef stagger(a, delay): num_of_rows = a.shape[0] num_of_cols = a.shape[1] data_in = numpy.zeros((num_of_rows + delay, num_of_cols * (1 + delay))) data_in[0:num_of_rows, 0:4] = a data_in[1:(num_of_rows + 1), 4:8] = a a = data_in[0:num_of_rows, :] return adataset = numpy.array([[0, 2, 0, 324], [1, 2, 0,324], [2, 2, 0, 324], [3, 2, 0, 324], [4, 2, 0, 324], [5, 2, 0, 324], [6, 2, 0, 324], [7, 2, 0, 324], [8, 2, 0, 324], [9, 2, 0, 324], [ 10, 2, 0, 324], [ 11, 2, 0, 324], [ 12, 2, 0, 324], [ 13, 2, 0, 324], [ 14, 2, 0, 324], [ 15, 2, 0, 324], [ 16, 2, 0, 324], [ 17, 2, 0, 324],[ 18, 2, 0, 324], [ 19, 2, 0, 324], [ 20, 2, 0, 324], [ 21, 2, 0, 324],[ 22, 2, 0, 324], [ 23, 2, 0, 324]])labels = numpy.array([[0.82174763], [0.62098727], [0.45012733], [1.5912102 ], [0.37570953], [0.2930966 ], [0.34982923], [0.72239097], [1.37881947], [1.79550653], [1.88867237], [1.93567087], [1.9771925 ], [2.10873853], [2.158302 ], [2.11018633], [1.9714166 ], [2.2553416 ], [2.41161887], [2.41161887], [2.30333453], [2.38390613], [2.21882553], [2.0707972 ]])delay = 2input_shape = (1, 4*(1+delay))min_max_scaler = MinMaxScaler(feature_range=(0, 1))# prepare datasetdataset = 
stagger(dataset, delay)# split datasetx_train, x_test, y_train, y_test = train_test_split(dataset, labels, test_size=0.2, shuffle=False)x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, shuffle=False)# normalize datasetx_train = min_max_scaler.fit_transform(x_train)x_test = min_max_scaler.transform(x_test)x_val = min_max_scaler.transform(x_val)# reshape datasetx_train = numpy.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))x_test = numpy.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))x_val = numpy.reshape(x_val, (x_val.shape[0], 1, x_val.shape[1]))y_train = numpy.reshape(y_train, (y_train.shape[0], 1, y_train.shape[1]))y_test = numpy.reshape(y_test, (y_test.shape[0], 1, y_test.shape[1]))y_val = numpy.reshape(y_val, (y_val.shape[0], 1, y_val.shape[1]))# RNN modelmodel = Sequential()model.add(SimpleRNN(64, activation="relu", kernel_initializer='random_uniform', input_shape=input_shape, return_sequences=True))model.add(Dense(32, activation="relu", kernel_initializer= 'random_uniform')) model.add(Dense(1, activation="linear", kernel_initializer= 'random_uniform'))# train and predictcallback = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=2, verbose=0, mode='auto')model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy', tf.keras.metrics.MeanSquaredError()])history = model.fit(x_train, y_train, epochs=100, batch_size=8, validation_data=(x_val, y_val), callbacks=[callback])results = model.evaluate(x_test, y_test)# plottest_predictions = model.predict(x_test)test_predictions = min_max_scaler.inverse_transform(test_predictions)y_test = y_test[:,:,0]test_predictions = test_predictions[:,:,0]plt.plot(y_test)plt.plot(test_predictions)plt.legend(['y_test', 'predictions'], loc='upper left')plt.show() | this is because you are passing 3d sequences to minmaxscaler. it accepts 2d sequences. what you have to do is to transform your prediction in 2d and then return to 3d. 
This can be done in one line:test_predictions = min_max_scaler.inverse_transform(test_predictions.reshape(-1,1)).reshape(test_predictions.shape)
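The round-trip can be sketched with plain numpy — flatten the 3-D predictions to the 2-D shape the scaler accepts, invert the scaling, and restore the original shape. The min/max arithmetic below mirrors what MinMaxScaler does internally; the bounds are made up for illustration:

```python
import numpy as np

# toy 3-D predictions: (samples, timesteps, features)
preds = np.arange(12, dtype=float).reshape(4, 1, 3)
flat = preds.reshape(-1, 1)  # 2-D, as the scaler requires

# emulate a MinMaxScaler fitted with these (hypothetical) bounds
lo, hi = 0.0, 22.0
scaled = (flat - lo) / (hi - lo)                           # transform
restored = (scaled * (hi - lo) + lo).reshape(preds.shape)  # inverse + reshape

print(np.allclose(restored, preds), restored.shape)
```

The final `.reshape(preds.shape)` is the step that undoes the flattening, exactly as in the one-liner above.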
nanmean with weights to calculate weighted average in pandas .agg I'm using a lambda function in a pandas aggregation to calculate the weighted average. My issue is that if one of the values is nan, the whole result for that group is nan. How can I avoid this?df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns = ['one', 'two', 'three'])df['four'] = 'bar'df['five'] = df['one'] > 0df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])df.loc['b','four'] ='foo'df.loc['c','four'] ='foo' one two three four five founda 1.046540 -0.304646 -0.982008 bar True NaNb NaN NaN NaN foo NaN fooc -1.086525 1.086501 0.403910 foo False NaNd NaN NaN NaN NaN NaN NaNe 0.569420 0.105422 0.192559 bar True NaNf 0.384400 -0.558321 0.324624 bar True NaNg NaN NaN NaN NaN NaN NaNh 0.656231 -2.185062 0.180535 bar True NaNdf.groupby('four').agg(sum=('two','sum'), weighted_avg=('one', lambda x: np.average(x, weights=df.loc[x.index, 'two']))) sum weighted_avgfour bar -2.942608 0.648173foo 1.086501 NaNdesired result: sum weighted_avgfour bar -2.942608 0.648173foo 1.086501 -1.086525 Unlike this question, this is not the problem that the actual value of the column does not appear, it's a problem of nanmean not having a weighting option.Another numerical example: x y0 NaN 18.01 NaN 21.02 NaN 38.03 56.0 150.04 65.0 154.0Here we would want to just return the weighted average of the two last rows and ignore the other rows that contain nan. | This solution works for me:def f(x): indices = ~np.isnan(x) return np.average(x[indices], weights=df.loc[x.index[indices], 'two'])df = df.groupby('four').agg(sum=('two','sum'), weighted_avg=('one', f))print (df) sum weighted_avgfour bar -2.942607 0.648173foo 1.086501 -1.086525EDIT:def f(x): indices = ~np.isnan(x) if indices.all(): return np.average(x[indices], weights=df.loc[x.index[indices], 'two']) else: return np.nan
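Applied to the question's second numerical example, the same idea — drop the NaN rows first, then weight x by y — looks like this (a minimal standalone sketch):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [np.nan, np.nan, np.nan, 56.0, 65.0],
    "y": [18.0, 21.0, 38.0, 150.0, 154.0],
})

# keep only rows where x is present, then take the y-weighted average of x
valid = df.dropna(subset=["x"])
wavg = np.average(valid["x"], weights=valid["y"])
print(round(wavg, 3))  # (56*150 + 65*154) / (150 + 154) ≈ 60.559
```

Only the last two rows contribute, as the question requires; the NaN rows are excluded from both the values and the weights.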
How to select only few columns in scikit learn column selector pipeline? I was reading the scikitlearn tutorial about column transformer. The given example (https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_selector.html#sklearn.compose.make_column_selector) works, but when I tried to select only few columns, It gives me error.MWEimport numpy as npimport pandas as pdimport seaborn as snsfrom sklearn.compose import make_column_transformerfrom sklearn.compose import make_column_selectordf = sns.load_dataset('tips')mycols = ['tip','sex']ct = make_column_transformer(make_column_selector(pattern=mycols)ct.fit_transform(df)RequiredI want only the select columns in the output.NOTEOf course, I know I can do df[mycols], I am looking for scikit learn pipeline example. | If you don't mind mlxtend, it has built-in transformer for that.Using mlxtendfrom mlxtend.feature_selection import ColumnSelectorpipe = ColumnSelector(mycols)pipe.fit_transform(df)For sklearn >= 0.20Reference: https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.htmlfrom sklearn.compose import ColumnTransformerfrom sklearn.pipeline import Pipelineimport seaborn as snsdf = sns.load_dataset('tips')mycols = ['tip','sex']pipeline = Pipeline([ ("selector", ColumnTransformer([ ("selector", "passthrough", mycols) ], remainder="drop"))])pipeline.fit_transform(df)For sklearn < 0.20from sklearn.base import BaseEstimator, TransformerMixinfrom sklearn.pipeline import Pipelineclass FeatureSelector(BaseEstimator, TransformerMixin): def __init__(self, columns): self.columns = columns def fit(self, X, y=None): return self def transform(self, X, y=None): return X[self.columns]pipeline = Pipeline([('selector', FeatureSelector(columns=mycols)) ])pipeline.fit_transform(df)[:5] |
making dictionary from some logic Teams 'A', 'B' and 'C' scored 12, 1, and 9 consecutive goals respectively.teams = ['A','B','C']goals = [12,1,9]Which team scored the 5th goal? Answer: team 'A'.Which team scored the 13th goal? Answer: team 'B'.Which team scored the 21st goal? Answer: team 'C'.I want to make a dictionary mapping goal number to team.@Kevin's answer is nice: dict(enumerate([t for t,g in zip(teams, goals) for _ in range(g)], 1))Then, given the list [5,13,21], how do I get the list ['A','B','C']? | >>> teams = ['A','B','C']>>> goals = [12,1,9]>>> d = dict(enumerate([t for t,g in zip(teams, goals) for _ in range(g)], 1))>>> d[5]'A'>>> d[13]'B'>>> d[21]'C'This is roughly equivalent to:d = {}count = 1for team, goal in zip(teams, goals): for i in range(goal): d[count] = team count += 1
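For the follow-up — turning [5, 13, 21] back into ['A', 'B', 'C'] — you can simply index the dict in a list comprehension ([d[n] for n in [5, 13, 21]]), or skip building one entry per goal entirely by combining cumulative totals with bisect (a sketch of the alternative):

```python
from bisect import bisect_left
from itertools import accumulate

teams = ['A', 'B', 'C']
goals = [12, 1, 9]

# cumulative last goal number scored by each team: [12, 13, 22]
bounds = list(accumulate(goals))

def team_for(goal_no):
    # index of the first team whose cumulative total reaches goal_no
    return teams[bisect_left(bounds, goal_no)]

print([team_for(n) for n in [5, 13, 21]])  # ['A', 'B', 'C']
```

This stays O(len(teams)) in memory regardless of how many goals were scored, while the dict approach stores one key per goal.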
What is happening with torch.Tensor.add_? I'm looking at this implementation of SGD for PyTorch: https://pytorch.org/docs/stable/_modules/torch/optim/sgd.html#SGD And I see some strange calculations which I don't understand. For instance, take a look at p.data.add_(-group['lr'], d_p). It makes sense to think that there is a multiplication of the two parameters, right? (It's how SGD works, -lr * grads) But the documentation of the function doesn't say anything about this.And what is more confusing, although this SGD code actually works (I tested by copying the code and calling prints below the add_), I can't simply use add_ with two arguments as it does:#this returns an error about using too many arguments import torcha = torch.tensor([1,2,3])b = torch.tensor([6,10,15])c = torch.tensor([100,100,100])a.add_(b, c)print(a)What's going on here? What am I missing? | This works for scalars:a = t.tensor(1)b = t.tensor(2)c = t.tensor(3)a.add_(b, c)print(a) tensor(7)Or a can be a tensor:a = t.tensor([[1,1],[1,1]])b = t.tensor(2)c = t.tensor(3)a.add_(b, c)print(a) tensor([[7, 7], [7, 7]])The output is 7 because in this (since-deprecated) two-argument form one of the arguments is treated as a scalar multiplier: the call computes self += value * other. That is exactly why the SGD code works — p.data.add_(-group['lr'], d_p) is p.data += -group['lr'] * d_p, i.e. the -lr * grads you expected. The multiplier has to be a plain number (or a zero-dimensional tensor, as above), which is why your example with three multi-element tensors raises an error.
format date from excel to dd-Mmm-yyyy in Python I have pulled a date from Excel into Python; when I print it, the date shows as "2019-11-28 00:00:00". I want to pass this date back to my Python program in the format 28-Nov-2019. How do I do it? | Here's a function that will do the job:from datetime import datetimedef excel_to_formatted_date(input_date): return datetime.fromisoformat(input_date).strftime('%d-%b-%Y')Testing with your value gives the following:>>> excel_to_formatted_date('2019-11-28 00:00:00')'28-Nov-2019'Please note that the %b format code will use the current locale (i.e. the month name will be translated according to the current locale)Refer to the documentation for more information on datetime's format codes.
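Note that datetime.fromisoformat only exists on Python 3.7+; on older versions (or for arbitrary input formats) datetime.strptime does the same job — a sketch using the question's value:

```python
from datetime import datetime

s = "2019-11-28 00:00:00"
# parse with an explicit format string instead of fromisoformat
dt = datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
out = dt.strftime("%d-%b-%Y")
print(out)  # 28-Nov-2019 (month abbreviation depends on locale)
```

The explicit format string also makes the expected input shape obvious, which helps when the Excel export format changes.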
/usr/include folder missing in mac I've tried pretty much everything on stackoverflow and other forums to get the /usr/include/ folder on my mac (currently using OS X 10.9.5)Re-installed Xcode and command line tools (actually, command line tools weren't one of the downloads available - so I'm guessing they were already downloaded)tried /Applications/Install Xcode.app command line on terminalI haven't tested if there is no standard library on Xcode, but I'm only trying to build cloudera/hue from github and it won't install because there is no /usr/include/python2.7 (and couldn't really ask their forum because the error isn't coming from cloudera/hue).How do I get the /usr/include folder? | On macOS 10.14, try:sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /
General decorator to wrap try except in python? I'd interacting with a lot of deeply nested json I didn't write, and would like to make my python script more 'forgiving' to invalid input. I find myself writing involved try-except blocks, and would rather just wrap the dubious function up.I understand it's a bad policy to swallow exceptions, but I'd rather prefer they to be printed and analysed later, than to actually stop execution. It's more valuable, in my use-case to continue executing over the loop than to get all keys.Here's what I'm doing now:try: item['a'] = myobject.get('key').METHOD_THAT_DOESNT_EXIST()except: item['a'] = ''try: item['b'] = OBJECT_THAT_DOESNT_EXIST.get('key2')except: item['b'] = ''try: item['c'] = func1(ARGUMENT_THAT_DOESNT_EXIST)except: item['c'] = ''...try: item['z'] = FUNCTION_THAT_DOESNT_EXIST(myobject.method())except: item['z'] = ''Here's what I'd like, (1):item['a'] = f(myobject.get('key').get('subkey'))item['b'] = f(myobject.get('key2'))item['c'] = f(func1(myobject)...or (2):@fdef get_stuff(): item={} item['a'] = myobject.get('key').get('subkey') item['b'] = myobject.get('key2') item['c'] = func1(myobject) ... return(item)...where I can wrap either the single data item (1), or a master function (2), in some function that turns execution-halting exceptions into empty fields, printed to stdout. The former would be sort of an item-wise skip - where that key isn't available, it logs blank and moves on - the latter is a row-skip, where if any of the fields don't work, the entire record is skipped.My understanding is that some kind of wrapper should be able to fix this. Here's what I tried, with a wrapper:def f(func): def silenceit(): try: func(*args,**kwargs) except: print('Error') return(silenceit)Here's why it doesn't work. 
Call a function that doesn't exist, it doesn't try-catch it away:>>> f(meow())Traceback (most recent call last): File "<stdin>", line 1, in <module>NameError: name 'meow' is not definedBefore I even add a blank return value, I'd like to get it to try-catch correctly. If the function had worked, this would have printed "Error", right? Is a wrapper function the correct approach here?UPDATEI've had a lot of really useful, helpful answers below, and thank you for them---but I've edited the examples I used above to illustrate that I'm trying to catch more than nested key errors, that I'm looking specifically for a function that wraps a try-catch for...When a method doesn't exist. When an object doesn't exist, and is getting a method called on it. When an object that does not exist is being called as an argument to a function. Any combination of any of these things. Bonus, when a function doesn't exist. | You could use a defaultdict and the context manager approach as outlined in Raymond Hettinger's PyCon 2013 presentationfrom collections import defaultdictfrom contextlib import contextmanager@contextmanagerdef ignored(*exceptions): try: yield except exceptions: pass item = defaultdict(str)obj = dict()with ignored(Exception): item['a'] = obj.get(2).get(3) print(item['a'])obj[2] = dict()obj[2][3] = 4with ignored(Exception): item['a'] = obj.get(2).get(3) print(item['a'])
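If you specifically want the decorator form from the question, note two separate problems with the attempt: f(meow()) evaluates meow() before f ever runs, so the wrapper can never catch that NameError (the decorator must wrap the function definition, not the call), and silenceit doesn't accept *args/**kwargs. A sketch that fixes both and returns a default on failure (names here are illustrative):

```python
import functools

def silence(default=""):
    """Return a decorator that swallows exceptions, logs them, and returns `default`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                print(f"Error in {func.__name__}: {exc!r}")
                return default
        return wrapper
    return decorator

@silence(default="")
def get_value(d):
    # any KeyError / AttributeError here is caught by the wrapper
    return d["key"].missing_method()

@silence(default=0)
def half(n):
    return n // 2

print(get_value({}))  # logs the KeyError, returns ""
print(half(10))       # works normally: 5
```

Because the failing expression now runs inside wrapper's try block, all of the listed cases (missing method, missing key, missing attribute) are caught; only a name that doesn't exist at call-site scope, like meow, still fails before the decorator is involved.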
How to combine groupby and sort values How to merge two groupby and sort_values into one df_most_ordered = online_rt.groupby(by=['Country']).sum()df_most_ordered.sort_values(['Quantity'],ascending=False).iloc[1:11] | You can use method chaining:online_rt.groupby(by=["Country"]).sum().sort_values( ["Quantity"], ascending=False).iloc[1:11]
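A self-contained illustration of the chained form (the column values here are invented for the demo):

```python
import pandas as pd

online_rt = pd.DataFrame({
    'Country': ['UK', 'UK', 'France', 'Germany', 'France'],
    'Quantity': [10, 5, 20, 7, 1],
})

# group, sum and sort in one chain instead of two statements
top = (online_rt.groupby(by=['Country'])
       .sum()
       .sort_values(['Quantity'], ascending=False))

print(top)
# France leads with 21, then UK with 15, then Germany with 7
```

The `.iloc[1:11]` in the question then just slices rows 2–11 of this sorted frame (e.g. to skip the top country).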
Is it possible to reuse import code in Python? There are several imports that are common between some files in my project. I would like to reuse this code, concentrating it in a unique file and having just one import in the other files. Is it possible? Or is there another way not to replicate the desired import list in multiple files? | Yes, it's possible. You can create a Python file with the imports and then import that Python file in your code.For example:ImportFile.pyimport pandas as pdimport numpy as npimport osMainCode.py:from ImportFile import *#Here you can use pd,np,os and complete your codeORfrom ImportFile import pd,np#And then use pd and np
Data frame indexing not working as it should be. Does not give an error either. Pandas-Python Let's say we have a dataframe:'''df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8)**2})df'''I am trying to set values of column C to np.nan for a range of values in column C. I am trying to set nan values for all values in C that are less than 2 and greater than 5. I am doing it like this: '''df.loc[(df['C']<2) & (df['C']>5),'C']=np.nan'''It does not give any error or warning but also does nothing and the dataframe remains the same. Does anyone know what's going on? I also tried (not recommended solutions) but they also did not work: ''' df['C'][(df['C']<2) & (df['C']>5)]=np.nandf['C'].loc[(df['C']<2) & (df['C']>5)]=np.nan''' | No value can be less than 2 and greater than 5 at the same time, so the & mask is always False and nothing gets assigned; you need an OR instead of an AND. Please use:df.loc[(df['C']<2) | (df['C']>5),'C']=np.nan
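A quick sketch contrasting the two masks on the question's column C makes the difference visible:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'C': np.arange(8)})

and_mask = (df['C'] < 2) & (df['C'] > 5)   # impossible condition: always False
or_mask = (df['C'] < 2) | (df['C'] > 5)    # matches 0, 1, 6, 7

print(and_mask.sum(), or_mask.sum())  # 0 4

df.loc[or_mask, 'C'] = np.nan
print(df['C'].isna().sum())  # 4 values were replaced
```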
Migrating ctypes function from Python 2 to Python 3 In case this is a XY problem, here is what i want to do:I have a wxPython app, that has to communicate with another process using the WM_COPYDATA windows message. While sending the message with the ctypes module was surprisingly easy, receiving the answer requires me to overwrite the wx loop, since wx does not provide a specific event for this case.On python2, I used the ctypes.windll.user32.SetWindowLongPtrW and the ctypes.windll.user32.CallWindowProcW Functions to get the desired behaviour. However, in python3, the same code leads to OSError: exception: access violation writing.As far as I found out, the only difference between the python2 ctypes module and the python3 ctypes module is how they handle strings.I also read, that there is a difference in how the two version layout the memory, but since I'm no C Expert, I can't find the problem in my code.I have tested the code with python3.7 (64Bit) and python2.7(64Bit) and wx 4.0.7 (though it also works with wx2.8 and python2)Here is minimal reproducible example:import ctypes, ctypes.wintypes, win32con, wx, sys_LPARAM = ctypes.wintypes.LPARAM_WPARAM = ctypes.wintypes.WPARAM_HWND = ctypes.wintypes.HWND_UINT = ctypes.wintypes.UINT_LPCWSTR = ctypes.wintypes.LPCWSTR_LONG_PTR = ctypes.c_long_LRESULT = _LONG_PTR_LPCWSTR = ctypes.wintypes.LPCWSTR_WNDPROC = ctypes.WINFUNCTYPE(_LPARAM, # return Value _HWND, # First Param, the handle _UINT, # second Param, message id _WPARAM, # third param, additional message info (depends on message id) _LPARAM, # fourth param, additional message info (depends on message id))_SetWindowLongPtrW = ctypes.windll.user32.SetWindowLongPtrW_SetWindowLongPtrW.argtypes = (_HWND, ctypes.c_int, _WNDPROC)_SetWindowLongPtrW.restypes = _WNDPROC_CallWindowProc = ctypes.windll.user32.CallWindowProcW_CallWindowProc.argtypes = (_WNDPROC, _HWND, _UINT, _WPARAM, _LPARAM)_CallWindowProc.restypes = _LRESULTdef _WndCallback(hwnd, msg, wparam, lparam): 
print(hwnd, msg, wparam, lparam) return _CallWindowProc(_old_wndproc, hwnd, msg, _WPARAM(wparam), _LPARAM(lparam))_mywndproc = _WNDPROC(_WndCallback)app = wx.App(redirect=False)frame = wx.Frame(None, title='Simple application')frame.Show()_old_wndproc = _WNDPROC( _SetWindowLongPtrW(frame.GetHandle(), win32con.GWL_WNDPROC, _mywndproc ) )if _old_wndproc == 0: print( "Error" ) sys.exit(1)app.MainLoop()Edit: I know that there is a pywin32 module, that could potentially help me. However, since the code works on python2 I'm rather curious what is going on here. | One problem is here:_LONG_PTR = ctypes.c_long_LRESULT = _LONG_PTRThe type LONG_PTR is "an integer the size of a pointer", which varies between 32-bit and 64-bit processes. Since you are using 64-bit Python, pointers are 64-bit and LONG_PTR should be:_LONG_PTR = ctypes.c_longlongIf you want more portable code for 32- and 64-bit, LPARAM is also defined as LONG_PTR in the Windows headers so the below definition would define LONG_PTR correctly for 32-bit and 64-bit Python since ctypes already defines it correctly based on Python's build:_LONG_PTR = ctypes.wintypes.LPARAM # or _LPARAM in your caseAfter that change I tested your script with wxPython and still had an issue. I suspect wxPython is compiled without the UNICODE/_UNICODE definitions so the SetWindowLongPtr and CallWindowProc APIs must use the A version to retrieve and call the old window procedure. 
I made that change and the following code works.Full code tested with 64-bit Python 3.8.8:import ctypes, ctypes.wintypes, win32con, wx, sys_LPARAM = ctypes.wintypes.LPARAM_WPARAM = ctypes.wintypes.WPARAM_HWND = ctypes.wintypes.HWND_UINT = ctypes.wintypes.UINT_LPCWSTR = ctypes.wintypes.LPCWSTR_LONG_PTR = _LPARAM_LRESULT = _LONG_PTR_LPCWSTR = ctypes.wintypes.LPCWSTR_WNDPROC = ctypes.WINFUNCTYPE(_LRESULT, # return Value _HWND, # First Param, the handle _UINT, # second Param, message id _WPARAM, # third param, additional message info (depends on message id) _LPARAM, # fourth param, additional message info (depends on message id))_SetWindowLongPtr = ctypes.windll.user32.SetWindowLongPtrA_SetWindowLongPtr.argtypes = (_HWND, ctypes.c_int, _WNDPROC)_SetWindowLongPtr.restypes = _WNDPROC_CallWindowProc = ctypes.windll.user32.CallWindowProcA_CallWindowProc.argtypes = (_WNDPROC, _HWND, _UINT, _WPARAM, _LPARAM)_CallWindowProc.restypes = _LRESULT@_WNDPROCdef _WndCallback(hwnd, msg, wparam, lparam): print(hwnd, msg, wparam, lparam) return _CallWindowProc(_old_wndproc, hwnd, msg, wparam, lparam)app = wx.App(redirect=False)frame = wx.Frame(None, title='Simple application')frame.Show()_old_wndproc = _WNDPROC(_SetWindowLongPtr(frame.GetHandle(), win32con.GWL_WNDPROC, _WndCallback))if _old_wndproc == 0: print( "Error" ) sys.exit(1)app.MainLoop()As an aside, there is a note about SetWindowLongPtr (and similar for CallWindowProc) in the MSDN documentation that hinted at this solution:The winuser.h header defines SetWindowLongPtr as an alias which automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant. Mixing usage of the encoding-neutral alias with code that is not encoding-neutral can lead to mismatches that result in compilation or runtime errors. For more information, see Conventions for Function Prototypes.
How to convert a list into 3 digits in Python? I need to calculate every 3 digits of my decimal input. I have a code like this:decimal = 136462380542525933949347185849942359177#Encryptione = 79n = 3337def mod(x,y): if (x<y): return x else: c = x%y return c def enkripsi(m): decimalList = [int(i) for i in str(m)] print("Decimal List: ", decimalList) cipher = [] for i in decimalList: cipherElement = mod(i**e, n) cipher.append(cipherElement) return (cipher)c = enkripsi(decimal)print("Decimal Encyption: ", c)The output are:Decimal List: [1, 3, 6, 4, 6, 2, 3, 8, 0, 5, 4, 2, 5, 2, 5, 9, 3, 3, 9, 4, 9, 3, 4, 7, 1, 8, 5, 8, 4, 9, 9, 4, 2, 3, 5, 9, 1, 7, 7]Decimal Encryption: [1, 158, 2086, 2497, 2086, 3139, 158, 2807, 0, 270, 2497, 3139, 270, 3139, 270, 1605, 158, 158, 1605, 2497, 1605, 158, 2497, 1254, 1, 2807, 270, 2807, 2497, 1605, 1605, 2497, 3139, 158, 270, 1605, 1, 1254, 1254]How can i get output Decimal List: [ 136, 462, 380, ...]so that Decimal Encryption: [ 2174, 2504, 3249, ...] ? | you're almost there:def decimal2list(num, length=3): string = str(num) return [int(string[i:i+length]) for i in range(0, len(string), length)] |
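Once the digits are chunked, the encryption step can also use Python's three-argument pow, which computes x**e % n without ever building the huge intermediate power; a sketch using the question's parameters:

```python
decimal = 136462380542525933949347185849942359177
e, n = 79, 3337

def decimal2list(num, length=3):
    """Split the decimal digits of `num` into chunks of `length` digits."""
    s = str(num)
    return [int(s[i:i + length]) for i in range(0, len(s), length)]

def enkripsi(m):
    # pow(x, e, n) is modular exponentiation, i.e. x**e % n done efficiently
    return [pow(block, e, n) for block in decimal2list(m)]

print(decimal2list(decimal)[:3])  # [136, 462, 380]
print(enkripsi(decimal)[:3])
```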
How to improve performance of converting data to json format? I have the following code to convert data (row data from Postgres) to JSON. Usually len(data) = 100 000def convert_to_json(self, data): s3 = self.session.client('s3') infos = { 'videos':[], 'total_count': len(data) } for row in data: video_id = row[0] url = s3.generate_presigned_url( ClientMethod='get_object', Params={ 'Bucket': '...', 'Key': '{}.mp4'.format(video_id) } ) dictionary = { 'id': video_id, 'location': row[1], 'src': url } infos['videos'].append(dictionary) return json.dumps(infos)Thanks for any ideas. | Most of the time in your program is probably wasted waiting for the network. Indeed, you call s3.generate_presigned_url, which sends a request to Amazon, and then you have to wait until the server finally responds. In the meantime there is not much processing you can do.So the biggest potential win is to speed the process up by doing requests in parallel: you send, for instance, 10 requests and then wait for the 10 responses. This article gives a brief introduction on this.Based on your question, and the article, you can use something like the following to speed up the process:from multiprocessing.pool import ThreadPool# ...def fetch_generate_presigned_url(video_id): return s3.generate_presigned_url( ClientMethod='get_object', Params={ 'Bucket': '...', 'Key': '{}.mp4'.format(video_id) } )def convert_to_json(self, data): pool = ThreadPool(processes=10) video_ids = [row[0] for row in data] urls = pool.map(fetch_generate_presigned_url, video_ids) infos = { 'videos':[{'id': row[0],'location': row[1],'src': url} for url,row in zip(urls,data)], 'total_count': len(data) } return json.dumps(infos)
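The speed-up itself can be checked without AWS by substituting a hypothetical slow function for the S3 call; the sketch below (fake_presign is a made-up stand-in, not a boto3 API) shows that ThreadPool.map overlaps the waits while keeping the results in input order:

```python
import time
from multiprocessing.pool import ThreadPool

def fake_presign(video_id):
    """Stand-in for s3.generate_presigned_url: sleeps like a network call."""
    time.sleep(0.05)
    return 'https://example.invalid/{}.mp4'.format(video_id)

video_ids = list(range(20))

start = time.perf_counter()
with ThreadPool(processes=10) as pool:
    urls = pool.map(fake_presign, video_ids)  # order matches video_ids
elapsed = time.perf_counter() - start

print(urls[0])  # https://example.invalid/0.mp4
# serial execution would take ~1 s; 10 threads bring it to ~0.1 s
print(elapsed < 1.0)
```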
Customize ListItem contains colorful label, colorful label background not show? I customize QListView and ListItem, the ListItem contain colorful label,but the colorful label not working?I can't find why the QLabel not show it's color.Demo codefrom qtpy.QtWidgets import *from qtpy.QtCore import *from qtpy.QtGui import *class ListItem(QWidget): def __init__(self, color, info): super().__init__() lay = QHBoxLayout() self._colorLabel = QLabel() self._info = QLabel(info) lay.addWidget(self._colorLabel) lay.addWidget(self._info) self.setLayout(lay) self._colorLabel.setAutoFillBackground(True) self.setLabelColor(color) def setLabelColor(self, color): pal = self._colorLabel.palette() pal.setColor(QPalette.Window, color) self._colorLabel.setPalette(pal)class ListWiget(QListWidget): def _addItem(self, item): tmpItem = QListWidgetItem() tmpItem.setSizeHint(item.sizeHint()) self.addItem(tmpItem) self.setItemWidget(tmpItem, item)app = QApplication([])listW = ListWiget()item = ListItem(Qt.red, "red")item2 = ListItem(Qt.blue, "blue")listW._addItem((item2))listW._addItem(item)listW.show()app.exec()The PictureIt always show white background? | You have to use Base instead of Window:def setLabelColor(self, color): pal = self._colorLabel.palette() pal.setColor(QPalette.Base, color) self._colorLabel.setPalette(pal) |
Output of interactive python to shell variable There is an interactive python script something likedef myfunc(): print("enter value between 1 to 10") i=int(input()) if(i<1 or i>10): print("again") myfunc() else: print(i)I want to store the final output, which is print(i), in a shell variable. Something likepython myFile.py | read aThe above command gets stuck every time I run it. Is it possible to do that?Even though ( read b | python myFile.py ) | read a defeats the purpose of an interactive python function, this doesn't work either. It works if myfunc() is non-interactive (not expecting user input). The function in reality takes some input, manipulates it, and then outputs the result in the required format. I know it would be much easier to use either python or shell, but since I already wrote the python function, I was wondering if it is possible to link both. If yes, is it also possible to add only the final value to the shell variable rather than all the print()sThe same issue happens (terminal gets stuck) when I dopython myFile.py > someFilenameHowever, the file someFilename was created even though the terminal was unresponsive. It seems the shell is starting both processes at the same time, which makes sense. I am guessing if somehow python myfile.py executes independently before opening the pipe it could be possible, but I may be wrong. | If you are working on Linux or other Unix variants, would you please try (note that os.write takes bytes, not str, in Python 3):import osdef myfunc(): tty = os.open("/dev/tty", os.O_WRONLY) os.write(tty, b"enter value between 1 to 10\n") i=int(input()) if(i<1 or i>10): os.write(tty, b"again\n") myfunc() else: print(i)BTW if your shell is bash, it will be better to say:read a < <(python myFile.py)Otherwise read a is invoked in a subshell and the variable a cannot be referenced in the following code.
Pygame. How do I resize a surface and keep all objects within proportionate to the new window size? If I set a pygame window to resizable and then click and drag on the border of the window the window will get larger but nothing blit onto the surface will get larger with it. (Which is understandable) How would I make it so that when I resize a window all blit objects resize with it and fill the window properly?For example: Say I have a window of 200 x 200 and I blit a button at window_width/2 and window_height/2. The button would be in the center of the window at 100 x 100. Now if I resize the window to 300 x 300 the button stays at 100 x 100 instead of 150 x 150. I tried messing around with pygame.Surface.get_width ect, but had no luck.Basically I'm trying to resize a program's window and have all blit images stay proportionate. | Don't draw on the screen directly, but on another surface. Then scale that other surface to size of the screen and blit it on the screen.Here's a simple example:import pygamefrom pygame.locals import *def main(): pygame.init() screen = pygame.display.set_mode((200, 200),HWSURFACE|DOUBLEBUF|RESIZABLE) fake_screen = screen.copy() pic = pygame.surface.Surface((50, 50)) pic.fill((255, 100, 200)) while True: for event in pygame.event.get(): if event.type == QUIT: pygame.display.quit() elif event.type == VIDEORESIZE: screen = pygame.display.set_mode(event.size, HWSURFACE|DOUBLEBUF|RESIZABLE) fake_screen.fill('black') fake_screen.blit(pic, (100, 100)) screen.blit(pygame.transform.scale(fake_screen, screen.get_rect().size), (0, 0)) pygame.display.flip() main() |
Sorting a list of dictionaries by key in Python I have a list of dictionaries in Python which each contain just one numerical key, and I want to sort them by their keys. Example: list = [{.56: 'a'}, {1.0: 'a'}, {.98: 'b'}, {1.0: 'c'}]I want to sort this and return something like this:[{1.0: 'a'}, {1.0: 'c'}, {.98: 'b'}, {.56: 'a'}]In the instance where the key values are the same, I don't care how those are sorted. I've tried using .sort() or .sorted(), but I'm having trouble figuring out the arguments. | This is a simplified version of @dkamins' answer:>>> lst = [{.56: 'a'}, {1.0: 'a'}, {.98: 'b'}, {1.0: 'c'}]>>> sorted(lst, key=max, reverse=True)[{1.0: 'a'}, {1.0: 'c'}, {0.98: 'b'}, {0.56: 'a'}]recall that max(d.keys()) returns the same result as max(d); lambda d: max(d) just wraps another function around the call to max, so we can leave that out
How to fix "Invalid encoding' error in python 3? I was creating a python-based shell where I used one latin-1 character: "└──>". So I tried this:~python 3.8# -*- coding: latin-1 -*-input_prompt = input('''└──> ''')But it gave me error:Invalid encoding 'latin-1'Saving as 'UTF-8'Why does it displays this? I tried code in python 2.7 and same error. How to solve this? | The prompt string is not composed of characters that can be represented in latin-1, hence the error:>>> s = '''└──>'''>>> import unicodedata as ud>>> for c in s:print(ud.name(c))... BOX DRAWINGS LIGHT UP AND RIGHTBOX DRAWINGS LIGHT HORIZONTALBOX DRAWINGS LIGHT HORIZONTALGREATER-THAN SIGN>>> s.encode('latin-1')Traceback (most recent call last): File "<stdin>", line 1, in <module>UnicodeEncodeError: 'latin-1' codec can't encode characters in position 0-2: ordinal not in range(256)Either change the source file encoding to one that can support these characters (such as UTF-8) or use only characters that can be encoded as latin-1. |
How to write data in a specific tab of a spreadsheet with the Google Sheets API in Python? I'm writing data to a Google Sheet using this function:def Export_Data_To_Sheets(df):response_date = service.spreadsheets().values().update( spreadsheetId=SAMPLE_SPREADSHEET_ID_input, valueInputOption='RAW', range=SAMPLE_RANGE_NAME, body=dict( majorDimension='ROWS', values=df.T.reset_index().T.values.tolist()[1:])).execute()print('Sheet successfully Updated')It works well, but I have two tabs in my Google Sheet and I would like to choose which one to write data to. I don't know how I can do this. | At this point in the code:range=SAMPLE_RANGE_NAMEYou can replace this value with a sheet and cell reference, something like:range="Sheet1!A1:D5"ReferenceWriting a Single Range
How to iteratively nest a nested function I have an array arr_multi_dim which is multi-dimensional. Every time when I increase a parameter n, there will be more entries created in the array results and the array will get larger.With each increase in n, I need to perform the function np.concatenate() on the array arr_multi_dim, in such a way that there will be more np.concatenate() function nested every time n increases.For eg., when n=2:arr_multi_dim = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1)when n=3:arr_multi_dim = np.concatenate(np.concatenate( np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1), axis=1), axis=1)when n=4:arr_multi_dim = np.concatenate(np.concatenate( np.concatenate(np.concatenate( np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1), axis=1), axis=1), axis=1), axis=1)etc.where at each increment of n, a pair of np.concatenate() (ie. two) gets added into the function.How do I write a function, loops (or something similar), so that when I specify any values for n, the appropriate np.concatenate() function will be used?Many thanks in advance.Edit:This is the full code that I have written which uses the above np.concatenate() function.from itertools import productfrom joblib import Parallel, delayedfrom functools import reducefrom operator import mulimport numpy as nplst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]arr = np.array(lst)n = 2def test1(arr, n): flat = np.ravel(arr).tolist() gen = (list(a) for a in product(flat, repeat=n)) results = Parallel(n_jobs=-1)(delayed(reduce)(mul, x) for (x) in gen) nrows = arr.shape[0] ncols = arr.shape[1] arr_multi_dim = np.array(results).reshape((nrows, ncols)*n) arr_final = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1) # need to generalise this return arr_finalThe above code only works for n=2. I am trying to generalize the np.concatenate part of the code so that it would work for any n as mentioned above. 
| If I understood you correctly, it's pretty simple:arr_multi_dim = resultsfor i in range(n): if i < 2: arr_multi_dim = np.concatenate(arr_multi_dim , axis=1) else: arr_multi_dim = np.concatenate(np.concatenate(arr_multi_dim , axis=1), axis=1)because the first two iterations only add a single layer while the rest add two layers
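Since the nesting just applies np.concatenate(..., axis=1) a fixed number of times — two calls at n=2 and two more for each further increment, i.e. 2*n - 2 calls in total — the same generalisation can also be written as a plain loop; a sketch on a stand-in array of the right shape:

```python
import numpy as np

n = 2
nrows = ncols = 3
# stand-in for np.array(results).reshape((nrows, ncols) * n)
arr_multi_dim = np.arange((nrows * ncols) ** n).reshape((nrows, ncols) * n)

for _ in range(2 * n - 2):
    arr_multi_dim = np.concatenate(arr_multi_dim, axis=1)

print(arr_multi_dim.shape)  # (9, 9): the 2n-dimensional array is flattened to 2-D
```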
How to get the stack trace of a nested exception in python? If an exception is raised I'd like to analyse the stack trace in python that tells about where exactly the problem is in the source code file.Of course for that purpose the module traceback exists. And that works fine for regular exceptions. But how do you deal with this situation if nested exceptions occur?Consider this example:def test(): try: a = 0 b = 5 / a except Exception as ee1: assert Falsetest()This example prints two exceptions:Traceback (most recent call last): File "./test4d.py", line 12, in test b = 5 / aZeroDivisionError: division by zeroDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "./test4d.py", line 18, in <module> test() File "./test4d.py", line 14, in test assert FalseAssertionErrorSo information about both exceptions is known to the interpreter. I'd therefore like to retrieve these two pieces of information from Python: The stack trace of the assert statement (used as an example here to cause an exception) and the stack trace of the division by zero exception (used as an example here to cause an exception). How can I do that?And the second part of the question: How can I do that in a structured way? The module traceback can be used to get more information about an existing exception. But I do not want to print the exception, I want to store it. Therefore I'd like to get the stack trace as a tuple of instances of some class where each instance represents the data within each stack frame. How can I do that? | There is a variable named __context__ associated with an exception. This variable can be used to access nested exceptions.
See this example:import tracebackdef test(): try: a = 0 b = 5 / a except Exception as ee1: assert Falsetry: test()except Exception as ee: print(repr(ee)) stackTraceList = traceback.extract_stack(ee.__traceback__.tb_frame) del stackTraceList[0] for frame in stackTraceList: print("\t", frame) if ee.__context__: print(repr(ee.__context__)) stackTraceList = traceback.extract_stack(ee.__context__.__traceback__.tb_frame) del stackTraceList[0] for frame in stackTraceList: print("\t", frame)This will output the following text:AssertionError()ZeroDivisionError('division by zero',) <FrameSummary file ./example.py, line 8 in test>That indicates that both exceptions can be identified and their stack traces can be iterated through.For convenience I implemented a simple helper module to process exceptions and stack traces named jk_exceptionhelper. You can install this module using pip. For details have a look at the GIT repository: https://github.com/jkpubsrc/python-module-jk-exceptionhelper |
Web-scraping articles from WSJ using Beautifulsoup in python 3.7? I am trying to scrape articles from the Wall Street Journal using Beautifulsoup in Python. However, the code which I am running is executing without any error (exit code 0) but no results. I don't understand what is happening? Why this code is not giving expected results.I even have paid a subscription.I know that something is not right but I can't locate the problem.import timeimport requestsfrom bs4 import BeautifulSoupurl = 'https://www.wsj.com/search/term.html?KEYWORDS=cybersecurity&min-date=2018/04/01&max-date=2019/03/31' \ '&isAdvanced=true&daysback=90d&andor=AND&sort=date-desc&source=wsjarticle,wsjpro&page={}'pages = 32for page in range(1, pages+1): res = requests.get(url.format(page)) soup = BeautifulSoup(res.text,"lxml") for item in soup.select(".items.hedSumm li > a"): resp = requests.get(item.get("href")) _href = item.get("href") try: resp = requests.get(_href) except Exception as e: try: resp = requests.get("https://www.wsj.com" + _href) except Exception as e: continue sauce = BeautifulSoup(resp.text,"lxml") date = sauce.select("time.timestamp.article__timestamp.flexbox__flex--1") date = date[0].text tag = sauce.select("li.article-breadCrumb span").text title = sauce.select_one("h1.wsj-article-headline").text content = [elem.text for elem in sauce.select("p.article-content")] print(f'{date}\n {tag}\n {title}\n {content}\n') time.sleep(3)As I wrote in the code, I am trying to scrape date, title, tag, and content of all the articles. It would be helpful if I can get suggestions about my mistakes, what should I do to get the desired results. 
| Replace your code :resp = requests.get(item.get("href"))To:_href = item.get("href")try: resp = requests.get(_href)except Exception as e: try: resp = requests.get("https://www.wsj.com"+_href) except Exception as e: continueBecause most of item.get("href") is not providing proper website url for eg you are getting url like this./news/types/national-security/public/page/news-financial-markets-stock.htmlhttps://www.wsj.com/news/worldOnly https://www.wsj.com/news/world is a valid website URL. so you need to concate base URL with _href.Update:import timeimport requestsfrom bs4 import BeautifulSoupfrom bs4.element import Tagurl = 'https://www.wsj.com/search/term.html?KEYWORDS=cybersecurity&min-date=2018/04/01&max-date=2019/03/31' \ '&isAdvanced=true&daysback=90d&andor=AND&sort=date-desc&source=wsjarticle,wsjpro&page={}'pages = 32for page in range(1, pages+1): res = requests.get(url.format(page)) soup = BeautifulSoup(res.text,"lxml") for item in soup.find_all("a",{"class":"headline-image"},href=True): _href = item.get("href") try: resp = requests.get(_href) except Exception as e: try: resp = requests.get("https://www.wsj.com"+_href) except Exception as e: continue sauce = BeautifulSoup(resp.text,"lxml") dateTag = sauce.find("time",{"class":"timestamp article__timestamp flexbox__flex--1"}) tag = sauce.find("li",{"class":"article-breadCrumb"}) titleTag = sauce.find("h1",{"class":"wsj-article-headline"}) contentTag = sauce.find("div",{"class":"wsj-snippet-body"}) date = None tagName = None title = None content = None if isinstance(dateTag,Tag): date = dateTag.get_text().strip() if isinstance(tag,Tag): tagName = tag.get_text().strip() if isinstance(titleTag,Tag): title = titleTag.get_text().strip() if isinstance(contentTag,Tag): content = contentTag.get_text().strip() print(f'{date}\n {tagName}\n {title}\n {content}\n') time.sleep(3)O/P:March 31, 2019 10:00 a.m. 
ET Tech Care.com Removes Tens of Thousands of Unverified Listings The online child-care marketplace Care.com scrubbed its site of tens of thousands of unverified day-care center listings just before a Wall Street Journal investigation published March 8, an analysis shows. Care.com, the largest site in the U.S. for finding caregivers, removed about 72% of day-care centers, or about 46,594 businesses, listed on its site, a Journal review of the website shows. Those businesses were listed on the site as recently as March 1....Updated March 29, 2019 6:08 p.m. ET Politics FBI, Retooling Once Again, Sets Sights on Expanding Cyber Threats The FBI has launched its biggest transformation since the 2001 terror attacks to retrain and refocus special agents to combat cyber criminals, whose threats to lives, property and critical infrastructure have outstripped U.S. efforts to thwart them. The push comes as federal investigators grapple with an expanding range of cyber attacks sponsored by foreign adversaries against businesses or national interests, including Russian election interference and Chinese cyber thefts from American companies, senior bureau executives... |
Value Error: could not convert string to float: 'good' I am trying to fit a decision tree model with the training dataset, but I'm getting this error:credit_df=pd.read_csv('credit.csv')credit_df.head()X = credit_df.drop("default" , axis=1)Y=credit_df.pop("default")from sklearn.model_selection import train_test_splitX_train, X_test, train_labels, test_labels = train_test_split(X, y, test_size=.30, random_state=1)dt_model = DecisionTreeClassifier(criterion = 'gini' )dt_model.fit(X_train, train_labels) | I tried the code below and now the error is fixed. There were some object data types and I converted them into categorical values:for feature in credit_df.columns: if credit_df[feature].dtype == 'object': credit_df[feature] = pd.Categorical(credit_df[feature]).codes
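What the loop does is replace each string column with integer codes; a toy illustration (the column name is invented for the demo):

```python
import pandas as pd

credit_df = pd.DataFrame({'checking_balance': ['good', 'bad', 'good', 'unknown']})

for feature in credit_df.columns:
    if credit_df[feature].dtype == 'object':
        credit_df[feature] = pd.Categorical(credit_df[feature]).codes

# categories are sorted alphabetically: bad -> 0, good -> 1, unknown -> 2
print(credit_df['checking_balance'].tolist())  # [1, 0, 1, 2]
```

Note that the codes depend on the set of unique values present, so the same mapping is only reproducible if the categories are identical between runs; saving the category list (or using sklearn's LabelEncoder) is safer if the mapping must be reused at prediction time.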
Highlight or bold strings in a text file using python-docx? I have a list of 'short strings', such as:['MKWVTFISLLLLFSSAYSRGV', 'SSAYSRGVFRRDTHKSEIAH', 'KPKATEEQLKTVMENFVAFVDKCCA']That I need to match to a 'long string' contained in a word file (BSA.docx) or .txt file (does not matter) such as:sp|P02769|ALBU_BOVIN Albumin OS=Bos taurus OX=9913 GN=ALB PE=1 SV=4MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGEEHFKGLVLIAFSQYLQQCPFDEHVKLVNELTEFAKTCVADESHAGCEKSLHTLFGDELCKVASLRETYGDMADCCEKQEPERNECFLSHKDDSPDLPKLKPDPNTLCDEFKADEKKFWGKYLYEIARRHPYFYAPELLYYANKYNGVFQECCQAEDKGACLLPKIETMREKVLASSARQRLRCASIQKFGERALKAWSVARLSQKFPKAEFVEVTKLVTDLTKVHKECCHGDLLECADDRADLAKYICDNQDTISSKLKECCDKPLLEKSHCIAEVEKDAIPENLPPLTADFAEDKDVCKNYQEAKDAFLGSFLYEYSRRHPEYAVSVLLRLAKEYEATLEECCAKDDPHACYSTVFDKLKHLVDEPQNLIKQNCDQFEKLGEYGFQNALIVRYTRKVPQVSTPTLVEVSRSLGKVGTRCCTKPESERMPCTEDYLSLILNRLCVLHEKTPVSEKVTKCCTESLVNRRPCFSALTPDETYVPKAFDEKLFTFHADICTLPDTEKQIKKQTALVELLKHKPKATEEQLKTVMENFVAFVDKCCAADDKEACFAVEGPKLVVSTQTALAWhat I would like to obtain is the following using python (in a terminal or in a jupyter notebook):Highlight shorter strings matches in the long string. The highlight style is not important, it can be highlighted with a yellow marker or bolded, or underline, anything that jump to the eyes to see if there were matches or not.Find the coverage of the long string as ((number of highlighted characters)/(total length of the long string))*100. 
Note the first line starting with ">>" of the long string is just an identifier and needs to be disregarded.Here is my current code for the first task:from docx import Documentdoc = Document('BSA.docx')peptide_list = ['MKWVTFISLLLLFSSAYSRGV', 'SSAYSRGVFRRDTHKSEIAH', 'KPKATEEQLKTVMENFVAFVDKCCA']def highlight_peptides(text, keywords): text = text.paragraphs[1].text replacement = "\033[91m" + "\\1" + "\033[39m" text = re.sub("(" + "|".join(map(re.escape, keywords)) + ")", replacement, text, flags=re.I) highlight_peptides(doc, peptide_list)The problem is that the first two short strings in the list are overlapping and in the results only the first one is highlighted in red in the sequence.See the first link below, that contains the output result I am obtaining.current resultSee this second link to visualize my 'ideal' result.ideal resultIn the ideal I also included the second task of finding the sequence coverage. I am not sure how to count the colored or highlighted characters. | You can use the third-party regex module to do an overlapping keyword search. Then, it is perhaps easiest to go through the matches in 2 passes: (1) storing the start and end positions of each highlighted segment and combining any that overlap:import regex as re # important - not using the usual re moduledef find_keywords(keywords, text): """ Return a list of positions where keywords start or end within the text. Where keywords overlap, combine them.
""" pattern = "(" + "|".join(re.escape(word) for word in keywords) + ")" r = [] for match in re.finditer(pattern, text, flags=re.I, overlapped=True): start, end = match.span() if not r or start > r[-1]: r += [start, end] # add new segment elif end > r[-1]: r[-1] = end # combine with previous segment return rpositions = find_keywords(keywords, text)Your 'keyword coverage' (percent highlighted) can be calculated as:coverage = sum(positions[1::2]) - sum(positions[::2]) # sum of end positions - sum of start positionspercent_coverage = coverage * 100 / len(text)Then (2) to add the formatting to the text, using the run properties in docx:import docxdef highlight_sections_docx(positions, text): """ Add characters to a text to highlight the segments indicated by a list of alternating start and end positions """ document = docx.Document() p = document.add_paragraph() for i, (start, end) in enumerate(zip([None] + positions, positions + [None])): run = p.add_run(text[start:end]) if i % 2: # odd segments are highlighted run.bold = True # or add other formatting - see https://python-docx.readthedocs.io/en/latest/api/text.html#run-objects return documentdoc = highlight_sections_docx(positions, text)doc.save("my_word_doc.docx")Alternatively, you could highlight the text in html, and then save this to a Word document using the htmldocx package:def highlight_sections(positions, text, start_highlight="<mark>", end_highlight="</mark>"): """ Add characters to a text to highlight the segments indicated by a list of alternating start and end positions """ r = "" for i, (start, end) in enumerate(zip([None] + positions, positions + [None])): if i % 2: # odd segments are highlighted r += start_highlight + text[start:end] + end_highlight else: # even segments are not r += text[start:end] return rfrom htmldocx import HtmlToDocxs = highlight_sections(positions, text, start_highlight="<strong>", end_highlight="</strong>")html = f"""<html><head></head><body><span style="width:100%; 
word-wrap:break-word; display:inline-block;">{s}</span></body></html>"""HtmlToDocx().parse_html_string(html).save("my_word_doc.docx")(<mark> would be a more appropriate html tag to use than <strong>, but unfortunately HtmlToDocx does not preserve any formatting of <mark>, and ignores CSS styles).highlight_sections can also be used to output to the console:print(highlight_sections(positions, text, start_highlight="\033[91m", end_highlight="\033[39m"))... or to a Jupyter / IPython notebook:from IPython.core.display import HTMLs = highlight_sections(positions, text)display(HTML(f"""<span style="width:100%; word-wrap:break-word; display:inline-block;">{s}</span>"""))
Qt Designer Auto Fitting the tableWidget directly from the Designer I am trying to auto fit the tableWidget columns to the available area of the tableWidget.Currently my layout looks like the picture below. As can be seen there is unnecessary white space to the right which I would like to fill out evenly between the four columns.Setting horizontalHeaderStretch to True (shown below) is not what I am after. This stretches the last column unevenly.I tried to set the sizeAdjustPolicy to AdjustToContents but I saw no visible difference.A similar question can be found here, however I couldn't find any mentions on how to do this directly from the designer.Any suggestions on how this can be done? Thanks in advance. | The feature you indicate cannot be done with Qt Designer. In Qt Designer, only the Q_PROPERTY enabled by the DESIGNABLE flag can be modified, but setSectionResizeMode() is not a Q_PROPERTY but a method of QHeaderView, as indicated in the docs: The DESIGNABLE attribute indicates whether the property should be visible in the property editor of GUI design tool (e.g., Qt Designer). Most properties are DESIGNABLE (default true). Instead of true or false, you can specify a boolean member function.So you'll have to do it programmatically:header = self.table.horizontalHeader() header.setSectionResizeMode(0, QtWidgets.QHeaderView.Stretch)header.setSectionResizeMode(1, QtWidgets.QHeaderView.ResizeToContents)header.setSectionResizeMode(2, QtWidgets.QHeaderView.ResizeToContents)# ...
Kosaraju's Algorithm for SCCs, non-recursive I have an implementation of Kosaraju's algorithm for finding SCCs in Python. The code below contains a recursive (fine on the small test cases) version and a non-recursive one (which I ultimately need because of the size of the real dataset).I have run both the recursive and non-recursive version on a few test datasets and get the correct answer. However running it on the much larger dataset that I ultimately need to use, produces the wrong result. Going through the real data is not really an option because it contains nearly a million nodes.My problem is that I don't know how to proceed from here. My suspicion is that I either forgot a certain case of graph constellation in my test cases, or that I have a more fundamental misunderstanding about how this algorithm is supposed to work.#!/usr/bin/env python3import heapqclass Node(): """A class to represent nodes in a DirectedGraph. It has attributes for performing DFS.""" def __init__(self, i): self.id = i self.edges = [] self.rev_edges = [] self.explored = False self.fin_time = 0 self.leader = 0 def add_edge(self, edge_id): self.edges.append(edge_id) def add_rev_edge(self, edge_id): self.rev_edges.append(edge_id) def mark_explored(self): self.explored = True def set_leader(self, leader_id): self.leader = leader_id def set_fin_time(self, fin_time): self.fin_time = fin_timeclass DirectedGraph(): """A class to represent directed graphs via the adjacency list approach.
Each dictionary entry is a Node.""" def __init__(self, length, list_of_edges): self.nodes = {} self.nodes_by_fin_time = {} self.length = length self.fin_time = 1 # counter for the finishing time self.leader_count = 0 # counter for the size of leader nodes self.scc_heapq = [] # heapq to store the ssc by size self.sccs_computed = False for n in range(1, length + 1): self.nodes[str(n)] = Node(str(n)) for n in list_of_edges: ns = n[0].split(' ') self.nodes[ns[0]].add_edge(ns[1]) self.nodes[ns[1]].add_rev_edge(ns[0]) def n_largest_sccs(self, n): if not self.sccs_computed: self.compute_sccs() return heapq.nlargest(n, self.scc_heapq) def compute_sccs(self): """First compute the finishing times and the resulting order of nodes via a DFS loop. Second use that new order to compute the SCCs and order them by their size.""" # Go through the given graph in reverse order, computing the finishing # times of each node, and create a second graph that uses the finishing # times as the IDs. i = self.length while i > 0: node = self.nodes[str(i)] if not node.explored: self.dfs_fin_times(str(i)) i -= 1 # Populate the edges of the nodes_by_fin_time for n in self.nodes.values(): for e in n.edges: e_head_fin_time = self.nodes[e].fin_time self.nodes_by_fin_time[n.fin_time].add_edge(e_head_fin_time) # Use the nodes ordered by finishing times to calculate the SCCs. i = self.length while i > 0: self.leader_count = 0 node = self.nodes_by_fin_time[str(i)] if not node.explored: self.dfs_leaders(str(i)) heapq.heappush(self.scc_heapq, (self.leader_count, node.id)) i -= 1 self.sccs_computed = True def dfs_fin_times(self, start_node_id): stack = [self.nodes[start_node_id]] # Perform depth-first search along the reversed edges of a directed # graph. While doing this populate the finishing times of the nodes # and create a new graph from those nodes that uses the finishing times # for indexing instead of the original IDs. 
while len(stack) > 0: curr_node = stack[-1] explored_rev_edges = 0 curr_node.mark_explored() for e in curr_node.rev_edges: rev_edge_head = self.nodes[e] # If the head of the rev_edge has already been explored, ignore if rev_edge_head.explored: explored_rev_edges += 1 continue else: stack.append(rev_edge_head) # If the current node has no valid, unexplored outgoing reverse # edges, pop it from the stack, populate the fin time, and add it # to the new graph. if len(curr_node.rev_edges) - explored_rev_edges == 0: sink_node = stack.pop() # The fin time is 0 if that node has not received a fin time. # Prevents dealing with the same node twice here. if sink_node and sink_node.fin_time == 0: sink_node.set_fin_time(str(self.fin_time)) self.nodes_by_fin_time[str(self.fin_time)] = \ Node(str(self.fin_time)) self.fin_time += 1 def dfs_leaders(self, start_node_id): stack = [self.nodes_by_fin_time[start_node_id]] while len(stack) > 0: curr_node = stack.pop() curr_node.mark_explored() self.leader_count += 1 for e in curr_node.edges: if not self.nodes_by_fin_time[e].explored: stack.append(self.nodes_by_fin_time[e])###### Recursive verions below ################################### def dfs_fin_times_rec(self, start_node_id): curr_node = self.nodes[start_node_id] curr_node.mark_explored() for e in curr_node.rev_edges: if not self.nodes[e].explored: self.dfs_fin_times_rec(e) curr_node.set_fin_time(str(self.fin_time)) self.nodes_by_fin_time[str(self.fin_time)] = Node(str(self.fin_time)) self.fin_time += 1 def dfs_leaders_rec(self, start_node_id): curr_node = self.nodes_by_fin_time[start_node_id] curr_node.mark_explored() for e in curr_node.edges: if not self.nodes_by_fin_time[e].explored: self.dfs_leaders_rec(e) self.leader_count += 1To run:#!/usr/bin/env python3import utilsfrom graphs import scc_computation# data = utils.load_tab_delimited_file('data/SCC.txt')data = utils.load_tab_delimited_file('data/SCC_5.txt')# g = scc_computation.DirectedGraph(875714, data)g = 
scc_computation.DirectedGraph(11, data)g.compute_sccs()# for e, v in g.nodes.items():# print(e, v.fin_time)# for e, v in g.nodes_by_fin_time.items():# print(e, v.edges)print(g.n_largest_sccs(20))Most complex test case (SCC_5.txt):1 51 42 32 112 63 74 24 84 105 75 55 36 86 117 98 28 89 310 111 911 6Drawing of that test case: https://imgur.com/a/LA3ObpNThis produces 4 SCCs:Bottom: Size 4, nodes 2, 8, 6, 11 Left: Size 3, nodes 1, 10, 4 Top: Size 1, node 5 Right: Size 3, nodes 7, 3, 9 | Ok, I figured out the missing cases. The algorithm wasn't performing correctly on very strongly connected graphs and duplicated edges. Here is an adjusted version of the test case I posted above with a duplicated edge and more edges to turn the whole graph into one big SCC.1 51 42 32 62 113 23 74 24 84 105 15 35 55 76 87 98 28 28 48 89 310 111 911 6 |
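For anyone debugging a similar case, here is a compact, fully iterative Kosaraju sketch (not the poster's class design) that stores edges in sets, so duplicated edges and self-loops are harmless, and that reproduces the four SCCs of the 11-node test case in the question:

```python
from collections import defaultdict

def kosaraju_sccs(n, edges):
    """SCCs of a directed graph on nodes 1..n, with both DFS passes iterative."""
    fwd, rev = defaultdict(set), defaultdict(set)
    for u, v in edges:            # sets silently absorb duplicated edges
        fwd[u].add(v)
        rev[v].add(u)
    # Pass 1: DFS on the reversed graph, recording the finishing order.
    seen, order = set(), []
    for s in range(1, n + 1):
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(rev[s]))]
        while stack:
            node, it = stack[-1]
            child = next((c for c in it if c not in seen), None)
            if child is None:              # all children done: node finishes
                order.append(stack.pop()[0])
            else:
                seen.add(child)
                stack.append((child, iter(rev[child])))
    # Pass 2: DFS on the forward graph, in decreasing finishing time.
    seen.clear()
    sccs = []
    for s in reversed(order):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            node = stack.pop()
            comp.append(node)
            for c in fwd[node]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        sccs.append(comp)
    return sccs

# The 11-node test case from the question (with a duplicated 8->2 edge kept in):
EDGES = [(1, 5), (1, 4), (2, 3), (2, 11), (2, 6), (3, 7), (4, 2), (4, 8),
         (4, 10), (5, 7), (5, 5), (5, 3), (6, 8), (6, 11), (7, 9), (8, 2),
         (8, 2), (8, 8), (9, 3), (10, 1), (11, 9), (11, 6)]
SCCS = kosaraju_sccs(11, EDGES)
print(sorted(sorted(c) for c in SCCS))
```

Like the poster's code, this runs the first pass on the reversed graph and the second on the forward graph, which is a valid variant since a graph and its transpose share the same SCCs.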
Python Compare Dataframe columns and replace with contents based on prefix Still relatively new to working in python and am having some issues.I currently have a small program that takes csv files, merges them, puts them into a data frame, and then converts to excel.What I want to do is match the values of 'Team' and 'Abrev' from the data frame columns based on the prefix of its values, and then replace the Team column with the 'Abrev' column contents.Team Games Points AbrevArsenal 38 87 ARSLiverpool 38 80 LIVManchester 38 82 MANNewcastle 38 73 NEWI would like it to eventually look like the following:Team Games Points ARS 38 87 LIV 38 80 MAN 38 82 NEW 38 73 So what I'm thinking is that I need a for loop to iterate through the amount of rows in the dataframe, and then I need a way to compare the contents by the prefix in column Abrev. If the first three letters match then replace, but I don't know how to go about it because I am trying not to hard code it.Can someone help or point me in the right direction? | pandas is what you are looking for:import pandas as pddf = pd.read_csv('input.csv')df['Team'] = df['Abrev']df.drop('Abrev', axis=1, inplace=True)df.to_excel('output.xlsx')
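On the sample data from the question (values assumed from the table shown), overwriting Team in place looks like this sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "Team": ["Arsenal", "Liverpool", "Manchester", "Newcastle"],
    "Games": [38, 38, 38, 38],
    "Points": [87, 80, 82, 73],
    "Abrev": ["ARS", "LIV", "MAN", "NEW"],
})

df["Team"] = df["Abrev"]          # overwrite Team with the abbreviation
df = df.drop(columns="Abrev")     # Abrev is now redundant
print(df)
```

No loop or prefix matching is needed here, because each row already carries its own abbreviation in the Abrev column.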
Pandas pd.concat works on first pass but says 'No objects to concat' on subsequent passes I have an interesting problem with Python PANDAS leveraging concat.On the first pass everything works fine on the subsequent passes I receive "No objects to concat". It doesn't make sense because it's looking at the same "CSV's" on each run so in theory there should always be something to "concat" What I am doing:I have a function that looks at incoming URL data opens a csv with two columns and pulls the first column where second column matches the URL data.Example CSV: Two columns:Test | URLTest 2 | URLCode I am using: path = r'./resources/URL' # location of CSV's allFiles = glob.glob(path + "/*.csv") list_ = [] for file_ in allFiles: data = pd.read_csv(file_, index_col=None, header=0) list_.append(data) df = pd.concat(list_, axis=0, ignore_index=True) search = df[df['URL'].str.contains(":" + groupid.group(1))] df1 = search[['Column1']] for index, row in df1.iterrows(): data = ('{0}'.format(row['Column1'])) newid = idgrab(data)# Pass data off to another functionAny idea what might be going on here? Even if I pass the same data over the function multiple times I receive the same error after the initial run. | Your list_ is empty which is what is throwing that error. You should look at the csv's in allFiles. Are you moving the csv's or are they getting renamed in the directory? |
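A hedged sketch of a guard that turns the confusing ValueError into an explicit message when the glob comes back empty (for example because the working directory changed between runs):

```python
import glob
import pandas as pd

def load_all_csvs(path):
    """Concatenate every CSV under `path`, failing loudly if none are found."""
    all_files = glob.glob(path + "/*.csv")
    if not all_files:
        # An empty list here is exactly what makes pd.concat raise
        # "No objects to concatenate".
        raise FileNotFoundError("no CSV files matched " + path + "/*.csv")
    return pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)
```

Using an absolute path instead of the relative `./resources/URL` also removes the dependence on where the script happens to be launched from.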
Python: split into lines and remove specific line based on search i have a csv file like below, and with my little python knowledge i am trying to split its content into lines based "sec" as start field and remove specific lines which has field with sip:+99*, sip:+88*, sip:+77*.cat text.csvsec,sip:+1111,2222,3333,4444,5555,sec,6666,sip:+7777,8888,sec,sip:+9999,1000,1100,110,1200,1300,1400required output is lines, where ever string "sec" is matched, and remove specific lines where ever any line with field started with sip:+99*, sip:+88* and sip:+77* (any numbers after sip:+99xxxx)required output after split:sec,sip:+1111,2222,3333,4444,5555sec,6666,sip:+7777,8888sec,sip:+9999,1000,1100,1100,1200,1300,1400required output after removing lines with field match:sec,sip:+1111,2222,3333,4444,5555i have already tried python code using csv, re modules, but no luck.i am new to python programming, please help. | Python:import res = 'sec,sip:+1111,2222,3333,4444,5555,sec,6666,sip:+7777,8888,sec,sip:+9999,1000,1100,110,1200,1300,1400'pos = [m.start() for m in re.finditer('sec', s)]i = 0start_idx = end_idx = Noneraw_data = []while i < len(pos)-1: start_idx = pos[i] end_idx = pos[i+1]-1 raw_data.append(s[start_idx:end_idx]) i = i + 1start_idx = pos[i]end_idx = len(s)raw_data.append(s[start_idx:end_idx])print('%s' % '\n'.join(map(str, raw_data)))p = re.compile(r'sip:\+(?!([7]{2,}|[8]{2,}|[9]{2,})).*')result = [ s for s in raw_data if p.search(s) ]print('\n%s' % '\n '.join(map(str, result)))Output after split:sec,sip:+1111,2222,3333,4444,5555sec,6666,sip:+7777,8888sec,sip:+9999,1000,1100,110,1200,1300,1400Output after filter with regular expression:sec,sip:+1111,2222,3333,4444,5555 |
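For comparison, a shorter route (assumes Python 3.7+, where re.split accepts zero-width patterns): split the string just before every sec, then drop any record containing a sip:+77/88/99 field:

```python
import re

s = ('sec,sip:+1111,2222,3333,4444,5555,sec,6666,sip:+7777,8888,'
     'sec,sip:+9999,1000,1100,110,1200,1300,1400')

# split at the zero-width position right before each "sec"
records = [r.rstrip(',') for r in re.split(r'(?=\bsec\b)', s) if r]
kept = [r for r in records if not re.search(r'sip:\+(?:77|88|99)', r)]

print('\n'.join(records))
print('\n'.join(kept))
```

The filter assumes any sip field starting with two repeated 7s, 8s, or 9s should drop the whole record, matching the question's examples.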
C++ python module (based on Pybind11) import error: ModuleNotFoundError A C++ python module based on the pybind11 library cannot be imported anymore in python. It was working until a few weeks ago but not any more (maybe since the installation of miniconda). I cannot track the exact point when this stopped working as I had not used it for many weeks. I start python in the same directory as the module, and tried to import it in the terminal. And I get the error:ModuleNotFoundError: No module named 'ld_pybind_d'In the meanwhile I also tried:deleted the directory where miniconda was installed, rebuilt the module and linked it against the python3.6m library. Created an empty __init__.py file in the module directoryExported the current working directory in the PYTHONPATH environment variableOther info: Built a 64 bit version module and I also have 64 bit version pythonPython version 3.6.8Nothing works .. Your help is very appreciated ... | As is many times the case, if nothing really works, a restart will do. I did that and it seems to have solved the problem. Nevertheless, I'm just relieved it is loading now.
How To Extract Three Letters Followed By Five Digits Using Regex in Python I have the following dataframe in Python:abc12345 abc1234abc1324.How do I extract only the ones that have three letters followed by five digits? The desired result would be:abc12345.df.column.str.extract('[^0-9](\d\d\d\d\d)$')I think this works, but is there any better way to modify (\d\d\d\d\d) ?What if I had like 30 digits. Then I'll have to type \d 30 times, which is inefficient. | You should be able to use:'[a-zA-Z]{3}\d{5}'If the strings don't include capital letters this can reduce to:'[a-z]{3}\d{5}'Change the values in the {x} to adjust the number of chars to capture. |
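With pandas this can be applied to a whole column; str.fullmatch (available in pandas 1.1+) anchors both ends for you, avoiding partial hits such as a 3-letter/6-digit string matching on its first five digits:

```python
import pandas as pd

s = pd.Series(["abc12345", "abc1234", "abc1324", "ab123456"])
matches = s[s.str.fullmatch(r"[A-Za-z]{3}\d{5}")]
print(list(matches))
```

For 30 digits you would simply write `\d{30}`; the `{n}` quantifier is the general replacement for repeating `\d` by hand.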
Elegant way to drop records in pandas based on size/count of a record This isn't a duplicate. I am not trying to drop rows based on IndexI have a dataframe as shown belowdf = pd.DataFrame({'subject_id':[1,1,1,1,1,1,1,2,2,2,2,2],'time_1' :['2173-04-03 12:35:00','2173-04-03 12:50:00','2173-04-05 12:59:00','2173-05-04 13:14:00','2173-05-05 13:37:00','2173-07-06 13:39:00','2173-07-08 11:30:00','2173-04-08 16:00:00','2173-04-09 22:00:00','2173-04-11 04:00:00','2173-04-13 04:30:00','2173-04-14 08:00:00'],'val' :[5,2,3,1,1,6,5,5,8,3,4,6]})df['time_1'] = pd.to_datetime(df['time_1'])df['day'] = df['time_1'].dt.dayI would like to drop records based on subject_id if their count is <=5.This is what I trieddf1 = df.groupby(['subject_id']).size().reset_index(name='counter')df1[df1['counter']>5] # this gives the valid subject_id = 1 (count more than 5)Now using this subject_id, I have to get the base dataframe rows for that subject_id.There might be an elegant way to do this.I would like to get the output as shown below, keeping my base dataframe rows | Use:df[df.groupby('subject_id')['subject_id'].transform('size')>5]Output: subject_id time_1 val day0 1 2173-04-03 12:35:00 5 31 1 2173-04-03 12:50:00 2 32 1 2173-04-05 12:59:00 3 53 1 2173-05-04 13:14:00 1 44 1 2173-05-05 13:37:00 1 55 1 2173-07-06 13:39:00 6 66 1 2173-07-08 11:30:00 5 8
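The same result can also be had with groupby(...).filter, which reads closer to the stated requirement but is generally slower than transform on large frames (data reduced to the essentials here):

```python
import pandas as pd

df = pd.DataFrame({'subject_id': [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'val':        [5, 2, 3, 1, 1, 6, 5, 5, 8, 3, 4, 6]})

# keep only the groups with more than 5 rows
out = df.groupby('subject_id').filter(lambda g: len(g) > 5)
print(out)
```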
Implementation of Python code which uses Tensorflow library into HTML? Machine learning beginner here. I've been following the tensorflow text classification tutorial. I have code which uses a trained keras model to classify movie reviews based on user inputted text.My main question is this: How do I integrate this code into html so that I can create a website which takes in user text and classifies it using the python code?I'm unfamiliar with tensorflow.js, and converting the model over doesn't transfer the keras dataset.Is there some web framework which can support the tensorflow library, or any library for that matter? Or should I give up on this endeavor and just transfer the model into tensorflow.js? | You can use Flask to make a Web App that gets the data via a Form POST and do you thing with the tensorflow and display the results in another Page.Something Likefrom flask import Flask, render_template, requestapp = Flask(__name__)@app.route('/')def homepage(): return render_template('index.html')@app.route('/classify', methods=['POST'])def classify(): text = request.form['name_of_text_input_in_index.html'] # Call your tensorflow function with the text result = classify_with_tensorflow(text) return render_template('result.html', result = result)if __name__ == '__main__': app.run(debug = True)Display the results with formatting on a Jinja Template |
Missing modules when running Jupyter notebook on aws I'm running a Jupyter notebook on a virtual machine on AWS, and I am having issues loading modules. Apparently the notebook doesn't find the modules (see image below), but these are listed if I give the command !conda list. Does anyone have suggestions on how to fix this? Thanks! | Try:
import sys
!{sys.executable} -m pip install <your package>
Here’s a link that might help you find some more information on how to install python packages in jupyter
Can't override the Model Field in Django ModelForm I'm trying to add a DateTimeWidget and Initial value to the due_date field of my Model, I'm following the documentation as closely as I can tell. No matter what I try, I can't get the field declared in my ModelForm class to override my existing field in the Model.https://docs.djangoproject.com/en/1.9/topics/forms/modelforms/#overriding-the-default-fieldsIf I add a Widget separately it works, but then I don't know how to add an initial value unless I set the default in the model. Can someone point out what I'm doing wrong?from django import formsimport datetimefrom datetimewidget.widgets import DateTimeWidgetfrom .models import EstRequestdef due_date(): due_date = (datetime.datetime.now() + datetime.timedelta(days=1)) return due_dateclass EstRequestModelForm(forms.ModelForm): class Meta: model = EstRequest due_date = forms.SplitDateTimeField(widget=forms.SplitDateTimeWidget, initial=due_date) fields = [ 'market', 'plan', 'builder', 'due_date', 'notes', ] # widgets = { # # Use localization and bootstrap 3 # 'due_date': DateTimeWidget(attrs={'id': "due_date"}, usel10n=True, bootstrap_version=3) # } | In fact you are defining the field in the wrong place; it should be outside the Meta class:class EstRequestModelForm(forms.ModelForm): due_date = forms.DateTimeField(widget=forms.DateTimeInput, initial=due_date) class Meta: model = EstRequest fields = [ 'market', 'plan', 'builder', 'due_date', 'notes', ]
The python documentation about format does not match the running results A word in the python documentation Format Specification Mini-Language: A general convention is that an empty format string ("") produces the same result as if you had called str() on the value. But it doesn't match the actual result from both python2 and python3:In [1]: "".format(100)Out[1]: ''In [2]: str(100)Out[2]: '100' | You have an empty template, not an empty format string. The format string is the part after the optional : in a {..} placeholder. By completely omitting the placeholder, there is nowhere for the value to placed into.So the following produces the same as str() on the value:>>> '{:}'.format(100)'100'>>> '{}'.format(100)'100'as does the empty string as a second argument to the format() function:>>> format(100, '')'100'In all cases the format string is empty.You may have missed that the whole Format Specification Mini-language only documents what formatting operations you can use in the {:...} part of a placeholder, or as the second argument for format(). For template strings (the part you apply the str.format() method to), you need to read section above that, the Format String Syntax section. |
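That convention can be spot-checked directly; it holds for any type that does not override __format__, because the default implementation with an empty format spec falls back to str():

```python
for value in (100, 3.5, "text", None, [1, 2]):
    assert "{}".format(value) == format(value, "") == str(value)

# "".format(100) is different: the template contains no placeholder at all,
# so the argument is never inserted anywhere.
assert "".format(100) == ""
```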
How to trim a file: for rows with the same value in two columns, keep only the row with the max in a third column I am facing a file trimming problem. I would like to trim rows in a tab-delimited file. The rule is: for rows with the same value in two columns, preserve only the row with the largest value in the third column. There may be different numbers of such redundant rows defined by two columns. If there is a tie for the largest value in the third column, preserve the first one (after ordering the file). (1) My file looks like (tab-delimited, with several millions of rows):1 100 25 T1 101 26 A1 101 27 G1 101 30 A1 102 40 A1 102 40 T(2) The output I want:1 100 25 T1 101 30 A1 102 40 TThis problem comes from my real study, not homework. I would appreciate your help, because I have restricted programming skills. I prefer a computation-efficient way, because there are so many rows in my data file. Your help will be very valuable to me. | Here's a solution that will rely on the input file already being sorted appropriately. It will scan line-by-line for lines with a similar start (e.g. first two columns identical), check the third column value and preserve the line with the highest value - or the line that came first in the file.
When a new start is found, it prints the old line, and begins checking again.At the end of the input file, the max line in memory is printed out.use warnings;use strict;my ($max_line, $start, $max) = parse_line(scalar <DATA>);while (<DATA>) { my ($line, $nl_start, $nl_max) = parse_line($_); if ($nl_start eq $start) { if ($nl_max > $max) { $max_line = $line; $max = $nl_max; } } else { print $max_line; $start = $nl_start; $max = $nl_max; $max_line = $line; }}print $max_line;sub parse_line { my $line = shift; my ($start, $max) = $line =~ /^([^\t]+\t[^\t]+\t)(\d+)/; return ($line, $start, $max);}__DATA__1 100 25 T1 101 26 A1 101 27 G1 101 30 A1 102 40 A1 102 40 TThe output is:1 100 25 T1 101 30 A1 102 40 AYou stated If there is a tie for the largest value in the third column, preserve the first one (after ordering the file).which is rather cryptic. Then you asked for output that seemed to contradict this, where the last value was printed instead of the first.I am assuming that what you meant is "preserve the first value". If you indeed meant "preserve the last value", then simply change the > sign in if ($nl_max > $max) to >=. This will effectively preserve the last value equal instead of the first.If you however implied some kind of sort, which "after ordering the file" seems to imply, then I do not have enough information to know what you meant. |
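For readers who would rather stay in Python, here is a dictionary-based sketch of the same rule; unlike the Perl version it does not require sorted input, and ties keep the first row seen:

```python
def trim_rows(rows):
    """Keep, per (col1, col2) pair, the row with the largest col3.
    Ties keep the first row encountered."""
    best = {}
    order = []                      # remember first-seen order of keys
    for row in rows:
        key = (row[0], row[1])
        if key not in best:
            best[key] = row
            order.append(key)
        elif int(row[2]) > int(best[key][2]):
            best[key] = row
    return [best[k] for k in order]

rows = [line.split('\t') for line in
        ["1\t100\t25\tT", "1\t101\t26\tA", "1\t101\t27\tG",
         "1\t101\t30\tA", "1\t102\t40\tA", "1\t102\t40\tT"]]
print(trim_rows(rows))
```

The trade-off versus the streaming Perl approach: this holds one row per distinct key pair in memory, which may matter for very wide key spaces.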
Python how to compare data like in PHP arrays I have to compare different operations results on two data sources using Python.For each datasource, I get all tables names. For each table, I get all columns. For each column, I do some 'operations' like getting count(column), sum(column). For example, in PHP, it would have given this type of array:----------------------------------------------------[TABLE1][COL1][OPERATION1][value][TABLE1][COL1][OPERATION2][value][TABLE1][COL1][OPERATION3][value]----------------------------------------------------[TABLE1][COL2][OPERATION1][value][TABLE1][COL2][OPERATION2][value][TABLE1][COL2][OPERATION3][value]----------------------------------------------------[TABLE2][COL1][OPERATION1][value][TABLE2][COL1][OPERATION2][value][TABLE2][COL1][OPERATION3][value]----------------------------------------------------[TABLE2][COL2][OPERATION1][value] [TABLE2][COL2][OPERATION2][value][TABLE2][COL2][OPERATION3][value]----------------------------------------------------I need to compare the results of the operations between the two data sources, it means verify if all tables and columns exist on each one, and compare the result of the 'operation'. I have tried to find a way how to realise this using objects but I don't know how. Does anyone have an idea? | If you want to know if the results are the same, use ==. For exampledict1 = {'foo':'bar'}dict2 = {'foo':'baz'}dict1 == dict2# Falsedict2 = {'foo':'bar'}dict1 == dict2# True |
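Since the structure in the question is nested (table, then column, then operation, then value), a recursive comparison can report exactly where the two sources disagree; a hypothetical sketch with made-up sample values:

```python
def diff(a, b, path=()):
    """Return the paths at which two nested dicts disagree
    (missing keys or differing leaf values)."""
    mismatches = []
    for key in sorted(set(a) | set(b)):
        here = path + (key,)
        if key not in a or key not in b:
            mismatches.append(here)          # key exists on one side only
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            mismatches.extend(diff(a[key], b[key], here))
        elif a[key] != b[key]:
            mismatches.append(here)          # leaf values differ
    return mismatches

source1 = {"TABLE1": {"COL1": {"count": 10, "sum": 55}}}
source2 = {"TABLE1": {"COL1": {"count": 10, "sum": 54},
                      "COL2": {"count": 3}}}
print(diff(source1, source2))
```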
fixed error instance object is not callable I need code for editing user details like first_name, last_name using a class-based APIView. The serializers.py and views.py are given below but it is not making the changes according to the user details. I am passing a token for user authentication. Any assistance will be appreciated.Serializers.pyclass UserEditSerializer(serializers.Serializer): email = serializers.EmailField(required=True) first_name = serializers.CharField(required=True) last_name = serializers.CharField(required=True) def update(self, validated_data, instance): instance.first_name = validated_data.get('first_name') instance.email = validated_data.get('email') instance.last_name = validated_data.get('last_name') instance.save() return instanceViews.pyclass UserEditProfile(APIView): authentication_classes = (authentication.TokenAuthentication,) permission_classes = (permissions.IsAuthenticated,) def get_object(self): return self.request.user def post(self, request): self.object = self.get_object() serializer = UserEditSerializer(data=request.data) if serializer.is_valid(): self.object.save() return Response(serializer.data, status=status.HTTP_200_OK) else: return Response(serializer.errors,status=status.HTTP_400_BAD_REQUEST) | This view will work. Thanks Linovia.class UserEditProfile(APIView): authentication_classes = (authentication.TokenAuthentication,) permission_classes = (permissions.IsAuthenticated,) def post(self, request): obj = User.objects.get(id=request.user.id) serializer = UserEditSerializer(obj, data=request.data) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_200_OK) else: return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
How can I run mrjob with no input file? I have a mrjob program, and just get data from sql database, so I don't need read local file or any input file, however mrjob forces me to 'reading from STDIN', so I just create an empty file as input file. It's really ugly, is there a way to run the job with no input files? | Have you tried piping the output from mysql to mrjob? Something like:mysql -D database -u user < test.sql | python mrjob_script.py |
Remove object from list after lifetime expires I am creating a program that spawns objects randomly. These objects have a limited lifetime.I create these objects and place them in a list. The objects keep track of how long they exist and eventually expire. They are no longer needed after expiration.I would like to delete the objects after they expire but I'm not sure how to reference the specific object in the list to delete it.if something: list.append(SomeObject())---- later---I would like a cleanup process that looks at the variable in the Object and if it is expired, then remove it from the list.Thanks for your help in advance. | You can use the refcount if you define "no longer used" as "no other object keeps a reference". This is a good approach: once no references exist, the object can no longer be accessed and may be disposed of. In fact, Python's garbage collector will do that for you.Where it goes wrong is when you also have all the instances in a list. That also counts as a reference to the object and it therefore never will be disposed of.For example, a list of state variables that are not only referenced by their owning objects, but also by a list to allow linear access. Explicitly call a cleanup function from the accessor to keep the list clean:from sys import getrefcountGlobalStateList = []def gcOnGlobalStateList(): for s in reversed(GlobalStateList): if (getrefcount(s) <= 3): # 1=GlobalStateList, 2=Iterator, 3=getrefcount() GlobalStateList.remove(s)def getGlobalStateList(): gcOnGlobalStateList() return GlobalStateList Note that even looking at the refcount increases it, so the test-value is three or less.
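If refcount bookkeeping feels fragile, a plainer pattern is to stamp each object with an expiry time and rebuild the list on each cleanup pass; a minimal sketch (class and attribute names are made up for illustration):

```python
import time

class Spawned:
    def __init__(self, lifetime):
        self.expires_at = time.monotonic() + lifetime

    @property
    def expired(self):
        return time.monotonic() >= self.expires_at

objects = [Spawned(0.0), Spawned(3600.0)]   # the first expires immediately
time.sleep(0.01)                            # let the clock move past it
objects = [o for o in objects if not o.expired]   # cleanup pass
print(len(objects))
```

Rebinding the list in one comprehension avoids the classic bug of removing items from a list while iterating over it.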
How do I install the pip package for python on mac osx? I'm currently stuck on exercise 46 in Zed Shaw's "Learn Python the Hardway". He says I need to install the following python packages: pipdistributenosevirtualenvHe doesn't give the reader any directions on how to properly install the packages and use them. I went to the pip website but the directions were also very vague and kind of unhelpful for a newbie. The installation guide found on https://pip.pypa.io/en/latest/installing.html says to download the get-pip.py file and then run it by typing python get-pip.py in what I presume to be terminal. When I do that it starts downloading, then says cleaning up.. and then a red error message appears that says: Exception:Traceback (most recent call last):" followed by a bunch of file names before ending with "OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pipDoes anyone know how to correct this? If it helps, the get-pip.py file is in my downloads folder, so I did cd Downloads before running python get-pip.py" | You can do:sudo easy_install pipor install it with homebrew: http://mxcl.github.io/homebrew/and then:brew install python |
Python Pyramid not rendering JSON correctly I am using MongoEngine's to_json method on an object I wish to render in a json-rendered Pyramid page. I've done lots of json rendering in Pyramid, but not with MongoEngine. MongoEngine's to_json method simply calls json_util.dumps. It all works fine in Python. The problem is that when Pyramid renders the page, it is rendered like this:{ "0": "\"", "1": "{", "2": "\\", "3": "\"", "4": "_", etc...However, the json dump looks ok in Python, before it is rendered:'{"_id": {"$oid": "4ebca43ccc7a67085b000000"}, "created": {"$date": 1346419407715}, "modified": {"$date": 1403757381829}, "modified_by": {"$oid": "4ebca43ccc7a67085b000000"}, "email": etc...As has been suggested in the comments, it seems like the json is being jsonified more than once, but I can't figure out where.I pick up the User object from the database and attach it every request:def get_user(request): return User.objects(id=ObjectId(authenticated_userid(request))).first()config.add_request_method(get_user, 'user', reify=True)I return the user as per request:@view_config(route_name='api.user', permission='authenticated', renderer='json')def user_vc(request): response = request.response _id = request.matchdict['id'] if _id == 'session': user = request.user if not user: response.status = 403 return response else: print user # user object as expected (not json) return userI have a custom adapter to handle the User object:# custom json adapterscustom_json = JSON()def user_adapter(obj, request): print obj.to_json() # the json looks ok here return obj.to_json()custom_json.add_adapter(User, user_adapter)config.add_renderer('json', custom_json)I am not doing any other jsonification myself, apart from the adapter above. So what is doing it? Any help would be great. | Thanks to a comment by @AnttiHappala above, I found the problem. MongoEngine's to_json method converts objects to a jsonified string. However, Pyramid needs a json data structure.
So, to fix it, I added the following function to my custom renderer:def render_to_json(obj): return json.loads(obj.to_json()) def user_adapter(obj, request): return render_to_json(obj) custom_json.add_adapter(User, user_adapter)I can now add a custom renderer for my other MongoEngine objects and return them natively. |
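A minimal illustration of the str-versus-dict mismatch the fix addresses (sample payload invented): to_json() hands back a str, and if the renderer serializes that string again you get doubly encoded JSON; json.loads undoes the first pass so the renderer receives a real dict:

```python
import json

doc_json = '{"email": "user@example.com"}'   # what a to_json()-style call returns: a str
double = json.dumps(doc_json)                # serializing the string a second time
assert double == '"{\\"email\\": \\"user@example.com\\"}"'

data = json.loads(doc_json)                  # what a JSON renderer needs: a dict
assert data == {"email": "user@example.com"}
```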
How to quickly generate an OpenPGP key pair using GnuPG for testing purposes? I'm testing some code that uses python-gnupg to encrypt/sign/decrypt some plaintext, and I'd like to generate a key pair on the fly. GnuPG is (of course) super paranoid in generating the key pair, and it sucks a lot of entropy from my system.I found this answer on unix.stackexchange.com, but using rngd to have /dev/random pull from /dev/urandom sounds like a bad idea.Since I'm testing I don't need high security, I just need the key pair to be generated as quickly as possible.An idea is to pre-generate some keys offline, and use those keys on my tests. Anyway, I'd like to programmatically generate my temporary key pairs while executing the tests.This is the code I'm using now (that is, again, super slow and not good for testing):from tempfile import mkdtempimport gnupgdef temp_identity(): identity = gnupg.GPG(gnupghome=mkdtemp()) input_data = identity.gen_key_input(key_type='RSA', key_length=1024) identity.gen_key(input_data) return identity | Using any method to change /dev/random to pull out of /dev/urandom is totally fine once the entropy pool was initiated with a proper random state (which is not a problem on hardware x86 machines, but might require discussion for other devices). I strongly recommend watching The plain simple reality of entropy -- Or how I learned to stop worrying and love urandom, a lecture at 32C3.If you want to speed up on-the-fly key generation, consider going for smaller key sizes like RSA 512 (1k keys aren't really secure, either). This will render keys insecure, but if that's fine for testing -- go for it.
Using another algorithm (for example elliptic curves if you already have GnuPG 2.1) might also speed up key generation.If you really want to stick with /dev/random and smaller key sizes don't provide adequate performance, you can very well pre-generate keys, export them using gpg --export-secret-keys and import them instead of creating new ones.gpg-agent also knows the option --debug-quick-random, which seems to fit your use case, but I've never used it before. From man gpg-agent: --debug-quick-random This option inhibits the use of the very secure random quality level (Libgcrypt’s GCRY_VERY_STRONG_RANDOM) and degrades all request down to standard random quality. It is only used for testing and shall not be used for any production quality keys. This option is only effective when given on the command line. |
Why is Z3 failing at this? I'm trying to solve this using z3-solver, but the problem is that it gives me wrong values. I tried to replace the >> with LShR; the values change but none of them is correct. However, I know the value of w should be 0x41414141 in hex. I also tried to set w to 0x41414141 and it said that it's unsat. from z3 import *def F(w): return ((w * 31337) ^ (w * 1337 >> 16)) % 2**32s = Solver()w = BitVec("w",32)s.add ( F(w) == F(0x41414141))while s.check() == sat: print s.model() s.add(Or(w != s.model()[w])) | Python uses arbitrary-size integers, whereas z3 clamps all intermediate results to 32 bits, so F gives different results for Python and z3. You'd need something like:def F1(w): return ((w * 31337) ^ (((w * 1337) & 0xffffffff) >> 16)) % 2**32def F1Z(w): return ((w * 31337) ^ LShR(((w * 1337) & 0xffffffff), 16)) % 2**32s.add ( F1Z(w) == F1(0x41414141))
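The width mismatch the answer describes can be checked in plain Python, with no z3 involved (a quick sanity-check sketch, not part of the original question):

```python
# Compare the plain-Python version of F with one that clamps every
# intermediate product to 32 bits, the way a 32-bit BitVec does.
def f_python(w):
    return ((w * 31337) ^ (w * 1337 >> 16)) % 2**32

def f_32bit(w):
    a = (w * 31337) & 0xffffffff          # 32-bit multiply
    b = ((w * 1337) & 0xffffffff) >> 16   # 32-bit multiply, then shift
    return (a ^ b) & 0xffffffff

w = 0x41414141
print(f_python(w) != f_32bit(w))  # True: the two arithmetics disagree
```

The disagreement comes entirely from the shift: w * 1337 overflows 32 bits, so clamping before the >> 16 changes what gets XORed in.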
using filter to add similar values for a list of tuples I have this list order = [('5464', 39.96), ('8274', 233.82), ('9744', 404.55), ('5464', 89.91), ('9744', 404.55), ('5464', 89.91), ('88112', 274.89), ('8732', 83.93), ('7733', 208.89), ('88112', 199.75)]and it is basically a list of book order number and the total amount. I want to use filter, map, lambda, and reduce only to get a list of tuples that will add the values of the similar book order number so it will return a list of 7 tuples. | You can try itertools with lambda:import itertoolsorder = [('5464', 39.96), ('8274', 233.82), ('9744', 404.55), ('5464', 89.91), ('9744', 404.55), ('5464', 89.91), ('88112', 274.89), ('8732', 83.93), ('7733', 208.89), ('88112', 199.75)]print(list(map(lambda m:(m[0],sum(map(lambda xa:xa[1],m[1]))),itertools.groupby(sorted(order),key=lambda x:x[0]))))output:[('5464', 219.78), ('7733', 208.89), ('8274', 233.82), ('8732', 83.93), ('88112', 474.64), ('9744', 809.1)]If you want to use the reduce function then (remember to also import functools):import functoolsprint(list(map(lambda x:(x[0],functools.reduce(lambda x,y:x+y,list(map(lambda x:x[1],list(x[1]))))),itertools.groupby(sorted(order),key=lambda x:x[0]))))
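Since the question restricts the tools to filter, map, lambda, and reduce, here is a hypothetical alternative sketch that does the summing with reduce alone, folding the tuples into a dict (no itertools needed; the round() call is only there to tidy the floats):

```python
from functools import reduce

order = [('5464', 39.96), ('8274', 233.82), ('9744', 404.55), ('5464', 89.91),
         ('9744', 404.55), ('5464', 89.91), ('88112', 274.89), ('8732', 83.93),
         ('7733', 208.89), ('88112', 199.75)]

# Fold the list into a dict, adding each amount to the running total
# for its order number.
totals = reduce(lambda acc, t: {**acc, t[0]: round(acc.get(t[0], 0) + t[1], 2)},
                order, {})
result = list(totals.items())
print(result)
```

The accumulator trick ({**acc, key: value}) keeps the lambda side-effect free, at the cost of rebuilding the dict on every step; that is fine for a small list like this.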
Django fandjango migration 4.2 After migrating fandjango to version 4.2, I've got an error when I access my facebook application:Exception Value: [u'Enter valid JSON']Exception Location: /usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init, line 77Trace: /usr/local/lib/python2.7/dist-packages/jsonfield/subclassing.py in set obj.dict[self.field.name] = self.field.pre_init(value, obj) ...jsonfield.subclassing.Creator object at 0x2a5c750objUser: My Uservalue u''/usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init raise ValidationError(_("Enter valid JSON")) ...▼ Local varsVariable Valueselfjsonfield.fields.JSONField: extra_dataobjUser: My Uservalue u''I have upgraded fandjango using pip install --upgrade fandjango, python manage.py migrate fandjango.There were other problems:-No module named jsonfield, so I installed it using pip-No module named dateutil.tz, so I installed it as well.-Also it asked for property DJANGO_SITE_URL, which was not defined in the settings object. I also put it in the settings file. However I didn't find any documentation about this property.So now I am trying to figure out what else is needed. | Ok, I get it. The problem was with the mysql database. The new version added a json field extradata. MySql interpreted it as a text field with a NULL value. So the problem was that fandjango wanted empty json, not NULL. I have updated the extradata field with '{}' and it worked. Now I have a standard problem: The mobile version of the app is unavailable because it is misconfigured for mobile access, as it was earlier, before the new version. Now I will try to figure out what this is. :)
How do I make Scrapy print all duplicate urls? I am getting this message once in the logs: 2014-01-16 12:41:45+0100 [mybot] DEBUG: Filtered duplicate request: <GET https://mydomain/someurl> - no more duplicates will be shown (see DUPEFILTER_CLASS)The url was requested using Request() and it says it's a duplicate on the very first time it requests it. I don't know what's causing this. What can I do to debug this? How do I make it print all the duplicate urls that it's filtering? | Try the exact url with curl -v URL and see if the headers contain a 301 or 302. Alternatively you can try scrapy shell URL.I've seen some sites that redirect to the same page when the parameters are not in the same order or the expected letter case. Scrapy doesn't consider the order or the letter case of the parameters when comparing two request objects.
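If the goal is literally to see every filtered duplicate, Scrapy's stock RFPDupeFilter also has a debug switch for this; assuming a Scrapy version recent enough to support it, add to settings.py:

```python
# settings.py -- make Scrapy's default RFPDupeFilter log every filtered
# duplicate request instead of only the first one
DUPEFILTER_DEBUG = True
```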
Count list length in a column of a DataFrame This is my Dataframe:CustomerID InvoiceNo0 12346.0 [541431, C541433]1 12347.0 [537626, 542237, 549222, 556201, 562032, 57351]2 12348.0 [539318, 541998, 548955, 568172]3 12349.0 [577609]4 12350.0 [543037]Desired Output: CustomerID InvoiceCount0 12346.0 21 12347.0 62 12348.0 43 12349.0 14 12350.0 1I want to calculate the total number of Invoice a customer(CustomerID) have.Please help. | See if this works:df["InvoiceCount"] = df['InvoiceNo'].str.len() |
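A quick runnable sketch with made-up data; both .str.len() and .apply(len) count the list elements row by row:

```python
import pandas as pd

# Made-up data in the same shape as the question's DataFrame.
df = pd.DataFrame({
    'CustomerID': [12346.0, 12347.0, 12348.0],
    'InvoiceNo': [['541431', 'C541433'],
                  ['537626', '542237', '549222'],
                  ['577609']],
})

# .str.len() works element-wise on lists too; .apply(len) is equivalent here.
df['InvoiceCount'] = df['InvoiceNo'].str.len()
print(df[['CustomerID', 'InvoiceCount']])
```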
How to convert string extracted by regex to integer in python? I'm trying to convert a string to an integer, but it's not as easy as I thought. content = '''<entry colname="1" morerows="1" morerowname="2"><p>111</p></entry><entry colname="2" rowname="2"><p></p></entry>'''morerows = ''.join(re.findall('morerows="\d"', content))morerows_n = int(''.join(re.findall('\d', morerows)))print(morerows_n)This results in an error as follows:morerows_n = int(''.join(re.findall('\d', morerows)))ValueError: invalid literal for int() with base 10: ''Where is the problem in that code? I've tried the int() function but it doesn't work, and it's not a float either. Any help? | I guess there are non-integer characters in the morerows attribute in your real case.How about this:content = '''<entry colname="1" morerows="1x" morerowname="2"><p>111</p></entry><entry colname="1" morerows="1" morerowname="2"><p>111</p></entry><entry colname="2" rowname="2"><p></p></entry>'''morerows = ''.join(re.findall('morerows="[0-9]+"', content))if morerows: morerows_n = int(''.join(re.findall('\d', morerows)))print(morerows_n)Use [0-9]+ instead of \d
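A slightly more direct variant, offered as a sketch: a capture group extracts just the digits, so int() never sees an empty or mixed string:

```python
import re

content = '''<entry colname="1" morerows="1" morerowname="2"><p>111</p></entry>
<entry colname="2" rowname="2"><p></p></entry>'''

# The capture group returns only the digits inside morerows="...",
# one string per match, each of which converts cleanly.
values = [int(v) for v in re.findall(r'morerows="(\d+)"', content)]
print(values)  # [1]
```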
'\n' == 'posix' , '\r\n' == 'nt' (python) is that correct? I'm writing a python(2.7) script that writes a file and has to run on linux, windows and maybe osx.Unfortunately, for compatibility reasons, I have to use carriage return and line feed in windows style.Is that ok if I assume:str = someFunc.returnA_longText()with open('file','w') as f: if os.name == 'posix': f.write(str.replace('\n','\r\n')) elif os.name == 'nt' f.write(str) Do I have to consider an else?os.name has other alternatives ('posix', 'nt', 'os2', 'ce', 'java', 'riscos'). Should I use platform module instead?Update 1:The goal is to use '\r\n' in any OS.I'm receiving the str fromstr = etree.tostring(root, pretty_print=True,xml_declaration=True, encoding='UTF-8')I'm not reading a file.3. My fault, I should probably check the os.linesep instead? | Python file objects can handle this for you. By default, writing to a text-mode file translates \n line endings to the platform-local separator, but you can override this behaviour.See the newline option in the open() function documentation: newline controls how universal newlines mode works (it only applies to text mode). It can be None, '', '\n', '\r', and '\r\n'. It works as follows: When reading input from the stream, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated. When writing output to the stream, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '' or '\n', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string. 
(the above applies to Python 3, Python 2 has similar behaviour, with io.open() giving you the Python 3 I/O options if needed).Set the newline option if you need to force what line-endings are written:with open('file', 'w', newline='\r\n') as f:In Python 2, you'd have to open the file in binary mode:with open('file', 'wb') as f: # write `\r\n` line separators, no translation takes placeor use io.open() and write Unicode text:import iowith io.open('file', 'w', newline='\r\n', encoding='utf8') as f: f.write(str.decode('utf8'))(but pick appropriate encodings; it is always a good idea to explicitly specify the codec even in Python 3).You can always use the os.linesep constant if your program needs to know the appropriate line separator for the current platform. |
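A minimal round-trip check of the approach above (Python 3): with newline='\r\n', every '\n' written lands on disk as '\r\n' on any OS:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'out.xml')

# Force CRLF line endings regardless of platform by overriding
# the default newline translation.
with open(path, 'w', newline='\r\n', encoding='utf8') as f:
    f.write('<?xml version="1.0"?>\n<root/>\n')

# Read back in binary mode to see the raw bytes on disk.
with open(path, 'rb') as f:
    raw = f.read()
print(raw.count(b'\r\n'))  # 2
```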
Python - " AttributeError: 'str' object has no attribute 'Tc' (Tc is one of the arguments) I have this code:import numpy as npimport matplotlib.pyplot as pltfrom scipy.optimize import newtonR = 8.314e-5 # universal gas constant, m3-bar/K-molclass Molecule:"""Store molecule info here"""def __init__(self, name, Tc, Pc, omega): """ Pass parameters desribing molecules """ #! name self.name = methane #! Critical temperature (K) self.Tc = -83+273 #! Critical pressure (bar) self.Pc = 45.99 #! Accentric factor self.omega = 0.011def preos(molecule, T, P, plotcubic=True, printresults=True): Tr = T / molecule.Tc # reduced temperature a = 0.457235 * R**2 * molecule.Tc**2 / molecule.Pc b = 0.0777961 * R * molecule.Tc / molecule.Pc kappa = 0.37464 + 1.54226 * molecule.omega - 0.26992 * molecule.omega**2 alpha = (1 + kappa * (1 - np.sqrt(Tr)))**2 A = a * alpha * P / R**2 / T**2 B = b * P / R / TWhen I call the function preos with the arguments I want:preos("methane", 160, 10, "true", "true")There's an error message: " AttributeError: 'str' object has no attribute 'Tc' " on this part:def preos(molecule, T, P, plotcubic=True, printresults=True): Tr = T / molecule.Tc # reduced temperatureAnd I guess it's going to have the same error for the other arguments (Pc and omega). What does this error mean? | It's here:def preos(molecule, T, P, plotcubic=True, printresults=True): Tr = T / molecule.Tc # reduced temperature...preos("methane", 160, 10, "true", "true")You're clearly passing "methane" into the preos function as a string, then trying to call .Tc on that string. The error is saying exactly that. This doesn't have anything to do with IPython. In other words, you're trying to run "methane".Tc.Edit: It's hard to tell what you actually want to happen, but I think you're not quite getting classes and methods. |
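A minimal sketch of the fix, with illustrative values: make __init__ actually store its arguments, then pass a Molecule instance rather than the string "methane":

```python
class Molecule:
    """Store molecule info here."""
    def __init__(self, name, Tc, Pc, omega):
        self.name = name      # store the arguments instead of hard-coded values
        self.Tc = Tc          # critical temperature (K)
        self.Pc = Pc          # critical pressure (bar)
        self.omega = omega    # acentric factor

def reduced_temperature(molecule, T):
    # Works because molecule is a Molecule instance, not a str.
    return T / molecule.Tc

methane = Molecule("methane", -83 + 273, 45.99, 0.011)
print(reduced_temperature(methane, 160))
```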
What does this ImportError mean when importing my c++ module? I've been working on writing a Python module in C++. I have a C++ program that can run on its own. It works great, but I thought it would be better if I could actually call it like a function from Python. So I took my best go at it, and it builds and installs. Here's the code for my module (called nnrunner.cpp):#include <Python.h>#include <vector>#include "game.h"#include "neuralnetai.h"using namespace std;/************************************************** * This is the actual function that will be called *************************************************/static int run(string filename){ srand(clock()); Game * pGame = new Game(); vector<int> topology; topology.push_back(20); Network net(31, 4, topology); net.fromFile(filename); NNAI ai(pGame, net); pGame->setAI(&ai); while (!pGame->isGameOver()) pGame->update(NULL); return pGame->getScore();}static PyObject *nnrunner_run(PyObject * self, PyObject * args){ string filename; int score; if (!PyArg_ParseTuple(args, "s", &filename)) return NULL; score = run(filename); return PyLong_FromLong(score);}static PyMethodDef NnrunnerMethods[] = { {"run", nnrunner_run, METH_VARARGS, "Run the game and return the score"}, {NULL, NULL, 0, NULL} /* Sentinel */};static struct PyModuleDef nnrunnermodule = { PyModuleDef_HEAD_INIT, "nnrunner", /* name of module */ NULL, /* module documentation, may be NULL */ -1, /* size of per-interpreter state of the module, or -1 if the module keeps state in global variables. 
*/ NnrunnerMethods};PyMODINIT_FUNCPyInit_nnrunner(void){ PyObject *m; m = PyModule_Create(&nnrunnermodule); if (m == NULL) return NULL; return m;}And my build script (called setup.py):from distutils.core import setup, Extensionmodule1 = Extension('nnrunner', sources = ['nnrunner.cpp', 'game.cpp', 'uiDraw.cpp', 'uiInteract.cpp', 'player.cpp', 'ship.cpp', 'network.cpp'], libraries = ['glut', 'GL', 'GLU'])setup (name = 'NNRunner', version = '1.0', description = 'This is my first package', ext_modules = [module1])It has to compile with -lglut -lGL -lGLU due to a dependency, but it doesn't actually have any UI.I can compile it and install it (python setup.py build, python setup.py install) but when I try to import it, I get errors:Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linuxType "help", "copyright", "credits" or "license" for more information.>>> import nnrunnerTraceback (most recent call last): File "<stdin>", line 1, in <module>ImportError: /home/justin/anaconda3/lib/python3.5/site-packages/nnrunner.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTVNSt7__cxx1115basic_stringbufIcSt11char_traitsIcESaIcEEE>>> Could somebody point me in the direction of documentation about this? This is the first time I've tried to make a Python module in C++. | Most likely it means that you're importing a shared library that has a binary interface not compatible with your Python distribution. So in your case: You have a 64-bit Python, and you're importing a 32-bit library, or vice-versa. (Or as suggested in a comment, a different compiler is used). |
PyQt5 cannot import name 'QApplication' I am trying convert my code from PyQt4 to PyQt5 but I am getting errors.from PyQt5.QtGui import QApplication, QPixmapdesktop = QApplication.desktop()QPixmap.grabWindow(desktop.screen().winId()).save("screen.png", "PNG")3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:44:40) [MSC v.1600 64 bit (AMD64)]Traceback (most recent call last): File "C:\Python34\Projects\name.py", line 7, in <module> from PyQt5.QtGui import QApplication, QPixmapImportError: cannot import name 'QApplication' | QApplication is located in PyQt5.QtWidgets module. So your import statement should be:from PyQt5.QtWidgets import QApplication |
High performance computing projects using Python For a paper I want to argue why I have used Python for the implementation of my algorithm. Besides the typical arguments that it is fast -using suitable libraries- and it is easy to implement the algorithm with it, I thought maybe there are some big HPC projects that are using it. Does anyone know a famous project that uses Python for large parallel calculations, maybe with a paper which I can cite? | To be honest, as great a language as python is, it wouldn't be a suitable environment for scientific computing and in particular high performance computing, if those libraries weren't available. So you can see python as one piece of a larger puzzle - much as MATLAB can be. The two key reasons to use python for scientific or high-performance computing can then be said to be because of the convenient interfaces to software packages written in other languages, or because you need fast turnaround on a project. Commonly, both issues arise at the same time. The classic example of this is the paper "Feeding a Large-scale Physics Application to Python", by David M. Beazley, which combines performance-intensive C++ with python using SWIG. If you're looking for something very current, there is a new paper, "A New Modelling System for Seasonal Streamflow Forecasting Service of the Bureau of Meteorology, Australia", by Daehyok Shin et al., that is due to be presented at MODSIM2011. I saw the first author speak at the Melbourne Python Users Group about how ipython was being used as a mechanism for bridging high performance fortran models and HDF5 data in such a way that even non-programmers could make effective contributions to a larger scientific program.
Kivy popup call structure and bindings not making sense Below is a working snippet example of a program that presents the user with menu popups to enter info. The issue is getting the dismiss bindings working correctly. The program flow is currently: (1) declare content with a return callback; (2) load the content into a popup object; (3) call the popup; __init__ is called and sets up something like _keyboard bindings; (4) the user enters data and presses accept; (5) the return callback is called; the popup is no longer needed so we call popup.dismiss(); (6) the popup closes, and that is it. The issue is if I do the _keyboard binding in __init__ then when the popup closes I MUST call the unbind method or else the keyboard input is still calling the old popup's functions! Another thing I dislike is the return callback needing to call self._popup.dismiss(). I think it is much cleaner if the popup is completely self-contained and completely reusable. This is a numpad entry popup; it should bind the keyboard and unbind it by itself. The callback receives an instance snapshot of the popup so the return data is easy to access. The popup itself should be the one to close itself as it knows for sure that the returnCB() was its final goal. I have no idea how to implement this though. Binding on_dismiss inside of __init__ does nothing at all as TouchGoToInput_dismiss is never called. 
I also cant figure out how to get TouchGoToInput to close itself.Another issue is if ESC is pressed the popup closes and once again the keyboard binding is messed up.Can anyone lend me a hand understanding the call case structure?from kivy.app import Appfrom kivy.lang import Builderfrom kivy.factory import Factoryfrom kivy.uix.gridlayout import GridLayoutfrom kivy.properties import ObjectPropertyfrom kivy.properties import StringPropertyfrom kivy.core.window import Windowfrom kivy.uix.popup import PopupBuilder.load_string('''<TouchGoToInput>: textInput:textInput cols: 1 size: root.size pos: root.pos GridLayout: cols: 1 size_hint_y:.25 TextInput: size_hint_x:1.0 font_size: self.height - 15 padding_y: [self.height / 2.0 - (self.line_height / 2.0) * len(self._lines), 0] id:textInput disabled: True GridLayout: cols: 3 Button: text: "1" on_release: root.addText("1") Button: text: "2" on_release: root.addText("2") Button: text: "3" on_release: root.addText("3") Button: text: "4" on_release: root.addText("4") Button: text: "5" on_release: root.addText("5") Button: text: "6" on_release: root.addText("6") Button: text: "7" on_release: root.addText("7") Button: text: "8" on_release: root.addText("8") Button: text: "9" on_release: root.addText("9") Button: text: "." 
on_release: root.addText(".") Button: text: "0" on_release: root.addText("0") Button: text: "Done" on_release: root.accept()''')class TouchGoToInput(GridLayout): returnCB = ObjectProperty(None) def __init__(self, **kwargs): super(TouchGoToInput, self).__init__(**kwargs) self.bind(on_dismiss=self.dismiss) print('TouchGoToInput.__init__') def dismiss(self): print('TouchGoToInput_dismiss') def addText(self, text): self.textInput.text = self.textInput.text + text def accept(self): print('TouchGoToInput.accept') self.returnCB(self) def __del__(self): print('TouchGoToInput.__del__') self.returnCB(self)class TestApp(App): def build(self): self.popupContent = TouchGoToInput(returnCB=self.gotoLinePopup) self._popup = Popup(title="GoTo...", content=self.popupContent, size_hint=(0.9, 0.9)) #self._popup.bind(on_dismiss=self.main_dismiss) return Factory.Button(text="press me", on_press=self._popup.open) def gotoLinePopup(self, instance): print('returnCB.text: ', instance.textInput.text) self._popup.dismiss() def main_dismiss(self, instance): print('main_dismiss')TestApp().run() | In the example, it demonstrates implementation of numpad using Popup widget with keyboard binding. It accepts input from Buttons, Keyboard, and NumPad.Popup » dismiss By default, any click outside the popup will dismiss/close it. If you don’t want that, you can set auto_dismiss to False:Popup » auto_dismissauto_dismiss This property determines if the view is automatically dismissed when the user clicks outside it. auto_dismiss is a BooleanProperty and defaults to True.Examplemain.pyfrom kivy.app import Appfrom kivy.lang import Builderfrom kivy.uix.gridlayout import GridLayoutfrom kivy.uix.popup import Popupfrom kivy.uix.button import Buttonfrom kivy.core.window import WindowBuilder.load_string('''#:kivy 1.11.0<NumPad>: title: "GoTo..." 
size_hint: (0.9, 0.9) auto_dismiss: False<TouchGoToInput>: textInput: textInput cols: 1 size: root.size pos: root.pos GridLayout: cols: 1 size_hint_y: .25 TextInput: size_hint_x:1.0 font_size: self.height - 15 padding_y: [self.height / 2.0 - (self.line_height / 2.0) * len(self._lines), 0] id: textInput disabled: True GridLayout: cols: 3 Button: text: "1" on_release: root.addText(self.text) Button: text: "2" on_release: root.addText(self.text) Button: text: "3" on_release: root.addText(self.text) Button: text: "4" on_release: root.addText(self.text) Button: text: "5" on_release: root.addText(self.text) Button: text: "6" on_release: root.addText(self.text) Button: text: "7" on_release: root.addText(self.text) Button: text: "8" on_release: root.addText(self.text) Button: text: "9" on_release: root.addText(self.text) Button: text: "." on_release: root.addText(self.text) Button: text: "0" on_release: root.addText(self.text) Button: text: "Done" on_release: app._popup.dismiss()''')class TouchGoToInput(GridLayout): def addText(self, text): self.textInput.text = self.textInput.text + textclass NumPad(Popup): def __init__(self, **kwargs): super(NumPad, self).__init__(**kwargs) self.popupContent = TouchGoToInput() self.content = self.popupContent def on_open(self): # erase previous textInput self.popupContent.textInput.text = '' # keyboard binding self._keyboard = Window.request_keyboard( self._keyboard_closed, self, 'text') if self._keyboard.widget: # If it exists, this widget is a VKeyboard object which you can use # to change the keyboard layout. pass self._keyboard.bind(on_key_down=self._on_keyboard_down) def _keyboard_closed(self): # keyboard have been closed! if self._keyboard is not None: self._keyboard.unbind(on_key_down=self._on_keyboard_down) self._keyboard = None def _on_keyboard_down(self, keyboard, keycode, text, modifiers): # check for 0...9, or '.' 
pressed from keyboard if (keycode[0] in list(range(48, 58))) or (keycode[0] == 46): # keyboard: 0 / 48 to 9 / 57, or decimal / 46 self.popupContent.addText(text) # check for 0...9, or '.' pressed from numpad elif (keycode[0] in list(range(256, 267))): # numpad0 / 256 to numpad9 / 265, or numpaddecimal / 266 if keycode[0] == 266: self.popupContent.addText('.') else: self.popupContent.addText(keycode[1][-1:]) # Keycode is composed of an integer + a string # If we hit escape, release the keyboard if keycode[1] == 'escape': keyboard.release() # Return True to accept the key. Otherwise, it will be used by # the system. return True def on_dismiss(self): print('\tNumPad.on_dismiss: self.popupContent.textInput.text=', self.popupContent.textInput.text) self._keyboard_closed()class TestApp(App): def build(self): self._popup = NumPad() return Button(text="press me", on_press=self._popup.open)if __name__ == "__main__": TestApp().run()Output |
Python set to array and dataframe Interpretation by a friendly editor:I have data in the form of a set.import numpy as n , pandas as ps={12,34,78,100}print(n.array(s))print(p.DataFrame(s))The above code converts the set without a problem into a numpy array.But when I try to create a DataFrame from it I get the following error: ValueError: DataFrame constructor not properly called!So is there any way to convert a python set/nested set into a numpy array/dictionary so I can create a DataFrame from it?Original Question:I have a data in form of set .Code import numpy as n , pandas as p s={12,34,78,100} print(n.array(s)) print(p.DataFrame(s))The above code returns same set for numpyarray and DataFrame constructor not called at o/p . So is there any way to convert python set , nested set into numpy array and dictionary ?? | Pandas can't deal with sets (dicts are OK; you can use pd.DataFrame.from_dict(s) for those).What you need to do is convert your set into a list and then convert that to a DataFrame:import pandas as pds = {12,34,78,100}s = list(s)print(pd.DataFrame(s))
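One caveat worth adding: sets are unordered, so the row order of list(s) is arbitrary. Sorting first makes the result deterministic (a small sketch; the column name is made up):

```python
import pandas as pd

s = {12, 34, 78, 100}

# Sets have no defined order, so sort before building the frame
# to get a reproducible row order.
df = pd.DataFrame(sorted(s), columns=['value'])
print(df)
```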
Use python PGSQL driver without installing it? Is there any way I can use a pgsql driver without actually installing it? I see Psycopg2 is most commonly used for connecting to a PGSQL database, but that needs installing. The issue I have here is that I need to distribute the code, but we are not allowed to install anything on the server. Anything standalone (e.g. including the driver/library files in a directory along with my scripts) is the only thing I can do. Is there any way I can import psycopg2 from a local directory?Thanks in advance! | Yes, you can import modules from local files - you can even import whole python files of your own if you want. You can download the source code and put it onto your system by using a USB stick (if permitted). However, you will still have to install it, which may not be possible depending on your situation. If this is for homework or a school project, I recommend only using files or libraries you create or are provided with. You can read more about Python imports here https://docs.python.org/3/reference/import.html Best of luck.
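One practical angle, offered as a suggestion rather than part of the answer above: pure-Python PostgreSQL drivers exist (pg8000 is one example) with no C extension to compile, so their source can be copied into a folder shipped with the scripts and picked up via sys.path, with no install step on the server. A hypothetical sketch of the path setup:

```python
import os
import sys

# Make a "vendor" directory next to the scripts importable, so a
# pure-Python driver copied into it can be imported directly.
# (In a real script you would derive this from __file__ instead of the cwd.)
vendor_dir = os.path.join(os.getcwd(), 'vendor')
sys.path.insert(0, vendor_dir)
print(sys.path[0])
```

Note this trick only works for pure-Python packages; psycopg2 itself contains a compiled C extension, which is exactly why it normally needs a proper install.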
Unable to parse an image link from a webpage using requests I'm trying to scrape two images from two identical links using requests. However, the script that I've created can't grab them. Although the image link is generated dynamically, most of the times there are ways to parse that using requests. So, I tried to find it using dev tools but failed. To let you know, this is the location of the image which is taken from the first link.import requestsfrom bs4 import BeautifulSouplinks = [ 'https://www.glideapps.com/templates/baby-reveal-boy-or-girl-wr', 'https://www.glideapps.com/templates/escool-virtual-school-6d']with requests.Session() as s: s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36' for link in links: res = s.get(link) soup = BeautifulSoup(res.text,'lxml') app_image = soup.select_one("img.h-full")['src'] print(app_image)PS Selenium is not an option I would like to go with. | Try this:-from requests_html import HTMLSessionlinks = [ 'https://www.glideapps.com/templates/baby-reveal-boy-or-girl-wr', 'https://www.glideapps.com/templates/escool-virtual-school-6d']def main(): with HTMLSession() as session: for link in links: res = session.get(link) res.raise_for_status() res.html.render() for img in res.html.xpath('//*/img[contains(@class, "h-full")]'): print(img.attrs['src'])if __name__ == '__main__': main() |
get IP addresses of my local network I am working on a GUI program to control power supplies over Ethernet.I have the DHCP of my computer activated, therefore I guess that the IP addresses of my power supplies are fixed by my computer.I would like to know the IP addresses of my power supplies, in order to communicate with them through the TCP/IP protocol, using Python.For the moment, I use a program called LXI discovery tools, and while I run it, the Windows arp -a command gives me the IP addresses of my power supplies.The problem is that I need to run this LXI program. Is it obligatory? Owing to the DHCP, my computer is the one which sets the IP addresses, therefore isn't there a way to get those addresses more easily?Moreover, is the Python socket library able to help me? | Finally I solved my problem by using static IP addresses. Since I know them, I no longer need to "scan" my network.
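If the supplies are LXI instruments they typically accept raw SCPI connections on TCP port 5025, so one alternative to the discovery tool is to probe candidate addresses with the standard socket module. A sketch (the port and the address list are assumptions to adapt; the demo below uses a throwaway local listener in place of a real instrument):

```python
import socket

def find_instruments(candidates, port=5025, timeout=0.2):
    """Return the candidate IPs that accept a TCP connection on the given port."""
    reachable = []
    for host in candidates:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(host)
        except OSError:
            pass  # closed port, no host, or timeout: not an instrument
    return reachable

# Demo against a throwaway local listener standing in for an instrument.
server = socket.socket()
server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
server.listen(1)
demo_port = server.getsockname()[1]
found = find_instruments(['127.0.0.1'], port=demo_port)
server.close()
print(found)
```

In practice the candidate list would be the DHCP subnet (e.g. 192.168.1.1 through 192.168.1.254), and a short timeout keeps the scan tolerable.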
Multiple timers in Python (Pygame) I'm an amateur programmer. I am trying to write a simple program that will measure the reaction time for a series of visual stimuli (flashes of squares) that will be used for a biology experiment. Here's my code (beware, first time coding a graphical interface):stimulus = pygame.Rect(100,250,100,100)#draw on surface objecttime.sleep(2) #wait for 2 seconds before it appearsscreen.fill(BLACK)pygame.draw.rect(screen,WHITE,stimulus)pygame.display.update(stimulus)#record time stimulus appearedt0 = time.clock()#clear screen ("flash" illusion)time.sleep(0.5) #***PROBLEM***screen.fill(BLACK)pygame.display.update(stimulus) while True: for event in pygame.event.get(): if event.type == KEYDOWN: t1 = time.clock() print t1-t0 if event.type == QUIT: pygame.quit() sys.exit()The program was working fine before I included the block with the line marked "problem". The reaction time printed seemed reasonable. However, I want the square to disappear after a while, as though it just "flashed". After including the time.sleep(0.5), the time printed is no longer correct. It is always 0.5xxxx or greater, no matter how fast I press. Is there any workaround?P.S. I need it to disappear because I want to present a sequence of flashes with predetermined (not constant) pauses in between.Thanks.EditI need to achieve two things: 1. The shape must flash on the screen for 0.5 sec. 2. The program must create a timestamp (e.g. write to a list) every time the spacebar is pressed (even if it is pressed randomly twice between two flashes). | Your problem is that the computer will be doing nothing for 0.5 seconds due to the line you marked as a problem. What you need to do is make it so it is possible for the reaction to be registered while the square is still being shown. Instead of having time.sleep(0.5), put this:while time.clock()-t0<0.5: for event in pygame.event.get(): if event.type == pygame.KEYDOWN: t1 = time.clock() print t1-t0This should fix your code. |
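The same polling pattern stripped of pygame, as a hypothetical sketch: rather than sleeping through the flash, keep checking the clock (and, in the real program, the event queue) until the duration elapses:

```python
import time

FLASH_DURATION = 0.5  # seconds the stimulus stays on screen

t0 = time.monotonic()
polls = 0
while time.monotonic() - t0 < FLASH_DURATION:
    # In the real program, pygame.event.get() is polled right here, so a
    # keypress during the flash is timestamped immediately instead of
    # being delayed until a sleep() finishes.
    polls += 1
    time.sleep(0.01)  # short nap so the loop doesn't spin at 100% CPU

elapsed = time.monotonic() - t0
print(round(elapsed, 1))
```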
Python: Determine assigned serial port My hardware is a microcontroller interfacing with a Windows PC via USB CDC, creating a virtual serial port. Windows assigns the port number randomly depending on availability and the USB port, and it differs from computer to computer. The question is how, via a Python script, to determine which port is assigned to my microcontroller and use it. | You can use ctypes to figure out which ports are available. You can connect to each port that is available and send something like get ver where you know the expected response. When you find the expected response you have found your serial port. Alternatively (and probably easier) you can just enumerate through all 256 comports (0-255) and try/except to connect to them:for i in range(256): try: s = serial.Serial(i) print "Found A Serial Port Available At COM%d"%i except serial.serialutil.SerialException: print "Nothing On COM%d"%i
Python tkinter downsizing widgets I've looked at all the other questions and answers and couldn't find one that fit what I'm trying to do. Code:class GameWin: def __init__(self, master): self.master = master self.master.title("Title") self.main_frame = Frame(self.master, bd = 10, bg = uni_bg) self.main_frame.grid() self.left_frame = Frame(self.main_frame, bd = 10, bg = uni_bg) self.left_frame.grid(column = 0, row = 0) self.right_frame = Frame(self.main_frame, bd = 10, bg = uni_bg) self.right_frame.grid(column = 1, row = 0) self.right_frame.columnconfigure(0, weight = 1) self.right_frame.rowconfigure(0, weight = 1) self.right_frame.columnconfigure(1, weight = 1) self.right_frame.rowconfigure(1, weight = 1) self.web = Text(self.left_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bt_bg, fg = uni_fg) self.web.grid(column = 0, row = 0, padx = 5, pady = 5) self.output = Text(self.left_frame, font = (uni_font, 12), wrap = WORD, bg = "black", fg = uni_fg) self.output.grid(column = 0, row = 1, padx = 5, pady = 5, sticky = "ew") self.output.configure(state = "disabled") self.input = Entry(self.left_frame, font = (uni_font, 12)) self.input.grid(column = 0, row = 2, sticky = "ew") self.input.bind('<Return>', self.submit) self.notepad = Text(self.right_frame, font = (uni_font, 12), wrap = WORD, bg = uni_fg, fg = "black", width = 42) self.notepad.grid(column = 0, row = 0, pady = 5, rowspan = 2) self.sys_info = Text(self.right_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bg, fg = uni_fg, width = 35, height = 11, bd = 0) self.sys_info.tag_configure('center', justify='center') self.sys_info.grid(column = 1, row = 0, pady = 5) self.sys_info.insert(END, "NAME", "center") self.sys_info.configure(state = "disabled") self.trace = Text(self.right_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bg, fg = uni_fg, width = 35, height = 11) self.trace.grid(column = 1, row = 1, pady = 5) self.email = Text(self.right_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bt_bg, fg = 
uni_fg) self.email.grid(column = 0, row = 2, pady = 5, columnspan = 2) self.email.configure(state = "disabled") self.respond = Entry(self.right_frame, font = (uni_font, 12)) self.respond.grid(column = 0, row = 3, columnspan = 2, sticky = "ew") self.respond.bind('<Return>', self.do_respond) def submit(self, event): self.output.configure(state = "normal") self.output.configure(state = "disabled") pass def do_respond(self, event): passImage of current screen: https://i.imgur.com/P2B6E5y.pngFirst thing I'm trying to figure out is how to not explicitly state the size of the 3 text widgets in the top right. Because everyone's screen is differently sized. If I don't explicitly state the size, they expand and everything goes wacko (since the default text widget is big). I want the widgets to automatically downscale to fit within the column (the same width as the big grey bottom right text widget). Is this even possible?Second is for the frames and widgets to fill up all the space in the window. Whether it's fullscreen (like in the pic) or a smaller window (and hopefully keep their size relative to each other). There's a lot of empty space at the edges of the window and I want to get rid of that. I've tried everything I can think of but I can't get them to fill that space.I tried putting the top 3 widgets each in their own frame, limiting the size of the frames relative to the window size, and setting the widgets to fill that frame but it doesn't work. 
Code I used to try this: https://pastebin.com/3YWK9Xg2class GameWin: def __init__(self, master): self.master = master self.master.title("Hacker") win_width = self.master.winfo_width() win_height = self.master.winfo_height() self.main_frame = Frame(self.master, bd = 10, bg = uni_bg) self.main_frame.grid(sticky = "nsew") self.left_frame = Frame(self.main_frame, bd = 10, bg = uni_bg, height = int(win_height), width = int(win_width/2)) self.left_frame.grid(column = 0, row = 0, rowspan = 3) self.left_frame.grid_propagate(False) self.note_frame = Frame(self.main_frame, bd = 10, bg = uni_bg, height = int(win_height/2), width = int(win_width/4)) self.note_frame.grid(column = 1, row = 0, rowspan = 2, sticky = "n") self.note_frame.grid_propagate(False) self.sys_frame = Frame(self.main_frame, bd = 10, bg = uni_bg, height = int(win_height/4), width = int(win_width/4)) self.sys_frame.grid(column = 2, row = 0, sticky = "n") self.sys_frame.grid_propagate(False) self.trace_frame = Frame(self.main_frame, bd = 10, bg = uni_bg, height = int(win_height/4), width = int(win_width/4)) self.trace_frame.grid(column = 2, row = 1, sticky = "n") self.trace_frame.grid_propagate(False) self.bottom_right_frame = Frame(self.main_frame, bd = 10, bg = uni_bg, height = int(win_height/2), width = int(win_width/2)) self.bottom_right_frame.grid(column = 1, row = 2, columnspan = 2) self.bottom_right_frame.grid_propagate(False) self.web = Text(self.left_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bt_bg, fg = uni_fg) self.web.grid(column = 0, row = 0, padx = 5, pady = 5) self.output = Text(self.left_frame, font = (uni_font, 12), wrap = WORD, bg = "black", fg = uni_fg) self.output.grid(column = 0, row = 1, padx = 5, pady = 5, sticky = "ew") self.input = Entry(self.left_frame, font = (uni_font, 12)) self.input.grid(column = 0, row = 2, sticky = "ew") self.input.bind('<Return>', self.submit) self.notepad = Text(self.note_frame, font = (uni_font, 12), wrap = WORD, bg = uni_fg, fg = "black") 
self.notepad.pack(fill = BOTH, expand = YES) self.sys_info = Text(self.sys_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bg, fg = uni_fg) self.sys_info.tag_configure('center', justify='center') self.sys_info.grid(sticky = "nsew") self.sys_info.insert(END, "NAME", "center") self.sys_info.configure(state = "disabled") self.trace = Text(self.trace_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bg, fg = uni_fg) self.trace.grid(sticky = "nsew") self.email = Text(self.bottom_right_frame, font = (uni_font, 12), wrap = WORD, bg = uni_bt_bg, fg = uni_fg) self.email.grid(row = 0, pady = 5, columnspan = 2, sticky = "nsew") self.email.configure(state = "disabled") self.respond = Entry(self.bottom_right_frame, font = (uni_font, 12)) self.respond.grid(row = 1, columnspan = 2, sticky = "ew") self.respond.bind('<Return>', self.do_respond) def submit(self, event): self.output.configure(state = "normal") self.output.configure(state = "disabled") def do_respond(self, event): passand picture of the result: https://i.imgur.com/IVnw65x.pngHere is the full code: https://pastebin.com/Gm2ePqFH. I want it to look like it is in the first picture, without having to explicitly state the size of each text widget. And I want to get it to all stay the same size relative to the window. | If you want widgets to shrink down to the size of the column, the strategy that has worked best for me is to make the widget very small and then use the layout manager (pack, place, or grid) make them bigger to fit. You can either make the widget 1x1 if that's truly a minimum size you will accept, or you can set it to what you think the absolute minimum should be (for example, 4 lines of 20 characters, 20 lines of 10 characters, etc). So, start by making those widgets as small as possible. Next, make sure you use the sticky attribute so that the widgets grow to fill their allotted space. You also need to make sure you use the sticky attribute for self.right_frame so that it fills its space too. 
Finally, make sure that you call rowconfigure and columnconfigure to set a positive weight on any widget that has children managed by grid. You aren't doing that for self.master, nor are you doing it for self.left_frame and self.right_frame. As a rule of thumb, if you're only putting one or two widgets in a frame and you want those widgets to fill the frame, it's much better to use pack than grid, simply because you don't have to remember to give the rows and columns weights.For example, you can use pack to manage the left and right frames. You can probably use pack to put GameWin inside of its master, too. Also, don't try to solve all of your layout problems at once. In your case, I would tackle the problem like this:Start with just your mainframe and get it so that it fills the containing window. Make sure you manually resize the window to watch it grow and shrink.Next, add your left and right frames. Make sure they grow and shrink to fit mainframe.Next, focus on just the widgets in the left frame. Add them, and make sure they shrink and fit. Then, focus on the right frame. Add them, and make sure they shrink and fit.Finally, a word of advice. Group your calls to grid together. As your code is written now, you need to fix a whole bunch of lines scattered over the entire file. With some reorganization, almost all of the lines of code you need to change will be in one block of code.For example, instead of this:self.web = Text(...)self.web.grid(...)self.output = Text(...)self.output.grid(...)self.input = Text(...)self.input.grid(...)self.notepad = Text(...)self.notepad.grid(...)Do this:self.web = Text(...)self.output = Text(...)self.input = Text(...)self.notepad = Text(...)self.web.grid(...)self.output.grid(...)self.input.grid(...)self.notepad.grid(...)With that, at a glance you can answer the question "how are my widgets defined?", and "how are my widgets laid out". With the way you have it now, those questions are very difficult to answer.
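The shrink-to-fit advice above can be condensed into a runnable sketch (the two-Text layout, widget names, and helper function are illustrative, not the asker's full UI): make each Text a 1x1 minimum, then let grid stretch it with sticky plus row/column weights:

```python
import tkinter as tk

def make_stretchy(container, rows, cols):
    """Give every row/column of a grid container a weight so children share resize space."""
    for r in range(rows):
        container.rowconfigure(r, weight=1)
    for c in range(cols):
        container.columnconfigure(c, weight=1)

def build_ui(root):
    make_stretchy(root, 1, 1)      # the cell holding the frame must stretch too
    frame = tk.Frame(root)
    frame.grid(sticky="nsew")
    make_stretchy(frame, 2, 1)     # equal weights: the two rows split the height 50/50

    # width/height of 1 is the minimum size; grid grows each widget to fill
    # its cell because of sticky="nsew".
    top = tk.Text(frame, width=1, height=1)
    bottom = tk.Text(frame, width=1, height=1, bg="black", fg="white")
    top.grid(row=0, column=0, sticky="nsew")
    bottom.grid(row=1, column=0, sticky="nsew")
    return frame

if __name__ == "__main__":
    try:
        root = tk.Tk()
    except tk.TclError:
        print("No display available; skipping the demo.")
    else:
        build_ui(root)
        root.mainloop()
```

Resize the window and both Text widgets track it; no widget size is ever stated beyond the 1x1 minimum.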
pickling lru_cached function on object As part of parallelizing some existing code (with multiprocessing), I run into the situation that something similar to the class below needs to be pickled.Starting from:import picklefrom functools import lru_cacheclass Test:    def __init__(self):        self.func = lru_cache(maxsize=None)(self._inner_func)    def _inner_func(self, x):        # In reality this will be slow-running        return xcallingt = Test()pickle.dumps(t)returns_pickle.PicklingError: Can't pickle <functools._lru_cache_wrapper object at 0x00000190454A7AC8>: it's not the same object as __main__.Test._inner_funcwhich I don't really understand. By the way, I also tried a variation where the name of _inner_func was func as well, that didn't change things. | Use methodtools.lru_cache so that the cache is defined once at class level rather than creating a new cache function per instance in __init__:import picklefrom methodtools import lru_cacheclass Test:    @lru_cache(maxsize=None)    def func(self, x):        # In reality this will be slow-running        return xif __name__ == '__main__':    t = Test()    print(pickle.dumps(t))It requires installing methodtools from PyPI:pip install methodtools
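If pulling in a third-party package is not an option, the same class can be made picklable with only the standard library by leaving the cache wrapper out of the pickled state and rebuilding it on load. This is a sketch of one possible workaround, not the only one; note the placeholder in `__getstate__` keeps the state dict truthy, since pickle skips `__setstate__` when `__getstate__` returns a falsy value:

```python
import pickle
from functools import lru_cache

class Test:
    def __init__(self):
        self._build_cache()

    def _build_cache(self):
        # Per-instance wrapper, exactly as in the question.
        self.func = lru_cache(maxsize=None)(self._inner_func)

    def _inner_func(self, x):
        # In reality this will be slow-running
        return x

    def __getstate__(self):
        # The _lru_cache_wrapper is what pickle chokes on, so don't pickle it.
        state = self.__dict__.copy()
        state["func"] = None  # placeholder; real wrapper is rebuilt in __setstate__
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._build_cache()  # fresh, empty cache after unpickling

t = Test()
t.func(21)
clone = pickle.loads(pickle.dumps(t))
print(clone.func(21))  # prints 21
```

The restored object starts with an empty cache, which is usually what you want with multiprocessing anyway: each worker warms its own cache.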
django can't import in installed apps and can't import function In Django, I have a "fbsurvey" project, with a "canvas" application.I have another "cblib" project, with a "survey" app and a "graphs" app.In the "survey" app, there are models and some functions.In the "graphs" app, there is just a "utils" folder with 2 .py files in it-- a file "get_chart_info" with a function "get_chart_info" and a file "chart_utils" with some assorted functions in itgraphs app has an __init__.py on each levelall of the models in "survey" workbut "get_chart_info" (the file) REFUSES to import.If I try to put "cblib.graphs" in my installed apps, when I try to runserver, it breaks, saying "Error: No module named graphs"If I leave it out of my installed apps, I get:ImportError at /canvas/chart/No module named graphs.utils.get_chart_info(btw, I don't understand why this says no module named graphs.utils instead of cblib.graphs.utils)with a line reference to the import statement.Note that all of the imports work in the shell. I.e. when I run:./manage.py shellimport cblibimport cblib.surveyimport cblib.graphsimport cblib.graphs.get_chart_infofrom cblib.graphs.get_chart_info import get_chart_infonothing fails.Does anyone have any idea why this could be breaking?
I feel like I've checked everything.someone mentioned it might be useful to see the ascii tree of my project (edited for relevance)cblib looks like:.├── graphs│ ├── admin.py│ ├── __init__.py│ ├── __init__.pyc│ └── utils│ ├── get_chart_info.py│ ├── get_chart_info.pyc│ ├── graph_utils.py│ ├── graph_utils.pyc│ ├── __init__.py│ └── __init__.pyc├── __init__.py├── __init__.pyc└── survey ├── admin.py ├── fixtures │ ├── badges.json │ ├── q1-174.json │ ├── q175-271.json │ ├── q272-302.json │ └── responseoptions_767-1594.json ├── __init__.py ├── __init__.pyc ├── management │ ├── commands │ │ ├── create_fake_users.py │ │ ├── import_fake_user_data.py │ │ ├── import_questions.py │ │ └── __init__.py │ └── __init__.py ├── migrations │ ├── 0001_initial.py │ ├── 0002_auto__del_field_votelog_direction.py │ ├── 0003_auto__chg_field_pointlog_action_type.py │ ├── 0004_auto__add_opengraphverb__add_field_question_school_specific_opengraph_.py │ └── __init__.py └── models ├── badge.py ├── badge.pyc ├── __init__.py ├── __init__.pyc ├── opengraphverb.py ├── opengraphverb.pyc ├── pointlog.py ├── pointlog.pyc ├── question.py ├── question.pyc ├── responseoption.py ├── responseoption.pycand fbsurvey looks like: .├── canvas│ ├── admin.py│ ├── admin.pyc│ ├── brainys.json│ ├── data.csv│ ├── decorators.py│ ├── decorators.pyc│ ├── DefaultInfoObject.py│ ├── DefaultInfoObject.pyc│ ├── DefaultJsonResponse.py│ ├── DefaultJsonResponse.pyc│ ├── fixtures│ │ └── test-fixture.json│ ├── __init__.py│ ├── __init__.pyc│ ├── level.py│ ├── level.pyc│ ├── management│ │ ├── commands│ │ │ ├── convert_fbuser_to_cbuser.pyc│ │ │ ├── credit_inviters.py│ │ │ ├── __init__.py│ │ │ ├── __init__.pyc│ │ │ ├── reminder_wallposts.py│ │ │ ├── reminder_wallposts.pyc│ │ │ └── update_user_colleges.py│ │ ├── __init__.py│ │ └── __init__.pyc│ ├── migrations│ │ ├── 0001_initial.py│ │ ├── 0001_initial.pyc│ │ ├── __init__.py│ │ └── __init__.pyc│ ├── models.py│ ├── models.pyc│ ├── static│ │ ├── css│ │ ├── img│ │ └── js│ ├── templates│ │ 
├── answers.html│ │ ├── answers-mobile.html│ │ ├── answertest.html│ │ ├── badge-explanation.html│ │ ├── badges.html│ │ ├── baduser.html│ │ ├── bottombar.html│ │ ├── bottombar-mobile.html│ │ ├── browse-stories.html│ │ ├── end.html│ │ ├── friends.html│ ├── tests.py│ ├── tests.pyc│ ├── urls.py│ ├── urls.pyc│ ├── views│ │ ├── answers.py│ │ ├── answers.pyc│ │ ├── badge_explanation.py│ │ ├── badge_explanation.pyc│ │ ├── badges.py│ │ ├── badges.pyc│ │ ├── browse_stories.py│ │ ├── browse_stories.pyc│ │ ├── explanation.pyc│ │ ├── format_for_graph.py│ └── views.pyc├── __init__.py├── __init__.pyc├── local_settings.py├── local_settings.pyc├── local_settings.py.example├── logclient│ └── __init__.py├── manage.py├── maps.py├── maps.pyc├── patch.py├── pokesite├── python.exe.stackdump├── README├── requirements.txt├── settings.py├── settings.pyc├── survey│ ├── admin.pyc│ ├── data│ │ ├── CBI Questions with percentages v3.csv│ │ ├── data.csv│ │ ├── List of School Nicknames.txt│ │ ├── pquestions.csv│ │ ├── question_pks_and_categories.csv│ │ ├── questions.csv│ │ └── questions.json│ ├── __init__.py│ ├── __init__.pyc│ ├── localsetting.py│ ├── models.pyc│ ├── tests.py│ └── views.py├── surveydump.json├── sync_badges.py├── templates│ ├── 404.html│ ├── 500.html│ ├── base.html│ └── base-mobile.html├── testdump.json├── tree.txt├── urls.py└── urls.pyc | The answer was something to do with my .pyc files... I don't know how or why, but runningfind . -name "*.pyc" -delete(which then presumably regenerated my pyc files)in both of my project directories fixed the problem. |
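The same stale-bytecode cleanup can be done from Python instead of `find`, which also lets you see exactly which `.pyc` files were removed (a sketch; the project paths in the commented example are placeholders):

```python
import os

def delete_pyc(root):
    """Walk root and delete every .pyc file, returning the paths removed.

    Python regenerates bytecode from the .py sources on the next import,
    so this is safe; it just forces a clean recompile.
    """
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".pyc"):
                path = os.path.join(dirpath, name)
                os.remove(path)
                removed.append(path)
    return removed

# Example: clean both project trees (paths are placeholders):
# for project in ("/path/to/cblib", "/path/to/fbsurvey"):
#     print(delete_pyc(project))
```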
Sphinx using Python3 interpreter instead of Python2 I installed Sphinx lately for python 2.x based on the instructions: http://www.sphinx-doc.org/en/master/usage/installation.html.After I generate all the .rst files, I did a "make html" to generate the html file. However, when it builds the files, it does not use the PyCharm project interpreter, which is Python 2.7; instead it uses Python 3.6:/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219:Because of that, it introduces a bunch of "No Module Named xx" issues. The Python path is set to the project so I am pretty sure the issue is not because of that. Can anyone give me some clue about how I can force it to build using Python 2.7 on my Mac? | An easy fix would be to create a new virtual environment with Python 2.7, then do pip install sphinx inside it. I would also suggest running sphinx-apidoc and sphinx-build directly instead of make html. Those commands can be run with various options, which is really helpful.
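`make html` runs whichever `sphinx-build` is first on PATH, and a console script's `#!` line records the exact interpreter it was installed for, so reading it shows which Python will run the build regardless of PyCharm's project interpreter. A small diagnostic sketch (this helper is not part of Sphinx itself):

```python
import shutil

def script_interpreter(script_path):
    """Return the interpreter named in a script's #! line, or None."""
    with open(script_path) as f:
        first_line = f.readline().strip()
    if first_line.startswith("#!"):
        return first_line[2:].strip()
    return None

# Example: see which Python owns the sphinx-build currently on PATH.
build_script = shutil.which("sphinx-build")
if build_script:
    print(script_interpreter(build_script))
```

If it prints a 3.6 path, installing Sphinx into the 2.7 environment (e.g. `python2.7 -m pip install sphinx` inside the virtualenv) puts a 2.7-bound sphinx-build first on PATH.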
How to preserve the original string format with jinja template looking for a tip.I have a random string generated elsewhere in format:string = """[TAG1] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. [TAG2] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. """Now I want to pass it to the html template and preserve its actual format. Right now I am splitting the string by '\n' and: {% for line in comments %} <div>{{ line|safe}}</div> {% endfor %}which results in:string = """[TAG1] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. [TAG2] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. """Should I replace the spaces with html entities? What is the acceptable way to do it? | Both seem good; hope this helps: <p style="white-space: pre-wrap;">{{ string }}</p>OR<p style="white-space: pre-line;">{{ string }}</p>Updatemain.pyfrom flask import Flask, render_template, requestapp = Flask(__name__)@app.route('/', methods=['GET', 'POST'])def index(): string = """[TAG1] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. [TAG2] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas commodo diam ac sollicitudin vestibulum. Nunc ac dignissim elit. """ return render_template('index.html', string=string) index.html<html><head></head><body> <p style="white-space: pre-line;">{{ string }}</p></body></html>
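A third option, instead of splitting in the view or relying only on CSS white-space, is a small filter that escapes the text first and then converts each newline to a `<br>` tag, so `|safe` never exposes user-supplied markup. A sketch (the filter name and the Flask registration line are illustrative):

```python
from html import escape

def nl2br(value):
    """HTML-escape the text, then turn each newline into a <br> tag."""
    return "<br>\n".join(escape(line) for line in value.splitlines())

# Registering it on a Flask app (illustrative):
# app.jinja_env.filters["nl2br"] = nl2br
# Template usage: <div>{{ comments|nl2br|safe }}</div>

print(nl2br("[TAG1] Lorem <ipsum>\n    indented line"))
```

Browsers still collapse leading spaces, so if indentation matters, pair this with `white-space: pre-wrap` (or swap leading spaces for `&nbsp;`).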