"OperationalError: database is locked" when deploying site to Azure

I have built a Django website, part of which is a Microsoft authentication link. When I upload the site to Azure cloud and click on the "log in" link, I receive the following error:

```
OperationalError at /login
database is locked
Request Method: GET
Request URL: http://bhkshield.azurewebsites.net/login
Django Version: 2.2.2
Exception Type: OperationalError
Exception Value: database is locked
Exception Location: /home/site/wwwroot/antenv3.6/lib/python3.6/site-packages/django/db/backends/base/base.py in _commit, line 240
Python Executable: /usr/local/bin/python
Python Version: 3.6.7
Python Path: ['/usr/local/bin', '/home/site/wwwroot', '/home/site/wwwroot/antenv3.6/lib/python3.6/site-packages', '/usr/local/lib/python36.zip', '/usr/local/lib/python3.6', '/usr/local/lib/python3.6/lib-dynload', '/usr/local/lib/python3.6/site-packages']
Server time: Fri, 14 Jun 2019 13:19:22 +0000
```

I am using sqlite3 (settings.py snippet):

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
```

I don't understand why I get this error, because I don't insert or commit anything to the database. My website consists of only one page with a sign-in link (4 views: home, context initialize, login and callback). That's it. When I run the site locally, everything works; it stops working only after deployment. Another odd thing is that I have uploaded something like this before to another Azure site and the login worked there. For some reason it doesn't work now and I have no idea why. Has anyone encountered this type of error and can help? If you need me to provide more files' content, let me know which files and I will.
It seems like a duplicate of this question: OperationalError: database is locked. From the Django documentation (https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors):

> SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than sqlite can handle in default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released.
>
> Python's SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.
>
> If you're getting this error, you can solve it by:
>
> - Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.
> - Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.
> - Increasing the default timeout value by setting the timeout database option.

I have been developing a Django web app as well, and I chose an Azure SQL Server as the application's database. Everything has been working OK.
How can I reference a user globally in Django for all exceptions?

I have two questions that correlate:

1) Does django-rest-framework have a way to reference a user globally?
2) Does Django / Python allow me to change the generic exception class to include this user ID as metadata every time it throws?

I know I can create custom exception classes and raise them in code, but what about an exception I don't correctly handle? For example, let's say a divide-by-zero exception is thrown but I didn't correctly handle it; right now my logs just say "Divide by zero exception". Is there a way to update this globally so that if a user is logged in it says "Divide by zero exception for user_id {id}"?

```python
class SomeExternalApiHelper:
    @staticmethod
    def do_api_call():
        url = 'https://example.com'
        # do api request
        try:
            home_value = 100 / 0
        except Exception as e:
            # Exception occurs here; I want to be able to reference user_id without
            # having to pass the user object all the way down into this call
            raise Exception("Something went wrong for user ID {0}".format(user_id))


class AddNewHouse(APIView):
    def post(self, request, format=None):
        # I can call request.user here and access the user object
        SomeExternalApiHelper.do_api_call()
```
You can create a custom middleware that catches all exceptions; you can then log the exception along with the user:

```python
def exception_middleware(get_response):
    def middleware(request):
        try:
            response = get_response(request)
        except Exception as e:
            # You now have the exception and request.user and can log how you like
            raise
        return response
    return middleware
```
Finding the difference of sets of frozensets

If I do:

```python
set({frozenset({1,2}), frozenset({1})}) - set(frozenset({1}))
```

I would expect `{frozenset({1, 2})}` as the result, but actually I get:

```python
{frozenset({1}), frozenset({1, 2})}
```

Why?
That is because when you do `set(frozenset({1}))`, the set constructor iterates over the frozenset's elements, so the result is actually `{1}` rather than a set containing the frozenset. If you try:

```python
set({frozenset({1,2}), frozenset({1})}) - {frozenset({1})}
```

you get the result you want.
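A short runnable sketch of the distinction, using the values from the question:

```python
s = {frozenset({1, 2}), frozenset({1})}

# set(frozenset({1})) iterates over the frozenset's elements, producing {1}
assert set(frozenset({1})) == {1}

# Subtracting {1} removes nothing: s contains frozensets, not the int 1
assert s - set(frozenset({1})) == s

# Wrapping the frozenset in a set literal keeps it as a single element
assert s - {frozenset({1})} == {frozenset({1, 2})}
```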
How to apply a complicated function on a column without "apply"?

I have a dataframe df:

```
A    | B   | C   | ...         | D
1000 | 600 | 600 | productdesc | 0
1500 | 400 | 600 | productdesc | 1
1000 | 600 | 300 | productdesc | 0
```

and a function do_stuff():

```python
def do_stuff(A, B, C):
    # ... calculations ...
    return result
```

I would like to apply this function to my dataframe df. Due to the size of my dataframe and the complexity of my function, I try to avoid `.apply()`. Is there any other method to use a function on a dataframe, with the column values of each row as function parameters and the result for each row going into a new column? Something like:

```python
df["scale_factor"] = do_stuff(df[["A", "B", "C"]])
```

The end result should be:

```
A    | B   | C   | ...         | D | scale_factor
1000 | 600 | 600 | productdesc | 0 | *result of do_stuff(1000, 600, 600)*
1500 | 400 | 600 | productdesc | 1 | *result of do_stuff(1500, 400, 600)*
1000 | 600 | 300 | productdesc | 0 | *result of do_stuff(1000, 600, 300)*
```
You just need to ensure you return a list or np.array of the same size as the data frame:

```python
df = pd.DataFrame({f"col{i}": [random.randint(0, 10) for i in range(10)] for i in range(4)})

def dostuff(a):
    return [f"*result of dostuff({x},{a[1][i]},{a[2][i]})*" for i, x in enumerate(a[0])]

df["scale_factor"] = dostuff(np.array(df[["col0", "col1", "col2"]]).T)
print(df.to_string(index=False))
```

output:

```
 col0  col1  col2  col3                  scale_factor
    2     0     3     2    *result of dostuff(2,0,3)*
    9     6    10     2   *result of dostuff(9,6,10)*
    0     7     8     4    *result of dostuff(0,7,8)*
   10     2     9     6   *result of dostuff(10,2,9)*
    8     3     4     2    *result of dostuff(8,3,4)*
    2     2     2     5    *result of dostuff(2,2,2)*
    1     8     1     5    *result of dostuff(1,8,1)*
    0     1     6     6    *result of dostuff(0,1,6)*
    2     0    10     6   *result of dostuff(2,0,10)*
    9    10     8     2  *result of dostuff(9,10,8)*
```
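If do_stuff is built from arithmetic operations, another common pattern (an assumption about the use case, not part of the answer above) is to pass whole columns in directly, since pandas Series support vectorized arithmetic and the function then produces one result per row with no `.apply()` at all:

```python
import pandas as pd

df = pd.DataFrame({"A": [1000, 1500, 1000],
                   "B": [600, 400, 600],
                   "C": [600, 600, 300]})

# Hypothetical do_stuff: any function built from arithmetic works on whole
# Series exactly as it does on scalars, computing one result per row
def do_stuff(A, B, C):
    return (A + B) / C

df["scale_factor"] = do_stuff(df["A"], df["B"], df["C"])
```

This only works when the body of the function is expressible in vectorized operations; branching logic usually needs `numpy.where` or `numpy.select` instead.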
How to use Cyberduck credentials to access WebDAV with Python

I've never used WebDAV before, but I downloaded Cyberduck and used it to connect to an internal work drive and download an entire directory to my desktop. However, for reasons I can't yet identify, I run into random errors where some files don't download. I believe this is related to the network, not Cyberduck. The problem I'm having is that Cyberduck doesn't keep a record of the errors and doesn't seem to have very robust error and exception processing.

I'd like to run the same process through a Python program so I can make a record of errors. However, with the libraries I've tried I can't connect. I'm sure the problem is user error. I've tried easywebdav and webdavclient3, but I can't seem to replicate a connection. For easywebdav I've tried to mimic the info I input for Cyberduck (see image below) like so:

```python
import easywebdav

webdav = easywebdav.connect(
    host='drive.corp.amazon.com',
    username='username',
    port=443,
    protocol='https',
    password='password')
print(webdav.ls())
```

But that doesn't work. And I've tried changing the host argument to `https://username@drive.corp.amazon.com/mnt/...` but no luck there either. Any idea what I'm doing wrong?
It seems Cyberduck is configured to use NTLM authentication, but requests uses Basic authentication by default. For connecting to a WebDAV server with NTLM authentication you can use a third-party library that implements it, for example requests-ntlm:

```python
from webdav3.client import Client
from requests_ntlm import HttpNtlmAuth

options = {
    'webdav_hostname': "https://webdav.server.ru"
}
client = Client(options)
# Configure the authentication method
client.session.auth = HttpNtlmAuth('domain\\username', 'password')
```
django-sorting: "Cannot reorder a query once a slice has been taken"

I use the django-sorting library according to this example (django-sorting example), but I get the error "Cannot reorder a query once a slice has been taken." at the line `{% autosort object_list %}`.
A slice is something like `object_list = MyModel.objects.all()[:5]`. Trying to `autosort` that would throw this error. You'll need to pass an entire (unsliced) queryset to `autosort`.
What is the difference between the TF Model Garden and tf.keras.applications?

With the new TensorFlow 2 we have the Model Garden (in GitHub under /models) as well as the pre-trained models for Keras under tf.keras.applications. What is the difference between the two?
TensorFlow Keras Applications

- Pre-trained models for CNNs. They include the most frequently used CNN architectures such as ResNet, Inception, VGG, etc.
- tf.keras.applications allows you to directly import a CNN architecture (see the docs: https://www.tensorflow.org/api_docs/python/tf/keras/applications?hl=en), customize its loss, and use it as you like.

TensorFlow Model Garden

- Models for other tasks such as NLP, object detection, etc. These models are also maintained by TensorFlow.
- The Model Garden models are not part of the TF core API, so you can't import them directly the way you do with tf.keras.applications.
How can I continue a nested conversation in a separate file

I am not a professional programmer but I'm trying to build a python-telegram-bot for work using ConversationHandlers. Basically, I offer users a menu of options, summarized as:

- Complete Survey
- EXIT

If "Complete Survey" is selected, the bot then asks for the user ID. Depending on the user ID I assign the user one of 30+ different surveys (I'm trying to use child conversations). Over time this list of surveys will grow, and each survey has unique questions and steps.

Given the number of surveys, I thought of managing each survey as a child conversation with its own ConversationHandler, running it from a separate file/module (to keep things dynamic and not have one HUGE file with n+ variables to consider). The thing is, how can I continue the child conversation from a separate file? Is there another way to approach this? I understand that the bot is still running from the main file and checking for updates. I would like to run each survey and, once finished, return to the INITIAL bot menu (parent conversation).

I found this previous discussion, but my knowledge barely goes beyond the python-telegram-bot examples so I'm having a hard time following along: https://github.com/python-telegram-bot/python-telegram-bot/issues/2388

Here is an example, summarized, of what I'm trying to do:

main_file.py

```python
from telegram import ReplyKeyboardMarkup, ReplyKeyboardRemove, InlineKeyboardMarkup, InlineKeyboardButton, Update, KeyboardButton, Bot, InputMediaPhoto
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, ConversationHandler, CallbackQueryHandler, CallbackContext

import surveys  # this file contains each survey as a function with its own ConversationHandler

token = ''

MENU, USER, CHAT_ID, USER_ID, FINISHED = map(chr, range(1, 6))
END = ConversationHandler.END


def start(update: Update, context: CallbackContext) -> int:
    """Initialize the bot"""
    context.user_data[CHAT_ID] = update.message.chat_id
    text = 'Select an option:'
    reply_keyboard = [
        ['Complete Survey'],
        ['EXIT'],
    ]
    context.bot.send_message(
        context.user_data[CHAT_ID],
        text=text,
        reply_markup=ReplyKeyboardMarkup(reply_keyboard, one_time_keyboard=True)
    )
    return MENU


def exit(update: Update, context: CallbackContext) -> None:
    """Exit from the main menu"""
    context.bot.send_message(
        context.user_data[CHAT_ID],
        text='OK bye!',
        reply_markup=ReplyKeyboardRemove()
    )
    return END


def abrupt_exit(update: Update, context: CallbackContext) -> None:
    """Exit the main conversation to enter the survey conversation"""
    return END


def survey_start(update: Update, context: CallbackContext) -> None:
    """Asks for the user_id in order to determine which survey to offer"""
    text = 'Please type in your company ID'
    context.bot.send_message(
        context.user_data[CHAT_ID],
        text=text,
        reply_markup=ReplyKeyboardRemove()
    )
    return USER


def survey_select(update: Update, context: CallbackContext) -> None:
    """Search database to find next survey to complete"""
    user = str(update.message.text)
    chat_id = context.user_data[CHAT_ID]
    context.user_data[USER_ID] = user
    # Search database with user_id and return survey to complete
    survey = 'survey_a'  # this value is obtained from the database
    runSurvey = getattr(surveys, survey)  # I used getattr to load functions in a different module
    runSurvey(Update, CallbackContext, user, chat_id, token)
    return FINISHED


def main() -> None:
    updater = Updater(token, use_context=True)

    # Get the dispatcher to register handlers
    dispatcher = updater.dispatcher

    # Survey conversation
    survey_handler = ConversationHandler(
        entry_points=[
            MessageHandler(Filters.regex('^Complete Survey$'), survey_start),
        ],
        states={
            USER: [
                MessageHandler(Filters.text, survey_select),
            ],
            FINISHED: [
                # I'm guessing here I should add something to exit the survey ConversationHandler
            ],
        },
        fallbacks=[
            CommandHandler('stop', exit),
        ],
    )

    # Initial conversation
    conv_handler = ConversationHandler(
        entry_points=[
            CommandHandler('start', start),
        ],
        states={
            MENU: [
                MessageHandler(Filters.regex('^Complete Survey$'), abrupt_exit),
                MessageHandler(Filters.regex('^EXIT$'), exit),
            ],
        },
        allow_reentry=True,
        fallbacks=[
            CommandHandler('stop', exit),
        ],
    )

    dispatcher.add_handler(conv_handler, group=0)    # I used separate groups because I tried ending
    dispatcher.add_handler(survey_handler, group=1)  # the initial conversation and starting the other

    # Start the Bot
    updater.start_polling()
    updater.idle()


if __name__ == '__main__':
    main()
```

surveys.py

This is where each survey lives, with its own conversation and functions to call. Basically I enter survey_a (previously selected) and am trying to use it as the main():

```python
from telegram import ReplyKeyboardMarkup, ReplyKeyboardRemove, InlineKeyboardMarkup, InlineKeyboardButton, Update, \
    KeyboardButton, Bot, InputMediaPhoto
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, ConversationHandler, CallbackQueryHandler, \
    CallbackContext

NEXT_QUESTION, LAST_QUESTION, CHAT_ID = map(chr, range(1, 4))
END = ConversationHandler.END


def exit(update: Update, context: CallbackContext) -> None:
    """Exit from the main menu"""
    context.bot.send_message(
        context.user_data[CHAT_ID],
        text='OK bye!',
        reply_markup=ReplyKeyboardRemove()
    )
    return END


def first_q(update: Update, context: CallbackContext, chat_id: str) -> None:
    """First survey_A question"""
    context.bot.send_message(
        chat_id,
        text='What is your name?',
        reply_markup=ReplyKeyboardRemove()
    )
    return NEXT_QUESTION


def last_q(update: Update, context: CallbackContext) -> None:
    """Last survey_A question"""
    update.message.reply_text(
        'How old are you?',
        reply_markup=ReplyKeyboardRemove()
    )
    return LAST_QUESTION


def survey_a(update: Update, context: CallbackContext, user: str, chat_id: str, token: str) -> None:
    """This function acts like the main() for the survey A conversation"""
    print(f'{user} will now respond survey_a')
    CHAT_ID = chat_id  # identify the chat_id to use
    updater = Updater(token, use_context=True)  # here I thought of calling the Updater once more
    survey_a_handler = ConversationHandler(
        entry_points=[
            MessageHandler(Filters.text, first_q),
        ],
        states={
            NEXT_QUESTION: [
                MessageHandler(Filters.text, last_q),
            ],
            LAST_QUESTION: [
                MessageHandler(Filters.text, exit),
            ],
        },
        allow_reentry=True,
        fallbacks=[
            CommandHandler('stop', exit),
        ],
    )
    updater.dispatcher.add_handler(survey_a_handler, group=0)  # I only want to add the
                                                               # corresponding survey handler
    first_q(Update, CallbackContext, CHAT_ID)
```

I run the code and it breaks at surveys.py line 23, in first_q:

```
context.bot.send_message(
AttributeError: 'property' object has no attribute 'send_message'
```

I assume my logic with the conversation handler is way off. I appreciate any help.
I have been developing Telegram bots for about a year now, and I think the best approach is to structure your project first. Let me explain it all in detail.

"Foldering"

(folder structure screenshot)

Basically, all the code is in the src folder of the project. Inside the src folder there is another sub-folder called components, which includes all the different sections of your bot you want to work on (i.e. your quiz_1, quiz_2, ...), and a main.py file which includes the 'core' of the bot. In the root directory of the project (which is just your project folder) you can see a bot.py file which serves only as a runner file. So nothing more in there except:

```python
from src.main import main

if __name__ == '__main__':
    main()
```

Tips

So, regarding your questionnaire:

- I would recommend using plain strings as keys for the states instead of mapping them to arbitrary values. Basically, just use "MAIN_MENU", "STATE_ONE", "STATE_TWO" and so on, but be sure to return the same string from the callback function!
- The overall logic of the PTB library is like: Telegram API server -> PTB Updater() class -> Dispatcher() (which is updater.dispatcher in your code) -> Handlers -> callback function -> <- user. The reason the arrows point to the user and back to the callback function is because your bot's logic interacts with the user, so the user's response comes back into your callback function code.
- I recommend not naming callback functions 'first_question' or 'second_question'. Instead, name one get_question() and use that function to retrieve question data from another source so that it can be dynamic. For example, you could have a dictionary of different questions keyed by question number, simple, right? Then you write a function that sends the user a question according to their state, picking the right question by key from the dictionary. This way you can add more questions to your dictionary without changing the code in the function, because it stays dynamic (as long as you write the function correctly).
- In your main.py file have only one main() function holding the Updater() with the given token, because you cannot have more than one Updater() with the same token. One bot can be accessed by one and only one app that is polling at a time. Polling: visit here.

Great news!

To support your bot development and the structured project creation, I have created a repo on GitHub that holds almost the same project structure as I tried to explain to you today. Feel free to check it out, clone it and play around. Just add your token to the .env file and run the bot.

More resources

Check out these projects as well:

https://github.com/ez-developers/traderbot/tree/master/bot
https://github.com/ez-developers/equip.uz/tree/main/bot

As you will see there, main.py contains all the handlers and the src folder contains all the different 'components', which are more like different parts of the bot. If you need any help, I am here and more than happy to answer any of your questions.
Define bubble sizes according to one column and bubble colors according to another column in a scatter plot (matplotlib)

I'm building a simple scatter plot that reads data from an xls file. It's the classic Life expectancy x GDP per capita scatter plot. Here's the code:

```python
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm

# read the third sheet of the spreadsheet
data = pd.read_excel('sample.xls', sheet_name=0)
data.head()

plt.scatter(x=data['LifeExpec'], y=data['GDPperCapita'],
            s=data['PopX1000'],
            c=data['PopX1000'],
            cmap=cm.viridis,
            edgecolors='none',
            alpha=0.7)

for estado in range(len(data['UF'])):
    plt.text(x=data['LifeExpec'][estado],
             y=data['GDPperCapita'][estado],
             s=data['UF'][estado],
             fontsize=14)

plt.colorbar()
plt.show()
```

The .xls file: (image)

The population column from the xls file (PopX1000) defines the bubble sizes, and currently it defines their colors as well. I would like the bubbles to change size according to population (as they do now), but the colors to change according to the Region the State is in. I believe I can't simply change the c property because it expects a float value. Any tips on how to do this?
You could transform the Region to a numeric representation, and use that as a "key" into your colormap. Below are two methods to do that (one is commented out; pick whichever you choose, the result should be the same):

```python
plt.scatter(x=data['LifeExpec'], y=data['GDPperCapita'],
            s=data['PopX1000'],
            c=pd.factorize(data['Region'])[0],
            # Alternatively:
            # c=data['Region'].astype('category').cat.codes,
            cmap=cm.viridis,
            edgecolors='none',
            alpha=0.7)
```
Pandas: how to select the top 2 values after a group by?

I got confused between the sort and nlargest functions. Can someone show me the light please? New and learning Python with all your help.

Current dataset:

```python
df = pd.DataFrame({'State': ['TX', 'TX', 'TX', 'LA', 'LA', 'LA', 'LA', 'MO', 'MO'],
                   'County': ['TX1', 'TX1', 'TX1', 'LA1', 'LA1', 'LA1', 'LA1', 'MO1', 'MO1'],
                   'value': [1, 2, 3, 1, 2, 3, 4, 1, 4]})
```

The desired output dataset would be like this:

```python
df1 = pd.DataFrame({'State': ['TX', 'TX', 'LA', 'LA', 'MO', 'MO'],
                    'County': ['TX1', 'TX1', 'LA1', 'LA1', 'MO1', 'MO1'],
                    'value': [3, 2, 4, 3, 4, 1]})
```
There is more than one way to do this, but I think the "built-in" method to select ordinal data is most likely `nth()` (docs).

```python
import pandas as pd

>>> df
  State County  value
0    TX    TX1      1
1    TX    TX1      2
2    TX    TX1      3
3    LA    LA1      1
4    LA    LA1      2
5    LA    LA1      3
6    LA    LA1      4
7    MO    MO1      1
8    MO    MO1      4

>>> gp = df.sort_values('value', ascending=False).groupby(['State', 'County']).nth([0, 1])
>>> gp
              value
State County
LA    LA1         4
      LA1         3
MO    MO1         4
      MO1         1
TX    TX1         3
      TX1         2
```

To get the output table that you requested, reset its index:

```python
>>> gp.reset_index()
  State County  value
0    LA    LA1      4
1    LA    LA1      3
2    MO    MO1      4
3    MO    MO1      1
4    TX    TX1      3
5    TX    TX1      2
```
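An equivalent idiom (not from the answer above, offered as an alternative) is `groupby(...).head(2)` after sorting, which keeps the first two rows of each group as a plain DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'State': ['TX', 'TX', 'TX', 'LA', 'LA', 'LA', 'LA', 'MO', 'MO'],
                   'County': ['TX1', 'TX1', 'TX1', 'LA1', 'LA1', 'LA1', 'LA1', 'MO1', 'MO1'],
                   'value': [1, 2, 3, 1, 2, 3, 4, 1, 4]})

# Sort descending so the largest values come first, then keep 2 rows per group
top2 = (df.sort_values('value', ascending=False)
          .groupby(['State', 'County'])
          .head(2)
          .reset_index(drop=True))
```

Unlike `nth()` on older pandas, `head()` never moves the group keys into the index, so no `reset_index()` of the keys is needed afterwards.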
Sum columns based on multiple lists with whitespace replacement in pandas

I want to create three sum columns based on the items from each list. The process is to replace the whitespace with an underscore before summing the columns. I was trying to do a loop instead of doing a list comprehension one by one, but I might have missed something in the loop. How can I achieve my expected result?

```python
import pandas as pd

fruits = ['apple pie', 'watermelon pie', 'banana pie']
places = ['Hong Kong', 'Boston', 'New York']

df = pd.DataFrame({
    'apple_pie': [3, 4, 5],
    'watermelon_pie': [3, 4, 5],
    'New_York': [6, 7, 8]
})

xup = ['fruits', 'places', 'persons']
yup = [fruits, places, persons]

for y in yup:
    for x in xup:
        try:
            df[x] = df[[y.replace(" ", "_") for y in yup]].sum(axis=1)
        except:
            continue
```

Expected output:

```
   apple_pie  watermelon_pie  New_York  fruits  places
0          3               3         6       6       6
1          4               4         7       8       7
2          5               5         8      10       8
```
You need to loop through xup and yup in parallel using zip instead of nesting them:

```python
for sum_col, cols in zip(xup, yup):
    cols = [x.replace(' ', '_') for x in cols]
    df[sum_col] = df[df.columns.intersection(cols)].sum(1)
```

```
>>> df
   apple_pie  watermelon_pie  New_York  fruits  places  persons
0          3               3         6       6       6      0.0
1          4               4         7       8       7      0.0
2          5               5         8      10       8      0.0
```
Input string: associate an index with each character in the string for a dictionary

I am asking the user to enter a string. I am ultimately trying to pass the string to a dictionary, where the index of each character is associated with that character.

Example input: CSC120

What I have done so far is enter a string and pass it to a set. The issue is that when I pass it to a set, it comes out as `{'1', '2', 'C', '0', 'S'}`: it is out of order and does not duplicate the 'C'. I was thinking I would be able to correlate the string to an index once it was in the set, but that doesn't work. The plan was to have 2 sets and link them in a dictionary. I am stuck at trying to get the string correctly passed to the set.

```python
d = {}
set1 = set()
string1 = input("Enter a string:").upper()
for i in string1:
    set1.add(i)
print(set1)
```

Ultimately the result I am trying to achieve is:

```python
d = {0: 'C', 1: 'S', 2: 'C', 3: '1', 4: '2', 5: '0'}
```
It can be done with a dictionary display (aka comprehension):

```python
Input = 'CSC120'
d = {i: c for i, c in enumerate(Input)}
print(d)  # -> {0: 'C', 1: 'S', 2: 'C', 3: '1', 4: '2', 5: '0'}
```

However, it can be done with even less code (and likely more quickly) by passing the dict constructor an enumeration of the characters in the string (as helpfully pointed out by @coldspeed in a comment):

```python
d = dict(enumerate(Input))
```

Here's the documentation for the built-in enumerate() function.
Sum a column of a hierarchical index?

The DataFrame without sums over a level of the hierarchical index:

```
                one  two
dex1 dex2 dex3
H    D    A       1    2
          B       4    5
          C       7    8
I    E    A       1    1
          B       2    2
          C       3    3
```

With sums per dex1, column df['one'] should give 12 (1 + 4 + 7) for H and 6 (1 + 2 + 3) for I, and column df['two'] should give 15 (2 + 5 + 8) for H and 6 (1 + 2 + 3) for I.

So far, I have tried:

```python
df.loc[dex1].sum['one']
```
You can use groupby + GroupBy.sum:

```python
df1 = df.groupby(level='dex1').sum()
print(df1)
```

```
      one  two
dex1
H      12   15
I       6    6
```

From pandas version 0.20.0 it is possible to omit the level parameter:

```python
df1 = df.groupby('dex1').sum()
```

Or use DataFrame.sum with the level parameter:

```python
df1 = df.sum(level=0)
# or, equivalently, by level name:
df1 = df.sum(level='dex1')
```

Both produce the same output as above.
How can I mock the waiting library in Python?

I'm using the waiting library in some of my code to wait for a condition to become true. waiting.wait returns True when the predicate is true; otherwise it throws an exception or waits forever, depending on timeout values, etc. I'd like to patch this in my tests to always return True without entering the wait cycle. Here's my attempt:

```python
#!/usr/bin/env python3
from unittest.mock import Mock

import waiting
from waiting import wait


def test_waiting():
    waiting.wait.return_value = True
    # Below *should* wait forever because it can never be true.
    # Want to make it return true instead.
    return wait(lambda: False)


if __name__ == "__main__":
    assert test_waiting()
```

What I find, though, is that it actually calls the library's code instead of short-circuiting the return. How can I force this method to simply return a value (or raise a side effect) without actually calling the code?
Your `waiting.wait.return_value = True` won't work, because waiting.wait is not a mock object. You only added an arbitrary attribute to the existing wait function, and that function won't use the attribute.

To mock out the wait function, just mock it directly:

```python
from unittest import mock

with mock.patch('__main__.wait'):
    wait.return_value = True
```

There is no need to mock the internals of the waiting library; all you want is for any use of the wait() callable to return immediately. Note that I picked the `__main__` module to patch the name wait() in; see "Where to patch" in the unittest.mock documentation. Your actual location may differ, and if you used `import waiting` everywhere, then you'd have to use `mock.patch('waiting.wait')`. Otherwise, you generally apply the patch in the same module where you used `from waiting import wait`.
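The same pattern can be demonstrated with any blocking callable. Here is a minimal, runnable sketch using time.sleep as a stand-in for waiting.wait (an assumption for illustration, since the waiting package may not be installed), showing that patching the name where it is looked up makes the call return immediately:

```python
import time
from unittest import mock


def slow_op():
    # Stands in for wait(lambda: False): this would block for a very long time
    time.sleep(1000)
    return True


with mock.patch('time.sleep') as fake_sleep:
    result = slow_op()                        # returns immediately: sleep is a MagicMock
    fake_sleep.assert_called_once_with(1000)  # the call was recorded, not executed

assert result is True
```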
pandas: set a column value only for rows in common with another table

Input:

table 1

```
+---+---+---+
| A | B | C |
+---+---+---+
| a | b | 0 |
| x | y | 0 |
| w | q | 0 |
+---+---+---+
```

table 2

```
+---+---+
| A | B |
+---+---+
| a | b |
| w | q |
+---+---+
```

Output:

table 1

```
+---+---+---+
| A | B | C |
+---+---+---+
| a | b | 1 | <-
| x | y | 0 |
| w | q | 1 | <-
+---+---+---+
```

I have two tables. I want to set column C in table 1 to 1 for all the rows in table 1 which have the same values as the rows in table 2.
Use:

```python
In [303]: df1['C'] = df1.merge(df2, how='left', indicator='_')['_'].eq('both').astype(int)

In [304]: df1
Out[304]:
   A  B  C
0  a  b  1
1  x  y  0
2  w  q  1
```
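A fully runnable version of the same approach, with the data taken from the question's tables:

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['a', 'x', 'w'], 'B': ['b', 'y', 'q'], 'C': [0, 0, 0]})
df2 = pd.DataFrame({'A': ['a', 'w'], 'B': ['b', 'q']})

# The left merge aligns on the shared columns A and B; indicator='_' records
# whether each df1 row was matched in df2 ('both') or not ('left_only'),
# and .eq('both').astype(int) turns that into the desired 1/0 flag
df1['C'] = df1.merge(df2, how='left', indicator='_')['_'].eq('both').astype(int)
```

Note that the default merge key is the intersection of the two frames' columns (A and B here); if table 2 had extra columns, you would pass `on=['A', 'B']` explicitly.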
pydbg 64-bit enumerate_processes() returning an empty list

I'm using the pydbg binaries downloaded here, http://www.lfd.uci.edu/~gohlke/pythonlibs/#pydbg, as recommended in previous answers. I can get the 32-bit version to work with a 32-bit Python interpreter, but I can't get the 64-bit version to work with 64-bit Python: enumerate_processes() always returns an empty list. Am I doing something wrong?

Test code:

```python
import pydbg

if __name__ == "__main__":
    print(pydbg.pydbg().enumerate_processes())
```

32-bit working:

```
>C:\Python27-32\python-32bit.exe
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
...
>C:\Python27-32\python-32bit.exe pydbg_test.py
[(0L, '[System Process]'), (4L, 'System'), <redacted for brevity>]
```

64-bit gives an empty list:

```
>python
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:24:40) [MSC v.1500 64 bit (AMD64)] on win32
...
>python pydbg_test.py
[]
```
Pydbg defines the PROCESSENTRY32 structure incorrectly. Better to use a maintained package such as psutil, or use ctypes directly, e.g.:

```python
from ctypes import windll, Structure, POINTER, c_char, sizeof
from ctypes.wintypes import BOOL, HANDLE, DWORD, LONG, ULONG


class PROCESSENTRY32(Structure):
    _fields_ = [
        ('dwSize', DWORD),
        ('cntUsage', DWORD),
        ('th32ProcessID', DWORD),
        ('th32DefaultHeapID', POINTER(ULONG)),
        ('th32ModuleID', DWORD),
        ('cntThreads', DWORD),
        ('th32ParentProcessID', DWORD),
        ('pcPriClassBase', LONG),
        ('dwFlags', DWORD),
        ('szExeFile', c_char * 260),
    ]


windll.kernel32.CreateToolhelp32Snapshot.argtypes = [DWORD, DWORD]
windll.kernel32.CreateToolhelp32Snapshot.restype = HANDLE
windll.kernel32.Process32First.argtypes = [HANDLE, POINTER(PROCESSENTRY32)]
windll.kernel32.Process32First.restype = BOOL
windll.kernel32.Process32Next.argtypes = [HANDLE, POINTER(PROCESSENTRY32)]
windll.kernel32.Process32Next.restype = BOOL
windll.kernel32.CloseHandle.argtypes = [HANDLE]
windll.kernel32.CloseHandle.restype = BOOL

pe = PROCESSENTRY32()
pe.dwSize = sizeof(PROCESSENTRY32)
snapshot = windll.kernel32.CreateToolhelp32Snapshot(2, 0)  # 2 = TH32CS_SNAPPROCESS
found_proc = windll.kernel32.Process32First(snapshot, pe)
while found_proc:
    print(pe.th32ProcessID, pe.szExeFile)
    found_proc = windll.kernel32.Process32Next(snapshot, pe)
windll.kernel32.CloseHandle(snapshot)
```
Error in regression analysis with 3 classes

I am trying to apply one-vs-all logistic regression. I am using the one-vs-all method (class 1 vs classes 2+3, class 2 vs 1+3, class 3 vs 1+2) to compute the three weight vectors w1, w2, w3:

```python
for n1 in range(0, 50000):
    s1 = np.dot(dt, w1)
    p1 = (1 / (1 + np.exp(-s1)))
    gr1 = (np.dot(dt.T, p1 - c1))
    gr1 /= N
    w1 -= 0.01 * gr1
    if n1 % 10000 == 0 and n1 != 0:
        loss1 = np.sum(np.log(1 + np.exp(s1)) - p1 * s1)
        print('loss1', loss1)
```

dt is my feature matrix; w1, w2, w3 are initialized as `w1 = np.zeros((5, 1))`, and the targets are:

```python
c1 = np.vstack((np.ones(40), np.zeros(40), np.zeros(40)))
c2 = np.vstack((np.zeros(40), np.ones(40), np.zeros(40)))
c3 = np.vstack((np.zeros(40), np.zeros(40), np.ones(40)))
```
The iris data set is not perfectly linearly separable across all the classes, so when you use a linear classifier like logistic regression, the loss on the part that is not linearly separable tends to be unpredictable. You can use a very small learning rate and an early-stopping (patience) method to avoid overfitting. Normalizing your data between 0 and 1 will help too.
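A minimal runnable sketch of the one-vs-all scheme described in the question (the synthetic data, shapes, and step count are assumptions for illustration; the gradient step mirrors the question's update rule, with one weight vector trained per class):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-class data: 120 samples, 4 features plus a bias column,
# giving 5 weights per classifier as in the question
X = rng.normal(size=(120, 4))
X = np.hstack([X, np.ones((120, 1))])
y = np.repeat([0, 1, 2], 40)


def train_one_vs_rest(X, y, cls, lr=0.01, steps=2000):
    t = (y == cls).astype(float).reshape(-1, 1)  # 1 for this class, 0 for the rest
    w = np.zeros((X.shape[1], 1))
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))           # sigmoid of the scores
        w -= lr * (X.T @ (p - t)) / len(X)       # same gradient step as the question
    return w


# One column of weights per class, then pick the most confident classifier per row
W = np.hstack([train_one_vs_rest(X, y, c) for c in range(3)])  # shape (5, 3)
pred = np.argmax(X @ W, axis=1)
```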
Django: bulk-create objects from a QuerySet

If I have a QuerySet created by the following command:

```python
data = ModelA.objects.values('date').annotate(total=Sum('amount'), average=Avg('amount'))
# <QuerySet [{'date': datetime.datetime(2016, 7, 15, 0, 0, tzinfo=<UTC>), 'total': 19982.0, 'average': 333.03333333333336},
#            {'date': datetime.datetime(2016, 7, 15, 0, 30, tzinfo=<UTC>), 'total': 18389.0, 'average': 306.48333333333335}]>
```

is it possible to use data to create ModelB objects from each dict inside the QuerySet? I realize I could iterate over the QuerySet, unpack it, and do it like so:

```python
for q in data:
    ModelB.objects.create(date=q['date'], total=q['total'], average=q['average'])
```

But is there a more elegant way? It seems redundant to iterate and create when they're almost in the bulk_create format.
This is more efficient than your loop because it uses bulk_create, which issues a single bulk SQL insert instead of one insert per object; it is also much more elegant:

ModelB.objects.bulk_create([ModelB(**q) for q in data])

As to how this works: the double asterisk unpacks a dictionary (or dictionary-like object) into keyword arguments to a Python function.

Official Django documentation: https://docs.djangoproject.com/en/3.2/ref/models/querysets/#django.db.models.query.QuerySet.bulk_create
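To make the double-asterisk unpacking concrete, here is a small standalone sketch (the function and dict are hypothetical stand-ins, not Django code):

```python
# ** unpacks a dict's key/value pairs into keyword arguments
def make_row(date=None, total=0.0, average=0.0):
    return (date, total, average)

q = {'date': '2016-07-15', 'total': 19982.0, 'average': 333.0}
print(make_row(**q))  # same as make_row(date='2016-07-15', total=19982.0, average=333.0)
```

This is exactly what ModelB(**q) does with each annotated dict from the QuerySet.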
What technology is used to serve HTTP requests compatible with Python

I'm building an application on AWS, and this world is new to me. Here is the problem: I have experience with Apache/PHP, where Apache serves the HTTP requests and PHP is the backend language. The backend language I am using in this new project is Python, so my question is: what technology serves the requests? Can I install Apache alongside Python, or what would be the ideal pairing? I know this can vary depending on experience and the project's needs, but honestly I am lost about what to install and what not to. Thanks for your guidance.
I would recommend looking at uWSGI:
https://uwsgi-docs.readthedocs.io/en/latest/WebServers.html

mod_wsgi for Apache:
https://modwsgi.readthedocs.io/en/develop/

For simple pages you still have CGI:
https://docs.python.org/3.6/library/cgi.html

There is also mod_python for Apache, but it isn't the modern way to embed a Python interpreter in the web server (don't use it; it is mentioned here only for information purposes!):
http://modpython.org/
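To make the "technology that serves the requests" concrete: servers like uWSGI and mod_wsgi both host a WSGI callable. A minimal sketch (hypothetical app, exercised by hand the way a server would call it per request):

```python
# Minimal WSGI application: the callable a server such as uWSGI or mod_wsgi would host.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from Python\n']

# Exercise it directly, mimicking what a WSGI server does for one request:
captured = {}
def start_response(status, headers):
    captured['status'] = status

body = application({'REQUEST_METHOD': 'GET'}, start_response)
print(captured['status'], body)
```

In deployment you would hand the `application` callable to the server's configuration instead of calling it yourself.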
AttributeError: Tensor.op is meaningless when eager execution is enabled

I am trying to implement ResNet-50 from scratch. After assembling all the layers, I call tf.keras.Model. However, it raises:

AttributeError: Tensor.op is meaningless when eager execution is enabled.

For testing, I am passing a 4-D tensor as input. conv_diff_size and conv_same_size are two custom blocks containing Conv2D and batch-normalization layers. I am using TensorFlow 2.0 on Google Colab.

def ResNet50(inputs, classes):
    X = tf.keras.layers.Conv2D(64, kernel_size=(7, 7), strides=2, padding='valid',
                               data_format='channels_last', input_shape=inputs.shape)(inputs)
    X = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.9)(X)
    X = tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)(X)

    X = conv_diff_size(X, [64, 64, 256])
    X = conv_same_size(X, [64, 64, 256])
    X = conv_same_size(X, [64, 64, 256])

    X = conv_diff_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])

    X = conv_diff_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])

    X = conv_diff_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])

    X = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), name='avg_pool')(X)
    X = tf.keras.layers.Flatten()(X)
    X = tf.keras.layers.Dense(classes, activation='relu')(X)
    model = tf.keras.Model(inputs=X, outputs=X)
    return model
The problem in your code is that you pass X as both the input and the output. Try this:

import tensorflow as tf

def ResNet50(input_shape, classes):
    inputs = tf.keras.Input(shape=input_shape)  # e.g. input_shape = (224, 224, 3)
    X = tf.keras.layers.Conv2D(64, kernel_size=(7, 7), strides=2, padding='valid',
                               data_format='channels_last', input_shape=inputs.shape)(inputs)
    X = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.9)(X)
    X = tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)(X)

    X = conv_diff_size(X, [64, 64, 256])
    X = conv_same_size(X, [64, 64, 256])
    X = conv_same_size(X, [64, 64, 256])

    X = conv_diff_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])
    X = conv_same_size(X, [128, 128, 512])

    X = conv_diff_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])
    X = conv_same_size(X, [256, 256, 1024])

    X = conv_diff_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])
    X = conv_same_size(X, [512, 512, 2048])

    X = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), name='avg_pool')(X)
    X = tf.keras.layers.Flatten()(X)
    out = tf.keras.layers.Dense(classes, activation='relu')(X)
    model = tf.keras.Model(inputs=inputs, outputs=out)
    return model

I have added an Input tensor layer assigned to the variable inputs, and assigned the final layer to the variable out.
get_attribute problem with selenium python

I've been using Selenium with Python for about 2 months. I want to get the 'rel' attribute value. Every other value works, but 'rel' returns None. The element:

<span class="isOdd" data-ratio="-1" data-outcome="1.85" data-percentage="23" data-market-id="1677954" data-outcomeno="1" data-id="undefined" rel="[{'fixedoddsweb':'1.62','ratiostatus':0},{'fixedoddsweb':'1.90','ratiostatus':-1},{'fixedoddsweb':'1.85','ratiostatus':-1}]" style="" xpath="1">1.85

and if I try:

link = browser.find_element_by_xpath("//*[@id='eventContentContainer']/div[2]/div[1]/span[4]/ul/li[1]/span")
print(str(link.get_attribute("class")))

it returns isOdd, which is correct. If I take data-id it returns undefined, which is again correct. But if I try to take the rel attribute, it returns None.

I searched the internet, and the answers say the same things again and again about get_attribute. I am not good with HTML; is there anything special about the rel attribute?

Edit 1: I also used execute_script() for another element, and the output was:

{'class': 'isOdd', 'data-id': 'undefined', 'data-market-id': '1679546', 'data-outcome': '3.10', 'data-outcomeno': '1', 'data-percentage': '3', 'data-ratio': '-1'}

Every attribute was printed except 'rel'.

Edit 2: I found the REAL problem: if you wait on the element for about 1-2 seconds, some tags pop up on the element, and only then does it have the rel attribute. Do you know how to wait on the element for about 1 second and then click? I guess it is a dynamic element.
This might be what you are after:

link = WebDriverWait(browser, 10).until(EC.presence_of_element_located(
    (By.XPATH, "//*[@id='eventContentContainer']/div[4]/div[4]/span[4]/ul/li[4]/span")))
print(link.get_attribute('rel'))

Add these imports before trying the above:

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
How to set a background image in tkinter using grid only

I'm trying to set a background image in my tkinter window, but I don't know how to resize it to fit the window's dimensions. All the tutorials/answers I've found online use pack (to expand and fill), but I can't use pack because I have a bunch of other buttons/labels that all use grid (this is a minimal workable example; my actual script is much bigger with more buttons and a larger size). Is there any way to do this using grid? This is my current setup:

import tkinter as tk
from PIL import ImageTk, Image

root = tk.Tk()
background_image = ImageTk.PhotoImage(Image.open("pretty.jpg"))
l = tk.Label(image=background_image)
l.grid()

tk.Label(root, text="Some File").grid(row=0)
e1 = tk.Entry(root)
e1.grid(row=0, column=1)

tk.mainloop()
You can use place(x=0, y=0, relwidth=1, relheight=1) to lay out the background image label. In order to fit the image to the window, you need to resize the image whenever the label is resized. Below is an example based on your code:

import tkinter as tk
from PIL import Image, ImageTk

def on_resize(event):
    # resize the background image to the size of the label
    image = bgimg.resize((event.width, event.height), Image.ANTIALIAS)
    # update the image of the label
    l.image = ImageTk.PhotoImage(image)
    l.config(image=l.image)

root = tk.Tk()
root.geometry('800x600')

bgimg = Image.open('pretty.jpg')  # load the background image
l = tk.Label(root)
l.place(x=0, y=0, relwidth=1, relheight=1)  # make label l always fill the parent window
l.bind('<Configure>', on_resize)  # on_resize runs whenever label l is resized

tk.Label(root, text='Some File').grid(row=0)
e1 = tk.Entry(root)
e1.grid(row=0, column=1)

root.mainloop()
Matrix Elements Ratio Control

I am using the code

import numpy as np
P = np.random.choice([0, 1], (10000, 10, 10, 10))

to generate 10,000 3D binary matrices. But I need to control the ratio of ones to zeros in each of the matrices: for any given matrix, I want 70% of its elements to be 1 and the rest to be 0. Is there a way to do this? A probabilistic approach would work as well; for example, if, for any given matrix, the probability of each element being 1 were 70%, that would work too.
You should specify the probability parameter p in numpy.random.choice:

import numpy as np

size = (10000, 10, 10, 10)
prob_0 = 0.3            # 30% zeros
prob_1 = 1 - prob_0     # 70% ones
P = np.random.choice([0, 1], size=size, p=[prob_0, prob_1])

However, this controls the ratio only in the overall 4D array (in expectation), not exactly in each sub-array.
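If the 70/30 split must be exact in every individual matrix (not just in expectation), one option, sketched here using numpy's newer Generator API rather than anything in the answer above, is to fill exactly 70% of each matrix with ones and then shuffle each matrix independently:

```python
import numpy as np

n_matrices, shape = 10000, (10, 10, 10)
n_cells = int(np.prod(shape))        # 1000 cells per matrix
n_ones = int(0.7 * n_cells)          # exactly 700 ones per matrix

P = np.zeros((n_matrices, n_cells), dtype=int)
P[:, :n_ones] = 1                    # 700 ones followed by 300 zeros in each row
rng = np.random.default_rng()
P = rng.permuted(P, axis=1)          # shuffle each row independently
P = P.reshape((n_matrices,) + shape)

print(P.shape, P[0].sum())           # every matrix sums to exactly 700
```

Generator.permuted (NumPy >= 1.20) shuffles along the given axis independently per slice, which is what makes each matrix's ratio exact.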
How to properly eliminate elements in dictionary until one string remains

I really need help on this:

def get_winner(dict_winner):
    new_dict = {}
    for winner in dict_winner:
        first_letter = winner[0]
        value = dict_winner[winner]
        if first_letter in new_dict:
            new_dict[first_letter] += value
        else:
            new_dict[first_letter] = value
    return new_dict

get_winner({
    ('C', 'A', 'B', 'D'): 3,
    ('D', 'B', 'C', 'A'): 2,
    ('C', 'D', 'A', 'B'): 1,
    ('A', 'D', 'B', 'C'): 2,
    ('A', 'D', 'C', 'B'): 4,
    ('A', 'C', 'D', 'B'): 2})

# Outputs {'A': 8, 'D': 2, 'C': 4}

Now I want the result to be a tuple of (str, NoneType). Also, it eliminates only the letter with the smallest first-place value, and only once. I want the process to repeat until one winner remains. So in this case all the B's should be eliminated from the dict itself, not just from the output. For example:

first time = [8, 0, 4, 2]

second time = {
    ('C', 'A', 'D'): 3,
    ('D', 'C', 'A'): 2,
    ('C', 'D', 'A'): 1,
    ('A', 'D', 'C'): 2,
    ('A', 'D', 'C'): 4,
    ('A', 'C', 'D'): 2}
# Outputs C = 4, D = 2, A = 8

third time = {
    ('C', 'A'): 3,
    ('C', 'A'): 2,
    ('C', 'A'): 1,
    ('A', 'C'): 2,
    ('A', 'C'): 4,
    ('A', 'C'): 2}
# Outputs C = 6, A = 8

8/14 > 50%; I know that should have been the case from the beginning, because A already had the majority. But I am assuming A has 40% of the vote, which is when elimination should begin. Could you point out where I went wrong in coding this? In the example A should be the winner, so the output should be ('A', None).
My solution, in one step (the original snippet had invalid syntax in the fromkeys call and left the tallies as None, which breaks +=; both fixed here):

def get_winner(candidates):
    # start every first-choice candidate's tally at 0
    winners = dict.fromkeys((ranking[0] for ranking in candidates), 0)
    for ranking, votes in candidates.items():
        winners[ranking[0]] += votes
    best = max(winners.values())
    return [winner for winner, votes in winners.items() if votes == best]

It is not fancy, but it is simple :-)
python 2.7 random sampling causes memory error

random.sample(range(2**31 - 1), random.randrange(1, 100))

This results in:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError

I'm running Python 2.7.3 on Ubuntu 12.04 64-bit with 6GB RAM. I thought 2**31 - 1 was the normal upper limit for integers on a 32-bit computer. I'm still 1 below that and I'm getting a memory error?
You are probably referring to the limit on integers in languages like C, where an int is usually 4 bytes. In Python 2.7, integers have no limit and are automatically promoted to a larger type when needed, holding arbitrary precision. Your problem is not directly related to this: you are trying to create a list of every number from 0 to 2**31 - 1, which requires a very large amount of memory, hence the MemoryError.

You do not need to create this list; just use xrange, which evaluates lazily:

>>> random.sample(xrange(2**31 - 1), random.randrange(1, 100))
[248934843, 2102394624, 1637327281, 352636013, 2045080621, 1235828868, 951394286, 69276138, 2116665801, 312166638, 563309004, 546628308, 1890125579, 699765051, 1567057622, 656971854, 827087462, 1306546407, 1071615348, 601863879, 410052433, 1932949412, 1562548682, 1772850083, 1960078129, 742789541, 175226686, 1513744635, 512485864, 2074307325, 798261810, 1957338925, 1414849089, 1375678508, 1387543446, 1394012809, 951027358, 1169817473, 983989994, 1340435411, 823577680, 1078179464, 2051607389, 180372351, 552409811, 1830127563, 1007051104, 2112306498, 1936077584, 2133304389, 1853748484, 437846528, 636724713, 802267101, 1398481637, 15781631, 2139969958, 126055150, 1997111252, 1666405929, 1368202177, 1957578568, 997110500, 1195347356, 1595947705, 2138216956, 768776647, 128275761, 781344096, 941120866, 1195352770, 1244713418, 930603490, 2147048666, 2029878314, 2030267692, 1471665936, 1763093479, 218699995, 1140089127, 583812185, 1405032224, 851203055, 1454295991, 1105558477, 2118065369, 1864334655, 1792789380, 386976269, 1213322292, 663210178, 1402712466, 1564065247]

xrange is not a generator; it's a sequence object that evaluates lazily. It supports almost all of the same methods as range but doesn't need to store its data in a list. (In Python 3 it's simply called range.) Therefore it has a length, and random.sample can use that instantly to pick k items from it.

Note: with regard to 2**31 - 1 (sys.maxint on your specific system) being the largest number, you were correct in a sense, since xrange has a limit on its upper bound (this doesn't mean a Python int has a maximum size, though); also note that Python 3 fixes this with its range:

>>> random.sample(xrange(2**31), random.randrange(1, 100))
Traceback (most recent call last):
  File "<pyshell#16>", line 1, in <module>
    random.sample(xrange(2**31), random.randrange(1, 100))
OverflowError: Python int too large to convert to C long
Has anyone compared Qt Commercial Charts with matplotlib?

What is better to use for interactive data plotting: matplotlib (http://www.matplotlib.org) or Qt Commercial Charts (http://qt.digia.com/Product/Qt-Add-Ons/Charts/)? Several interactivity features are missing from matplotlib, like hovering to display information about specific points in a scatter plot. My overall program is in Python and PyQt. I also had a look at pyqtgraph.org, but I don't think it is mature enough.
matplotlib is interactive, see this demo; you can even embed it in Qt (which I use all the time, and it works very well). You will find using PyQt for data plotting difficult for anything beyond simple plots. As far as I am aware, that "Qt Commercial Charts" thing you linked to isn't available in PyQt or PySide, but you may want to take a look at PyQtGraph. I would suggest that you stick with matplotlib first, and if it doesn't meet your needs, then look elsewhere.
Referencing a list element by its name

I'm trying to scan through a list of dictionary references to get the number of keys in each dictionary. How do I reference an element's name as opposed to the content of the dictionary? Each element in the audit_dicts list is a reference to an existing dictionary.

audit_dicts = [osdata, weblogic, tomcat]

for i in audit_dicts:
    print "Length of the %s dictionary is %d lines." % (i.name(), len(i))

I understand it has to do with the content's type being a dictionary, but is there no way to print the name of the element in the list? I'm essentially using the list to store all of these dictionaries so that I can perform multiple actions on them in a single loop. Also, is there a way to declare these dictionaries within the same loop? What is the pythonic way of doing so? I currently have about 20 different dictionaries of data, but I've only been able to declare each one individually before building the dictionary from web data.

for i in audit_dicts:
    i = {}
Lists don't contain names, they contain references to other objects. If you want to be able to use more than just an index to refer to the elements then you should use another data structure such as a dict.
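One way to keep the names is a dict of dicts keyed by name; this also lets you declare all of them in a single loop. A sketch using hypothetical stand-ins for the question's osdata/weblogic/tomcat data:

```python
names = ['osdata', 'weblogic', 'tomcat']

# declare all the (initially empty) dictionaries in one loop
audit_dicts = {}
for name in names:
    audit_dicts[name] = {}

# fill them later from your data source (hypothetical values here)
audit_dicts['osdata']['kernel'] = '3.2'
audit_dicts['tomcat']['port'] = 8080

for name, d in audit_dicts.items():
    print("Length of the %s dictionary is %d lines." % (name, len(d)))
```

Iterating over .items() gives you both the name and the dictionary, which is what the original list could not provide.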
how to print the matched words in python

I have a text file and 2 user-defined positive and negative word files. I'm comparing the words in the 2 files against the text file and returning either positive or negative, but I also need to print the keywords in the text that caused the positive or negative label. Example of the output I'm looking for:

file_name     IBM                    Keywords   Label
audio1.wav    The customer is good   good       Positive
audio2.wav    the service is bad     bad        Negative

Please let me know how to go about it. Here's the code so far:

pos = readwords('C:\\Users\\anagha\\Desktop\\SynehackData\\positive.txt')
neg = readwords('C:\\Users\\anagha\\Desktop\\SynehackData\\Negative.txt')
pos = [w.lower() for w in pos]
neg = [w.lower() for w in neg]

def assign_comments_labels(x):
    try:
        if any(w in x for w in pos):
            return 'positive'
        elif any(w in x for w in neg):
            return 'negative'
        else:
            return 'neutral'
    except:
        return 'neutral'

import pandas as pd
df = pd.read_csv("C:\\Users\\anagha\\Desktop\\SynehackData\\noise_free_audio\\outputfile.csv", encoding="utf-8")
df['IBM'] = df['IBM'].str.lower()
df['file_name'] = df['file_name'].str.lower()
df['labels'] = df['IBM'].apply(lambda x: assign_comments_labels(x))
df[['file_name', 'IBM', 'labels']]
A good start would be to use the right indentation in the assign_comments_labels(x) function: indent the whole body.

Edited answer: OK, I get your question now. This code should work for you, based on the logic you used above:

def get_keyword(x):
    x_ = x.split(" ")
    try:
        for word in x_:
            if (word in neg) or (word in pos):
                return word
    except:
        return -1
    return -1

Then use a lambda as you did for labels:

df['keywords'] = df['IBM'].apply(lambda x: get_keyword(x))

Edit 2: To return multiple keywords per sentence you can modify the code to return a list:

def get_keyword(x):
    x_ = x.split(" ")
    keywords = []
    try:
        for word in x_:
            if (word in neg) or (word in pos):
                keywords.append(word)
    except:
        return -1
    return keywords

An even better solution would be to create two functions:

get_pos_keywords(x)
get_neg_keywords(x)

And instead of one keywords column in your DataFrame you would have two, one for positive and one for negative. Texts usually contain both positive and negative keywords, and the weight of each word determines whether the sentence ends up classified as positive or negative. If this is your case, I highly recommend the second solution.

Note: for the second solution, change the if statement to:

# for the positive-keywords function
if word in pos:
    keywords.append(word)

# for the negative-keywords function
if word in neg:
    keywords.append(word)

Hope that helps.
Pandas source code import multiple modules

I was looking at the pandas source code here, and I found the following statement a little weird:

from pandas._libs import NaT, groupby as libgroupby, iNaT, lib, reduction

It imports NaT and groupby along with several other names (libgroupby, iNaT, lib, reduction). I went to the pandas._libs package here, but I didn't find any module named NaT. There is indeed a groupby.pyx, which I assume is the groupby module. Can the number of actual modules be less than the number of imported names? How does that work? From my past understanding, we can do `import a as b`, but we cannot do `import a as b, c`.
from pandas._libs, it actually imports 5 names (methods/classes/modules):

NaT
groupby as libgroupby (so in your script you now use libgroupby)
iNaT
lib
reduction

Now, NaT and iNaT indeed don't exist as modules in the _libs folder, but the import won't raise an ImportError, because they are imported from somewhere else in the __init__.py of _libs. A package's __init__.py executes implicitly whenever something is imported from that package or its subpackages. So the __init__.py inside _libs executes, and there NaT, iNaT, etc. are imported from the subpackage .tslibs, which makes them available for import from the _libs package too.

If you look for NaT or iNaT in the .tslibs folder you won't find them either, but the __init__.py of .tslibs imports NaT and iNaT from .nattype, and inside that file you will find the definitions of NaT and iNaT.

You can take a look at the docs for a better explanation. You can also write the import like this, which might make it easier to see what's going on:

from pandas._libs import NaT, iNaT, lib, reduction, groupby as libgroupby

This import does exactly the same as the import statement in your question.
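The re-export mechanism can be demonstrated end-to-end with a tiny throwaway package (hypothetical names mimicking, not copying, pandas' layout): a name defined only in pkg._libs.tslibs.nattype becomes importable from pkg._libs purely through __init__.py imports:

```python
# Build a throwaway package on disk to mimic the pandas re-export chain.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkg', '_libs', 'tslibs'))

def write(relpath, src):
    with open(os.path.join(root, *relpath.split('/')), 'w') as f:
        f.write(src)

write('pkg/__init__.py', '')
write('pkg/_libs/tslibs/nattype.py', 'NaT = "not-a-time sentinel"\n')
write('pkg/_libs/tslibs/__init__.py', 'from .nattype import NaT\n')
write('pkg/_libs/__init__.py', 'from .tslibs import NaT\n')  # the re-export

sys.path.insert(0, root)
from pkg._libs import NaT  # works, although _libs never defines NaT itself
print(NaT)
```

The last import succeeds only because each __init__.py in the chain re-exports the name from the level below.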
How to replace specific integer number with a character in python using regex?

I tried to replace a specific number like 22 in my string with a string like "Hi there", but it also replaces the integer part of float numbers like 22.14 in my string (giving "Hi there.14").

import re
my_string = "22 and 22.14"
re.sub(r'\b22\b', "Hi there", my_string)
You can use this regex, which will not match decimal values; the positive lookahead ensures it only matches if 22 is followed by a space or the end of input:

\b22(?= |$)

Demo

If you want 22 to also be matched at the end of a line ending with a period, then you can use this regex:

\b22(?!\.\d)

Demo, where the line ends with 22.
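A runnable version of the second pattern, assuming the question's sample string:

```python
import re

my_string = "22 and 22.14"
# (?!\.\d) is a negative lookahead: skip 22 when a decimal part follows
result = re.sub(r'\b22\b(?!\.\d)', "Hi there", my_string)
print(result)  # Hi there and 22.14
```

The first 22 is replaced because a space follows it; the 22 in 22.14 is left alone because the lookahead sees ".1" after it.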
What's the difference between dist-packages and site-packages? I'm a bit miffed by the python package installation process. Specifically, what's the difference between packages installed in the dist-packages directory and the site-packages directory?
dist-packages is a Debian-specific convention that is also present in its derivatives, like Ubuntu. Modules are installed to dist-packages when they come from the Debian package manager, into this location:

/usr/lib/python2.7/dist-packages

Since easy_install and pip are installed from the package manager, they also use dist-packages, but they put packages here:

/usr/local/lib/python2.7/dist-packages

From the Debian Python Wiki:

dist-packages instead of site-packages. Third party Python software installed from Debian packages goes into dist-packages, not site-packages. This is to reduce conflict between the system Python, and any from-source Python build you might install manually.

This means that if you manually compile and install a Python interpreter from source, it uses the site-packages directory. This keeps the two installations separate, which matters because Debian and Ubuntu rely on the system version of Python for many system utilities.
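You can ask the interpreter directly which convention it uses; a small sketch (the exact paths printed differ per system, those in the comments are just examples):

```python
# Inspect where this interpreter looks for third-party packages.
import site
import sys

print(sys.prefix)              # installation prefix, e.g. /usr for Debian system Python
print(site.getsitepackages())  # e.g. ['/usr/lib/python3/dist-packages', ...] on Debian/Ubuntu
```

On a from-source build or on most other distributions, the same call will show site-packages directories instead.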
Python & OpenERP development environment setup howto?

I downloaded the OpenERP server and web client, having decided against the thicker GTK client. I added the two as projects in Eclipse (PyDev, running on Ubuntu 11.10) and started them up. I went through the web client setup and thought the installation was done. At some point, though, I executed a script that tried to copy all the bits and pieces out of my home folder into the file system, some going to /etc or /usr/local. I didn't want this, so I stopped the process: otherwise I'd have to run Eclipse as root, and I wouldn't be able to trace through the source, because it would all be scattered through the file system.

Problems came when I tried to install a new module. I couldn't get it into the module list, and even zipping it up and importing it through the client failed without errors. While trying to get my module to show up, I found this on the forums: "You'll have to run setup.py install after putting the module in addons if you didn't specify an addons path when running openerp-server." So it looked like I had to run:

python setup.py build
sudo python setup.py install

Firstly, I'm confused about why you need to build; I thought only the C libraries needed building, and I'd done that when installing dependencies. Secondly, setup.py install is obviously vital if you need to run it to get a new module recognised; but how can I trace through the source if it's running from all over the file system? Everything has now been copied out of home into the file system, which I'd tried to avoid. The startup scripts are in /usr/local/bin, so I assume I can't run using 'debug as' in Eclipse or see the logs in the Eclipse console.

I also found documentation that suggests starting the server with:

./openerp-server.py --addons-path=~/home/workspace/stable/addons

This apparently overrides the addons path in the file system created by the install, suggesting you'd have just the modules in addons in Eclipse, where you could debug, while the other resources live elsewhere. I suppose that's OK, but I still have trouble visualizing how this will work. If this is the way it's done, how would one get standard output to go to the Eclipse console? I suppose I could have the complete project in Eclipse, but all the resources besides the addons would just be for reference, while only the addons would actually be running, since they are overridden by the --addons-path argument. Then, if I could get output to the console, it would work the way I expect.

I've seen some references to using links in the Eclipse workspace, or running Eclipse as root like an Eclipse PHP setup. Can anyone tell me how to start the server and web apps from Eclipse and have the log output appear in the console? Maybe an experienced Python developer can spot my blind spots and suggest what else I might be missing?
I feel your pain. I went through the same process a couple of years ago when I started working with OpenERP. The good news is that it's not too hard to set up, and OpenERP runs smoothly in Eclipse with PyDev.

Start by looking at the developer book for OpenERP. They lay out most of the requirements for getting it running.

To answer your specific questions: you shouldn't need to run the setup.py script at all in your development environment. It's only necessary when you deploy to a server. To get the server to recognize a new module, go to the Administration menu and choose Modules Management: Update Modules List. I'm still running OpenERP 5.0, so the names and locations might be slightly different in version 6.1.

For the project configuration in Eclipse, I just checked out each branch from Launchpad and then imported each one as a project into my Eclipse workspace. The launch details differ a bit between 6.0 and 6.1. Here are my command line arguments for each:

6.0:

--addons-path ${workspace_loc:openerp-addons-6.0} --config ${workspace_loc:openerp-config/src/server.config} --xmlrpc-port=9069 --netrpc-port=9070 --xmlrpcs-port=9071

6.1 needs the web client to launch with the server:

--addons-path ${workspace_loc:openerp-addons-trunk},${workspace_loc:openerp-web-trunk}/addons,${workspace_loc:openerp-migration} --config ${workspace_loc:openerp-config/src/server.config} --xmlrpc-port=9069 --netrpc-port=9070 --xmlrpcs-port=9071
Matplotlib figure only shows after second file run

I am doing some basic plotting (as below), and after the first file run I only get <Figure size 640x460 with 1 Axes> in the output area. On the second run of the code, the figure is actually plotted. Ideally it would plot on the first run, as later I want to test some matplotlib style editing.

import matplotlib.pyplot as plt
import numpy as np

data = np.arange(20)
plt.plot(data, label='1')
plt.plot(data + 2, label='2')
plt.plot(data + 4, label='3')
plt.plot(data + 6, label='4')
plt.plot(data + 8, label='5')
plt.legend()
plt.xlabel('X label')
plt.ylabel('Y label')
plt.show()

I'm using Python 3.6 in Hydrogen (Atom).
import matplotlib
matplotlib.use('Qt5Agg')

This solves the issue and plots on the first run (not sure why exactly).
Module not found with virtual environment

I can run my app from the console built into PyCharm, but if I run it from a shell, the app doesn't find the "pymysql" module. The module is installed in my project's virtual environment; you can see in the next image how it is installed. And if I run the app from the shell I get this error:

I'm using Python 3. What am I doing wrong? Is there an easy way to access the module?
There are several ways:

activate the virtual env: source venv/bin/activate
directly use its specific python: venv/bin/python main.py

You can also temporarily add venv/bin to your PATH, which is almost the same as the first option: export PATH=full/path/to/bin:$PATH

Generally I recommend the first option, but sometimes you may want the second one; for example, when using this python in a crontab script.
Python appending empty list

I am very new to Python and trying to learn by trial and error, so my question may sound naive to the community. Let's say I have two empty lists with only the first element defined:

a = [[]]*20
a[0] = 0
b = [[]]*20
b[0] = 1

I want to use a for loop to create the other elements of the lists:

x = 20
for i in range(1, x):
    a[i] = b[i-1],
    b[i] = a[i-1]+b[i-1]

What I obtain is the following error:

TypeError: can only concatenate tuple (not "int") to tuple

Basically I am trying to reproduce the Fibonacci series (a famous starting point in Python tutorials), but I would like to experiment with other ways of obtaining the same output. Thank you!
The problem is on this line:a[i] = b[i-1],Notice the comma at the end? That makes python think you're dealing in tuples. Remove it and the error will be gone.
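A quick demonstration of why that comma matters (not part of the original answer):

```python
a = 5,            # trailing comma: a is the tuple (5,)
b = 5             # no comma: b is the int 5
print(type(a))    # <class 'tuple'>
print(type(b))    # <class 'int'>
print(b + 1)      # 6
# a + 1 would raise: TypeError: can only concatenate tuple (not "int") to tuple
```

In Python it is the comma, not the parentheses, that creates a tuple, which is why the stray comma in the loop turned every a[i] into a one-element tuple.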
I cannot append my data to a list

I am trying to append the total area to the attribute table, and it runs through without any error message. I am not sure what I'm doing wrong:

import os
import arcpy
import math

folderpath = 'C:\Users\Michaelf\Desktop\GEOG M173'
arcpy.env.workspace = folderpath
arcpy.env.overwriteOutput = True

input_shp = folderpath + r'\lower48_county_2012_election.shp'
equal_shape = folderpath + r'\project_lower48.shp'
projection = arcpy.SpatialReference('USA Contiguous Albers Equal Area Conic USGS')

arcpy.Project_management(input_shp, equal_shape, projection)
print 'step 1'

arcpy.CopyFeatures_management(input_shp, equal_shape)
tot_area = []
print "step 2"

fields = [("totarea", "FLOAT")]
for field in fields:
    arcpy.AddField_management(equal_shape, "totarea")
print "step 3"

with arcpy.da.SearchCursor(equal_shape, ("OID@", "SHAPE@AREA")) as cursor:
    for row in cursor:
        print("Feature {0} has an area of {1}".format(row[0], row[1]))
print "step 4"

a_cursor = arcpy.SearchCursor(equal_shape)
for area in a_cursor:
    tot_area.append(area.totarea,)
print "step 5"

Update: wnnmaw, when I run the code it prints a list of the features/polygons' areas, but when I open the .shp file the new column is there and none of the data appends to the list. ZWiki, when I print the list it returns a huge list of 0's, like what is listed in the attribute table. In the code I declare the field as FLOAT; however, in the ArcMap attribute table properties it is still identified as LONG. Could this be the problem, since the answers are decimals?
Firstly: you've got a data type issue when you're adding that list of fields. This code simply adds a field named totarea and doesn't do anything with the data types in the fields list:

for field in fields:
    arcpy.AddField_management(equal_shape, "totarea")

Instead:

fields = [("totarea", "FLOAT")]
for field in fields:
    arcpy.AddField_management(equal_shape, field[0], field[1])

Secondly: use an UpdateCursor. Search cursors cannot change the table's data. If you want to stick with that list of areas:

tot_area = []
with arcpy.da.SearchCursor(equal_shape, ("OID@", "SHAPE@AREA")) as cursor:
    for row in cursor:
        print("Feature {0} has an area of {1}".format(row[0], row[1]))
        tot_area.append(row[1])
print "step 4"

sum_area = sum(tot_area)
with arcpy.da.UpdateCursor(equal_shape, ["TOTAREA"]) as cursor:
    for row in cursor:
        row[0] = sum_area
        cursor.updateRow(row)
print "step 5"

Or just sum as you go through the SearchCursor:

tot_area = 0
with arcpy.da.SearchCursor(equal_shape, ("OID@", "SHAPE@AREA")) as cursor:
    for row in cursor:
        print("Feature {0} has an area of {1}".format(row[0], row[1]))
        tot_area += row[1]
print "step 4"

with arcpy.da.UpdateCursor(equal_shape, ["TOTAREA"]) as cursor:
    for row in cursor:
        row[0] = tot_area
        cursor.updateRow(row)
print "step 5"
Average vectors between two pandas DataFrames Assume, there are two DataFrame, which areimport pandas as pdimport numpy as np df1 = pd.DataFrame({'item':['apple', 'orange', 'melon', 'meat', 'milk', 'soda', 'wine'], 'vector':[[12, 31, 45], [21, 14, 56], [9, 47, 3], [20, 7, 98], [11, 67, 5], [23, 45, 3], [8, 9, 33]]})df2 = pd.DataFrame({'customer':[1,2,3], 'grocery':[['apple', 'soda', 'wine'], ['meat', 'orange'], ['coffee', 'meat', 'milk', 'orange']]})The outputs of df1 and df2 aredf1 item vector0 apple [12, 31, 45]1 orange [21, 14, 56]2 melon [9, 47, 3]3 meat [20, 7, 98]4 milk [11, 67, 5]5 soda [23, 45, 3]6 wine [8, 9, 33]df2customer grocery0 1 [apple, soda, wine]1 2 [meat, orange]2 3 [coffee, meat, milk, orange]The goal is to average vectors of each customer's grocery list. If an item does not list in the df1 then use [0, 0, 0] to represent, thus 'coffee' = [0, 0, 0]. The final data frame df2 will be like customer grocery average0 1 [apple, soda, wine] [14.33, 28.33, 27]1 2 [meat, orange] [20.5, 10.5, 77]2 3 [coffee, meat, milk, orange] [13, 22, 39.75]where customer1 is to average the vectors of apple, soda, and wine. customer3 is to average vectors of coffee, meat, milk and orange, Again, here coffee = [0, 0, 0] because it is not on df1. Any suggestions? many thanks in advance
This answer may be long-winded and not optimized, but it will serve your purpose.First of all, you need to check if the items in df2 is in df1 so that you can add the non existing item into df1 along with the 0s.import itertoolsfor i in set(itertools.chain.from_iterable(df2['grocery'])): if i not in list(df1['item']): df1.loc[len(df1.index)] = [i,[0,0,0]]Next, you can perform list comprehension to find the average of the list and add it to a new column in df2.df2['average'] = [np.mean(list(df1.loc[df1['item'].isin(i)]["vector"]),axis=0) for i in df2["grocery"]]df2Out[91]: customer ... average0 1 ... [14.333333333333334, 28.333333333333332, 27.0]1 2 ... [20.5, 10.5, 77.0]2 3 ... [13.0, 22.0, 39.75][3 rows x 3 columns]
How do I extract the entire sentence from a job description which consists the number of years of experience in it? I've been working on a job description parser and I have been trying to extract the entire sentence which consists of the number of years of experience required.I have tried to use regex which provides me the number of years but not the entire sentence.def extract_years(self,resume_text): resume_text = str(resume_text.split('.')) exp=[] rx = re.compile(r"(\d+(?:-\d+)?\+?)\s*(years?)",re.I) for word in resume_text: exp_temp = rx.search(resume_text) if exp_temp: exp.append(exp_temp[0]) exp = list(set(exp)) return expOutput:['5-7 years']Desired Output:['5-7 years of experience in journalism, communications, or content creation preferred']
Try: (\d+(?:-\d+)?\+?)\s*(years?).*

Though I'm somewhat new to regex, I believe you can get what you desire by adding ".*" to the end of your match terms (and possibly ".*" at the beginning too, if "5-7 years" comes after some characters, as in "needs 5-7 years of experience").

Adding ".*" at the end matches any combination of characters, zero or more, after your initial match, stopping at a line break, so the rest of the sentence is captured.

Hope this helps.
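A quick sketch of that idea against the sample sentence from the question (using the question's original pattern, with the escaped `+`, plus `.*` appended):

```python
import re

# The question's pattern with .* appended, so the match runs to the end
# of the line, i.e. the rest of the sentence.
rx = re.compile(r"(\d+(?:-\d+)?\+?)\s*(years?).*", re.I)

sentence = ("5-7 years of experience in journalism, communications, "
            "or content creation preferred")
m = rx.search(sentence)
print(m.group(0))  # → the whole sentence, starting at "5-7 years"
```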
PCRE Regex (*COMMIT) equivalent for Python The below pattern took me a long time to find. When I finally found it, it turns out that it doesn't work in Python. Does anyone know if there is an alternative?(*COMMIT) Defined: Causes the whole match to fail outright if the rest of the pattern does not match.(*FAIL) doesn't work in Python either. But this can be replaced by (?!).+-----------------------------------------------+|Pattern: ||^.*park(*COMMIT)(*FAIL)|dog |+-------------------------------------+---------+|Subject | Matches |+-----------------------------------------------+|The dog and the man play in the park.| FALSE ||Man I love that dog! | TRUE ||I'm dog tired | TRUE ||The dog park is no place for man. | FALSE ||park next to this dog's man. | FALSE |+-------------------------------------+---------+Example taken from:regex match substring unless another substring matches
This might not be a generic replacement, but for your case you can work with lookaheads, to assert that dog is matched, but park is not: ^(?=.*dog)(?!.*park).*$Your samples on regex101
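To sanity-check the lookahead pattern against the table of subjects from the question, a small sketch:

```python
import re

# (?=.*dog) requires "dog" somewhere on the line;
# (?!.*park) rejects any line that also contains "park".
pattern = re.compile(r"^(?=.*dog)(?!.*park).*$")

subjects = [
    "The dog and the man play in the park.",   # expected: False
    "Man I love that dog!",                    # expected: True
    "I'm dog tired",                           # expected: True
    "The dog park is no place for man.",       # expected: False
    "park next to this dog's man.",            # expected: False
]
for text in subjects:
    print(bool(pattern.search(text)), text)
```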
Access m2m relationships on the save method of a newly created instance I'd like to send emails (only) when Order instances are created. In the email template, I need to access the m2m relationships. Unfortunatly, its seems like the m2m relations are ont yet populated, and the itemmembership_set.all() method returns an empty list.Here is my code:class Item(models.Model): ...class Order(models.Model): ... items = models.ManyToManyField(Item, through='ItemMembership') def save(self, *args, **kwargs): pk = self.pk super(Order, self).save(*args, **kwargs) # If the instance is beeing created, sends an email with the order # details. if not pk: self.send_details_email() def send_details_email(self): assert len(self.itemmembership_set.all()) != 0class ItemMembership(models.Model): order = models.ForeignKey(Order) item = models.ForeignKey(Item) quantity = models.PositiveSmallIntegerField(default=1)
Some of the comments suggested using signals. While you can use signals, specifically the m2m_changed signal, this will always fire whenever you modify the m2m fields. As far as I know, there is no way for the sender model (in your sample, that is ItemMembership) to know if the associated Order instance was just created or not.Sure, you can probably use the cache framework to set a temporary flag upon calling save() of an Order object, then read that same flag on the m2m_changed signal and delete the flag when it is over. The downside is you have to validate the process, and it beats the purpose of using signals which is to decouple stuff.My suggestion is to totally remove all those email sending functionalities from your models. Implement it as a helper function instead, and then just invoke the helper function explicitly after an Order object with its associated ItemMembership objects have been successfully created. IMHO, it also makes debugging a lot easier.
Infoblox WAPI: how to search for an IP Our network team uses InfoBlox to store information about IP ranges (Location, Country, etc.)There is an API available but Infoblox's documentation and examples are not very practical.I would like to search via the API for details about an IP. To start with - I would be happy to get anything back from the server. I modified the only example I foundimport requestsimport jsonurl = "https://10.6.75.98/wapi/v1.0/"object_type = "network" search_string = {'network':'10.233.84.0/22'}response = requests.get(url + object_type, verify=False, data=json.dumps(search_string), auth=('adminname', 'adminpass'))print "status code: ", response.status_codeprint response.textwhich returns an error 400status code: 400{ "Error": "AdmConProtoError: Invalid input: '{\"network\": \"10.233.84.0/22\"}'", "code": "Client.Ibap.Proto", "text": "Invalid input: '{\"network\": \"10.233.84.0/22\"}'"}I would appreciate any pointers from someone who managed to get this API to work with Python.UPDATE: Following up on the solution, below is a piece of code (it works but it is not nice, streamlined, does not perfectly checks for errors, etc.) 
if someone one day would have a need to do the same as I did.def ip2site(myip): # argument is an IP we want to know the localization of (in extensible_attributes) baseurl = "https://the_infoblox_address/wapi/v1.0/" # first we get the network this IP is in r = requests.get(baseurl+"ipv4address?ip_address="+myip, auth=('youruser', 'yourpassword'), verify=False) j = simplejson.loads(r.content) # if the IP is not in any network an error message is dumped, including among others a key 'code' if 'code' not in j: mynetwork = j[0]['network'] # now we get the extended atributes for that network r = requests.get(baseurl+"network?network="+mynetwork+"&_return_fields=extensible_attributes", auth=('youruser', 'youpassword'), verify=False) j = simplejson.loads(r.content) location = j[0]['extensible_attributes']['Location'] ipdict[myip] = location return location else: return "ERROR_IP_NOT_MAPPED_TO_SITE"
By using requests.get and json.dumps, aren't you sending a GET request while adding JSON to the query string? Essentially, doing aGET https://10.6.75.98/wapi/v1.0/network?{\"network\": \"10.233.84.0/22\"}I've been using the WebAPI with Perl, not Python, but if that is the way your code is trying to do things, it will probably not work very well. To send JSON to the server, do a POST and add a '_method' argument with 'GET' as the value: POST https://10.6.75.98/wapi/v1.0/networkContent: { "_method": "GET", "network": "10.233.84.0/22"}Content-Type: application/jsonOr, don't send JSON to the server and sendGET https://10.6.75.98/wapi/v1.0/network?network=10.233.84.0/22which I am guessing you will achieve by dropping the json.dumps from your code and handing search_string to requests.get directly.
Django: assigning a foreign key of class that hasn't been created yet I have the relational database and one of the relations looks like this:Student < --- > Major_enrollmentsSo I need to create a column with a foreign key to the second table in both tables. How can I do so in the view of the fact, that if I define the class e.g. Student first, I will be notified with such error: "NameError: name 'Major_enrollments' is not defined".This is a piece of code I wrote (models.py):class Students(models.Model): nr_album = models.IntegerField() fName = models.CharField(max_length=70) lName = models.CharField(max_length=70) pesel = models.BigIntegerField() address = models.CharField(max_length=100) major_enrollments = models.ForeignKey(Major_enrollments) #<---THAT DOESN'T WORK def __unicode__(self): return unicode(self.pesel) class Meta: db_table='Students'class Major_enrollments(models.Model): majors = models.ForeignKey(Majors) students = models.ForeignKey(Students) def __unicode__(self): return unicode(self.id) class Meta: db_table='Major_enrollments'
You can use the class name (as a string) instead of class itself:class Students(models.Model): nr_album = models.IntegerField() fName = models.CharField(max_length=70) lName = models.CharField(max_length=70) pesel = models.BigIntegerField() address = models.CharField(max_length=100) major_enrollments = models.ForeignKey('Major_enrollments') def __unicode__(self): return unicode(self.pesel) class Meta: db_table='Students'
Flash only uncategorized messages in Flask app I want to display flashed messages with the 'error' category in one section, and uncategorized messages in another section. If I just ask for messages with_categories=False, I get messages with the 'error' category as well. Preferably, I don't want to have to add a category to all my messages. How do I get all uncategorized messages?flash('You did something wrong', 'error')flash('Hello'){% with messages = get_flashed_messages(with_categories=false) %} {% for message in messages %} {{message}} {% endfor %}{% endwith %}{% with messages = get_flashed_messages(category_filter=['error']) %} {% for message in messages %} {{message}} {% endfor %}{% endwith %}OutputsYou did something wrongHelloYou did something wrongI expectHelloYou did something wrong
All messages have the default category 'message'. Get those messages, then get your other messages.{% with messages = get_flashed_messages(category_filter=['message']) %}
how to manipulate user submitted text and display it with django? I want to build a very simple webapp that takes a user's text, runs a function on it that alters it and then displays the altered text. I have the code for the function but everything else is unclear. I am very new to django and just need a push in the right direction with this problem. At the very least, tell me what to google, I've went through several tutorials but neither of them dealt with this kind of task.Thanks in advance!
Define a form in forms.py under your app's folder:

class MyForm(forms.Form):
    myinput = forms.CharField(max_length=100)

Define a view function in your views.py:

from django.http import HttpResponseRedirect
from django.shortcuts import render

from .forms import MyForm

def handle_form(request):
    if request.method == 'POST':  # If the form has been submitted...
        form = MyForm(request.POST)  # A form bound to the POST data
        if form.is_valid():  # All validation rules pass
            # Process the data in form.cleaned_data
            # ...
            return HttpResponseRedirect('/thanks/')  # Redirect after POST
    else:
        form = MyForm()  # An unbound form
    return render(request, 'handle_form.html', {'form': form})

Add a template:

<form action="" method="post">{% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Submit" />
</form>

Of course you need to add it to your urls.py. Most info was copy pasted from: https://docs.djangoproject.com/en/1.8/topics/forms/
tarfile compressionerror bz2 module is not available I'm trying to install twisted pip install https://pypi.python.org/packages/18/85/eb7af503356e933061bf1220033c3a85bad0dbc5035dfd9a97f1e900dfcb/Twisted-16.2.0.tar.bz2#md5=8b35a88d5f1a4bfd762a008968fddabfThis is for a django-channels project and I'm having the following error problemException:Traceback (most recent call last): File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1655, in bz2open import bz2 File "/usr/local/lib/python3.5/bz2.py", line 22, in <module> from _bz2 import BZ2Compressor, BZ2DecompressorImportError: No module named '_bz2'During handling of the above exception, another exception occurred:Traceback (most recent call last): File "/home/petarp/.virtualenvs/CloneFromGitHub/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/commands/install.py", line 310, in run wb.build(autobuilding=True) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/wheel.py", line 750, in build self.requirement_set.prepare_files(self.finder) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 370, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 587, in _prepare_file session=self.session, hashes=hashes) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 810, in unpack_url hashes=hashes File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 653, in unpack_http_url unpack_file(from_path, location, content_type, link) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 605, in 
unpack_file untar_file(filename, location) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 538, in untar_file tar = tarfile.open(filename, mode) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1580, in open return func(name, filemode, fileobj, **kwargs) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1657, in bz2open raise CompressionError("bz2 module is not available")tarfile.CompressionError: bz2 module is not availableClearly I'm missing bz2 module, so I've tried to installed it manually, but that didn't worked out for python 3.5, so how can I solved this?I've did what @e4c5 suggested but I did it for python3.5.1, the output is➜ ~ python3.5 Python 3.5.1 (default, Apr 19 2016, 22:45:11) [GCC 4.8.4] on linuxType "help", "copyright", "credits" or "license" for more information.>>> import bz2Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.5/bz2.py", line 22, in <module> from _bz2 import BZ2Compressor, BZ2DecompressorImportError: No module named '_bz2'>>> [3] + 18945 suspended python3.5➜ ~ dpkg -S /usr/local/lib/python3.5/bz2.pydpkg-query: no path found matching pattern /usr/local/lib/python3.5/bz2.pyI am on Ubuntu 14.04 LTS and I have installed python 3.5 from source.
I don't seem to have any problem with import bz2 on my python 3.4 installation. So I did import bz2print (bz2.__file__)And found that it's located at /usr/lib/python3.4/bz2.py then I diddpkg -S /usr/lib/python3.4/bz2.pyThis reveals: libpython3.4-stdlib:amd64: /usr/lib/python3.4/bz2.pyThus the following command should hopefully fix this:apt-get install libpython3.4-stdlibUpdate:If you have compiled python 3.5 from sources, it's very likely the bz2 hasn't been compiled in. Please reinstall by first doing./configure --with-libs='bzip' The same applies for python 3.6 as well. Note that this will probably complain about other missing dependencies. You will have to install the missing dependencies one by one until everything is covered.
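Before recompiling, it can be worth confirming which interpreters actually lack the module; a quick round-trip one-liner (using python3 here, substitute whichever build you are testing):

```shell
# Round-trip a short string through bz2; this raises ImportError
# immediately if the interpreter was built without bz2 support.
python3 -c "import bz2; print(bz2.decompress(bz2.compress(b'ok')).decode())"
```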
Filter elements from list based on them containing spam terms So I've made a script that scrapes some sites and builds a list of results. Each result has the following structure:result = {'id': id, 'name': name, 'url': url, 'datetime': datetime, }I want to filter results from the list of results based on spam terms being in the name. I've defined the following function, and it seems to filter certain results, but not all of them:def filterSpamGigsList(theList): index = 0 spamTerms = ['paid','hire','work','review','survey', 'home','rent','cash','pay','flex', 'facebook','sex','$$$','boss','secretary', 'loan','supplemental','income','sales', 'dollars','money'] for i in theList: for y in spamTerms: if y in i['name'].lower(): theList.pop(index) break index += 1 return theListAny clue why this might not be filtering out all results that contain these spam terms? Maybe I need to call .split() on name after calling .lower() as some of the names are phrases?
I guess you've got a problem with in-place modifying theList as iterating over it as Jakub suggested.The obious way would be to return a new list. I would split this in two functions for readability:def is_spam(value): spam_terms = ['paid','hire','work','review','survey', 'home','rent','cash','pay','flex', 'facebook','sex','$$$','boss','secretary', 'loan','supplemental','income','sales', 'dollars','money'] for term in spam_terms: if term in value.lower(): return True return Falsedef filter_spam_gigs_list(results): return [i for i in results if not is_spam(i['name'])]
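Putting the two functions together (rewritten here with any() for brevity; the sample results are invented for illustration):

```python
def is_spam(value):
    spam_terms = ['paid', 'hire', 'work', 'review', 'survey',
                  'home', 'rent', 'cash', 'pay', 'flex',
                  'facebook', 'sex', '$$$', 'boss', 'secretary',
                  'loan', 'supplemental', 'income', 'sales',
                  'dollars', 'money']
    return any(term in value.lower() for term in spam_terms)

def filter_spam_gigs_list(results):
    # Build a new list instead of mutating the one being iterated.
    return [i for i in results if not is_spam(i['name'])]

# Hypothetical sample data, names invented for illustration.
results = [
    {'id': 1, 'name': 'Paid survey from home!!!'},
    {'id': 2, 'name': 'Logo design for a bakery'},
    {'id': 3, 'name': 'Earn cash fast'},
]
print([r['id'] for r in filter_spam_gigs_list(results)])  # → [2]
```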
How to dynamically make an existing non-abstract django model, abstract? I think I have a more or less unorthodox and hackish question for you. What I currently have is django project with multiple apps.I want to use a non-abstract model (ModelA) of one app (app1) and use it in another app (app2) by subclassing it. App1's modelsshould not be migrated to the DB, I just want to use the capabilities of app1 and it's model classes, by extending its functionality and logic.I achieved that by adding both apps to settings.INSTALLED_APPS, and preventing app1's models being migrated to the DB.INSTALLED_APPS += ( 'App1', 'App2',)# This is needed to just use App1's models# without creating it's database tables# See: http://stackoverflow.com/a/35921487/1230358MIGRATION_MODULES = { 'App1': None,}So far so good, ugly and hackish, I know... The remaining problem is now that most of app1's models are non-abstract (ModelA) and if I tryto subclass them, none of the ModelA's fields get populated to the db into the table named app2_modelb. 
This is clear to me, because I excluded the App1 frommigrating to the DB and therefore the table app1_modela is completely missing in the DB.My idea now was to clone ModelA, preserve all its functionallity, and changing it's Meta information from non-abstract to abstract (ModelB.Meta.abstract = True).I hope that by this, all the original fields of ModelA will be inherited to ModelB and can be found in its respective DB table and columns (app1_modelb).What I have right now is:# In app1 -> models.pyclass ModelA(models.Model): title = models.CharField(_('title'), max_length=255) subtitle = models.CharField(_('subtitle'), max_length=255) class Meta: abstract = False # just explicitly for demonstration# In app2 -> models.pyfrom app1.models import ModelAclass ModelB(ModelA): pass # Just extending ModelAdoes not create the fields title and subtitle fields in app2_modelb # because ModelA.meta.abstract = FalseMy current way (pseudo code) to make an existing non-abstract model abstract looks like this:# In app2 -> models.pyfrom app1.models import ModelAdef get_abstract_class(cls): o = dict(cls.__dict__) o['_meta'].abstract = True o['_meta'].app_label = 'app2' o['__module__'] = 'app2.models' #return type('Abstract{}'.format(cls.__name__), cls.__bases__, o) return type('Abstract{}'.format(cls.__name__), (cls,), o)ModelB = get_abstract_class(ModelA)class ModelC(ModelB): # title and subtitle are inherited from ModelA description = models.CharField(_('description'), max_length=255)This does not work, and after this lengthy description my (simple) question would be, if and how is it possible to clone a non-abstract model class preserving all its functionality and how to change it to be abstract?Just to be clear. All upper fuzz is about, that I can't change any code in app1. May it be that app1 is a django app installed via pip.
Why not, in app1:

class AbstractBaseModelA(models.Model):
    # other stuff here
    class Meta:
        abstract = True

class ModelA(AbstractBaseModelA):
    # stuff

in app2:

class ModelB(AbstractBaseModelA):
    # stuff

Sorry if I've misunderstood your aims, but I think the above should achieve the same end result.
Can Python win32com use Visio (or any program) without popping up a GUI? I have a Python script using win32com to open a Visio file and dump each tab as .png files. It briefly flashes the Visio gui up on the screen when it does this. Is there any way to do this in the background without loading the Visio window?import win32com.clientvisio = win32com.client.Dispatch("Visio.Application")visio.Documents.Open(filepath)...visio.Quit()
visio = win32com.client.Dispatch("Visio.InvisibleApp")should create a Visio instance that is invisible.See http://msdn.microsoft.com/en-us/library/aa201815(v=office.10).aspx
replace string character (') to (-) in pandas and change it to datetime i have dataset df like below; A June'11July'122018-02-01anyone can help me to replace (') character to (-)iam confuse to use pandas code df['A'].replace(''', '-', inplace=True) ???After i have changed the (') string i want to change the A column type to datetimeThank in advance
Please try below:>>> df A0 June'111 July'122 2018-02-01>>> df.replace({'\'': '-'}, regex=True) A0 June-111 July-122 2018-02-01ORIf its specific to a column A then.>>> df['A'].str.replace(r"[\']", "-")0 June-111 July-122 2018-02-01OR>>> df['A'].str.replace("'", "-")0 June-111 July-122 2018-02-01
Python - Getting error whenever I try to run program "Module cannot be found" I'm trying to do this little tutorial http://www.roguebasin.com/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod,_part_1 A little ways down the page right before it says moving around it says to test what you have so far. I'm using Pycharm and this is my first time using an outside library or whatever you call it.This is what I have so far and it is exactly what is in their example:import libtcodpy as libtcod#actual size of the windowSCREEN_WIDTH = 80SCREEN_HEIGHT = 50LIMIT_FPS = 20 #20 frames-per-second maximumlibtcod.console_set_custom_font('terminal.png', libtcod.FONT_TYPE_GREYSCALE | libtcod.FONT_LAYOUT_TCOD)libtcod.console_init_root(SCREEN_WIDTH, SCREEN_HEIGHT, 'python/libtcod tutorial', False)libtcod.sys_set_fps(LIMIT_FPS)while not libtcod.console_is_window_closed(): libtcod.console_set_default_foreground(0, libtcod.white) libtcod.console_put_char(0, 1, 1, '@', libtcod.BKGND_NONE) libtcod.console_flush()Whenever I run it I get this error.Traceback (most recent call last): File "D:\Programming\Project 1\Rogue Like\libtcodpy.py", line 57, in <module> _lib = ctypes.cdll['./libtcod-mingw.dll'] File "C:\Python34\lib\ctypes\__init__.py", line 426, in __getitem__ return getattr(self, name) File "C:\Python34\lib\ctypes\__init__.py", line 421, in __getattr__ dll = self._dlltype(name) File "C:\Python34\lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode)OSError: [WinError 126] The specified module could not be foundDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "D:/Programming/Project 1/Rogue Like/firstrl.py", line 1, in <module> import libtcodpy as libtcod File "D:\Programming\Project 1\Rogue Like\libtcodpy.py", line 60, in <module> _lib = ctypes.cdll['./libtcod-VS.dll'] File "C:\Python34\lib\ctypes\__init__.py", line 426, in __getitem__ return getattr(self, name) File 
"C:\Python34\lib\ctypes\__init__.py", line 421, in __getattr__ dll = self._dlltype(name) File "C:\Python34\lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode)OSError: [WinError 126] The specified module could not be foundThanks
I'm assuming you also copied libtcod-VS.dll or libtcod-mingw.dll to the project directory, not just libtcodpy.py. And also SDL.dll and a arial10x10.png. If not, go back and look at the Setting it up instructions again.But if you have, this isn't really your fault, it's theirs.libtcodpy.py tries to import the libtcod-VS.dll or libtcod-mingw.dll DLL from the current working directory. You can see that from this line:_lib = ctypes.cdll['./libtcod-mingw.dll']So, if the current working directory happens to be anything other than the directory that libtcodpy.py is in, it won't find them there.This is a silly thing to do. If you do what the Choice of code editor section suggests and always run the script from a console (a "DOS prompt"), it will work (as long as you're always running it without an explicit path), but they really shouldn't be relying on that.Still, that's obviously the simplest fix: Run the program from the console, the way they're expecting you to, instead of from PyCharm.Alternatively, you can configure PyCharm to run your project with the project directory as your working directory.There are a few ways to set this, but the one you probably want is the Run/Debug Configurations dialog (which you can find under Edit Configurations… on the Run menu). Open that dialog, open the disclosure triangle to Defaults, click Python, then look for "Working directory:" on the right. Click the … button and pick your project directory (or wherever you put libtcod-VS.dll or libtcod-mingw.dll).Or you can edit libtcodpy.py to make it look for the DLL alongside of itself, rather than in the current working directory. 
There are only 4 small changes you should need.

First, in the middle of the import statements near the top, if there's no import os, add it.

Next, right after the import statements, add this:

modpath = os.path.dirname(os.path.abspath(__file__))

Now search for the two lines that start with _lib = ctypes.cdll (or just look at the line numbers from the tracebacks) and change them as follows:

_lib = ctypes.cdll[os.path.join(modpath, 'libtcod-mingw.dll')]
_lib = ctypes.cdll[os.path.join(modpath, 'libtcod-VS.dll')]
How can I find a string with regular expressions in Python I have an HTML string, for example:

<td align="left" nowrap="nowrap">John 23</td>

I want to find "John 23" between '<td align="left" nowrap="nowrap">' and '</td>'. I want to find it with regular expressions in Python. How can I do it?
Use BeautifulSoup to parse HTML. Regex is the wrong tool; it works fine for this example but wouldn't scale well to a full document.>>> from bs4 import BeautifulSoup>>> html = '<td align="left" nowrap="nowrap">John 23</td>'>>> BeautifulSoup(html).find("td").text'John 23'
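If installing bs4 isn't an option, a rough stdlib-only sketch with html.parser handles this particular snippet (for real documents, BeautifulSoup remains the better tool):

```python
from html.parser import HTMLParser

class TdTextParser(HTMLParser):
    """Collect the text found inside <td> elements."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.texts.append(data)

parser = TdTextParser()
parser.feed('<td align="left" nowrap="nowrap">John 23</td>')
print(parser.texts[0])  # → John 23
```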
Python guess the number game I tried to make a guess the number game in python but whenever I guess it repeats 4 times 'your guess is too low'import randomnumber = random.randint(1, 20)guessestaken = 0print('I am thinking of a number between 1 and 20 ')guess = raw_input('Take a guess and hit enter')while guessestaken < 4: guessestaken = guessestaken + 1 if guess > number: print('your number is too low') if guess < number: print('your number is too high ') if guess == number: break print('well done the number was ' + number + ' and you got it in ' + guessestaken + '')
You are asking for the user input before the while loop.guess = int(raw_input('Take a guess and hit enter')) This statement should come within the while block.The function raw_input returns a string, you should convert it to an integer. You can read more about it in the Documentation.
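Note also that the comparison messages in the question are inverted (guess > number prints "too low"). A minimal corrected sketch, with the input-reading moved inside the loop and passed in as a parameter purely to make it easy to test; in the real game you would pass raw_input (or input on Python 3):

```python
def play(number, read_guess, max_guesses=4):
    """Return the number of guesses used, or None if the player runs out."""
    for attempt in range(1, max_guesses + 1):
        # Read and convert inside the loop, so each pass gets a fresh guess.
        guess = int(read_guess())
        if guess < number:
            print('your number is too low')
        elif guess > number:
            print('your number is too high')
        else:
            print('well done, the number was %d and you got it in %d guesses'
                  % (number, attempt))
            return attempt
    return None

# Example: guesses 5, 15, 10 against the secret number 10.
guesses = iter(['5', '15', '10'])
print(play(10, lambda: next(guesses)))  # → 3
```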
modifying multiple fields in a structured array (Python)? Let's say I have a structured array as follows:import numpy as npfields = [('f1', np.float32), ('f2', np.float32)]k = np.ones(2, fields)I want to be able to access multiple fields and modify them simultaneously. I'm aware that I can access multiple fields using a view. But what I want to do is more like the following, where I take all the fields and modify them:k[0] = k[0] * 2But instead I get this error message:TypeError: unsupported operand type(s) for *: 'numpy.void' and 'int'Does anybody have an idea of what might work? The simpler the better - I have some rather large structured arrays that I need to perform these operations on. Best idea I can come up with is to assemble k[0] using a list comprehension from the field names, convert it to a tuple, and assign it back, but there might be more elegant solutions:k[0] = tuple([k[0][name] * 2 for name in k.dtype.names])
With a dtype like this, there are 2 direct ways of accessing and modifying the data. By field name, e.g. k['f0'], or by elements (rows) in the form of tuples. You are using the 2nd method. If the number of fields isn't that large, and you need to access many elements, then the first is better.But if the fields all have the same data types, you could do math on an equivalent 2d array.In [937]: k = np.ones(5,fields)In [938]: k2 = np.empty((5,2),dtype=np.float32)In [939]: k2.data = k.data # share the data buffersIn [940]: k2 *= 2 # do math on the 2d viewIn [942]: k2[0] += 3In [943]: k # and see the effect via the shared bufferOut[943]: array([(5.0, 5.0), (2.0, 2.0), (2.0, 2.0), (2.0, 2.0), (2.0, 2.0)], dtype=[('f1', '<f4'), ('f2', '<f4')])A view works just as wellIn [948]: k1 = k.view(np.float32).reshape(5,2)In [951]: k1[1] = [3,4]In [954]: kOut[954]: array([(5.0, 5.0), (3.0, 4.0), (2.0, 2.0), (2.0, 2.0), (2.0, 2.0)], dtype=[('f1', '<f4'), ('f2', '<f4')])But with a mixed dtypes, you may have to stick with the tuple comprehension.If k has mixed types:In [1025]: kOut[1025]: array([(1.0, 1.0, b'1'), (1.0, 1.0, b'1')], dtype=[('f1', '<f4'), ('f2', '<f4'), ('f3', 'S3')])I can view the float fields with a list of names:In [1027]: k[['f1','f2']][1]Out[1027]: (1.0, 1.0)but trying to modify those fields with a tuple does nothing (though there is no error).In [1028]: k[['f1','f2']][1]=(2.0, 2.0)Looks like I have to either set one field at a time, or all of one element via tuple.If you need to do much math, don't break up an array into named fields any more than you have to.
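A condensed version of the view trick, runnable end to end:

```python
import numpy as np

fields = [('f1', np.float32), ('f2', np.float32)]
k = np.ones(2, fields)

# View the same buffer as a plain (2, 2) float32 array and do math on it;
# the structured array sees the changes because the memory is shared.
k2d = k.view(np.float32).reshape(len(k), 2)
k2d[0] *= 2

print(k)  # first element is now (2.0, 2.0), second is untouched
```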
Process Doesn't End When Closed I built a web-scraper application with Python. It consists of three main parts:The GUI (built on tkinter)A Client (controls interface between front- and back-end)Back-end code (various threaded processes).The problem I have is that when the user hits X to exit the program instead of quitting through the interface, it seems like root.destroy() never gets called and the application runs forever, even though the window does disappear. This ends up consuming vast amounts of system resources.I have tried setting all threads to Daemon without much success. Is there any other reason the program would keep eating up CPU after exit?
You don't want to set all threads to daemon. You want to set the client thread and the back-end thread to daemon. That way, when the GUI thread dies, the threads with daemon set to True end as well.From the documentation: A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left.
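A minimal illustration of the flag (this sketch assumes nothing about the scraper's internals; the worker here is a stand-in):

```python
import threading
import time

def background_work():
    # Stand-in for the scraper's back-end loop.
    while True:
        time.sleep(0.1)

# daemon=True means this thread cannot keep the process alive on its own:
# when the main (GUI) thread exits, the interpreter exits too.
worker = threading.Thread(target=background_work, daemon=True)
worker.start()
print(worker.daemon)  # → True
```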
Can't install matplotlib on OS X with PIP This is my first time setting up matplotlib.I'm on OS X Lion 10.7 (build 11A511s, so no updates done to the initial release of OS X Lion).I am using virtualenv and pip to do the installation.I'm aware of the incompatibility with libpng 1.5, so I didn't just run "pip install matplotlib"... instead...I tried running this from inside the virtualenv:pip install -e git+https://github.com/matplotlib/matplotlib#egg=matplotlib-devLooks like it starts installing, but then I get this error:/Users/myusername/.virtualenvs/nltk/lib/python2.7/site-packages/numpy/core/include/numpy/__multiarray_api.h:1532: warning: ‘int _import_array()’ defined but not usedlipo: can't open input file: /var/folders/wy/s1jr354d4xx7dk0lpdpbpsbc0000gn/T//ccfNUhyq.out (No such file or directory)error: command 'llvm-gcc-4.2' failed with exit status 1----------------------------------------Command /Users/sameerfx/.virtualenvs/nltk/bin/python -c "import setuptools; __file__='/Users/sameerfx/.virtualenvs/nltk/src/matplotlib/setup.py'; exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" develop --no-deps failed with error code 1Storing complete log in /Users/sameerfx/.pip/pip.log
Three years later:You should use anaconda to install matplotlib (or numpy or pandas or scipy) if you're new to the process. This suggestion applies for pretty much any platform too.
How to create a two dimensional list from imported data from text file I'm trying to import the info to create a 2D list, and I'm having a hard time filling in the list with the information. I'm stuck trying to import the info to create the listROW = 3COLS = 4myInfo = ('myInfoFile.txt', 'r')name = myInfo.readline().rsrip('\n')while name != '': address = myInfo.readline() telNum = float(myInfo.readline()) email = myInfo.readline()myInfo.close()list1 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]for r in range(ROWS): for c in range(COLS): #this is the main area list1[r][c] = myInfoprint(list1)
Two things are wrong: you are not assigning the data while reading your file, and you are assigning a closed file object to the cells of your array after the close() call. Note also that filename.readline without parentheses stores the bound method itself, not a line of text. Try this instead:
info = [[None] * COLS for i in range(ROWS)]
for i in range(ROWS):
    for j in range(COLS):
        info[i][j] = filename.readline().rstrip('\n')
        # if a float is needed:
        # info[i][j] = float(info[i][j])
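A self-contained version of the same idea that can be run as-is, simulating the file with `io.StringIO` (the sample contents are hypothetical):

```python
import io

ROWS, COLS = 2, 3

# Hypothetical file: one value per line, row-major order.
fake_file = io.StringIO("a\nb\nc\nd\ne\nf\n")

info = [[None] * COLS for _ in range(ROWS)]
for i in range(ROWS):
    for j in range(COLS):
        # readline() (with parentheses!) returns the next line of text
        info[i][j] = fake_file.readline().rstrip('\n')

print(info)  # [['a', 'b', 'c'], ['d', 'e', 'f']]
```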
tflearn DNN gives zero loss I am using pandas to extract my data. To get an idea of my data I replicated an example dataset...data = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))which yields a dataset of shape=(100,4)... A B C D0 75 38 81 581 36 92 80 792 22 40 19 3 ... ...I am using tflearn so I will need a target label as well. So I created a target label by extracting one of the columns from data and then dropped it out of the data variable (I also converted everything to numpy arrays)...# Target label used for traininglabels = np.array(data['A'].values, dtype=np.float32)# Reshape target label from (100,) to (100, 1)labels = np.reshape(labels, (-1, 1))# Data for training minus the target label.data = np.array(data.drop('A', axis=1).values, dtype=np.float32)Then I take the data and the labels and feed it into the DNN...# Deep Neural Network. net = tflearn.input_data(shape=[None, 3])net = tflearn.fully_connected(net, 32)net = tflearn.fully_connected(net, 32)net = tflearn.fully_connected(net, 1, activation='softmax')net = tflearn.regression(net)# Define model.model = tflearn.DNN(net)model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)This seems like it should work, but the output I get is as follows...Notice that the loss remains at 0, so I am definitely doing something wrong. I don't really know what form my data should be in. How can I get my training to work?
Your actual output is in the range 0 to 100 while the softmax activation in the outermost layer outputs in the range [0, 1]. You need to fix that. Also the default loss for tflearn.regression is categorical cross entropy, which is used for classification problems and makes no sense in your scenario. You should try L2 loss. The reason you are getting zero error in this setting is that your network predicts 0 for all training examples, and if you fit that value into the formula for sigmoid cross entropy, the loss indeed is zero. Here is its formula: loss = -sum_i(t[i]*log(o[i]) + (1 - t[i])*log(1 - o[i])), where t[i] denotes the actual probabilities (which doesn't make sense in your problem) and o[i] is the predicted probabilities. Here is more reasoning about why the default choice of loss function is not suitable for your case.
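A related sanity check that needs no tflearn: a softmax over a single output unit always outputs exactly 1, whatever the logit, so a one-unit softmax layer cannot regress onto the targets at all. A plain numpy sketch:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One output unit: every prediction collapses to 1.0.
for logit in (-100.0, 0.0, 42.0):
    print(softmax(np.array([logit])))  # [1.]
```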
Flask/Werkzeug debugger, process model, and initialization code I'm writing a Python web application using Flask. My application establishes a connection to another server at startup, and communicates with that server periodically in the background.If I don't use Flask's builtin debugger (invoking app.run with debug=False), no problem.If I do use the builtin debugger (invoking app.run with debug=True), Flask starts a second Python process with the same code. It's the child process that ends up listening for HTTP connections and generally behaving as my application is supposed to, and I presume the parent is just there to watch over it when the debugger kicks in.However, this wreaks havoc with my startup code, which runs in both processes; I end up with 2 connections to the external server, 2 processes logging to the same logfile, and in general, they trip over each other.I presume that I should not be doing real work before the call to app.run(), but where should I put this initialization code (which I only want to run once per Flask process group, regardless of the debugger mode, but which needs to run at startup and independent of client requests)?I found this question about "Flask auto-reload and long-running thread" which is somewhat related, but somewhat different, and the answer didn't help me. (I too have a separate long-running thread marked as a daemon thread, but it is killed when the reloader kicks in, but the problem I'm trying to solve is before any reload needs to happen. I'm not concerned with the reload; I'm concerned with the extra process, and the right way to avoid executing unnecessary code in the parent process.)
I confirmed this behavior is due to Werkzeug, not Flask proper, and it is related to the reloader. You can see this in Werkzeug's serving.py -- in run_simple(), if use_reloader is true, it invokes make_server via a helper function run_with_reloader() / restart_with_reloader() which does a subprocess.call(sys.executable), after setting an environment variable WERKZEUG_RUN_MAIN in the environment which will be inherited by the subprocess.I worked around it with a fairly ugly hack: in my main function, before creating the wsgi application object and calling app.run(), I look for WERKZEUG_RUN_MAIN:if use_reloader and not os.environ.get('WERKZEUG_RUN_MAIN'): logger.warning('startup: pid %d is the werkzeug reloader' % os.getpid())else: logger.warning('startup: pid %d is the active werkzeug' % os.getpid() # my real init code is invoked from hereI have a feeling this would be better done from inside the application object, if there's a method that's called before Werkzeug starts serving it. I don't know of such a method, though.This all boils down to: in Werkzeug's run_simple.py, there's only going to be one eventual call to make_server().serve_forever(), but there may be two calls to run_simple() (and the entire call stack up to that point) before we make it to make_server().
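A minimal sketch of that guard in isolation. The hedge: WERKZEUG_RUN_MAIN is an internal detail of Werkzeug's reloader, not a public API, so its exact value could change between versions:

```python
import os

def is_reloader_child():
    # Werkzeug's reloader sets this variable in the subprocess it spawns;
    # the watching parent process does not have it.
    return os.environ.get('WERKZEUG_RUN_MAIN') is not None

os.environ.pop('WERKZEUG_RUN_MAIN', None)
print(is_reloader_child())  # False: behave like the watcher, skip init

os.environ['WERKZEUG_RUN_MAIN'] = 'true'
print(is_reloader_child())  # True: run the real init code here
```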
Function used to check if a variable's string starts with vowel? I am doing a sort of MadLibs thing and I need to check if three of my variables start with a vowel, and then tack "a" or "an" in front.I have this,def vowelcheck(variable): if variable[0] == "a" or variable[0] == "e" or variable[0] == "i" or variable[0] == "o" or variable[0] == "u": variable = "an " + variable else: variable = "a " + variable;and thenvowelcheck(noun1)vowelcheck(noun2)vowelcheck(noun3)after the variables, but it doesn't do anything to the words.What can I change about this to make it work?
the 'variable' argument of your function is a copy of the words noun1, noun2, nound2. you indeed modify 'variable', but it does not modify nouns.try instead:def vowelcheck(variable): if variable[0] == "a" or variable[0] == "e" or variable[0] == "i" or variable[0] == "o" or variable[0] == "u": variable = "an " + variable else: variable = "a " + variable return variablenoun1, noun2, noun3 = (vowelcheck(noun1), vowelcheck(noun2), vowelcheck(noun3))
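The long chain of == comparisons can also be collapsed with a membership test; a sketch of the same fix:

```python
def vowelcheck(variable):
    # 'in' replaces the chain of == comparisons against each vowel.
    article = 'an' if variable[0].lower() in 'aeiou' else 'a'
    return article + ' ' + variable

print(vowelcheck('apple'))   # an apple
print(vowelcheck('banana'))  # a banana
```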
How to set the mouse cursor on QTabWidget tabs I am dealing with a simple problem I don't succeed to solve. I am working with Python-3.6 and PyQt5.What I wish is to change the mouse cursor when the user has the mouse above the inactive tab (to understand he can click on it to change the active tab). And this only on the inactive tab, not on its content (because changing the cursor parameter in Qt Designer permits me to do this). I looked for this answer on internet during a few hours but I didn't find what I want. I think I have to deal with the stylesheet property, but I don't know how to do (it seems to be something with QTabWidget::pane maybe). Can someone help me please?Here is a sample code in which the mouse-cursor is changed over the whole the tab, not only the tab title:from PyQt5 import QtCore, QtGui, QtWidgetsimport sysclass Ui_Form(QtWidgets.QWidget): def __init__(self, nb_courbes=1, nom_courbes='', parent=None): super(Ui_Form, self).__init__(parent) self.setupUi(self) def setupUi(self, Form): Form.setObjectName("Form") Form.resize(249, 169) self.tabWidget = QtWidgets.QTabWidget(Form) self.tabWidget.setGeometry(QtCore.QRect(40, 40, 127, 80)) self.tabWidget.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor)) self.tabWidget.setObjectName("tabWidget") self.tab = QtWidgets.QWidget() self.tab.setObjectName("tab") self.tabWidget.addTab(self.tab, "") self.tab_2 = QtWidgets.QWidget() self.tab_2.setObjectName("tab_2") self.tabWidget.addTab(self.tab_2, "")if __name__ == '__main__': app = QtWidgets.QApplication(sys.argv) form = Ui_Form() form.show() sys.exit(app.exec_())I want the normal cursor (the arrow) on the blue zone, but change the mouse cursor on the inactive tab title in the red zone.
You can change the cursor for all the tabs by setting it on the tab-bar: self.tabWidget.tabBar().setCursor(QtCore.Qt.PointingHandCursor)However, to change it for only the inactive tabs, you must use an event filter to change the cursor dynamically, like this:def setupUi(self, Form): ... self.tabWidget.tabBar().installEventFilter(self) self.tabWidget.tabBar().setMouseTracking(True) ...def eventFilter(self, source, event): if (event.type() == QtCore.QEvent.MouseMove and source is self.tabWidget.tabBar()): index = source.tabAt(event.pos()) if index >= 0 and index != source.currentIndex(): source.setCursor(QtCore.Qt.PointingHandCursor) else: source.setCursor(QtCore.Qt.ArrowCursor) return super(Ui_Form, self).eventFilter(source, event)(PS: Qt does not support changing the cursor via stylesheets).
Singleton list or normal list? I'm very new to python and would like to ask some maybe very dumb question about list.I have some listlst = get_lst()element = #some elementI want to create a single element list [element] or lst. I can do it like thisresult_lst = [element] if element is not None else lstBut maybe there is some library function which already does this?
The shortest way to express this might be:result_lst = ([element], lst)[element is None]But I would not necessarily consider it a recommendable pattern or more readable. If the expressions involved get more complex, I'd even drop the ternary operator and use a good old if-else construction.
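Both spellings give the same result; the tuple trick works because bool is a subclass of int, so the condition selects index 0 (False) or 1 (True):

```python
lst = [1, 2, 3]

for element in ('x', None):
    # Tuple indexing: False -> slot 0, True -> slot 1.
    via_tuple = ([element], lst)[element is None]
    # The equivalent conditional expression:
    via_ternary = [element] if element is not None else lst
    print(via_tuple == via_ternary)  # True both times
```

One difference worth noting: the tuple form always constructs `[element]` even when it is thrown away, whereas the conditional expression only evaluates the branch it takes.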
create a function with dot: myfunc.print("Hello World") In Python, how do you define a dotted function? I tried this:def myfunc.print(value) print(value);but it says "invalid syntax"
Here's one way:>>> class myfunc:... print = print...>>> myfunc.print("foo")fooIn this example, myfunc is actually a class, and print is a class attribute (which is initialized to point to the print function).
Unsupported characters in input (Python 2.7.9) A small question from a newbie. I am trying to do a little function where it randomizes the content of a text. #-*- coding: utf-8 -*-import randomdef glitch(text): new_text = [''] for x in text: new_text.append(x) random.shuffle(new_text) return ''.join(new_text)As you can see it is quite simple, and the output, when inputting a simple string, such as 'Hey how are you?' will result in a randomized sentence as predicted. However, when I try to paste something similar to this: print glitch('Iàäï�†n$§&0ñŒ≥Q¶µù`o¢y”—œº')...Python 2.7.9 returns 'Unsupported characters in input' -- I have looked around the forum and have tried things as far as I can understand, as I am still new to coding in general, but to no avail. Any advice?Thanks.
#-*- coding: utf-8 -*-import randomdef glitch(text): new_text = [''] for x in text: new_text.append(x) random.shuffle(new_text) return ''.join(new_text)print (glitch(u'Iàäï†n$§&0ñŒ≥Q¶µù`o¢y”—œº'))This should work, through a quick google search of my own, I found out, you have to prepend the letter 'u', to mark the following text as unicode.Source: Unsupported characters in input
Valid generic code to index 2D or 1D masked arrays into 1D arrays in Numpy I would like to have a valid code for either 2D or 1D masked array to extract a 1D array from it. In the 2D case, one column would be entirely masked and should be removed (this can be done as shown in this question for example).import numpy as npa = np.ma.masked_array(range(10*2), mask=[True, False]*10).reshape(10,2)a = np.ma.masked_equal(a, 13)b = np.ma.masked_equal(np.array(range(10)), 3)print(a)print(b)# [[-- 1]# [-- 3]# [-- 5]# [-- 7]# [-- 9]# [-- 11]# [-- --]# [-- 15]# [-- 17]# [-- 19]]# [0 1 2 -- 4 5 6 7 8 9]# HERE I would like the same indexing valid for both (2D and 1D) situations:a = a[:, ~np.all(a.mask, axis=0)].squeeze()b = b[:] # I am not supposed to know that b is actually 1D and not a problematic 2D arrayprint(a)print(b)# [1 3 5 7 9 11 -- 15 17 19]# [0 1 2 -- 4 5 6 7 8 9]print(a-b)# [1 2 3 -- 5 6 -- 8 9 10]What would be a valid, pythonic code to achieve this?Sub-question: to my surprise, during my attempts the following did work:b = b[:, ~np.all(b.mask, axis=0)].squeeze()print(b)# [1 3 5 7 9 11 -- 15 17 19]Why don't I get a IndexError: too many indices for array error while I use 2D indexing for this 1D array?Is there any better option to address the original question? Thanks!
You can use a = a[:, ~np.all(a.mask, axis=0)].squeeze() for both cases (1D and 2D).In the 1D case of your example you get b[:, ~np.all(b.mask, axis=0)] which is b[:, True]. It seems that this should throw an indexing error but True behaves like np.newaxis in this case, i.e. the result of b[:, True] is an array of shape (10,1). See this SO answer for why this is so and what's the motivation behind it (the answer pertains to the 0-dimensionsal case but it turns out to work for higher dimensions the same way). squeeze then removes this additional dimension so that you didn't notice it.
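The claimed shape is easy to verify directly (plain numpy, no masking required):

```python
import numpy as np

b = np.arange(10)

# A scalar boolean in an index tuple behaves like np.newaxis gated by
# its value: True inserts an axis of length 1, so no IndexError occurs.
c = b[:, True]
print(c.shape)            # (10, 1)
print(c.squeeze().shape)  # (10,)
```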
Add values from columns into a new column using pandas I have a dataframe:id category value1 1 abc2 2 abc3 1 abc4 4 abc5 4 abc6 3 abcCategory 1 = best, 2 = good, 3 = bad, 4 =uglyI want to create a new column such that, for category 1 the value in the column should be cat_1, for category 2, the value should be cat2.in new_col2 for category 1 should be cat_best, for category 2, the value should be cat_good.df['new_col'] = ''my final dfid category value new_col new_col21 1 abc cat_1 cat_best2 2 abc cat_2 cat_good3 1 abc cat_1 cat_best4 4 abc cat_4 cat_ugly5 4 abc cat_4 cat_ugly6 3 abc cat_3 cat_badI can iterate it in for loop:for index,row in df.iterrows(): df.loc[df.id == row.id,'new_col'] = 'cat_'+str(row['category'])Is there a better way of doing it (least time consuming)
I think you need to concatenate the string with the column converted to string, and map a dictionary for the second column:d = {1:'best', 2: 'good', 3 : 'bad', 4 :'ugly'}df['new_col'] = 'cat_'+ df['category'].astype(str)df['new_col2'] = 'cat_'+ df['category'].map(d)Or:df = df.assign(new_col= 'cat_'+ df['category'].astype(str), new_col2='cat_'+ df['category'].map(d))print (df) id category value new_col new_col20 1 1 abc cat_1 cat_best1 2 2 abc cat_2 cat_good2 3 1 abc cat_1 cat_best3 4 4 abc cat_4 cat_ugly4 5 4 abc cat_4 cat_ugly5 6 3 abc cat_3 cat_bad
Fetching language detection from Google api I have a CSV with keywords in one column and the number of impressions in a second column.I'd like to provide the keywords in a url (while looping) and for the Google language api to return what type of language was the keyword in.I have it working manually. If I enter (with the correct api key):http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=merdeI get:{"responseData": {"language":"fr","isReliable":false,"confidence":6.213709E-4}, "responseDetails": null, "responseStatus": 200}which is correct, 'merde' is French.so far I have this code but I keep getting server unreachable errors:import timeimport csvfrom operator import itemgetterimport sysimport fileinputimport urllib2import jsonE_OPERATION_ERROR = 1E_INVALID_PARAMS = 2#not workingdef parse_result(result): """Parse a JSONP result string and return a list of terms""" # Deserialize JSON to Python objects result_object = json.loads(result) #Get the rows in the table, then get the second column's value # for each row return row in result_object#not workingdef retrieve_terms(seedterm): print(seedterm) """Retrieves and parses data and returns a list of terms""" url_template = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=%(seed)s' url = url_template % {"seed": seedterm} try: with urllib2.urlopen(url) as data: data = perform_request(seedterm) result = data.read() except: sys.stderr.write('%s\n' % 'Could not request data from server') exit(E_OPERATION_ERROR) #terms = parse_result(result) #print terms print resultdef main(argv): filename = argv[1] csvfile = open(filename, 'r') csvreader = csv.DictReader(csvfile) rows = [] for row in csvreader: rows.append(row) sortedrows = sorted(rows, key=itemgetter('impressions'), reverse = True) keys = sortedrows[0].keys() for item in sortedrows: retrieve_terms(item['keywords']) try: outputfile = open('Output_%s.csv' % (filename),'w') except IOError: print("The file is active in 
another program - close it first!") sys.exit() dict_writer = csv.DictWriter(outputfile, keys, lineterminator='\n') dict_writer.writer.writerow(keys) dict_writer.writerows(sortedrows) outputfile.close() print("File is Done!! Check your folder") if __name__ == '__main__': start_time = time.clock() main(sys.argv) print("\n") print time.clock() - start_time, "seconds for script time"Any idea how to finish the code so that it will work? Thank you!
Try to add referrer, userip as described in the docs: An area to pay special attention to relates to correctly identifying yourself in your requests. Applications MUST always include a valid and accurate http referer header in their requests. In addition, we ask, but do not require, that each request contains a valid API Key. By providing a key, your application provides us with a secondary identification mechanism that is useful should we need to contact you in order to correct any problems. Read more about the usefulness of having an API key Developers are also encouraged to make use of the userip parameter (see below) to supply the IP address of the end-user on whose behalf you are making the API request. Doing so will help distinguish this legitimate server-side traffic from traffic which doesn't come from an end-user.Here's an example based on the answer to the question "access to google with python":#!/usr/bin/python# -*- coding: utf-8 -*-import jsonimport urllib, urllib2from pprint import pprintapi_key, userip = None, Nonequery = {'q' : 'матрёшка'}referrer = "https://stackoverflow.com/q/4309599/4279"if userip: query.update(userip=userip)if api_key: query.update(key=api_key)url = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&%s' %( urllib.urlencode(query))request = urllib2.Request(url, headers=dict(Referer=referrer))json_data = json.load(urllib2.urlopen(request))pprint(json_data['responseData'])Output{u'confidence': 0.070496580000000003, u'isReliable': False, u'language': u'ru'}Another issue might be that seedterm is not properly quoted:if isinstance(seedterm, unicode): value = seedtermelse: # bytes value = seedterm.decode(put_encoding_here)url = 'http://...q=%s' % urllib.quote_plus(value.encode('utf-8'))
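For the quoting step itself, the Python 3 equivalent looks like this (the API endpoint has long since been retired, so this only illustrates the encoding; the query values are placeholders):

```python
from urllib.parse import urlencode

query = {'q': 'матрёшка', 'v': '1.0'}
# urlencode percent-encodes the UTF-8 bytes of each value, which is
# exactly what a non-ASCII seed term needs before going into the URL.
qs = urlencode(query)
print(qs)
url = 'http://ajax.googleapis.com/ajax/services/language/detect?' + qs
```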
ImportError: No module named 'requests.exceptions' Super new to coding and i'm trying to learn Python. I have used Anaconda to manage packages, etc. I typically update Anaconda/conda in cmd with commands such as conda update conda or conda update anacondaAs of late, when using these commands, it comes up with a message: "ImportError: No module named 'Requests.exceptions'" followed by "Import Error: cannot import name 'Session'" Please see below.Traceback (most recent call last): File "C:\Program Files\Anaconda3\lib\site-packages\conda\cli\main.py", line 171, in main activate.main() File "C:\Program Files\Anaconda3\lib\site-packages\conda\cli\activate.py", line 181, in main from ..install import symlink_conda File "C:\Program Files\Anaconda3\lib\site-packages\conda\install.py", line 37, in <module> from .core.package_cache import rm_fetched # NOQA File "C:\Program Files\Anaconda3\lib\site-packages\conda\core\package_cache.py", line 9, in <module> from .path_actions import CacheUrlAction, ExtractPackageAction File "C:\Program Files\Anaconda3\lib\site-packages\conda\core\path_actions.py", line 33, in <module> from ..gateways.download import download File "C:\Program Files\Anaconda3\lib\site-packages\conda\gateways\download.py", line 10, in <module> from requests.exceptions import ConnectionError, HTTPError, InvalidSchema, SSLErrorImportError: No module named 'requests.exceptions'During handling of the above exception, another exception occurred:Traceback (most recent call last): File "C:\Program Files\Anaconda3\Scripts\conda-script.py", line 10, in <module> sys.exit(main()) File "C:\Program Files\Anaconda3\lib\site-packages\conda\cli\main.py", line 179, in main return handle_exception(e) File "C:\Program Files\Anaconda3\lib\site-packages\conda\exceptions.py", line 634, in handle_exception print_unexpected_error_message(e) File "C:\Program Files\Anaconda3\lib\site-packages\conda\exceptions.py", line 596, in print_unexpected_error_message 
stderrlogger.info(get_main_info_str(get_info_dict())) File "C:\Program Files\Anaconda3\lib\site-packages\conda\cli\main_info.py", line 162, in get_info_dict from ..connection import user_agent File "C:\Program Files\Anaconda3\lib\site-packages\conda\connection.py", line 12, in <module> from requests import Session, __version__ as REQUESTS_VERSIONImportError: cannot import name 'Session'I've tried using commands like pip install requests but it says that it has already says "Requirement already satisfied and lists locations where it is installed (i am guessing).At this point I can't even get a response back from conda commands like conda info --envs. It doesn't do anything when i type that in.If i need to uninstall conda/anaconda i will but am i just missing a simple fix?Thanks friends!
You should install requests with conda if you plan to use conda as your python environment.conda install requests
How to call listener class in robot.api? I have a bunch of testsuites which are executed using robot.api. For Example,from robot.api import TestSuite,ResultWritertc_dict = { 'test case #1' : 'Passed' 'test case #2' : 'Failed' }suite = TestSuite('tests_with_listener.robot')for k,v in tc_dict.items(): test = suite.tests.create(k) test.keywords.create('should be equal',args=(tc_dict[k],'Passed'))result = suite.run(output=xml_fpath)Is there any way in robot.api by which we can execute the following code?robot -b debug.txt --listener <ListenerLibrary> tests_with_listener.robot
In the documentation for robot.api the following note can be found: APIs related to the command line entry points are exposed directly via the robot root package. The referred documentation is robot.run or robot.run_cli.
How get python subprocess to run regex pattern without adding escape chars? I am trying to run locate from python3 using a basic regex pattern.subprocess.run( ['locate', '-r', '\.[^\~]$'] )But subprocess is adding escape characters to the regex string. This seems to cause it to break.The completed process reports that it ran the regex string thus:'\\.[^\\~]$'How do I stop it escaping the regex string?
So the question was invalid. But the answer, which is an answer to a different question, is instructive.this pattern worked'.*[^~]$'It was not necessary to escape the chars I had escaped in the first place, as @Wiktor says in his comment above.The confusion was over just how simple the basic bash regex is. In that respect, the above pattern is not quite as it seems either..* doesn't mean find everything as is usual. * on its own means find everything. The . merely matches a dot. So .* means find something with a . followed by anything.To be more precise, the actual pattern I am using is more like this:'abc.*[^~]$'... to find all files with name starting abc. and ending in anything but a ~.Oddly, this does not seem to work:'^abc.*[^~]$'
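The doubled backslashes in the "completed process" output were never added by subprocess; they are just how Python's repr displays a string that contains single backslashes. A quick check:

```python
s = r'\.[^\~]$'  # identical to '\.[^\~]$': \. and \~ are not escape sequences

print(len(s))   # 8: each backslash is a single character
print(repr(s))  # '\\.[^\\~]$'  -- repr doubles backslashes for display only
```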
Creating a very large 2D array without a major impact to code run time in Python I've been doing competitive programming (USACO) for a couple of months now, in which there are time constraints you cannot exceed. I need to create a large matrix, or 2d array, the dimensions being 2500x2500, in which each value is [0,0]. Using list comprehension is taking too much time, and I needed an alternative (you cannot import modules so numpy isn't an option). I had initially done this:grid = [[[0,0] for i in range(2500)] for i in range(2500)]but it was taking far too much time, so I tried: grid= [[[0,0]]*2500]*2500,which gives the same result initially, but whenever I try to change a value, for example:grid[50][120][0]= 1, it changes the 0th index position of every [0,0] to False in the entire matrix instead of the specific coordinate at the [50][120] position, and this isn't the case when I use list comprehension. Does anybody know what's happening here? And any solution that doesn't involve a crazy run time? I started python just a couple of months before competitive programming so I'm not super experienced.
grid = [[[0,0] for i in range(2500)] for i in range(2500)]takes around 2.1 seconds on my PC, timing with PowerShell's Measure-Command. Now if the data specifications are strict, there is no magical way to make this faster. However, if the goal is to make this representation generate faster there is a better solution: use tuple instead of list for the inner data (0, 0).grid = [[(0, 0) for i in range(2500)] for i in range(2500)]This snippet generates the same informational value in under quarter of the time (0.48 s). Now what you have to consider here is what comes next. When updating these values in the grid, you need to always create a new tuple to replace the old one - which will always be slower than just updating the list value in the original sample code. This is because tuple doesn't support item assignment operation. Replacing a single value is still as easy as grid[50][120] = (1, grid[50][120][1]).Fast generation - slow replacement, might be handy if there aren't tons of value changes. Hope this helps.
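The aliasing from the question can be reproduced in a few lines: every `*` repeats references, so all the cells end up being the very same `[0, 0]` list, while a comprehension creates a fresh inner list per cell:

```python
# Repetition copies references: all nine cells share one [0, 0] list.
bad = [[[0, 0]] * 3] * 3
bad[0][0][0] = 1
print(bad[2][2])   # [1, 0] -- the "other" cells changed too

# A comprehension creates a fresh inner list for every cell.
good = [[[0, 0] for _ in range(3)] for _ in range(3)]
good[0][0][0] = 1
print(good[2][2])  # [0, 0]
```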
unable to perform search on custom_field(JIRA-Python) I'm getting the below error when I search on custom_field.{"errorMessages":["Field \'customfield_10029\' does not exist or you do not have permission to view it."],"warningMessages":[]}But I have enough permissions(Admin) to access that field. And also I enabled the field visible.URL = 'https://xyz.atlassian.net/rest/api/2/search?jql=status="In+Progress"+and+customfield_10029=125&fields=id,key,status'
Custom fields in JQL searches are referenced using the abbreviation 'cf' followed by their ID inside square brackets '[id]', so your URL would be:URL ='https://xyz.atlassian.net/rest/api/2/search?jql=status="In+Progress"+and+cf[10029]=125&fields=id,key,status'Make sure you properly encode the square brackets in UTF-8 format in your language's encoding method.PS. Generally speaking, it's much easier to reference custom fields in JQL searches by their names, not their IDs. It makes the search URL easier to read and understand what is being searched for.
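A sketch of the encoding step with the standard library, using the JQL string from the answer:

```python
from urllib.parse import quote

jql = 'status="In Progress" and cf[10029]=125'
encoded = quote(jql, safe='')
# The square brackets become %5B and %5D:
print('cf%5B10029%5D' in encoded)  # True
```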
Regex expression to apply it on dataframe (convert a string of hour and minutes to sum of minutes) - python I have a df and a column with strings that looks like following:runtime 1h 38m 20h 4m 5h 45m emptyand I am trying to apply a function which will convert it to minutes.So far, I have come up with part of it:def runtime_to_minutes(string): try: capt_numbers = re.compile(r'[\d+][\d+]') hours = int(re.findall(capt_numbers, string)[0]) minutes = int(re.findall(capt_numbers, string)[1]) duration = hours * 60 + minutes return duration except Exception as error: return str(error)which obviously cannot handle all the situations, although it won't work for '1h 38m' either as I get an error list index out of range when I do: df['minutes'] = df['runtime'].apply(lambda s: runtime_to_minutes(s))How should I restructure the regex and the function to get the desired result?
You can useimport pandas as pddf = pd.DataFrame({'runtime':['1h 38m','20h 4m','5h','45m','empty']})df[['hours', 'minutes']] = df['runtime'].str.extract(r'(?=\d+\s*[hm]\b)(?:(\d+)\s*h)?(?:\s*(\d+)\s*m)?').fillna(0)df['minutes'] = df['hours'].astype(int) * 60 + df['minutes'].astype(int)df.drop('hours', axis=1, inplace=True)# => df# runtime minutes# 0 1h 38m 98# 1 20h 4m 1204# 2 5h 300# 3 45m 45# 4 empty 0See the regex demo. The pattern extracts two captures, hours and minutes. Both parts are optional, but the lookahead makes sure at least one part is present.(?=\d+\s*[hm]\b) - a positive lookahead that requires one or more digits, zero or more whitespaces, and then h or m not followed with any other word char(?:(\d+)\s*h)? - an optional non-capturing group capturing one or more digits into Group 1, and then just matching zero or more whitespaces and h(?:\s*(\d+)\s*m)? - an optional non-capturing group matching zero or more whitespaces, then capturing one or more digits into Group 2, and then zero or more whitespaces and m are matched.If no match occurs, .fillna(0) puts 0 as default value.The hours and minutes are saved in hours and minutes columns.Then, the minutes are calculated and hours column is dropped.
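The same pattern also works with the standard re module alone, which makes the arithmetic easy to check by hand (a sketch over the sample strings):

```python
import re

pattern = re.compile(r'(?=\d+\s*[hm]\b)(?:(\d+)\s*h)?(?:\s*(\d+)\s*m)?')

def runtime_to_minutes(s):
    m = pattern.search(s)
    if not m:  # e.g. 'empty': no digits, so the lookahead never succeeds
        return 0
    hours, minutes = (int(g) if g else 0 for g in m.groups())
    return hours * 60 + minutes

for s in ('1h 38m', '20h 4m', '5h', '45m', 'empty'):
    print(s, '->', runtime_to_minutes(s))  # 98, 1204, 300, 45, 0
```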
How to add an additional plot to multiple subplots I want to generate pairs of lineplots where one of them is used as a benchmark.I can generate a plot like this with the code below.however, I wish I could have 6 pair plots with Arkhangelsk as the benchmark line in each plot instead. for example, one of them will be like this:.import pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsdata = {'year': ['1998','1998','1998','1998','1998','1998','1998','1999','1999','1999','1999','1999','1999','1999'],'region':['Adygea','Altai Krai','Amur Oblast','Arkhangelsk','Astrakhan','Bashkortostan','Belgorod','Adygea','Altai Krai','Amur Oblast','Arkhangelsk','Astrakhan','Bashkortostan','Belgorod'], 'sales':[8.8, 19.2,21.2, 10.6,18,17.5,23, 10, 17.8, 20.5, 12.6, 19.9, 16, 21]}df1 = pd.DataFrame(data)plt.figure(figsize=(12, 6))palette1 = {c:'#079b51' if c=='Astrakhan' else 'grey' for c in df1['region'].unique()}sns.lineplot(x= 'year', y='sales', data=df1,hue='region', palette=palette1) # or sns.relplotplt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')My code is reproducible.I have tried the following, which apparently does not work. I am not sure how to loop through each of my 6 regions to compare the Astkrakhan line plot to. It should probably contain a condition like not equal/equal to 'Astrakhan' ? thank you.fig, axs = plt.subplots(2,3, figsize=(18,15))for i in enumerate(df1.region.unique()): sns.lineplot(x= 'year', y='sales', data=df1,ax = axs[i])and this, which brings up the ValueError: Could not interpret value Adygea for parameter y.df2 = df1.pivot(index='year', columns='region', values='sales') # converting the rows into columnsdf_a = df2[['Arkhangelsk']]df_r = df2.loc[:, ~df2.columns.isin(['Arkhangelsk'])] ## all other columnsfig, axes = plt.subplots(2, 3)for col, ax in zip(df2.columns, axes.ravel()): sns.lineplot(x = "year", y = col, data = df_a, ax = ax, linestyle="--") sns.lineplot(x = "year", y = col, data = df_r, ax = ax)
Use pandas.DataFrame.plot, which, like seaborn, uses matplotlib.

# convert the year column to an int
df.year = df.year.astype(int)

# pivot the data to a wide format
dfp = df.pivot(index='year', columns='region', values='sales')

# get the columns to plot and compare
compare = 'Arkhangelsk'
cols = dfp.columns.tolist()
cols.remove(compare)

# set color dict
color = {c: '#079b51' if c == 'Arkhangelsk' else 'grey' for c in df['region'].unique()}

# plot the data with subplots
axes = dfp.plot(y=cols, subplots=True, layout=(2, 3), figsize=(16, 10), sharey=True, xticks=dfp.index, color=color)

# flatten the array
axes = axes.flat  # .ravel() and .flatten() also work

# extract the figure object
fig = axes[0].get_figure()

# iterate through each axes
for ax in axes:
    # plot the comparison column
    dfp.plot(y=compare, ax=ax, color=color)
    # adjust the legend if desired
    ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.10), frameon=False, ncol=2)

fig.suptitle('My Plots', fontsize=22, y=0.95)
plt.show()
Gathering entries in a matrix based on a matrix of column indices (tensorflow/numpy)

A little example to demonstrate what I need. I have a question about gathering in tensorflow. Let's say I have a tensor of values (that I care about for some reason):

test1 = tf.round(5*tf.random.uniform(shape=(2,3)))

which gives me this output:

<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[1., 1., 2.],
       [4., 5., 0.]], dtype=float32)>

and I also have a tensor of column indices that I want to pick out on every row:

test_ind = tf.constant([[0,1,0,0,1],
                        [0,1,1,1,0]], dtype=tf.int64)

I want to gather this so that from the first row (0th row), I pick out items in columns 0, 1, 0, 0, 1, and the same for the second row. So the output for this example should be:

<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[1., 1., 1., 1., 1.],
       [4., 5., 5., 5., 4.]], dtype=float32)>

My attempt

So I figured out a way to do this in general. I wrote the following function gather_matrix_indices() that will take in a tensor of values and a tensor of indices and do exactly what I specified above.

def gather_matrix_indices(input_arr, index_arr):
    row, _ = input_arr.shape
    li = []
    for i in range(row):
        li.append(tf.expand_dims(tf.gather(params=input_arr[i], indices=index_arr[i]), axis=0))
    return tf.concat(li, axis=0)

My Question

I'm just wondering, is there a way to do this using ONLY tensorflow or numpy methods? The only solution I could come up with is writing my own function that iterates through every row and gathers indices for all columns in that row. I have not had runtime issues yet but I would much rather utilize built-in tensorflow or numpy methods when possible. I've tried tf.gather before too, but I don't know if this particular case is possible with any combination of tf.gather and tf.gather_nd.
If anyone has a suggestion, I would greatly appreciate it.Edit (08/18/22)I would like to add an edit that in PyTorch, calling torch.gather() and setting dim=1 in the arguments will do EXACTLY what I wanted in this question. So if you're familiar with both libraries, and you really need this functionality, torch.gather() can do this out of the box.
You can use gather_nd() for this. It can look a bit tricky to get this working. Let me try to explain this with shapes.

We got test1 -> [2, 3] and test_ind_col_ind -> [2, 5]. test_ind_col_ind has only column indices, but you also need row indices to use gather_nd(). To use gather_nd() with a [2,3] tensor, we need to create a test_ind -> [2, 5, 2] sized tensor. The innermost dimension of this new test_ind corresponds to individual indices you want to index from test1. Here we have the innermost dimension = 2, in the format (<row index>, <col index>). In other words, looking at the shape of test_ind,

[ 2 , 5 , 2 ]
  |   |   |
  |   |   V
  |   |  (2,)   <- The full index to a scalar in your input tensor
  V   V
 (2,5)          <- The size of the final tensor

import tensorflow as tf

test1 = tf.round(5*tf.random.uniform(shape=(2,3)))
print(test1)

test_ind_col_ind = tf.constant([[0,1,0,0,1], [0,1,1,1,0]], dtype=tf.int64)[:, :, tf.newaxis]
test_ind_row_ind = tf.repeat(tf.range(2, dtype=tf.int64)[:, tf.newaxis, tf.newaxis], 5, axis=1)

# concatenate the (row, col) index pairs built above
test_ind = tf.concat([test_ind_row_ind, test_ind_col_ind], axis=-1)

res = tf.gather_nd(indices=test_ind, params=test1)
Unable to serve static file from flask server

I have an index.html file, which has the absolute path 'c:\project\web\frontend\index.html'. I am trying to return it using the following function:

@webserver.route('/')
def home():
    return webserver.send_static_file(path)

I have verified that the path is correct by accessing it directly in the browser. I have tried to replace '\' with '/' without any luck. It is running on a Windows machine.
I had to define the path as the static_folder when creating the Flask object. Once I defined the folder as static, the HTML page was served.
MLPRegressor not giving accurate results

I have been given some years of data for ozone, NO, NO2 and CO to work on. The task is to use this data to predict the value of ozone. Suppose I have data for the years 2015, 2016, 2018 and 2019: I need to predict the ozone value of 2019 using the 2015, 2016 and 2018 data that I have. The data is recorded hourly and organized month by month (shown in the attached image).

What I have done: first, all the years' data was put into one Excel file containing 4 columns (NO, NO2, CO, O3), added month by month. This is the master file which has been used (attached image). I have used Python. First the data has to be cleaned. Let me explain a bit: NO, NO2 and CO are precursors of ozone, which means that ozone creation depends on these gases, and the data has to be cleaned beforehand. The constraints were to remove any negative value and to drop that whole row, including the other columns: if any of the values of O3, NO, NO2 or CO is invalid, we have to remove the whole row and not count it. The data also contained some entries in string format which had to be removed. All of that was done.
Then i applied MLP regressor from sk learn Here the code which i have done.from sklearn.model_selection import train_test_splitfrom sklearn.preprocessing import StandardScalerfrom sklearn.metrics import explained_variance_scorefrom sklearn.neural_network import MLPRegressorfrom sklearn.metrics import mean_absolute_errorimport pandas as pdimport matplotlib.pyplot as pltbugs = ['NOx', '* 43.3', '* 312', '11/19', '11/28', '06:00', '09/30', '09/04', '14:00', '06/25', '07:00', '06/02', '17:00', '04/10', '04/17', '18:00', '02/26', '02/03', '01:00', '11/23', '15:00', '11/12', '24:00', '09/02', '16:00', '09/28', '* 16.8', '* 121', '12:00', '06/24', '13:00', '06/26', 'Span', 'NoData', 'ppb', 'Zero', 'Samp<', 'RS232']dataset = pd.read_excel("Testing.xlsx")dataset = pd.DataFrame(dataset).replace(bugs, 0)dataset.dropna(subset=["O3"], inplace=True)dataset.dropna(subset=["NO"], inplace=True)dataset.dropna(subset=["NO2"], inplace=True)dataset.dropna(subset=["CO"], inplace=True)dataset.drop(dataset[dataset['O3'] < 1].index, inplace=True)dataset.drop(dataset[dataset['O3'] > 160].index, inplace=True)dataset.drop(dataset[dataset['O3'] == 0].index, inplace=True)dataset.drop(dataset[dataset['NO'] < 1].index, inplace=True)dataset.drop(dataset[dataset['NO'] > 160].index, inplace=True)dataset.drop(dataset[dataset['NO'] == 0].index, inplace=True)dataset.drop(dataset[dataset['NO2'] < 1].index, inplace=True)dataset.drop(dataset[dataset['NO2'] > 160].index, inplace=True)dataset.drop(dataset[dataset['NO2'] == 0].index, inplace=True)dataset.drop(dataset[dataset['CO'] < 1].index, inplace=True)dataset.drop(dataset[dataset['CO'] > 4000].index, inplace=True)dataset.drop(dataset[dataset['CO'] == 0].index, inplace=True)dataset = dataset.reset_index()dataset = dataset.drop(['index'], axis=1)X = dataset[["NO", "NO2", "CO"]].astype(int)Y = dataset[["O3"]].astype(int)X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.05, random_state=27)sc_x = StandardScaler()X_train = 
sc_x.fit_transform(X_train)X_test = sc_x.fit_transform(X_test)clf = MLPRegressor(hidden_layer_sizes=(100,100,100), max_iter=10000,verbose=True,random_state=8)clf.fit(X_train, y_train)y_pred = clf.predict(X_test)print(explained_variance_score(y_test, y_pred))print(mean_absolute_error(y_test, y_pred))y_test = pd.DataFrame(y_test)y_test = y_test.reset_index(0)y_test = y_test.drop(['index'], axis=1)# y_test = y_test.drop([19,20],axis=0)y_pred = pd.DataFrame(y_pred)y_pred = y_pred.shift(-1)# y_pred = y_pred.drop([19,20],axis=0)plt.figure(figsize=(10, 5))plt.plot(y_pred, color='r', label='PredictedO3')plt.plot(y_test, color='g', label='OriginalO3')plt.legend()plt.show()Console: y = column_or_1d(y, warn=True)Iteration 1, loss = 537.59597297Iteration 2, loss = 185.33662023Iteration 3, loss = 159.32122111Iteration 4, loss = 156.71612690Iteration 5, loss = 155.05307865Iteration 6, loss = 154.59351630Iteration 7, loss = 154.16687592Iteration 8, loss = 153.69258698Iteration 9, loss = 153.36140320Iteration 10, loss = 152.94593665Iteration 11, loss = 152.75124494Iteration 12, loss = 152.73893578Iteration 13, loss = 152.27131771Iteration 14, loss = 152.08732297Iteration 15, loss = 151.83197245Iteration 16, loss = 151.29399626Iteration 17, loss = 150.96425147Iteration 18, loss = 150.47673257Iteration 19, loss = 150.14353009Iteration 20, loss = 149.74165931Iteration 21, loss = 149.39158575Iteration 22, loss = 149.28863163Iteration 23, loss = 148.95356802Iteration 24, loss = 148.82618770Iteration 25, loss = 148.18070387Iteration 26, loss = 147.79069739Iteration 27, loss = 147.03057672Iteration 28, loss = 146.77822749Iteration 29, loss = 146.47159952Iteration 30, loss = 145.77185465Iteration 31, loss = 145.54493110Iteration 32, loss = 145.58297196Iteration 33, loss = 145.05848640Iteration 34, loss = 144.73301133Iteration 35, loss = 144.04886503Iteration 36, loss = 143.82328142Iteration 37, loss = 143.87060411Iteration 38, loss = 143.84762507Iteration 39, loss = 142.64434158Iteration 
40, loss = 142.63539287Iteration 41, loss = 142.55569644Iteration 42, loss = 142.33659309Iteration 43, loss = 142.08105262Iteration 44, loss = 141.84181483Iteration 45, loss = 143.50650508Iteration 46, loss = 141.34511656Iteration 47, loss = 141.26444355Iteration 48, loss = 140.37034198Iteration 49, loss = 140.15212097Iteration 50, loss = 140.21204360Iteration 51, loss = 140.01652524Iteration 52, loss = 139.55019562Iteration 53, loss = 139.96862367Iteration 54, loss = 139.18904418Iteration 55, loss = 138.96940532Iteration 56, loss = 138.74715169Iteration 57, loss = 138.42219317Iteration 58, loss = 138.87739582Iteration 59, loss = 138.48879907Iteration 60, loss = 138.32348064Iteration 61, loss = 138.25489777Iteration 62, loss = 137.35913024Iteration 63, loss = 137.34553482Iteration 64, loss = 137.81499126Iteration 65, loss = 137.24418131Iteration 66, loss = 138.22142987Iteration 67, loss = 136.68683284Iteration 68, loss = 136.80873025Iteration 69, loss = 136.89557260Iteration 70, loss = 137.78914828Iteration 71, loss = 136.39181767Iteration 72, loss = 136.90698714Iteration 73, loss = 136.15180171Iteration 74, loss = 136.29621913Iteration 75, loss = 136.54671797Iteration 76, loss = 136.17984691Iteration 77, loss = 135.46193871Iteration 78, loss = 135.72399747Iteration 79, loss = 135.66833438Iteration 80, loss = 135.59829106Iteration 81, loss = 134.89759461Iteration 82, loss = 135.13978950Iteration 83, loss = 135.13023951Iteration 84, loss = 134.74279949Iteration 85, loss = 135.81422214Iteration 86, loss = 134.91660517Iteration 87, loss = 134.42552779Iteration 88, loss = 134.69309963Iteration 89, loss = 135.12116240Iteration 90, loss = 134.58731261Iteration 91, loss = 135.03610330Iteration 92, loss = 135.49753508Iteration 93, loss = 134.34645918Iteration 94, loss = 133.73179994Iteration 95, loss = 133.63077367Iteration 96, loss = 133.77330604Iteration 97, loss = 134.34313391Iteration 98, loss = 133.89467176Iteration 99, loss = 134.16270723Iteration 100, loss = 
133.69654234Iteration 101, loss = 134.06460647Iteration 102, loss = 133.67570066Iteration 103, loss = 133.51941546Iteration 104, loss = 134.44514524Iteration 105, loss = 133.77755818Iteration 106, loss = 133.45007788Iteration 107, loss = 133.07441490Iteration 108, loss = 134.99803516Iteration 109, loss = 133.80158058Iteration 110, loss = 132.86973595Iteration 111, loss = 132.95281816Iteration 112, loss = 132.55546679Iteration 113, loss = 133.89665148Iteration 114, loss = 132.92319206Iteration 115, loss = 133.02169313Iteration 116, loss = 133.23774543Iteration 117, loss = 132.03027124Iteration 118, loss = 133.18472212Iteration 119, loss = 132.34502179Iteration 120, loss = 132.55417269Iteration 121, loss = 132.43373273Iteration 122, loss = 132.26810570Iteration 123, loss = 133.17705777Iteration 124, loss = 133.58044956Iteration 125, loss = 132.12074893Iteration 126, loss = 131.93800952Iteration 127, loss = 132.30641181Iteration 128, loss = 131.81882504Iteration 129, loss = 132.06413592Iteration 130, loss = 132.24680375Iteration 131, loss = 132.12261129Iteration 132, loss = 132.35714616Iteration 133, loss = 131.90862418Iteration 134, loss = 131.73195382Iteration 135, loss = 131.55302493Iteration 136, loss = 131.41382323Iteration 137, loss = 131.62962730Iteration 138, loss = 132.49231086Iteration 139, loss = 131.14651158Iteration 140, loss = 131.46236192Iteration 141, loss = 131.36319145Iteration 142, loss = 131.87374996Iteration 143, loss = 132.08955722Iteration 144, loss = 131.28997320Iteration 145, loss = 131.35961909Iteration 146, loss = 131.20954288Iteration 147, loss = 131.99304728Iteration 148, loss = 130.76432171Iteration 149, loss = 131.42775156Iteration 150, loss = 131.05940000Iteration 151, loss = 131.28351430Iteration 152, loss = 130.74260322Iteration 153, loss = 130.88466712Iteration 154, loss = 131.03646775Iteration 155, loss = 130.34557661Iteration 156, loss = 130.83447199Iteration 157, loss = 131.28845939Iteration 158, loss = 130.65785044Iteration 159, 
loss = 130.61223056Iteration 160, loss = 131.07589679Iteration 161, loss = 130.64325675Iteration 162, loss = 129.70704922Iteration 163, loss = 129.84506370Iteration 164, loss = 130.61988464Iteration 165, loss = 130.43265567Iteration 166, loss = 130.88822404Iteration 167, loss = 130.76778201Iteration 168, loss = 130.64819084Iteration 169, loss = 130.28019987Iteration 170, loss = 129.95417212Iteration 171, loss = 131.06510048Iteration 172, loss = 131.21377407Iteration 173, loss = 130.17368709Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.0.244249985191963412.796789671568312heres the final plot hereIf i am doing something wrong correct me.Regards
Such questions are actually difficult to answer exactly, since the answer depends crucially on the dataset used, which we don't have. Nevertheless, since your target variable seems to have a rather high dynamic range, you should try scaling it using a separate scaler; you should take care to inverse-transform the predictions back to their original scale before computing errors or plotting:

sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train.reshape(-1, 1))
y_test = sc_y.transform(y_test.reshape(-1, 1))

# model definition and fitting...

y_pred_scaled = clf.predict(X_test)             # get scaled predictions
y_pred = sc_y.inverse_transform(y_pred_scaled)  # transform back to original scale

You should be able from this point on to continue further with y_pred as you do in your code.

Also, irrelevant to your issue, but you are scaling your features in a wrong way. We never use fit_transform on the test data; the correct way is:

sc_x = StandardScaler()
X_train = sc_x.fit_transform(X_train)
X_test = sc_x.transform(X_test)  # transform here

As said, this is just a tip; the keyword here is experiment (with different numbers of layers, different numbers of units per layer, different scalers etc.).
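To see why transform must reuse the training statistics, here is a minimal standalone sketch (plain numpy, with made-up numbers) of what StandardScaler's fit/transform split does:

```python
import numpy as np

# toy train/test split; the numbers are made up for illustration
X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[10.0]])

# "fit" learns the statistics from the training data only
mu = X_train.mean(axis=0)    # 2.0
sigma = X_train.std(axis=0)  # ~0.816

# "transform" reuses those statistics for both sets
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma

# the scaled training set has mean 0; the test point lands far outside
# the training range, as it should, instead of being re-centred on itself
print(X_train_scaled.mean(), X_test_scaled[0, 0])
```

Calling fit_transform on the test set would instead map the single test point to 0, silently erasing the information that it lies far from the training data.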
How do I use Python code for moving files I have 5 files and they exist in 5 different locationsI would like to write a piece of code preferably in Python(I am a newbie) and the code should check all these 5 folders and code should check if the file exists, if it does, it should move over all those files from different locations to a single shared drive location. I tried the below URL and used import shutil, this worked but this is for one or many files from the same location to another location. Any pointers, thoughts, suggestions on how I can do this will be greatly appreciated.https://linuxhint.com/move-file-to-other-directory-python/
Speaking just about principles on how to solve your problem without writing any code:From the code you linked, you have a solution for copying one file at a time. Moving that code inside a function will let you easily re-use it for many different input files, and even more than one destination file. You can define the file(s) to be copied and the destination folder as arguments.Python's os library is great for operating system actions like checking if file exists, making the target directory if needed, just to name a few that might be useful. But this is just one tool for getting there, you'll likely see a few different but valid answers.
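A minimal sketch of those principles using only the standard library; the source paths and the shared-drive folder below are made-up placeholders, not part of the question:

```python
import os
import shutil

def move_if_exists(src_path, dest_dir):
    """Move src_path into dest_dir if the file exists; return True on success."""
    if not os.path.isfile(src_path):
        return False
    os.makedirs(dest_dir, exist_ok=True)  # make the target directory if needed
    shutil.move(src_path, os.path.join(dest_dir, os.path.basename(src_path)))
    return True

# hypothetical locations: replace these with your five real paths
sources = [
    r"C:\data\a\report1.txt",
    r"C:\data\b\report2.txt",
]
shared_drive = r"\\server\shared\reports"

for path in sources:
    moved = move_if_exists(path, shared_drive)
    print(path, "->", "moved" if moved else "not found")
```

Wrapping the single-file copy in a function like this lets you reuse it for all five locations with one loop.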
Problem using CSV to query a website, input isn't right I am fairly new at python programming but have a column of terms I want to search a website for. The code is as follows:import requestsimport pandas as pdfrom bs4 import BeautifulSoup as BScol_list = ['Molecular Formula'] #this is a column title in my csv fileChem = pd.read_csv('single.csv', usecols=col_list)res = requests.get('https://hmdb.ca/unearth/q?utf8=✓&query='+ Chem +'&searcher=metabolites&button=')html_page = res.contentsoup = BS(html_page, 'html.parser')body = soup.find_all('div', attrs={'class':'hit-name'})for div in body: print(div.text)I want to use the column information to fill in for the "Chem" term in the search. If I just use Chem = "some specific chemical" it works great. As it is written I get the following error - No connection adapters were found for 'Molecular Formula\n0 https://hmdb.ca/unearth/q?utf8=✓&query=C10H7NO...\n1 https://hmdb.ca/unearth/q?utf8=✓&query=C11N12O...'. Maybe this has to do with the numbers that pandas adds to each row? Any help is appreciated!
You can use a for-loop to iterate over the values in the "Molecular Formula" column. For example:

import requests
import pandas as pd
from bs4 import BeautifulSoup as BS

col_list = ["Molecular Formula"]  # this is a column title in my csv file
Chem = pd.read_csv("data.csv", usecols=col_list)

for c in Chem["Molecular Formula"]:
    res = requests.get(
        "https://hmdb.ca/unearth/q?utf8=✓&query="
        + c
        + "&searcher=metabolites&button="
    )
    html_page = res.content
    soup = BS(html_page, "html.parser")
    body = soup.find_all("div", attrs={"class": "hit-name"})
    for div in body:
        print(div.text)
    print("-" * 80)

Prints:

Succinylcholine
2-Ethyl-4,5-dimethylthiazole
Water
--------------------------------------------------------------------------------
Licoricesaponin C2
Illudin C2
Eremopetasitenin C2
Cinncassiol C2
Gladiatoside C2
Prostaglandin-c2
Capsicoside C2
Schidigerasaponin C2
Ganoderic acid C2
Ginsenoside C
Diethyl sulfide
Mangiferin
4-Nitrophenol
L-Acetylcarnitine
Malonic acid
11-trans-Leukotriene C4
(-)-Epigallocatechin
Tryptophan 2-C-mannoside
--------------------------------------------------------------------------------

contents of data.csv:

Molecular Formula
H2O
C2

EDIT: To save results to CSV:

import requests
import pandas as pd
from bs4 import BeautifulSoup as BS

col_list = ["Molecular Formula"]  # this is a column title in my csv file
Chem = pd.read_csv("data.csv", usecols=col_list)

all_data = []
for c in Chem["Molecular Formula"]:
    print(f"Getting {c=}")
    res = requests.get(
        "https://hmdb.ca/unearth/q?utf8=✓&query="
        + c
        + "&searcher=metabolites&button="
    )
    html_page = res.content
    soup = BS(html_page, "html.parser")
    body = soup.find_all("div", attrs={"class": "hit-name"})
    for div in body:
        all_data.append([c, div.text])

df = pd.DataFrame(all_data, columns=["Molecular Formula", "Value"])
print(df)
df.to_csv("result.csv", index=False)

Prints:

Getting c='H2O'
Getting c='C2'
   Molecular Formula                         Value
0                H2O               Succinylcholine
1                H2O  2-Ethyl-4,5-dimethylthiazole
2                H2O                         Water
3                 C2            Licoricesaponin C2
4                 C2                    Illudin C2
5                 C2           Eremopetasitenin C2
6                 C2                Cinncassiol C2
7                 C2               Gladiatoside C2
8                 C2              Prostaglandin-c2
9                 C2                Capsicoside C2
10                C2          Schidigerasaponin C2
11                C2             Ganoderic acid C2
12                C2                 Ginsenoside C
13                C2               Diethyl sulfide
14                C2                 4-Nitrophenol
15                C2                    Mangiferin
16                C2             L-Acetylcarnitine
17                C2                  Malonic acid
18                C2       11-trans-Leukotriene C4
19                C2          (-)-Epigallocatechin
20                C2      Tryptophan 2-C-mannoside

and saves result.csv
can't import kornia.augmentation.functional I have installed kornia and imorting it like,from kornia.color import *import kornia.augmentation.functional as F_kimport kornia as Kbut the second line is giving errorModuleNotFoundError: No module named 'kornia.augmentation.functional'. Also, this is my directory structure.But I getting errorModuleNotFoundError: No module named 'FewShot_models'when I try to import from FewShot_models.manipulate import *.I am following a code from github and trying to implement that.
kornia.augmentation.functional was removed in version 0.5.4, and most of the functions are now available through kornia.augmentation.

Regarding your second question, you need to add an empty file named __init__.py to the FewShot_models directory. Check this answer for details about __init__.py.
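As a standalone illustration of that package layout (with a made-up package name; on Python 3.3+ implicit namespace packages can sometimes import without the marker file, but an explicit __init__.py is the conventional, reliable choice), this sketch builds a structure like FewShot_models and imports from it. Note the package's parent directory must also be on sys.path, which is the other common cause of ModuleNotFoundError:

```python
import os
import sys
import tempfile

# build a throwaway package: fewshot_demo/__init__.py + fewshot_demo/manipulate.py
base = tempfile.mkdtemp()
pkg = os.path.join(base, "fewshot_demo")
os.makedirs(pkg)

open(os.path.join(pkg, "__init__.py"), "w").close()  # marks the directory as a package
with open(os.path.join(pkg, "manipulate.py"), "w") as fh:
    fh.write("VALUE = 42\n")

sys.path.insert(0, base)  # the package's parent directory must be importable

from fewshot_demo.manipulate import VALUE
print(VALUE)  # 42
```

If you run your script from a different working directory than the repository root, adding the repository root to sys.path (or running with `python -m`) resolves the same error.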
Where are cached wheel (.whl) files stored?

$ python3 -m venv ~/venvs/vtest
$ source ~/venvs/vtest/bin/activate
(vtest) $ pip install numpy
Collecting numpy
  Cache entry deserialization failed, entry ignored
  Using cached https://files.pythonhosted.org/packages/d2/ab/43e678759326f728de861edbef34b8e2ad1b1490505f20e0d1f0716c3bf4/numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl
Installing collected packages: numpy
Successfully installed numpy-1.17.4
(vtest) $

I'm looking for where this wheel numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl has been cached:

$ sudo updatedb
$ locate numpy-1.17.4
$ # nada ;(

The documentation https://pip.pypa.io/en/stable/reference/pip_install/#wheel-cache tells us that pip will read from the subdirectory wheels within the pip cache directory and use any packages found there.

$ pip --version
pip 9.0.1 from ~/venvs/vtest/lib/python3.6/site-packages (python 3.6)
$

To answer Hamza Khurshid, numpy is not in ~/.cache/pip/wheels:

$ find ~/.cache/pip/wheels -name '*.whl' |grep -i numpy
$

It looks like .cache/pip/wheels is only used for user-created wheels, not for downloaded wheels. Should I use export PIP_DOWNLOAD_CACHE=$HOME/.pip/cache ?
The message

Using cached https://files.pythonhosted.org/packages/d2/ab/43e678759326f728de861edbef34b8e2ad1b1490505f20e0d1f0716c3bf4/numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl

means pip is using the HTTP cache, not the wheel cache (which is only used for locally-built wheels, like you mentioned).

The name of the file in the HTTP cache is the sha224 of the URL being requested. You can retrieve the file like

$ pwd
/home/user/.cache/pip/http
$ find . -name "$(printf 'https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl' | sha224sum - | awk '{print $1}')"
./f/6/0/2/d/f602daffc1b0025a464d60b3e9f8b1f77a4538b550a46d67018978db

The format of the file is not stable though, and depends on pip version. For specifics you can see the implementation that's used in the latest cachecontrol, which pip uses.

If you want to get the actual file, an easier way is to use pip download, which will take the file from the cache into your current directory if it matches the URL that would be otherwise downloaded.
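The same lookup can be done from Python with hashlib; this is a sketch of the naming scheme described above (older pip/cachecontrol versions hash the bare URL with sha224 and shard the file under one directory per leading hex character; newer versions change the key format, so treat it as illustrative):

```python
import hashlib

def pip_http_cache_path(url: str) -> str:
    """Old-style pip HTTP cache location, relative to ~/.cache/pip/http."""
    digest = hashlib.sha224(url.encode()).hexdigest()
    # one directory level per leading hex character, then the full digest
    return "/".join(digest[:5]) + "/" + digest

url = ("https://files.pythonhosted.org/packages/65/26/"
       "32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/"
       "six-1.13.0-py2.py3-none-any.whl")
print(pip_http_cache_path(url))
```

This reproduces the f/6/0/2/d/... path found by the shell pipeline above.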
Send automated messages to Microsoft Teams using Python I want to run a script of Python and in the end send the results in a text format to a couple of employees through MS TeamsIs there any already build library that would allow me to send a message in Microsoft Teams through Python code?
1. Create a webhook in MS Teams

Add an incoming webhook to a Teams channel:

- Navigate to the channel where you want to add the webhook and select (•••) Connectors from the top navigation bar.
- Search for Incoming Webhook, and add it.
- Click Configure and provide a name for your webhook.
- Copy the URL which appears and click "OK".

2. Install pymsteams

pip install pymsteams

3. Create your python script

import pymsteams

myTeamsMessage = pymsteams.connectorcard("<Microsoft Webhook URL>")
myTeamsMessage.text("this is my text")
myTeamsMessage.send()

More information available here:

- Add a webhook to MS Teams
- Python pymsteams library
SQLAlchemy: How to set alias for insert statement? Need such a request: INSERT INTO public.cm_floor as r (load_date, centre, id_floor, name_floor) VALUES (now(), 'CentreName', 12345678, 'Floor 2') ON CONFLICT ON CONSTRAINT cm_floor_pkey DO UPDATE SET load_date=now(), centre=excluded.name_floor=excluded.name_floor WHERE (row_to_json(EXCLUDED)::jsonb - 'load_date') IS DISTINCT FROM (row_to_json(r.*)::jsonb - 'load_date');Python code:table = metadata.tables["public.cm_floor"]records = { ...}insert_stmt = insert(table).values(records)do_update_stmt = insert_stmt.on_conflict_do_update(index_elements=primary_keys, set_=update_column, where=text("(row_to_json(EXCLUDED)::jsonb - 'load_date') IS DISTINCT FROM (row_to_json(r.*)::jsonb - 'load_date')')"))I do not understand how to set the alias r. Or write the request differently. Without a alias.
my solution to the problem:

filter = [c != insert_stmt.excluded[c.name]
          for c in table.c
          if (not c.primary_key and c.name != "load_date")]

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=primary_keys,
    set_=update_column,
    where=or_(*filter))

The final query looks like this:

INSERT INTO public.cm_floor (load_date, centre, id_floor, name_floor)
VALUES (now(), 'Centre', 12345678, 'Floor 2')
ON CONFLICT ON CONSTRAINT cm_floor_pkey
DO UPDATE SET load_date=now(), centre=excluded.centre, name_floor=excluded.name_floor
WHERE public.cm_floor.centre<>excluded.centre OR public.cm_floor.name_floor<>excluded.name_floor;
Saving a string in python associated with an API it is my first post on stackoverflow so please go easy on me! :) I am also relatively new to python so bear with me :)With all that said here is my issue: I am writing a bit of code for fun which calls an API and grabs the latest Bitcoin Nonce data. I have managed to do this fine, however now I want to be able to save the first nonce value found as a string such as Nonce1 and then recall the API every few seconds till I get another Nonce value and name it Nonce2 for example? Is this possible? My code is down bellow.from __future__ import print_functionimport blocktrailclient = blocktrail.APIClient(api_key="x", api_secret="x", network="BTC", testnet=False)address = client.address('x')latest_block = client.block_latest()nonce = latest_block['nonce']print(nonce)noncestr = str(nonce)Thanks, again please go easy on me I am very new to Python :)
A very simple-minded solution:

import time

nonce = "some string"
while True:
    latest_nonce = client.block_latest()['nonce']
    if latest_nonce != nonce:
        nonce = latest_nonce
    time.sleep(2)

Ideally you should use something like asyncio for non-blocking execution.
Tensorflow apparently installs OK but then fails check I'm using Debian 10.2 (buster) and followed the procedure on https://www.tensorflow.org/install/pip?lang=python3 , using the virtual environment procedure as recommended.Everything works, down to and including:pip install --upgrade tensorflowThis generates a bunch of progress messages, all of which look OK.The very last step is "Verify the install:"python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"I type that (still in the venv environment) and it generates the message:Illegal instruction (core dumped)Nothing else. No hint as to what went wrong.I used gdb to look at the core dump and found:Program terminated with signal SIGILL, Illegal instruction.#0 0x00007fafbfd99820 in nsync::nsync_mu_init(nsync::nsync_mu_s_*) () from/home/me/venv/lib/python3.7/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
https://www.tensorflow.org/install/pip says: Starting with TensorFlow 1.6, binaries use AVX instructionsMy box says "Core i7" on the outside, but my /proc/cpuinfo gives the following flags:fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1dSo I conclude it doesn't have avx and the pre-built binaries require it. The pre-built binaries are therefore useless except for newer computers.
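To check this programmatically instead of eyeballing the flags, here is a small standalone sketch (Linux-only, since /proc/cpuinfo does not exist elsewhere):

```python
def cpu_has_flag(flag: str, cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the first 'flags' line in cpuinfo lists the given flag."""
    try:
        with open(cpuinfo_path) as fh:
            for line in fh:
                if line.startswith("flags"):
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass  # not Linux, or the file is unreadable
    return False

print("AVX supported:", cpu_has_flag("avx"))
```

If this prints False, the stock TensorFlow >= 1.6 wheels will die with SIGILL on the first AVX instruction, and you need either an older release or a build compiled without AVX.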
How to display new or modified lines between two files using Python I have two files: file1.txt and file2.txt. I would like to only display the lines in result2.txt that are new / different from those in result1.txt. I do this in bash using the following command: diff file1.txt file2.txt | grep -E "^>" | sed 's/^..//'Is this achievable using Python (without calling an OS command)?
See difflib, a module in the Python standard library, which does exactly this.
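For example, a standalone sketch that mimics `diff file1 file2 | grep '^>' | sed 's/^..//'` with difflib.unified_diff (the two lists stand in for file contents; in practice you would read them with open(...).read().splitlines()):

```python
import difflib

def lines_only_in_new(old, new):
    """Return the lines added/changed in `new`, like diff's '>' lines."""
    diff = difflib.unified_diff(old, new, lineterm="")
    # '+' marks added lines; '+++' is the header naming the second file
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

file1 = ["alpha", "beta", "gamma"]
file2 = ["alpha", "beta", "delta", "gamma", "epsilon"]
print(lines_only_in_new(file1, file2))  # ['delta', 'epsilon']
```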
Google Fusion Maps Info Window Dynamic Templating I'm doing some web scraping with Python and the last step is to use Google Fusion maps, but as somebody who has never touched any CSS styling before, I have no idea how to do something probably incredibly simple: hide a column title in the info window if it's blank. Not all the data have entries in my Amenities column, so I would like that to be gone from the info window if it's blank.I've read this (https://support.google.com/fusiontables/answer/3081246?hl=en&ref_topic=2575652), but it's complete gibberish to me at this stage. This is the default HTML they provide for the info window (with my data):<div class='googft-info-window'><b>Location:</b> {Location}<br><b>Movie Title:</b> {Movie Title}<br><b>Date:</b> {Date}<br><b>Amenities:</b> {Amenities}</div>This shot in the dark didn't work:<div class='googft-info-window'><b>Location:</b> {Location}<br><b>Movie Title:</b> {Movie Title}<br><b>Date:</b> {Date}<br><b>{if Amenities.value} Amenities: {/if}Amenities:</b> {Amenities}<br></div>
This question isn't related to CSS, try this:

{template .contents}
<div class='googft-info-window'>
<b>Location:</b> {$data.value.Location}<br/>
<b>Movie Title:</b> {$data.value['Movie Title']}<br/>
<b>Date:</b> {$data.value.Date}<br/>
{if $data.value.Amenities}
  <b>Amenities:</b> {$data.value.Amenities}<br/>
{/if}
</div>
{/template}
How to change a string to NaN when applying astype?

I have a column in a dataframe that has integers like: [1,2,3,4,5,6,...]. My problem: one of the fields in this column has a string, like this: [1,2,3,2,3,'hello form France',1,2,3]. The dtype of this column is object. I want to cast it to float with column.astype(float) but I get an error because of that string. The column has over 10,000 records and there is only this one record with a string. How can I cast to float and change this string to NaN, for example?
You can use pd.to_numeric with errors='coerce':

import pandas as pd

df = pd.DataFrame({
    'all_nums': range(5),
    'mixed': [1, 2, 'woo', 4, 5],
})
df['mixed'] = pd.to_numeric(df['mixed'], errors='coerce')
df.head()

Before: the mixed column has object dtype and holds the string 'woo'. After: the string is replaced with NaN and the column is numeric (float64).
How to access python flask application in windows when running it in a linux container? I am working on a microservice written in python. My microservice works fine on my windows machine and I can easily test it. However, on my Linux container it may work fine too but I cannot test it. Even when optimizing my code for network access as explained here.So here is my microservice:from flask import Flaskapp = Flask(__name__)@app.route('/')def hello_world(): return 'Hello World!'if __name__ == '__main__': app.run(host='0.0.0.0')-I tested this by running on windows by typing in the command prompt:python.exe app.py-I got a message ending with Running on http://0.0.0.0:5000/ (Press CTRL C to quit).To test my microservice, I executed my browser to access: http://localhost:5000/I can see the "Hello World" message. So my application seems to work perfectly.After that, I started working with docker. Here is my Dockerfile (named "Dockerfile").FROM pythonRUN apt-get update -yRUN apt-get install -y python-pip python-dev build-essentialRUN pip install FlaskCOPY . /appWORKDIR /appENTRYPOINT ["python3"]CMD ["app.py"]Here is my build command:docker build -t docktoflask .Which shows some output considering the build process.Then I started running the container with this command:docker run -p 5002:5002 docktoflaskI got the same ending message as I got on windows (which is remarkable because I specified port 5002).After that I tried testing with this link:http://localhost:5002/(of course I also tried http://localhost:5000/ )No success.... (which means a browser error instead the hello world message). The error is in Dutch (my browser language) but means the same as this.This is annoying because it seems to work and the port is really active.There are two applications, I should be able to access from my browser (portainer and my own application). I can access portainer without any problems. How can I access my microservice, when running it in a docker container?
The command docker run -p 5002:5002 docktoflask publishes container port 5002 as host port 5002 (localhost:5002), but nothing inside the container is listening on port 5002: Flask listens on port 5000. You have to change this to docker run -p 5000:5000 docktoflask (or keep host port 5002 and forward it to container port 5000 with docker run -p 5002:5000 docktoflask).
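Spelled out as commands (using the image name from the question), either of these works:

```shell
# Option 1: match the port Flask listens on inside the container
docker run -p 5000:5000 docktoflask
# then browse to http://localhost:5000/

# Option 2: keep host port 5002 but forward it to container port 5000
docker run -p 5002:5000 docktoflask
# then browse to http://localhost:5002/
```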
Checking the clickability of an element in selenium using python I've been trying to write a script which will give me all the links to the episodes present on this page :- http://www.funimation.com/shows/assassination-classroom/videos/episodesAs you can see that the links can be seen in 'Outer HTML', I used selenium and PhantomJS with python.Link Example: http://www.funimation.com/shows/assassination-classroom/videos/official/karma-timeHowever, I can't seem to get my code right. I do have a basic Idea of what I want to do. Here's the process :-1.) Copy the Outer HTML of the very first page and then save it as 'Source_html' file.2.) Look for links inside this file.3.) Move to the next page to see rest of the videos and their links.4.) Repeat the step 2.This is what my code looks like :from selenium import webdriverfrom selenium import seleniumfrom bs4 import BeautifulSoupimport time# ---------------------------------------------------------------------------------------------driver = webdriver.PhantomJS()driver.get('http://www.funimation.com/shows/assassination-classroom/videos/episodes')elem = driver.find_element_by_xpath("//*")source_code = elem.get_attribute("outerHTML")f = open('source_code.html', 'w')f.write(source_code.encode('utf-8'))f.close()print 'Links On First Page Are : \n'soup = BeautifulSoup('source_code.html')subtitles = soup.find_all('div',{'class':'popup-heading'})official = 'something'for official in subtitles: x = official.findAll('a') for a in x: print a['href']sbtn = driver.find_element_by_link_text(">"):print sbtnprint 'Entering The Loop Now'for driver.find_element_by_link_text(">"): sbtn.click() time.sleep(3) elem = driver.find_element_by_xpath("//*") source_code = elem.get_attribute("outerHTML") f = open('source_code1.html', 'w') f.write(source_code.encode('utf-8')) f.close()Things I already know :-soup = BeautifulSoup('source_code.html') won't work, because I need to open this file via python and feed it into BS after that. 
That I can manage.That official variable isn't really doing anything. Just helping me start a loop.for driver.find_element_by_link_text(">"): Now, this is what I need to fix somehow. I'm not sure how to check if this thing is still clickable or not. If yes, then proceed to next page, get the links, click this again to go to page 3 and repeat the process.Any help would be appreciated.
You don't need to use BeautifulSoup here at all. Just grab all the links via selenium. Proceed to next page only if the > link is visible. Here is the complete implementation including gathering the links, necessary waits. It should work for any page count:import timefrom selenium import webdriverfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as ECdriver = webdriver.PhantomJS()driver.get("http://www.funimation.com/shows/assassination-classroom/videos/episodes")wait = WebDriverWait(driver, 10)links = []while True: # wait for the page to load wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "a.item-title"))) # wait until the loading circle becomes invisible wait.until(EC.invisibility_of_element_located((By.ID, "loadingCircle"))) links.extend([link.get_attribute("href") for link in driver.find_elements_by_css_selector("a.item-title")]) print("Parsing page number #" + driver.find_element_by_css_selector("a.jp-current").text) # click next next_link = driver.find_element_by_css_selector("a.next") if not next_link.is_displayed(): break next_link.click() time.sleep(1) # hardcoded delayprint(len(links))print(links)For the mentioned in the question URL, it prints:Parsing page number #1Parsing page number #293['http://www.funimation.com/shows/assassination-classroom/videos/official/assassination-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/assassination-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/assassination-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/baseball-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/baseball-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/baseball-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time', 
'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/grown-up-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/grown-up-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/grown-up-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/assembly-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/assembly-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/assembly-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/test-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/test-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/test-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time1st-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time1st-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time1st-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/school-trip-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/l-and-r-time', 
'http://www.funimation.com/shows/assassination-classroom/videos/official/l-and-r-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/l-and-r-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/transfer-student-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/ball-game-tournament-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/ball-game-tournament-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/ball-game-tournament-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/talent-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/talent-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/talent-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/vision-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/vision-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/vision-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/end-of-term-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/end-of-term-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/end-of-term-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/schools-out1st-term', 'http://www.funimation.com/shows/assassination-classroom/videos/official/schools-out1st-term', 'http://www.funimation.com/shows/assassination-classroom/videos/official/schools-out1st-term', 'http://www.funimation.com/shows/assassination-classroom/videos/official/island-time', 
'http://www.funimation.com/shows/assassination-classroom/videos/official/island-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/island-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/action-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/action-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/action-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/pandemonium-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/pandemonium-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/pandemonium-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time2nd-period', 'http://www.funimation.com/shows/assassination-classroom/videos/official/karma-time2nd-period', 'http://www.funimation.com/shows/deadman-wonderland', 'http://www.funimation.com/shows/deadman-wonderland', 'http://www.funimation.com/shows/riddle-story-of-devil', 'http://www.funimation.com/shows/riddle-story-of-devil', 'http://www.funimation.com/shows/soul-eater', 'http://www.funimation.com/shows/soul-eater', 'http://www.funimation.com/shows/assassination-classroom/videos/official/xx-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/xx-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/xx-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/nagisa-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/nagisa-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/nagisa-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/summer-festival-time', 
'http://www.funimation.com/shows/assassination-classroom/videos/official/summer-festival-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/summer-festival-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/kaede-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/kaede-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/kaede-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/itona-horibe-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/itona-horibe-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/itona-horibe-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/spinning-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/spinning-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/spinning-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/leader-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/leader-time', 'http://www.funimation.com/shows/assassination-classroom/videos/official/leader-time', 'http://www.funimation.com/shows/deadman-wonderland', 'http://www.funimation.com/shows/deadman-wonderland', 'http://www.funimation.com/shows/riddle-story-of-devil', 'http://www.funimation.com/shows/riddle-story-of-devil', 'http://www.funimation.com/shows/soul-eater', 'http://www.funimation.com/shows/soul-eater']
how can I track a specific item presence in hierarchy clustering I have a question related to hierarchy clustering. I have a relative complex data sets with 2000 items/samples. I cluster the items using scipy and give the clusters different cutoff e.g. from 0.1 -0.9from scipy.cluster import hierarchy as hacZ=hac.linkage(distance, single,'euclidean')results=hac.fcluster(Z, cutoff,'distance')how can I check/track a certain item say when cutoff is 0.1 in group x, and when the cutoff is 0.2 is in group y. etcI considered about showing the dendrogram ,but to track 1 item in 2000 samples from a dendrogram would be too messy?
Try to build a set of cluster IDs using set(list(..)) to remove duplicates, then go through the elements and filter your data depending on the cluster they belong to. Give it a try, as you didn't give a sample of data to test it. Your code would look like:

clusterIDs = set(list(results))
D = {}  # dictionary mapping clusterID -> array of points that belong to that cluster
for clusterID in clusterIDs:
    clusterItems = data[np.where(results == clusterID)]
    D[clusterID] = clusterItems
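To track a single item across cutoffs, as the question asks, you can run fcluster once per cutoff and record that item's label each time. A minimal sketch on toy data (6 random points stand in for the 2000 samples; the tracked index 3 is an arbitrary choice):

```python
import numpy as np
from scipy.cluster import hierarchy as hac

# toy stand-in for the 2000 samples: 6 random points in 2-D
rng = np.random.RandomState(0)
data = rng.rand(6, 2)
Z = hac.linkage(data, method='single', metric='euclidean')

item = 3   # index of the sample to track
track = {}
for cutoff in [0.1, 0.2, 0.3, 0.4, 0.5]:
    labels = hac.fcluster(Z, cutoff, criterion='distance')
    track[cutoff] = int(labels[item])   # cluster ID of that item at this cutoff
print(track)
```

The resulting dict maps each cutoff to the cluster ID the item belongs to at that level, so no dendrogram inspection is needed.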
Filter dataframe based on groupby sum() I want to filter my dataframe based on a groupby sum(). I am looking for lines where the amounts for a spesific date, gets to zero. I have solve this by creating a for loop. I suspect this will reduce performance if the dataframe is large.It also seems clunky.newdf = pd.DataFrame()newdf['name'] = ('leon','eurika','monica','wian')newdf['surname'] = ('swart','swart','swart','swart')newdf['birthdate'] = ('14051981','198001','20081012','20100621')newdf['tdate'] = ('13/05/2015','14/05/2015','15/05/2015', '13/05/2015')newdf['tamount'] = (100.10, 111.11, 123.45, -100.10)df = newdf.groupby(['tdate'])[['tamount']].sum().reset_index()df2 = df.loc[df["tamount"] == 0, "tdate"]df3 = pd.DataFrame()for i in df2: df3 = df3.append(newdf.loc[newdf["tdate"] == i])print (df3)The below code is creating an output of the two lines getting to zero when combined on tamount name surname birthdate tdate tamount0 leon swart 1981-05-14 13/05/2015 100.13 wian swart 2010-06-21 13/05/2015 -100.1
Just use basic numpy :)

import numpy as np

df = newdf.groupby(['tdate'])[['tamount']].sum().reset_index()
dates = df['tdate'][np.where(df['tamount'] == 0)[0]]
newdf[np.isin(newdf['tdate'], dates) == True]

Hope this helps; let me know if you have any questions.
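For reference, the same filter can be written without the intermediate frame or any loop by using groupby().transform('sum'), which broadcasts each date's total back onto its rows. A sketch on the question's data (the birthdate column is omitted, and an absolute tolerance is used because the amounts are floats):

```python
import pandas as pd

newdf = pd.DataFrame({
    'name': ['leon', 'eurika', 'monica', 'wian'],
    'surname': ['swart'] * 4,
    'tdate': ['13/05/2015', '14/05/2015', '15/05/2015', '13/05/2015'],
    'tamount': [100.10, 111.11, 123.45, -100.10],
})

# transform('sum') returns a Series aligned with newdf's rows, holding
# each row's per-date total, so the mask applies directly to newdf
totals = newdf.groupby('tdate')['tamount'].transform('sum')
df3 = newdf[totals.abs() < 1e-9]   # tolerance guards against float rounding
print(df3)
```

This selects the two 13/05/2015 rows whose amounts cancel out, matching the loop-based result.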
Python folder paths on synched pcs I use .py files on two different pcs and synch the files using google drive.As I handle files quite often with subfolders I use the complete path to read csvpassport = pd.read_csv(r'C:\Users\turbo\Google Drive\Studium\Master_thesis\Python\Databases\passport_uzb.csv')However, when switching pcs I have to change the path manually since for my second pc its:C:\Users\turbo\My Drive\Studium\Master_thesis\Python\Databasesso the only difference really is 'Google Drive' =/= 'My Drive'Is there a work around using the complete filepath to read files?
You can use a relative path to access the CSV instead of an absolute one. The pathlib module is useful for this. For example, assuming your script is directly inside the ...Python/Databases folder, you can compute the path to the CSV like so, using the __file__ module attribute:from pathlib import Path# this is a path object that always refers to the script in which it is definedthis_file = Path(__file__).resolve()# this path refers to .../Python/Databases, no matter where it is locatedthis_folder = this_file.parentcsv_path = this_folder / "passport_uzb.csv"passport = pd.read_csv(csv_path)Edit: no matter where your script is located, you can use some combination of .parent and / "child" to construct a relative path that will work. If your script is in ...Python/Databases/nasapower then simply add another .parent:this_file = Path(__file__).resolve()nasapower_folder = this_file.parentdatabases_folder = nasapower_folder.parentOr you can use the .parents sequence to get there faster:databases_folder = Path(__file__).resolve().parents[1]Likewise, for the output folder:output_folder = Path(__file__).resolve().parent / "json"
Stop sqlalchemy from managing the connection I'm trying to initialize SQLAlchemy with existing DB connection, but I would like it to completely stop managing it (opening, closing, rolling back etc). This is because I use it alongside a different ORM (django) and SQLAlchemy is really only a way to perform more complicated queries. It's gonna be used for reads only and I just want it to take a connection, use it and leave as it is.What I tried so far:from sqlalchemy.pool import NullPoolfrom sqlalchemy import create_enginefrom django.db import connectionclass DummyNullPool(NullPool): def _do_return_conn(self, conn): # avoid closing the connection as it belongs to django # orm, sql alchemy is only a tool to read passdef get_engine(dummy=True): kwargs = {} if dummy: kwargs['poolclass'] = DummyNullPool kwargs['creator'] = lambda: connection.connection return create_engine(conn_string, **kwargs)This almost works, but it hangs (not always though) on def _create_connection(self): return _ConnectionRecord(self, False)So i guess there must be some kind of race condition.The reason why I want to reuse the same connection is because I would like it to have an access to records created inside the current transaction.
Ok, I think I've found a way that "works":from django.conf import settingsfrom django.db import connectionfrom sqlalchemy.pool import NullPoolfrom sqlalchemy import create_engine as sa_create_enginedef do_nothing(dbapi_connection): returndef create_engine(db_name='default'): db = settings.DATABASES[db_name] conn_string = 'postgresql://{}{}@{}:{}/{}'.format( db['USER'], ':' + db['PASSWORD'] if db['PASSWORD'] else '', db['HOST'], db['PORT'], db['NAME'] ) engine = sa_create_engine( conn_string, poolclass=NullPool, creator=lambda: connection.connection ) engine.dialect.do_close = do_nothing engine.dialect.do_commit = do_nothing engine.dialect.do_rollback = do_nothing return enginePlease note that it hasn't been fully tested, its more kind of experimental code. It assumes that connection.connection was initialised and is open. Any comments much appreciated.
Machine Learning: Question regarding processing of RGBD streams and involved components I would like to experiment with machine learning (especially CNNs) on the aligned RGB and depth stream of either an Intel RealSense or an Orbbec Astra camera. My goal is to do some object recognisation and highlight/mark them in the output video stream (as a starting point). But after having read many articles I am still confused about the involved frameworks and how the data flows from the camera through the involved software components. I just can't get a high level picture.This is my assumption regarding the processing flow: Sensor => Driver => libRealSense / Astra SDK => TensorFlow QuestionsIs my assumption correct regarding the processing?Orbbec provides an additional Astra OpenNI SDK besides the Astra SDK where as Intel has wrappers (?) for OpenCV and OpenNI. When or why would I need this additional libraries/support?What would be the quickest way to get started? I would prefer C# over C++
Your assumptions are correct: the data acquisition flow is: sensor -> driver -> camera library -> other libraries built on top of it (see OpenCV support for Intel RealSense)-> captured image. Once you got the image, you can do whatever you want of course.The various libraries allow you to work easily with the device. In particular, OpenCV compiled with the Intel RealSense support allows you to use OpenCV standard data acquisition stream, without bothering about the image format coming from the sensor and used by the Intel library. 10/10 use these libraries, they make your life easier.You can start from the OpenCV wrapper documentation for Intel RealSense (https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv). Once you are able to capture the RGBD images, you can create your input pipeline for your model using tf.data and develop in tensorflow any application that uses CNNs on RGDB images (just google it and look on arxiv to have ideas about the possible applications).Once your model has been trained, just export the trained graph and use it in inference, hence your pipeline will become: sensor -> driver -> camera library -> libs -> RGBD image -> trained model -> model output
python - binary encoding of column containing multiple terms I need to do a binary transformation of a column containing lists of strings separated by comma.Can you help me in getting from here:df = pd.DataFrame({'_id': [1,2,3], 'test': [['one', 'two', 'three'], ['three', 'one'], ['four', 'one']]})df_id test 1 [one, two, three] 2 [three, one] 3 [four, one]to:df_result = pd.DataFrame({'_id': [1,2,3], 'one': [1,1,1], 'two': [1,0,0], 'three': [1,1,0], 'four': [0,0,1]})df_result[['_id', 'one', 'two', 'three', 'four']] _id one two three four 1 1 1 1 0 2 1 0 1 0 3 1 0 0 1Any help would be very appreciated!
You can use str.get_dummies, pop for extract column out, convert to str by str.join and last join:df = df.join(df.pop('test').str.join('|').str.get_dummies())print (df) _id four one three two0 1 0 1 1 11 2 0 1 1 02 3 1 1 0 0Instead pop is possible use drop:df = df.drop('test', axis=1).join(df.pop('test').str.join('|').str.get_dummies())print (df) _id four one three two0 1 0 1 1 11 2 0 1 1 02 3 1 1 0 0Solution with new DataFrame:df1 = pd.get_dummies(pd.DataFrame(df.pop('test').values.tolist()), prefix='', prefix_sep='')df = df.join(df1.groupby(level=0, axis=1).max())print (df) _id four one three two0 1 0 1 1 11 2 0 1 1 02 3 1 1 0 0I try also solution with converting to string by astype, but some cleaning is necessary:df1=df.pop('test').astype(str).str.strip("'[]").str.replace("',\s+'", '|').str.get_dummies()df = df.join(df1)print (df) _id four one three two0 1 0 1 1 11 2 0 1 1 02 3 1 1 0 0
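A further alternative, assuming pandas >= 0.25, is explode plus crosstab: first one row per (_id, term) pair, then a presence table. Sketch:

```python
import pandas as pd

df = pd.DataFrame({'_id': [1, 2, 3],
                   'test': [['one', 'two', 'three'], ['three', 'one'], ['four', 'one']]})

# explode turns each list element into its own row (requires pandas >= 0.25),
# then crosstab counts occurrences into a 0/1 table
exploded = df.explode('test')
dummies = pd.crosstab(exploded['_id'], exploded['test'])
result = dummies.reset_index()
print(result)
```

This avoids the string round-trip through str.join('|') entirely, at the cost of requiring a newer pandas.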
Printing regression results from python statsmodel into a Excel worksheet My job requires running several regressions on different types of data and then need to present these results on a presentation - I use Powerpoint and they link very well to my Excel objects such as charts and tablesIs there a way to print the results into a specific set of cells in an existing worksheet?import statsmodels.api as smmodel = sm.OLS(y,x)results = model.fit()Would like to send the regression result.params into column B for example. Any ideas?Thanks very much.
import pandas as pd
import statsmodels.api as sm

dta = sm.datasets.longley.load_pandas()
dta.exog['constant'] = 1
res = sm.OLS(dta.endog, dta.exog).fit()
df = pd.concat((res.params, res.tvalues), axis=1)
df.rename(columns={0: 'beta', 1: 't'}).to_excel('output.xls', 'sheet1')
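The question also asks for a specific set of cells; to_excel accepts startrow/startcol for that. A hedged sketch (the sheet name, cell position, and the stand-in params Series are made up for illustration; writing .xlsx requires openpyxl):

```python
import pandas as pd

# stand-in for res.params: a Series indexed by regressor name
params = pd.Series({'x1': 1.5, 'x2': -0.7, 'const': 0.3})

with pd.ExcelWriter('output.xlsx') as writer:
    # startrow/startcol pick the top-left cell of the written table;
    # (startrow=4, startcol=1) places it at cell B5
    params.to_frame('beta').to_excel(writer, sheet_name='sheet1',
                                     startrow=4, startcol=1)
```

With the parameters landing in column B, the linked PowerPoint objects can keep pointing at fixed cells across refreshes.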
Access Python Wrapper from ASP Classic I have written a python wrapper for a c dll. I now wish to interact with this wrapper from an ASP classic script, served online via IIS7.How would you recommend I do this?
If it's a Python wrapper, you need Python to use it. You can use Python as a scripting language from ASP, I recommend activestate's Python distribution, because it integrates with Windows and IIS by default. You should be able to write an ASP page in Python and load your library in it.Here is more info on using Python from ASP.It should even be possible to write a page in VBscript and call a piece of Python code in a Python WSC (Windows Scripting Component). I have researched this a great deal, it works, but if you want to do this you can only pass simple data types from one language to the other (strings, booleans, numbers)
Generating a custom ID based on other columns in python I have a pandas df which looks like this UID DOB BEDNUM 0 1900-01-01 CICU1 1 1927-05-21 CICU1 2 1929-10-03 CICU1 3 1933-06-29 CICU1 4 1936-01-09 CICU1 5 1947-11-14 CICU1 6 1900-01-01 CICU1 7 1927-05-21 CICU1 8 1929-10-03 CICU1 9 1933-06-29 CICU1 10 1936-01-09 CICU1 11 1947-11-14 CICU1 Now I would like to add a new column TID to that data frame which should be in 'YYYY-0000000-P' format UID DOB BEDNUM TID 0 1900-01-01 CICU1 1900-0000000-P 1 1927-05-21 CICU1 1927-0000001-P 2 1929-10-03 CICU1 1929-0000002-P 3 1933-06-29 CICU1 1933-0000003-P 4 1936-01-09 CICU1 1936-0000004-P 5 1947-11-14 CICU1 1947-0000005-P 6 1900-01-01 CICU1 1900-0000006-P 7 1927-05-21 CICU1 1927-0000007-P 8 1929-10-03 CICU1 1929-0000008-P 9 1933-06-29 CICU1 1933-0000009-P 10 1936-01-09 CICU1 1936-0000010-P 11 1947-11-14 CICU1 1947-0000011-PI have 24000 records in a table and the last record TID should look like 'YYYY-0024000-P'. I would really appreciate if anyone could help me with this. Thanks in advance!!
This answer assumes that DOB is datetime:

year = df.DOB.dt.year
nums = df.UID.astype(str).str.zfill(7)
df.assign(TID=[f'{y}-{num}-P' for y, num in zip(year, nums)])

    UID        DOB BEDNUM             TID
0     0 1900-01-01  CICU1  1900-0000000-P
1     1 1927-05-21  CICU1  1927-0000001-P
2     2 1929-10-03  CICU1  1929-0000002-P
3     3 1933-06-29  CICU1  1933-0000003-P
4     4 1936-01-09  CICU1  1936-0000004-P
5     5 1947-11-14  CICU1  1947-0000005-P
6     6 1900-01-01  CICU1  1900-0000006-P
7     7 1927-05-21  CICU1  1927-0000007-P
8     8 1929-10-03  CICU1  1929-0000008-P
9     9 1933-06-29  CICU1  1933-0000009-P
10   10 1936-01-09  CICU1  1936-0000010-P
11   11 1947-11-14  CICU1  1947-0000011-P
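The same TID can also be built fully vectorised with pandas string operations, avoiding the Python-level list comprehension over all 24000 rows. A sketch on a cut-down version of the question's data:

```python
import pandas as pd

df = pd.DataFrame({'UID': range(4),
                   'DOB': pd.to_datetime(['1900-01-01', '1927-05-21',
                                          '1929-10-03', '1933-06-29']),
                   'BEDNUM': ['CICU1'] * 4})

# build YYYY-0000000-P entirely with vectorised string ops
df['TID'] = (df['DOB'].dt.year.astype(str) + '-'
             + df['UID'].astype(str).str.zfill(7) + '-P')
print(df['TID'].tolist())
```

zfill(7) pads the UID to seven digits, so the last of 24000 records comes out as 'YYYY-0023999-P' (or 'YYYY-0024000-P' if the IDs start at 1).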
Jupyter notebook fails with "Kernel didn't respond" I am running into a strange bug related to the sequential execution of Jupyter notebooks (Python 3 kernels). The main loop runs sequentially the following execution of a set of notebooks through nbconvert[...]from nbconvert.preprocessors import ExecutePreprocessor[...]class Report: [..] def execute_notebook(self, timeout=3600): [...] notebook = nbformat.read(str(self.notebook_path), as_version=4) kernel_name = notebook["metadata"]["kernelspec"]["name"] ep = ExecutePreprocessor(timeout=timeout, kernel_name=kernel_name)The executions are run on a daily basis. Some days, the loop over the notebooks works smoothly but other it fails when the program tries to execute the second notebook withTraceback (most recent call last): File "/home/data/ds-metrics/scripts/recurring_reports.py", line 52, in main report.execute_notebook() File "/home/data/miniconda3/envs/analytics/lib/python3.6/site-packages/report/__init__.py", line 49, in execute_notebook ep.preprocess(notebook, dict(metadata=dict(path=self.notebook_folder))) File "/home/data/miniconda3/envs/analytics/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 359, in preprocess with self.setup_preprocessor(nb, resources, km=km): File "/home/data/miniconda3/envs/analytics/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/home/data/miniconda3/envs/analytics/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 304, in setup_preprocessor self.km, self.kc = self.start_new_kernel(cwd=path) File "/home/data/miniconda3/envs/analytics/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 258, in start_new_kernel kc.wait_for_ready(timeout=self.startup_timeout) File "/home/data/miniconda3/envs/analytics/lib/python3.6/site-packages/jupyter_client/blocking/client.py", line 124, in wait_for_ready raise RuntimeError("Kernel didn't respond in %d seconds" % timeout)RuntimeError: Kernel didn't respond in 60 
secondsAfter the failures, if I run the loop again, it works. It “looks” really random.The executions are run on a Debian server from a conda virtual environment with Python 3.6.6 and the following list of relevant packagesipykernel 5.1.0 py36h24bf2e0_1001 conda-forgeipython 7.1.1 py36h24bf2e0_1000 conda-forgeipython_genutils 0.2.0 py_1 conda-forgejupyter 1.0.0 py_1 conda-forgejupyter_client 5.2.3 py_1 conda-forgejupyter_console 6.0.0 py_0 conda-forgejupyter_core 4.4.0 py_0 conda-forgenbconvert 5.4.0 1 conda-forgenbformat 4.4.0 py_1 conda-forgenotebook 5.7.2 py36_1000 conda-forgepexpect 4.6.0 py36_1000 conda-forgepython 3.6.6 h5001a0f_3 conda-forgepyzmq 17.1.2 py36hae99301_1 conda-forgetraitlets 4.3.2 py36_1000 conda-forgezeromq 4.2.5 hfc679d8_6 conda-forgeThanks for your help.
After some digging, I figured out that I could reduce the problem to the following minimal codefrom nbconvert.preprocessors import ExecutePreprocessorep = ExecutePreprocessor(kernel_name="python3")km, kc = ep.start_new_kernel()km.shutdown_kernel()On the cloud server, the script is, sometimes, stuck in (from strace)open("/dev/random", O_RDONLY) = 6poll([{fd=6, events=POLLIN}], 1, -1Apparently, the server is running out of entropy between the starting of kernels which leads to the long waiting time. A similar issue was indicated here.I followed the idea and the script is now running smoothly. I guess that, in the original script, instead of starting a new kernel each time, I could reuse the first kernel, preventing the call to the random number generator.
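To confirm that a machine is actually short of entropy before suspecting the kernels, the pool size can be read directly from the kernel (Linux only; the path below does not exist on other systems, so the sketch degrades gracefully):

```python
import os

# Linux exposes the current entropy pool size here; values in the low
# hundreds or less suggest blocking reads of /dev/random are likely
path = '/proc/sys/kernel/random/entropy_avail'
if os.path.exists(path):
    with open(path) as f:
        entropy = int(f.read().strip())
else:
    entropy = None
print(entropy)
```

Watching this value while the loop starts kernels makes the "runs out of entropy between kernel starts" hypothesis easy to verify.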
After reading csv file, is infinite returns true I am doing a simple read_csv() on a 1 year stock data downloaded from Yahoo finance. df2 = pd.read_csv(name2, index_col=0, parse_dates=True)This is for stock market prediction algorithm. The problem is, np.isfinite(df2.all())) is returning true for all the columns and I dont understand why. Because of this issue, my Random forest clf.fit() is throwing a value error that the numbers are too large to handle.
Actually the function is called isfinite, and it returns True if the data is finite, False if the data is infinite or NaN. There is also an ordering problem in your call: np.isfinite(df2.all()) first collapses every column to a single boolean with df2.all(), and isfinite of a boolean is always True, so the result says nothing about the values. Apply isfinite to the values first and reduce afterwards, i.e. np.isfinite(df2).all(), to see which columns really contain only finite numbers. Please refer to: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.isfinite.html#numpy.isfinite
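A small sketch with an inf value shows why np.isfinite(df2.all()) reports True even when the data contains inf: df2.all() reduces each column to a boolean before isfinite ever sees the values (the column name is made up):

```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'close': [1.0, np.inf, 3.0]})

# df2.all() collapses each column to one boolean first, and
# np.isfinite of a boolean is always True
print(np.isfinite(df2.all()))   # True for the column, despite the inf

# apply isfinite to the values first, then reduce
print(np.isfinite(df2).all())   # False: the column contains inf
```

The second form is the one to use before handing the frame to a scikit-learn fit, since it flags the columns with inf or NaN values.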