| questions | answers |
|---|---|
English dictionary library in Python3 I need a library in Python 3 to define terms (vocabulary words), building an English dictionary. Input a term and the output will be the definition of this term. | Have you tried PyDictionary?from PyDictionary import PyDictionarydictionary=PyDictionary("dog","cat","tree")print(dictionary.printMeanings()) #This prints the definitionshttps://pypi.org/project/PyDictionary/ |
Error message when installing scipy with pip install I am attempting to use pip install to install the scipy library through the command prompt.When I type:pip install scipyI get a wall of white text, ending with a section of red text, shown below.Command "C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\stuart\\appdata\\local\\temp\\pip-build-edorla\\scipy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\stuart\appdata\local\temp\pip-vegpqd-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\stuart\appdata\local\temp\pip-build-edorla\scipyI have tried searching for a fix, and followed instructions to upgrage to the latest version of setuptools which I have done using the pip install --upgrade setuptoolsHowever when trying to install again I get the same error.I am able to install numerous other libraries using pip install so it seems to be specific to scipy.Would anyone have an idea why the installation may be failing? | It seems that a fortran compiler may be needed to install scipy through pip - I used the .exe installer from sourceforge and it went through fine.http://sourceforge.net/projects/scipy/files/scipy/0.11.0/ |
using pandas dataframes fill rows based on condition if both values are same in two dataframes I have two dataframes: the first dataframe has 3 columns user_id, value, year_month columnssecond dataframe has 4 columns sep_month,01,02,03 .extract & create month from year_month column, now with month column need to create & fill rows from other dataframe based on condition if sep_month & month values are sameInput Dataframe df_1### user_id date_month value 1 1 2020-01 1000 2 1 2020-02 2000 3 1 2020-03 6000 4 2 2020-01 1800 5 2 2020-02 2700 6 2 2020-03 3600 df_2### month 01 02 03 01 1.5 0.2 3.6 02 3.6 1.6 0 03 6.3 5.1 7.2Output Dataframe### user_id date_month value 01 02 031 1 2020-01 1000 1.5 0.2 3.62 1 2020-02 2000 3.6 1.6 03 1 2020-03 6000 6.3 5.1 7.24 2 2020-01 1800 1.5 0.2 3.65 2 2020-02 2700 3.6 1.6 06 2 2020-03 3600 6.3 5.1 7.2 | You can first split the date_month field and do joindf_1[['year','month']] = df_1['date_month'].str.split('-',expand=True)result = df_1.merge(df_2, on='month', how='left') |
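A minimal runnable sketch of that split-then-join idea, with the question's sample data rebuilt by hand (note the join is `df_1.merge(df_2, ...)`, a DataFrame method, not an attribute of the `pandas` module):

```python
import pandas as pd

# Reconstructed sample frames from the question
df_1 = pd.DataFrame({
    "user_id": [1, 1, 2],
    "date_month": ["2020-01", "2020-02", "2020-01"],
    "value": [1000, 2000, 1800],
})
df_2 = pd.DataFrame({
    "month": ["01", "02", "03"],
    "01": [1.5, 3.6, 6.3],
    "02": [0.2, 1.6, 5.1],
    "03": [3.6, 0.0, 7.2],
})

# Derive the month key from date_month, left-join on it, drop the helper key
df_1["month"] = df_1["date_month"].str.split("-").str[1]
result = df_1.merge(df_2, on="month", how="left").drop(columns="month")
```

Each row of `df_1` picks up the `01/02/03` columns of the matching `df_2` month, exactly as in the desired output.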
Creating Memory game (flipping tiles game) I am having trouble with coordinates and flipping the cards separately. This is my first time handling with coordinates in python.When trying to flip the cards separately, the code registers the rows of cards as lists. I did use a list of lists to display the cards.The code is shown below.import randomwrong = 0used = []cards = deck()def mmain(): random.shuffle(cards) selected_cards = cards[:int(result / 2)] selected_cards = selected_cards * 2 random.shuffle(selected_cards) i = 0 while i < len(selected_cards): row = selected_cards[i: i + columns] i = i + columns grid1.append(row1) squares = [""] grid = [] row = [] for i in range(rows): row.append(squares) for e in range(columns): grid.append(row) while True: for i in range(len(grid)): print(*grid[i]) squares = str(squares) coordinate1 = int(input("Enter the first coordinate: ")) coordinate2 = int(input("Enter the second coordinate: ")) used.append(coordinate1) used.append(coordinate2) if coordinate1 in range(len(grid[i])): for k in grid[i]: k[coordinate1] = str(selected_cards[coordinate1]) elif coordinate2 in range(len(grid[i])): for k in grid[i]: k[coordinate2] = str(selected_cards[coordinate2]) elif selected_cards[coordinate1] == selected_cards[coordinate1]: grid9 = "⬛" grid10 = "⬛" else: wrong = wrong + 1 if grid[i] == "⬛": print("You win! ") print("Your score is: ") breakmmain()I would like help on this since I am struggling with it for weeks. I appreciate the answers to solve the problem. Thank you.Note: I already have a program that helps displays the cards.Edit: I am sure if someone helps me with this, the question I asked could help others.Edit2: I use Google Colab for coding in python. 
| Your code needs to be rewritten for clarityHere's a template you can use, then you just need to implement each functionTest each function with test cases to ensure it works properly, then move to the next onedef main(): # create your deck cards = initialize_cards() # iterates until the winning condition is satisfied. # could be a simple check that the deck is empty. while not check_winning_condition(cards): # ask the user for coordinates, do all the checks to ensure they are valid coordinates. coord1, coord2 = ask_user_coordinates() # reveals the cards to the user. Either a simple console o a sophisticated GUI card1, card2 = reveal_cards(cards, coord1, coord2) # checks if the cards match and removes them from the deck if needed cards = check_cards(cards, card1, card2)Ideally you should use classes, but for now try organizing like this.You might need additional parameters, feel free to add the ones you need and avoid global variables.Each function does one thing, if it does two things make two functions.You see how simple the main function is?Just make now each function do what is designed to do and test test test. |
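A minimal, non-interactive sketch of two of the template's functions (the dict-of-coordinates representation and all names besides the template's are assumptions, not the answer's code): `cards` maps a `(row, col)` coordinate to a card face, and a matched pair is simply removed.

```python
def check_cards(cards, coord1, coord2):
    """Remove both cards if they match and sit at different coordinates."""
    if coord1 != coord2 and cards[coord1] == cards[coord2]:
        cards = {c: face for c, face in cards.items() if c not in (coord1, coord2)}
    return cards

def check_winning_condition(cards):
    """The game is won when no cards remain."""
    return not cards

# Two pairs laid out on a 2x2 grid
cards = {(0, 0): "A", (0, 1): "B", (1, 0): "A", (1, 1): "B"}
cards = check_cards(cards, (0, 0), (1, 0))  # the two A's match and are removed
cards = check_cards(cards, (0, 1), (1, 1))  # the two B's match and are removed
```

Because each function does one thing, each can be tested in isolation before wiring it into the `while` loop of the template's `main`.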
Sphinx code validation for Django projects I have a Django project with a couple of apps - all of them with 100% coverage unit tests. And now I started documenting the whole thing in a new directory using ReST and Sphinx. I create the html files using the normal approach: make html.Since there are a couple of code snippets in these ReST files, I want to make sure that these snippets stay valid over time. So what is a good way to test these ReST files, so I get an error if some API changes made such a snippet invalid? I guess there have to be some changes in conf.py? | Turned out that I had some python paths wrong. Everything works as expected - as noted by bmu in his comment. (I'm writing this answer so I can close the question in a normal way) |
Regex pattern to match two datetime formats I am doing a directory listening and need to get all directory names that follow the pattern: Feb14-2014 and 14022014-sometext. The directory names must not contain dots, so I dont want to match 14022014-sometext.more. Like you can see I want to match just the directories that follow the pattern %b%d-%Y and %d%m%Y-textofanylengthWithoutDots.For the first case it should be something like [a-zA-Z]{3}\d{2}. I dont know how to parse the rest because my regex skills are poor, sorry. So I hope someone can tell me what the correct patterns look like. Thanks. | That's quite easy.The best one I can make is:((Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\d\d-\d\d\d\d)|(\d\d\d\d\d\d\d\d-\w+)The first part ((Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\d\d-\d\d\d\d) matches the first kind of dates and the second part (\d\d\d\d\d\d\d\d-\w+) - the second kind. |
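A tightened, anchored version of the two patterns (using `{n}` repetition and `[^.]` to enforce the no-dots rule on the text part):

```python
import re

# Shape of %b%d-%Y names like Feb14-2014: capitalized 3-letter month, 2 digits, dash, 4 digits
p1 = re.compile(r"^[A-Z][a-z]{2}\d{2}-\d{4}$")
# Shape of %d%m%Y-text names like 14022014-sometext: 8 digits, dash, any non-dot text
p2 = re.compile(r"^\d{8}-[^.]+$")

def matches(name):
    return bool(p1.match(name) or p2.match(name))
```

Note these only check the shape: any capitalized three-letter prefix passes `p1`, so for strict month validation keep the `(Jan|Feb|...)` alternation from the answer, or parse with `datetime.strptime`.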
How can I download my Google API key as PEM file? I'm trying to do a simple python script that pulls data from Google BigQuery. I've found many posts and google documents on Bigquery, but I've yet to have any success. My current problem is that I need to read my API key from a PEM file, but I can't find any way to download my API key from the google dev console. All I can do is copy/paste the text. | You have to create a new client Id with a type of "service account" this will download a new p12 file. |
Index out of range error when comparing values a=[8,24,3,20,1,17]r=[]for i in a: for j in a: s=a[i]-a[j] r.append(s)print rWhen I run this programWhy the index out of range error for this question? | Use s = i - j instead of s = a[i] - a[j]:a=[8,24,3,20,1,17]r=[]for i in a: for j in a: s = i - j r.append(s)print rOutput:[0, -16, 5, -12, 7, -9, 16, 0, 21, 4, 23, 7, -5, -21, 0, -17, 2, -14, 12, -4, 17, 0, 19, 3, -7, -23, -2, -19, 0, -16, 9, -7, 14, -3, 16, 0]Try it here! |
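The root cause, spelled out: `for i in a` iterates over the *values* of the list (8, 24, 3, ...), so `a[i]` asks for index 24 of a 6-element list. Iterating over the values directly, as the answer does, can be condensed to a comprehension:

```python
a = [8, 24, 3, 20, 1, 17]

# i and j are already the list values, so subtract them directly;
# no indexing is needed at all
r = [i - j for i in a for j in a]
```

If the indices themselves were wanted, the idiom would be `for i in range(len(a))` (or `enumerate(a)`), never `for i in a` followed by `a[i]`.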
trying to understand LSH through the sample python code the concise python code i study for is hereQuestion A @ line 8 i do not really understand the syntax meaning for "res = res << 1" for the purpose of "get_signature"Question B @ line 49 (SOLVED BY myself through another Q&A)"xor = r1^r2" does not really make any sense to me, which the author later tried "(d-nna(vor))" to calculate "hash_sim" -- (refer to line 50)Question C @ about hash_sim in generalthat question is more to do with LSH understanding, what variable "d" (line 38) is doing in the sample code ---- which is later used to calculate hash_sim in line 50Question D @ line 20 and 24 -- synatx for "&"not only having problem in understand the syntax "num = num & (num-1)", but also unsure what function "nnz" is doing in the context of hash_similarlity. this question may relate to my question (-b-) when the author apply the "xor" into "nnz", and again equation for "xor" (question b) looks odd to me.p.s.both my python and LSH understanding are at the beginner level, and I kind of entering in the loop for this problem. thanks for taking your time to going through my confusion as well as the codes | a. It's a left shift: https://docs.python.org/2/reference/expressions.html#shifting-operations It shifts the bits one to the left.b. Note that ^ is not the "to the power of" but "bitwise XOR" in Python.c. As the comment states: it defines "number of bits per signature" as 2**10 → 1024d. The lines calculate the bitwise AND of num and num + 1. The purpose of the function is documented in the comment above: "# get number of '1's in binary" |
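Small self-contained demonstrations of the three operators the answer names (the `nnz` name comes from the question's code; everything else here is illustrative):

```python
# a) left shift: res << 1 shifts the bits one place left, i.e. doubles res
res = 0b101                      # 5
assert res << 1 == 0b1010        # 10

# b) ^ is bitwise XOR, not exponentiation: a result bit is 1 where the inputs differ
assert 0b1100 ^ 0b1010 == 0b0110

# d) num & (num - 1) clears the lowest set bit; looping until zero
#    therefore counts the '1' bits (the popcount the answer describes)
def nnz(num):
    count = 0
    while num:
        num &= num - 1
        count += 1
    return count
```

Combining b) and d): `nnz(r1 ^ r2)` counts the bit positions where two signatures differ, which is why the sample code derives the hash similarity from `d - nnz(xor)`.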
How to assess Random Forests classifier performance? I recently started using a random forest implementation in Python using the scikit learn sklearn.ensemble.RandomForestClassifier. There is a sample script that I found on Kaggle to classify landcover using Random Forests (see below) that I am trying to use to hone my skills. I am interested in assessing the results of the random forests classification. For example, if I were to perform the analysis using randomForest in R, I would assess the variable importance with varImpPlot() from the randomForest package:require(randomForests)...myrf = randomForests(predictors, response)varImpPlot(myrf)And to get an idea of the out-of-box estimate of error rate and the error matrix for the classification, I would simply type 'myrf' into the interpreter. How can I programmatically assess these error metrics using Python?Note, that I am aware there are several potentially useful attributes in the documentation (e.g. feature_importances_, oob_score_, and oob_decision_function_), although I am not sure how to actually apply these.Sample RF Scriptimport pandas as pdfrom sklearn import ensembleif __name__ == "__main__": loc_train = "kaggle_forest\\train.csv" loc_test = "kaggle_forest\\test.csv" loc_submission = "kaggle_forest\\kaggle.forest.submission.csv" df_train = pd.read_csv(loc_train) df_test = pd.read_csv(loc_test) feature_cols = [col for col in df_train.columns if col not in ['Cover_Type','Id']] X_train = df_train[feature_cols] X_test = df_test[feature_cols] y = df_train['Cover_Type'] test_ids = df_test['Id'] clf = ensemble.RandomForestClassifier(n_estimators = 500, n_jobs = -1) clf.fit(X_train, y) with open(loc_submission, "wb") as outfile: outfile.write("Id,Cover_Type\n") for e, val in enumerate(list(clf.predict(X_test))): outfile.write("%s,%s\n"%(test_ids[e],val)) | After training, if you have test data and labels, you could check accuracy and generate an ROC plot/ AUC score via: from sklearn.metrics import 
classification_reportfrom sklearn.metrics import roc_curve, aucimport matplotlib.pyplot as plt# overall accuracyacc = clf.score(X_test,Y_test)# get roc/auc infoY_score = clf.predict_proba(X_test)[:,1]fpr = dict()tpr = dict()fpr, tpr, _ = roc_curve(Y_test, Y_score)roc_auc = dict()roc_auc = auc(fpr, tpr)# make the plotplt.figure(figsize=(10,10))plt.plot([0, 1], [0, 1], 'k--')plt.xlim([-0.05, 1.0])plt.ylim([0.0, 1.05])plt.xlabel('False Positive Rate')plt.ylabel('True Positive Rate')plt.grid(True)plt.plot(fpr, tpr, label='AUC = {0}'.format(roc_auc)) plt.legend(loc="lower right", shadow=True, fancybox =True) plt.show() |
Python lxml, matching attributes I'm having some troubles wrapping my head around lxml. I have some html I want to parse, and I managed to do it, but it doesn't feel like the best way to do it.I want to extract the value of the value attribute, but only if the value of name is "myInput"<input name="myInput" value="This is what i want"/>I manage to do this, but I feel there is a better solution.doc = html.fromstring(data)tr = doc.cssselect("input")for x in tr: if x.get("name") == "myInput": print(x.get("value")) | You could do it with an XPath:import lxml.html as LHcontent='<input name="myInput" value="This is what i want"/>'doc=LH.fromstring(content)for val in doc.xpath("//input[@name='myInput']/@value"): print(val)yieldsThis is what i wantThe XPath used above has the following meaning: //input # find all input tags [@name='myInput'] # such that the name attribute equals myInput /@value # return the value of the value attribute |
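For completeness, the standard library's `xml.etree.ElementTree` handles this case too, assuming the input is well-formed XML: it supports a limited XPath subset that includes the `[@name='...']` attribute predicate, though unlike lxml it cannot return an attribute directly (no `/@value` step), so you read it off the element with `.get()`:

```python
import xml.etree.ElementTree as ET

# Wrapped in a root element so the fragment parses as a document
doc = ET.fromstring('<form><input name="myInput" value="This is what i want"/></form>')

# Select input elements whose name attribute is myInput, then read value
values = [el.get("value") for el in doc.findall(".//input[@name='myInput']")]
```

For real-world HTML (unclosed tags, etc.) stick with lxml.html as in the question; ElementTree only parses strict XML.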
using base layout templates in chameleon In the pyramid docs there is a nice tutorial on UX stuff here:http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/humans/creatingux/step07/index.htmlOne thing I noticed though is in the tutorial they are setting up and passing around the 'global layout' explicitly in the code (see below). I thought this was unusual and unnecessary because I've always just used the 'load' command as shown in the docs here:http://chameleon.repoze.org/docs/latest/Is this just a personal preference issue or are there real advantages to setting up and using the 'global layout' in this way?Tutorial base view class:class Layouts(object): @reify def global_template(self): renderer = get_renderer("templates/global_layout.pt") return renderer.implementation().macros['layout']Tutorial template file:<div metal:use-macro="view.global_template"> <div metal:fill-slot="content"> <p>Home page content goes here.</p> </div></div>But in my template files I just use:<div metal:use-macro="load: global_layout.pt"> <div metal:fill-slot="content"> <p>Home page content goes here.</p> </div></div> | The indirect way (via view) gives you more flexibility. The benefits are not so obvious in a small project, but this approach surely pays off in a larger one. The "load:" is harcoding your main_template (in Zope/Plone-speak) to be here. With the view, it can come from anywhere and changed independently of your templates. |
To send threekeys using send_keys() in selenium python webdriver I am trying to type a float number into a textbox with default value 0.00.But it tries to get appended instead of overwriting it.I tried with .clear() and then send_keys('123.00') but still it gets appended. Then i tried with send_keys(Keys.CONTROL+'a','123.00').It updates 0.00 only.Any help is really appreciated.For more info ..URL : http://new.ossmoketest.appspot.comuserid: senthil.arumugam@mycompanyname.com -- mycompanyname = orangescape (sorry to avoid spam mails)password not needed now.click purchaseorder... in the form please new product and new price... sample application for automation.. thanks | I've had good results with:from selenium.webdriver.common.keys import Keyselement.send_keys(Keys.CONTROL, 'a')element.send_keys('123.00')If that doesn't work it may have something to do with the code in the web page. |
How do I created nested JSON object with Python? I have the following code:data = {}data['agentid'] = 'john'data['eventType'] = 'view'json_data = json.dumps(data)print json_date = {"eventType":"view,"agentid":"john"}I would like to create a nested JSON object- for example::{ "agent": { "agentid", "john"} , "content": { "eventType": "view", "othervar": "new" }}How would I do this? I am using Python 2.7.CheersNick | You could nest the dictionaries as follows:jsondata = {}agent={}content={}agent['agentid'] = 'john'content['eventType'] = 'view'content['othervar'] = "new"jsondata['agent'] = agentjsondata['content'] = contentprint(json.dumps(jsondata))Output: print {"content": {"eventType": "view", "othervar": "new"}, "agent": {"agentid": "john"}} |
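The same nesting can also be written as a single dict literal, which avoids juggling the three intermediate variables:

```python
import json

# Nested dicts serialize to nested JSON objects
jsondata = {
    "agent": {"agentid": "john"},
    "content": {"eventType": "view", "othervar": "new"},
}
encoded = json.dumps(jsondata, sort_keys=True)
```

`json.dumps` recurses through nested dicts and lists automatically, so arbitrarily deep structures need no extra work.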
How to access a pickled model file saved in desktop to Jupiter notebook? I have a pickled model file on my desktop on Mac. I want to load it to my Jupyter notebook. However, when I try this code:import picklefile_1 = open('RFonevsrest2_model.sav', 'r')loaded_model = pickle.load(file_1)I get an error saying there is No such file or directory. I do not know how to get the file inside my working directory or access is from local.Please help. | Specifying the path where the model resides and then using Joblib to access it does the trick:RFmodel=open("/Users/sayontimondal/Desktop/RFonevsrest2_model.sav")loaded_model = joblib.load(RFmodel) |
Pandas can't find columns, ValueError import numpy as npimport matplotlib.pyplot as pltimport pandas as pdsymbols = ["AAPL", "GLD", "TSLA", "GBL", "GOOGL"]def compare_security(symbols): start_date = "01-01-2019" end_date = "01-12-2020" dates = pd.date_range(start_date, end_date) df1 = pd.DataFrame(index=dates) df_SPY = pd.read_csv( "https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol=SPY&apikey=XXXX&datatype=csv", index_col="timestamp", usecols=["timestamp", "adjusted_close"], parse_dates=True, na_values=['nan']) df_SPY = df_SPY.rename(columns={"adjusted_close": "SPY"}) df1 = df1.join(df_SPY, how="inner") for symbol in symbols: df_temp= pd.read_csv("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol={}&apikey=XXXX&datatype=csv".format(symbol), index_col = "timestamp", usecols = ["timestamp", "adjusted_close"], parse_dates=True, na_values=['nan']) df_temp = df_temp.rename(columns={"adjusted_close":symbol}) df1 = df1.join(df_temp) return df1def test_run(): df = compare_security(symbols) print(df) df.plot() plt.title(symbols) plt.show()if __name__ == "__main__": test_run()It reads the error "ValueError: Usecols do not match columns, columns expected but not found: ['timestamp', 'adjusted_close']"However, I checked all the files the code would retrieve and all of them have the respective columns. Any clarification as to where I went wrong would be greatly appreciated. | You're hitting the API limit with a standard key. The standard key is allowed 5 API calls / minute and 500 / day, that's why it works sometimes. You can see that if you paste your URL into your browser and refresh it 5 - 10 times in 60 seconds you'll manually hit the limit. You can either:Upgrade to a premium key. Space out your API calls (wait 60seconds after you run this to run it again)A note on privacy that may also relate to your API threshold hitting. You have publicly shared your API key. 
Place your API key in an environment variableWhen you post, use "XXXX" or something to that affect as an API key substitute.If you publicly share your API key others can use it and means someone else could be using your 5 API calls / minute.Sample:import numpy as npimport matplotlib.pyplot as pltimport pandas as pdimport ossymbols = ["AAPL", "GLD", "TSLA", "GBL", "GOOGL"]def compare_security(symbols): start_date = "01-01-2019" end_date = "01-12-2020" dates = pd.date_range(start_date, end_date) df1 = pd.DataFrame(index=dates) df_SPY = pd.read_csv( "https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol=SPY&apikey={}&datatype=csv".format( os.getenv("ALPHAVANTAGE_API_KEY")), index_col="timestamp", usecols=["timestamp", "adjusted_close"], parse_dates=True, na_values=['nan']) df_SPY = df_SPY.rename(columns={"adjusted_close": "SPY"}) df1 = df1.join(df_SPY, how="inner") for symbol in symbols: df_temp = pd.read_csv("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol={}&apikey={}&datatype=csv".format(symbol, os.getenv("ALPHAVANTAGE_API_KEY")), index_col="timestamp", usecols=["timestamp", "adjusted_close"], parse_dates=True, na_values=['nan']) df_temp = df_temp.rename(columns={"adjusted_close": symbol}) df1 = df1.join(df_temp) return df1def test_run(): df = compare_security(symbols) print(df) df.plot() plt.title(symbols) plt.show()if __name__ == "__main__": test_run() |
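On the "space out your API calls" point, a small stdlib sketch of one way to do it (the helper and the interval constant are assumptions, not part of Alpha Vantage's API):

```python
import time

# Free-tier budget: 5 requests per minute
MIN_INTERVAL = 60.0 / 5               # seconds between consecutive calls
_last_call = [float("-inf")]          # monotonic timestamp of the last call

def throttled(fetch, *args, **kwargs):
    """Sleep just long enough to honor MIN_INTERVAL, then call fetch."""
    wait = MIN_INTERVAL - (time.monotonic() - _last_call[0])
    if wait > 0:
        time.sleep(wait)
    _last_call[0] = time.monotonic()
    return fetch(*args, **kwargs)
```

In the loop over symbols you would then call e.g. `throttled(pd.read_csv, url, ...)` instead of `pd.read_csv(url, ...)`.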
Compare lists, the order of content within a given column is unimportant I want to compare lists, while the below code does this it doesn't exactly do what I want to achieve.Currently it will output:Lines only found in TEST_1: 4 6034 L LAL,LALLAL5 4231 N ADLines only found in TEST_2: 4 6034 L LALLAL,LAL5 4231 N PL6 5231 T PALLines match in both: 1 1231 L LA1 1234 L T2 1434 A C3 1634 L TWhat I want is:Lines only found in TEST_1: 5 4231 N ADLines only found in TEST_2: 5 4231 N PL6 5231 T PALLines match in both: 1 1231 L LA1 1234 L T2 1434 A C3 1634 L T4 6034 L LAL,LALLALHow can I make the order of data in the last column not matter?So that 4 6034 L LAL,LALLAL matches 4 6034 L LALLAL,LAL.Sample code:TEST_1 = [['1', '1231', 'L', 'LA'],['1', '1234', 'L', 'T'], ['2', '1434', 'A', 'C'],['3', '1634', 'L', 'T'], ['4', '6034', 'L', 'LAL,LALLAL'],['5', '4231', 'N', 'AD']]TEST_2 = [['1', '1231', 'L', 'LA'],['1', '1234', 'L', 'T'], ['2', '1434', 'A', 'C'],['3', '1634', 'L', 'T'], ['4', '6034', 'L', 'LALLAL,LAL'],['5', '4231', 'N', 'PL'], ['6', '5231', 'T', 'PAL']]MATCH_1 = []MATCH_2 = []NO_MATCH_1 = []NO_MATCH_2 = []ENTRY = [TEST_1, TEST_2, MATCH_1, MATCH_2, NO_MATCH_1, NO_MATCH_2]for i in range(0, 2): for word in ENTRY[i]: if word not in ENTRY[1-i]: ENTRY[4+i].append(word) else: ENTRY[2+i].append(word)print 'Lines only found in TEST_1:\t'for word in ENTRY[4]: print '\t'.join(word)print '\nLines only found in TEST_2:\t'for word in ENTRY[5]: print '\t'.join(word)print '\nLines match in both:\t'for word in ENTRY[2]: print '\t'.join(word) | GivenTEST_1 = [['1', '1231', 'L', 'LA'],['1', '1234', 'L', 'T'], ['2', '1434', 'A', 'C'],['3', '1634', 'L', 'T'], ['4', '6034', 'L', 'LAL,LALLAL'],['5', '4231', 'N', 'AD']]you want them to be sets so you can do set operations ({1, 2, 3, 4} - {3, 4, 5, 6} == {1, 2}), so make them sets:TEST_2 = [['1', '1231', 'L', 'LA'],['1', '1234', 'L', 'T'], ['2', '1434', 'A', 'C'],['3', '1634', 'L', 'T'], ['4', '6034', 'L', 'LALLAL,LAL'],['5', '4231', 'N', 
'PL'], ['6', '5231', 'T', 'PAL']]TEST_1 = {tuple(frozenset(x.split(",")) for x in t) for t in TEST_1}TEST_2 = {tuple(frozenset(x.split(",")) for x in t) for t in TEST_2}Note that I transformed each of ['4', '6034', 'L', 'LALLAL,LAL'] to ({'4'}, {'6034'}, {'L'}, {'LALLAL', 'LAL'}) because sets don't have order ({'LALLAL', 'LAL'} == {'LAL', 'LALLAL'}).I also used frozensets and tuples because they're immutable and thus can go into a set, lists and normal sets cannot.Then you can just print it out:print("ONLY IN TEST 1:")for thing in TEST_1 - TEST_2: print('\t'.join(",".join(x) for x in thing))print()print("ONLY IN TEST 2:")for thing in TEST_2 - TEST_1: print('\t'.join(",".join(x) for x in thing))print()print("IN BOTH:")for thing in TEST_1 & TEST_2: print('\t'.join(",".join(x) for x in thing))#>>> ONLY IN TEST 1:#>>> 5 4231 N AD#>>> #>>> ONLY IN TEST 2:#>>> 5 4231 N PL#>>> 6 5231 T PAL#>>> #>>> IN BOTH:#>>> 1 1231 L LA#>>> 3 1634 L T#>>> 4 6034 L LAL,LALLAL#>>> 1 1234 L T#>>> 2 1434 A C |
Python 2: global variable being changed in a function isn't updating to the new value Sorry if the title is confusing. And if I'm using the wrong terms. I just started coding last week. I'm writing a dice roll function for the boss battle of a text adventure game and while I can get the the dice function to use the original global variable outside the function, subtract a number and report it within the function, it doesn't update the global variable after the function runs. So the next time I try to call the function, it uses the original value again, which completely defeats the purpose of having the dice there in the first place. (You can't ever kill the boss, lol)Here's what I've been playing with trying to debug. Thanks in advance!player = "Dib"playerhealth = 3boss = "Zim"bosshealth = 5import randomdef dice(who, whohealth): min = 1 max = 3 dice = random.randint(min, max) if dice == 1: print "Your opponent lost no health" print "Your opponent has %d health" % whohealth elif dice == 2: print "%s hits" % who whohealth = whohealth - 1 print "Your opponent lost 1 health" print "Your opponent has %d health" % whohealth elif dice == 3: print "%s crits" % who whohealth = whohealth - 2 print "Your opponent lost 2 health" print "Your opponent has %d health" % whohealth else: print "stuff"dice(player, bosshealth)dice(player, bosshealth)dice(boss, playerhealth)dice(boss, playerhealth) | You never returned whohealth back; Python passes objects by reference, but you are rebinding the reference in the function:whohealth = whohealth - 1That assigns a new value only to the local name whohealth; the original reference is not updated.The best way to deal with this is by returning the new value:def dice(who, whohealth): min = 1 max = 3 dice = random.randint(min, max) if dice == 1: print "Your opponent lost no health" print "Your opponent has %d health" % whohealth elif dice == 2: print "%s hits" % who whohealth = whohealth - 1 print "Your opponent lost 1 health" print "Your 
opponent has %d health" % whohealth elif dice == 3: print "%s crits" % who whohealth = whohealth - 2 print "Your opponent lost 2 health" print "Your opponent has %d health" % whohealth else: print "stuff" return whohealthbosshealth = dice(player, bosshealth)bosshealth = dice(player, bosshealth)playerhealth = dice(boss, playerhealth)playerhealth = dice(boss, playerhealth)Now the function returns the new health value, and you can assign that value back to the bosshealth or playerhealth globals. |
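The return-and-rebind pattern from the answer, reduced to its essentials (the optional `roll` parameter is an addition for testability, not part of the original game):

```python
import random

def dice(whohealth, roll=None):
    """Return the opponent's health after one roll; the caller rebinds it."""
    roll = random.randint(1, 3) if roll is None else roll
    if roll == 2:      # hit
        whohealth -= 1
    elif roll == 3:    # crit
        whohealth -= 2
    return whohealth

bosshealth = 5
bosshealth = dice(bosshealth, roll=3)  # forced crit for the example: 5 -> 3
```

The key line is `bosshealth = dice(bosshealth, ...)`: without the assignment, the subtraction inside the function only rebinds the local name and the global never changes.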
how to melt on multiple level in pandas I have this excel data: x1 x2 x3id a b a b a bfoo 1 2 3 4 2 4Column x1, x2, and x3 are made for both a and b and there are possibility where the number of x will keep increasing, so before going to the database I decided to transform the data into:id category type valuesfoo x1 a 1foo x1 b 2foo x2 a 3foo x2 b 4foo x3 a 2foo x3 b 4how can I transform my dataframe with pandas? thanks in advance | If there is MultiIndex in columns use DataFrame.unstack:print (df.columns)MultiIndex([('x1', 'a'), ('x1', 'b'), ('x2', 'a'), ('x2', 'b'), ('x3', 'a'), ('x3', 'b')], )df = (df.unstack() .reorder_levels([2,0,1]) .rename_axis(['id','category','type']) .reset_index(name='values'))print (df) id category type values0 foo x1 a 11 foo x1 b 22 foo x2 a 33 foo x2 b 44 foo x3 a 25 foo x3 b 4Solution with DataFrame.stack by both levels:df = df.stack([0,1]).rename_axis(['id','category','type']).reset_index(name='values')print (df) id category type values0 foo x1 a 11 foo x1 b 22 foo x2 a 33 foo x2 b 44 foo x3 a 25 foo x3 b 4 |
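The `stack` variant can be checked end to end by reconstructing the question's two-level column frame with `MultiIndex.from_product` (recent pandas may warn that the `stack` implementation is changing, but the result is the same for this frame, which has no missing values):

```python
import pandas as pd

# One row, columns (x1..x3) x (a, b), as in the question's Excel sheet
df = pd.DataFrame(
    [[1, 2, 3, 4, 2, 4]],
    index=pd.Index(["foo"], name="id"),
    columns=pd.MultiIndex.from_product([["x1", "x2", "x3"], ["a", "b"]]),
)

# Stack both column levels into the index, name them, and flatten
out = (df.stack([0, 1])
         .rename_axis(["id", "category", "type"])
         .reset_index(name="values"))
```

`df.stack([0, 1])` returns a Series whose MultiIndex carries (id, category, type); `reset_index(name="values")` then yields exactly the four-column long format wanted for the database.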
Seaborn fails to plot heatmap for a particular feature (titanic dataset) I am working with some neural networks and I am struggling to plot a correlation heatmap for the titanic dataset using seaborn. To be concise: it seems that there is a problem with the 'n_siblings_spouses' features during the plotting. I don't know if the problem is due to the feature itself (spacing, maybe?) or if there is an intrinsic issue with seaborn.Would it be possible to solve the issue without the need to remove the feature from the dataset?Here is a MWE. And thanks in advance!from __future__ import absolute_import,division,print_function,unicode_literalsimport numpy as np import pandas as pdimport matplotlibimport matplotlib.pyplot as pltfrom matplotlib import rc, font_manager%matplotlib inlinefrom IPython.display import clear_outputfrom six.moves import urllibimport tensorflow.compat.v2.feature_column as fc import tensorflow as tf import seaborn as snsrc('text', usetex=True)matplotlib.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']# only if needed#!apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipngplt.rc('font', family='serif')# URL address of dataTRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"# Downloading datatrain_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)# Setting numpy default values.np.set_printoptions(precision=3, suppress=True)# Reading datadata_train = pd.read_csv(train_file_path)print("\n TRAIN DATA SET")print(data_train.head(),"\n")def heatMap(df): #Create Correlation df corr = df.corr() #Plot figsize fig, ax = plt.subplots(figsize=(10, 10)) #Generate Color Map colormap = sns.diverging_palette(220, 10, as_cmap=True) #Generate Heat Map, allow annotations and place floats in map sns.heatmap(corr, cmap=colormap, annot=True, fmt=".2f") #Apply xticks plt.xticks(range(len(corr.columns)), corr.columns); #Apply yticks plt.yticks(range(len(corr.columns)), corr.columns) #show plot 
plt.show()heatMap(data_train)Here is the issue that is raised when trying to execute the heatMap function (I am working in Colab. However, this also happens in console):---------------------------------------------------------------------------CalledProcessError Traceback (most recent call last)/usr/local/lib/python3.7/dist-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex) 305 cwd=self.texcache,--> 306 stderr=subprocess.STDOUT) 307 except FileNotFoundError as exc:22 framesCalledProcessError: Command '['latex', '-interaction=nonstopmode', '--halt-on-error', '/root/.cache/matplotlib/tex.cache/bf616eae1512bede263889c8e1d8fb21.tex']' returned non-zero exit status 1.The above exception was the direct cause of the following exception:RuntimeError Traceback (most recent call last)/usr/local/lib/python3.7/dist-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex) 317 prog=command[0], 318 tex=tex.encode('unicode_escape'),--> 319 exc=exc.output.decode('utf-8'))) from exc 320 _log.debug(report) 321 return reportRuntimeError: latex was not able to process the following string:b'n_siblings_spouses'Here is the full report generated by latex:This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=latex) restricted \write18 enabled.entering extended mode(/root/.cache/matplotlib/tex.cache/bf616eae1512bede263889c8e1d8fb21.texLaTeX2e <2017-04-15>Babel <3.18> and hyphenation patterns for 3 language(s) loaded.(/usr/share/texlive/texmf-dist/tex/latex/base/article.clsDocument Class: article 2014/09/29 v1.4h Standard LaTeX document 
class(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))(/usr/share/texlive/texmf-dist/tex/latex/type1cm/type1cm.sty)(/usr/share/texmf/tex/latex/cm-super/type1ec.sty(/usr/share/texlive/texmf-dist/tex/latex/base/t1cmr.fd))(/usr/share/texlive/texmf-dist/tex/latex/base/textcomp.sty(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.def))(/usr/share/texlive/texmf-dist/tex/latex/base/inputenc.sty(/usr/share/texlive/texmf-dist/tex/latex/base/utf8.def(/usr/share/texlive/texmf-dist/tex/latex/base/t1enc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/ot1enc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/omsenc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.dfu)))(/usr/share/texlive/texmf-dist/tex/latex/geometry/geometry.sty(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty)(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty)(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifvtex.sty)(/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)Package geometry Warning: Over-specification in `h'-direction. `width' (5058.9pt) is ignored.Package geometry Warning: Over-specification in `v'-direction. `height' (5058.9pt) is ignored.) (/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsmath.styFor additional information on amsmath, use the `?' option.(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amstext.sty(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsgen.sty))(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsbsy.sty)(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsopn.sty))(./bf616eae1512bede263889c8e1d8fb21.aux)(/usr/share/texlive/texmf-dist/tex/latex/base/ts1cmr.fd)*geometry* driver: auto-detecting*geometry* detected driver: dvips! 
Missing $ inserted.<inserted text> $l.19 {\rmfamily n_ siblings_spouses}No pages of output.Transcript written on bf616eae1512bede263889c8e1d8fb21.log.---------------------------------------------------------------------------CalledProcessError Traceback (most recent call last)/usr/local/lib/python3.7/dist-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex) 305 cwd=self.texcache,--> 306 stderr=subprocess.STDOUT) 307 except FileNotFoundError as exc:21 framesCalledProcessError: Command '['latex', '-interaction=nonstopmode', '--halt-on-error', '/root/.cache/matplotlib/tex.cache/bf616eae1512bede263889c8e1d8fb21.tex']' returned non-zero exit status 1.The above exception was the direct cause of the following exception:RuntimeError Traceback (most recent call last)/usr/local/lib/python3.7/dist-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex) 317 prog=command[0], 318 tex=tex.encode('unicode_escape'),--> 319 exc=exc.output.decode('utf-8'))) from exc 320 _log.debug(report) 321 return reportRuntimeError: latex was not able to process the following string:b'n_siblings_spouses'Here is the full report generated by latex:This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=latex) restricted \write18 enabled.entering extended mode(/root/.cache/matplotlib/tex.cache/bf616eae1512bede263889c8e1d8fb21.texLaTeX2e <2017-04-15>Babel <3.18> and hyphenation patterns for 3 language(s) loaded.(/usr/share/texlive/texmf-dist/tex/latex/base/article.clsDocument Class: article 2014/09/29 v1.4h Standard LaTeX document 
class(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))(/usr/share/texlive/texmf-dist/tex/latex/type1cm/type1cm.sty)(/usr/share/texmf/tex/latex/cm-super/type1ec.sty(/usr/share/texlive/texmf-dist/tex/latex/base/t1cmr.fd))(/usr/share/texlive/texmf-dist/tex/latex/base/textcomp.sty(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.def))(/usr/share/texlive/texmf-dist/tex/latex/base/inputenc.sty(/usr/share/texlive/texmf-dist/tex/latex/base/utf8.def(/usr/share/texlive/texmf-dist/tex/latex/base/t1enc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/ot1enc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/omsenc.dfu)(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.dfu)))(/usr/share/texlive/texmf-dist/tex/latex/geometry/geometry.sty(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty)(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty)(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifvtex.sty)(/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)Package geometry Warning: Over-specification in `h'-direction. `width' (5058.9pt) is ignored.Package geometry Warning: Over-specification in `v'-direction. `height' (5058.9pt) is ignored.) (/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsmath.styFor additional information on amsmath, use the `?' option.(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amstext.sty(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsgen.sty))(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsbsy.sty)(/usr/share/texlive/texmf-dist/tex/latex/amsmath/amsopn.sty))(./bf616eae1512bede263889c8e1d8fb21.aux)(/usr/share/texlive/texmf-dist/tex/latex/base/ts1cmr.fd)*geometry* driver: auto-detecting*geometry* detected driver: dvips! Missing $ inserted.<inserted text> $l.19 {\rmfamily n_ siblings_spouses}No pages of output.Transcript written on bf616eae1512bede263889c8e1d8fb21.log.<Figure size 720x720 with 2 Axes> | To solve this problem, I came across this information that Colab needs a Tex-related module. 
There was also an excellent answer on SO. You will need to install the following:

! sudo apt-get install texlive-latex-recommended
! sudo apt-get install dvipng texlive-fonts-recommended
! wget http://mirrors.ctan.org/macros/latex/contrib/type1cm.zip
! unzip type1cm.zip -d /tmp/type1cm
! cd /tmp/type1cm/type1cm/ && sudo latex type1cm.ins
! sudo mkdir /usr/share/texmf/tex/latex/type1cm
! sudo cp /tmp/type1cm/type1cm/type1cm.sty /usr/share/texmf/tex/latex/type1cm
! sudo texhash
! sudo apt install cm-super

from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
# from matplotlib import rc, font_manager
%matplotlib inline
from IPython.display import clear_output
from six.moves import urllib
import tensorflow.compat.v2.feature_column as fc
import tensorflow as tf
import seaborn as sns

# rc('text', usetex=True)
# matplotlib.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']  # only if needed
# !apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng
# plt.rc('font', family='serif')

# URL address of data
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"

# Downloading data
train_file_path = tf.keras.utils.get_file("/content/sample_data/train.csv", TRAIN_DATA_URL)

# Setting numpy default values.
np.set_printoptions(precision=3, suppress=True)

# Reading data
data_train = pd.read_csv(train_file_path)
print("\n TRAIN DATA SET")
print(data_train.head(), "\n")

def heatMap(df):
    # Create Correlation df
    corr = df.corr()
    print(corr)
    # Plot figsize
    fig, ax = plt.subplots(figsize=(10, 10))
    # Generate Color Map
    colormap = sns.diverging_palette(220, 10, as_cmap=True)
    # Generate Heat Map, allow annotations and place floats in map
    sns.heatmap(corr, cmap=colormap, annot=True, fmt=".2f")
    # Apply xticks
    plt.xticks(range(len(corr.columns)), corr.columns)
    # Apply yticks
    plt.yticks(range(len(corr.columns)), corr.columns)
    # show plot
    plt.show()

heatMap(data_train)
Can classmethod/staticmethod access attributes of initial caller? I am learning and try to write python api client for some third-party APIFor simplicity:I have 2 classes: Session, EndpointGroup:class Session: def __init__(self, token=None): self.token = token self.group1 = EndpointGroupclass EndpointGroup: @classmethod def ping(cls, token, **kwargs): return f'Calling ping() with params: {kwargs} and token {token}' I have many EndpointGroups which contain same-category api calls.I want to give end-user to call methods like this:session = Session(token='123')r = session.group1.ping(data='somedata') # Calling ping with params: {'data':'somedata'} and token 123I want to pass token to callable automaticly/implicitly so user don't have to on each call pass to callable something like session.group1.ping(data='somedata', token=session.token)Is there "built-in" solution to do it? Because with _getattribute_ I only can access EndpointGroup class but not method which is called.What i have already tried:Creating instances of EndpointGroup passing session/token (Don't like solution because it creates too many instances for each single session) with _getattribute_Let Session inherit EndpointGroup methods, but then I lose desirable "interface" like session.groupN.method_X to session.method_XMake in EndpointGroup class attr 'token' and On each call change it so method can access this token of current caller(session) (Don't like this solution because this way I will not be able to work asynchronously) | class Session: def __init__(self, token=None): self.token = token self.group1 = EndpointGroup def __getattribute__(self, item): attribute = super(Session, self).__getattribute__(item) if isinstance(attribute, type(EndpointGroup)): return ProxyEndpointGroup(self.token,attribute) else: return attributeimport functoolsclass ProxyEndpointGroup: def __init__(self, token, endpoint_group): self.token = token self.endpoint_group = endpoint_group def __getattr__(self, item): return 
functools.partial(getattr(self.endpoint_group,item), token=self.token)class EndpointGroup: @classmethod def ping(cls,token, **kwargs): return f'Calling ping() with params: {kwargs} and token {token}'session = Session(token='12345')r = session.group1.ping(data='endpoint data')print(r)>> Calling ping() with params: {'data': 'endpoint data'} and token 12345this way you can create the proxy object only on call add token param and return the partial method finally destroy the proxy object.It solves your concern of not having to create multiple unnecessary objects in memoryIt also solves the problem of async calls as token in passed in each call |
IronPython.Runtime.Exceptions.ImportException: 'No module named 'pyodbc'' I am trying to run a python script from VB.Net using IronPython. So far, I have installed Python and IronPython. I have the ExecPython method shown below. It works fine when I call a simple print/hello world type of script. This DBTest.py script is just using pyodbc and connecting to the database and executing a basic select query.The error I get at the source.Execute(scope) line is "IronPython.Runtime.Exceptions.ImportException: 'No module named 'pyodbc''"I've installed pyodbc using pip install pyodbc. The DBTest.py script runs fine when I run it with IDLE.I'm not sure if this is a limitation or if there's something I'm missing in the setup.Thanks in advance for your help!!Sub ExecPython(ByVal argv As List(Of String)) Dim engine As ScriptEngine = Python.CreateEngine Dim scriptPath As String = "C:\scripts\DBTest.py" Dim source As ScriptSource = engine.CreateScriptSourceFromFile(scriptPath) argv.Add("") engine.GetSysModule.SetVariable("argv", argv) engine.SetSearchPaths({"C:\Users\MYUSERNAME\AppData\Local\Programs\Python\Python39\Scripts", "C:\Users\MYUSERNAME\AppData\Local\Programs\Python\Python39\include", "C:\Users\MYUSERNAME\AppData\Local\Programs\Python\Python39\Lib", "C:\Program Files\IronPython 3.4\Lib"}) Dim eIO As ScriptIO = engine.Runtime.IO Dim errors As New MemoryStream eIO.SetErrorOutput(errors, Text.Encoding.Default) Dim results As New MemoryStream eIO.SetOutput(results, Text.Encoding.Default) Dim scope As ScriptScope = engine.CreateScope source.Execute(scope) Console.WriteLine("ERRORS:") Console.WriteLine(FormatResult(errors.ToArray)) Console.WriteLine("") Console.WriteLine("RESULTS:") Console.WriteLine(FormatResult(results.ToArray))End SubHere is the python script that I am calling. 
It runs when I run the module from IDLE.

import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=MYSERVERNAME;'
                      'Database=MYDBNAME;'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute('SELECT * FROM dbo.TABLENAME')

for row in cursor:
    print(row) | It's been a while; probably you have already found a solution. But in general, to be able to use the module you need to import clr and use the AddReference function. Then you must import the namespace in the Python style. Example:

import clr
clr.AddReference('System')
from System import *
Django API how validation message back to user I'm using TastyPie to register a new user. I would like to display any validation messages back to the user in an alert box. I have noticed that TastyPie gives me the following back: responseJSON and responseText.responseJSON Object { accounts/create={...}}accounts/create Object { email=[1], password2=[1]}responseText "{"accounts/create": {"email": ["This field is required."], "password2": ["This field is required."]}}"How do I show the validation back to the user and parse this correctly? Does TastyPie have any inbuilt functions to parse errors? This is what I have so far on error which runs but I don't get any error message: error: function(errorThrown){ data = JSON.parse(errorThrown.responseText); alert(data.error) console.log(errorThrown); } }); | Your error seems to be on the jQuery side. The error key points to a function defined differently (see here). error Type: Function( jqXHR jqXHR, String textStatus, String errorThrown ) A function to be called if the request fails. The function receives three arguments: The jqXHR (in jQuery 1.4.x, XMLHttpRequest) object, a string describing the type of error that occurred and an optional exception object, if one occurred. Possible values for the second argument (besides null) are "timeout", "error", "abort", and "parsererror". When an HTTP error occurs, errorThrown receives the textual portion of the HTTP status, such as "Not Found" or "Internal Server Error." As of jQuery 1.5, the error setting can accept an array of functions. Each function will be called in turn. Note: This handler is not called for cross-domain script and cross-domain JSONP requests. This is an Ajax Event.So your JS code ought to be,error: function(jqXHR, textStatus, errorThrown) { ...},But actually, when you return the data from django, that is a success. 
"Error" in jQuery means a communication error, not a semantic error.If you use justerror: function(jqXHR, textStatus, errorThrown) { alert("An error!")},you'll probably see that it does not get called (you need to return a 400/500 error code from django to have it be called).So what you can do is differentiate between what you successfully return in case of "true" success, and what you successfully return in case of a server side failure: something like,# and/or answer.data['errorMessage'] = "Error type II"answer.data['success'] = Falsereturn answerand then in jQuery:success: function(data, textStatus, jqXHR) { // instead of checking for success, you can check the content of // the "accounts/create" key // if (!data.success) { // I assume that this key won't be present in case of error. if (data["accounts/create"]) { alert("Errors will now be displayed") // ...here, the same code in Corinne Kubler's answer; or you // can make it more jQueryish with e.g. .each() return; } alert("Everything OK, proceeding") // ...} |
Plot the search volume over time Plot the search volume over time. Verify that your result looks similar to THIS.

First, I should import some data and print the 2 first lines. I did that successfully, but I can't figure out how to do the question above? So far my code looks like this:

import csv

with open('iphonevsandroid.csv', 'rb') as f:
    reader = csv.reader(f)
    for i in range(2):
        print reader.next()

N = 2
f = open('iphonevsandroid.csv', 'rb')
for i in range(N):
    line = f.next().strip()
    print line
f.close() | You can use pygame to plot the data. A short example:

import pygame, sys
from pygame.locals import QUIT

# data: the list of numeric search-volume values loaded from the CSV
pygame.init()
screen = pygame.display.set_mode((640, 360), 0, 32)
lines = list(zip(data, data[1:]))  # consecutive pairs of values
while True:
    screen.fill((255, 255, 255))
    for x, (start, end) in enumerate(lines):
        pygame.draw.line(screen, (0, 0, 0), (x * 50, start), ((x + 1) * 50, end))
    pygame.display.flip()
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
Oracle Stored Procedure and PyODBC I am having some trouble with getting PyODBC to work with a proc in Oracle. Below is the code and the outputdb = pyodbc.connect('DSN=TEST;UID=cantsay;PWD=cantsay')print('-' * 20)try: c = db.cursor() rs = c.execute("select * from v$version where banner like 'Oracle%'") for txt in c.fetchall(): print('%s' % (txt[0])) test = "" row = c.execute("call DKB_test.TESTPROC('7894','789465','789465')").fetchall()finally: db.close()OUTPUT> C:\Documents and Settings\dex\Desktop>orctest.py> -------------------- Oracle Database 10g Release 10.2.0.4.0 - 64bit Production Traceback (most recent call last): File "C:\Documents and> Settings\dex\Desktop\orctest.py", line 31, in <module>> row = c.execute("{call DKB_test.TESTPROC(12354,78946,123 4)}").fetchall()> pyodbc.Error: ('HY000', "[HY000] [Oracle][ODBC][Ora]ORA-06550: line 1,> column 7: \nPLS-00221: 'TESTPROC' is not a procedure> or is undefined\nORA- 06550: line 1, column 7:\nPL/SQL: Statement> ignored\n (6550) (SQLExecDirectW)")But I can see this procedure and coding it in c# it works, but this project I am doing is requiring python for now.I did some Google searches and nothing comes up that helps me.Any thing will be greatly appreciated. | Not 100% sure, The procedure name is Get_SC_From_Comp_Ven_Job or GET_SC_FROM_COMP_VEN_JOB?check the userspace is correct or not.check the name case-sensitive, if we create procedure Get_SC_From_Comp_Ven_Job, actually it is GET_SC_FROM_COMP_VEN_JOB. but if we create procedure "Get_SC_From_Comp_Ven_Job", then it is |
Updating "To:" Email-Header in a while loop in python Below is a code to send multiple emails to contacts loaded from a text file.import time from time import sleep from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText import smtplib uname = #testmail@gmail.com name = "KTester" password = #password1 server = smtplib.SMTP('smtp.gmail.com: 587') server.starttls() server.login(uname, password) message="Test" msg = MIMEMultipart('Alternative') f= open("list.txt","r")clear if f.mode == "r": cont = f.read().splitlines() for x in cont: print time.ctime() msg['Subject'] = "Test Mail - cripted Sample" msg['To'] = x msg['From'] = name+"\x0A\x0D"+uname msg.attach(MIMEText(message, 'html')) print "successfully sent email to %s:" % (msg['To']) f.close() server.quit()OUTPUT:In this case, the first compilation is the expected outcome, which we can get if we use print "successfully sent email to %s:" % (x)The Variable 'x' changes its value at the end of each iteration.However, msg['To'] = x does not accept value from second iteration of the loop(The second code run above).Assignment operation does not work on the message object.Kindly help with whats going wrong. Thanks! | This behaviour is by design.Reassigning to msg['to'] doesn't overwrite the existing mail header, it adds another. To send the existing message to a new address you need to delete the 'to' header before setting it.del msg['to']msg['to'] = 'spam@example.com'This applies to other headers too. From the docs for Message.__setitem__: Note that this does not overwrite or delete any existing header with the same name. If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.: del msg['subject'] msg['subject'] = 'Python roolz!' |
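A minimal runnable sketch of the fix applied to the loop in the question, using hypothetical recipient addresses and with the actual SMTP send commented out:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg.attach(MIMEText('Test', 'html'))

for recipient in ['first@example.com', 'second@example.com']:
    # Deleting a header that is not present is a no-op, so this is safe
    # on the first iteration too.
    del msg['To']
    msg['To'] = recipient
    # server.sendmail(uname, recipient, msg.as_string())  # actual send omitted

# Without the del, both addresses would accumulate as separate To: headers;
# with it, only the last assignment survives:
assert msg.get_all('To') == ['second@example.com']
```

The same applies to the Subject header set inside the loop in the question's code.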
How to remove GPU prints in TensorFlow? I use this code to use GPU in TensorFlow:gpus = tf.config.list_physical_devices('GPU')print("Num GPUs Available: ", len(gpus))if gpus: tf.debugging.set_log_device_placement(True)but when I execute this cell:model=keras.Sequential([ keras.Input(( X_train.shape[1],)), keras.layers.Dense(1024,activation="relu"), keras.layers.Dropout(0.3), keras.layers.Dense(1024,activation="relu"), keras.layers.Dropout(0.3), keras.layers.Dense(1024,activation="relu"), keras.layers.Dropout(0.3), keras.layers.Dense(1024,activation="relu"), keras.layers.Dense(1),])model.compile( optimizer="adam", loss=correlation_coefficient_loss)The output is:Executing op VarHandleOp in device/job:localhost/replica:0/task:0/device:GPU:0 Executing opAssignVariableOp in device/job:localhost/replica:0/task:0/device:GPU:0 Executing op VarHandleOpin device /job:localhost/replica:0/task:0/device:GPU:0 Executing opAssignVariableOp in device/job:localhost/replica:0/task:0/device:GPU:0 Executing op VarHandleOpin device /job:localhost/replica:0/task:0/device:GPU:0 Executing opAssignVariableOp in device/job:localhost/replica:0/task:0/device:GPU:0 Executing op _EagerConstin device /job:localhost/replica:0/task:0/device:GPU:0 Executing opRandomUniform in device /job:localhost/replica:0/task:0/device:GPU:0Executing op Sub in device/job:localhost/replica:0/task:0/device:GPU:0 Executing op Mul indevice /job:localhost/replica:0/task:0/device:GPU:0 Executing op AddV2in device /job:localhost/replica:0/task:0/device:GPU:0 Executing opVarHandleOp in device /job:localhost/replica:0/task:0/device:GPU:0Executing op AssignVariableOp in device/job:localhost/replica:0/task:0/device:GPU:0 Executing op _EagerConstin device /job:localhost/replica:0/task:0/device:GPU:0 Executing opFill in device /job:localhost/replica:0/task:0/device:GPU:0 Executingop VarHandleOp in device /job:localhost/replica:0/task:0/device:GPU:0Executing op AssignVariableOp in 
device /job:localhost/replica:0/task:0/device:GPU:0
Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:GPU:0

This print is annoying. My question is: how do I remove these GPU prints from my output in TensorFlow? I tried:

tf.autograph.set_verbosity(3)

but I was not successful | You just need to remove the debugging line (and remember to restart your kernel!):

tf.debugging.set_log_device_placement(True)
im getting json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) import jsonimport reimport scrapyimport astclass Scraper(scrapy.spiders.Spider): name = 'scraper' #mandatory=None def __init__(self, page=None, config=None, *args, **kwargs): self.page =page self.config = json.loads(config) print(type(self.config)) #self.mandatory_fields = mandatory.split(',') super(Scraper, self).__init__(*args, **kwargs) def start_requests(self): self.logger.info('Start url: %s' % self.page) yield scrapy.Request(url=self.page, callback=self.parse) def parse(self, response): item = dict(url=response.url) # iterate over all keys in config and extract value for each of thems for key in self.config: print("++"+key) # extract the data for the key from the html response #print("++++++++++"+type(key)) print("+++"+self.config) res = response.css(self.config[key]).extract() # if the label is any type of url then make sure we have an absolute url instead of a relative one if bool(re.search('url', key.lower())): res = self.get_absolute_url(response, res) item[key] = ' '.join(elem for elem in res).strip() # ensure that all mandatory fields are present, else discard this scrape mandatory_fileds_present = True for key in self.mandatory_fields: if not item[key]: mandatory_fileds_present = False if mandatory_fileds_present: yield dict(data=item) @staticmethod def get_absolute_url(response, urls): final_url = [] for url in urls: if not bool(re.match('^http', url)): final_url.append(response.urljoin(url)) else: final_url.append(url) return final_urlim getting this error :json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)im passing css selector as argument in config : scrapy crawl scraper -a page=appeloffres.com/appels-offres/telecom -a config='{"Nom":".table_taille td > b::text","des":".desc_text b::text"}'when im doing : self.config = json.loads(config)any solution??? 
| This means that you are trying to convert a variable into a dict that the loads method cannot convert. json.loads() converts a string into a dictionary. For example:

>>> import json
>>>
>>> my_str = '{"key1": "value1", "key2": "value2"}'
>>>
>>> loaded_dict = json.loads(my_str)
>>>
>>> loaded_dict
{'key1': 'value1', 'key2': 'value2'}
>>> type(loaded_dict)
<class 'dict'>

This is how json.loads works, i.e., converting a string to a dict. However, if you try something like this:

>>> import json
>>>
>>> some_var = 'cannot be converted to dict'
>>>
>>> loaded_dict = json.loads(some_var)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

This means that not every string can be converted into a dictionary object. You should debug and carefully check what the config variable holds.
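Since the error message in the question complains about property names not being enclosed in double quotes, the config string most likely reached the spider with single quotes (shells can mangle the quoting of -a arguments). A small sketch of both the working case and a fallback via ast.literal_eval, which the spider in the question already imports:

```python
import ast
import json

# Double-quoted JSON parses fine:
good = '{"Nom": ".table_taille td > b::text", "des": ".desc_text b::text"}'
assert json.loads(good)["Nom"] == ".table_taille td > b::text"

# Single-quoted text is valid Python literal syntax but not valid JSON:
mangled = "{'Nom': '.table_taille td > b::text'}"
try:
    config = json.loads(mangled)
except json.JSONDecodeError:
    # ast.literal_eval safely parses Python literals instead
    config = ast.literal_eval(mangled)

assert config["Nom"] == ".table_taille td > b::text"
```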
Python Selenium not clicking I am trying to get my selenium script to click a <a> link that is in a modal.So this is the html;<div class="modal-btns"> <a href="" id="confirm-btn" class="btn-primary-md">Get Now</a> <a href="" id="decline-btn" class="btn-control-md">Cancel</a> </div>Now I want to click the button Get Now. I tried this; button = browser.find_elements_by_css_selector('#confirm-btn') for e in elements: try: e.click() except: what = 0 print "Clicked!"I started this script without headless mode. In the console it printed out Clicked! while in the browser nothing has happend.How can I make this work?==== EDIT ====There is a also a button that triggers the modal, that just works..That button html is;<div class="action-button"> <button type="button" class="btn-fixed-width-lg btn-primary-lg PurchaseButton" data-button-type="main" data-button-action="get" data-expected-price="0" data-bc-requirement="0" data-product-id="222406" data-item-id="1744006" data-item-name="SBC Cannon 2.0" data-asset-type="TShirt" data-asset-type-display-name="T-Shirt" data-item-type="Asset" data-expected-currency="1" data-expected-seller-id="52246" data-seller-name="ViacomIsPoo" data-userasset-id="" style=""> Get </button> </div>It clicks that button by this code: elements = browser.find_elements_by_xpath("//button[contains(@class, 'PurchaseButton')]") for e in elements: try: e.click() except: what = 0So the full code will be; elements = browser.find_elements_by_xpath("//button[contains(@class, 'PurchaseButton')]") for e in elements: try: e.click() except: what = 0 time.sleep(4) button = browser.find_element_by_id('confirm-btn') for e in elements: try: e.click() except: what = 0 print "Clicked!"But it doesn't click the confirm button... | Try this:driver.execute_script("document.getElementById('confirm-btn').click()") |
Plotly Bar Chart Based on Pandas dataframe grouped by year I have a pandas dataframe that I've tried to group by year on 'Close Date' and then plot 'ARR (USD)' on the y-axis against the year on the x-axis. All seems fine after grouping:

sumyr = brandarr.groupby(brandarr['Close Date'].dt.year, as_index=True).sum()

              ARR (USD)
Close Date
2017        17121174.33
2018        15383130.32

But when I try to plot:

trace = [go.Bar(x=sumyr['Close Date'], y=sumyr['ARR (USD)'])]

I get the error: KeyError: 'Close Date'. I'm sure it's something stupid, I'm a newbie, but I've been messing with it for an hour and well, here I am. Thanks! | In your groupby function you have used as_index=True, so Close Date is now an index. If you want to have access to the index, use pandas .loc or .iloc. To have access to the index values directly, use:

sumyr.index.tolist()

Check here: Pandas - how to get the data frame index as an array
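To make the fix concrete, here is a small sketch with made-up numbers (selecting the ARR column so the result is a Series) showing that after the groupby the year lives in the index, so go.Bar should be fed sumyr.index rather than sumyr['Close Date']:

```python
import pandas as pd

brandarr = pd.DataFrame({
    "Close Date": pd.to_datetime(["2017-03-01", "2017-06-15", "2018-02-20"]),
    "ARR (USD)": [100.0, 200.0, 50.0],
})
sumyr = brandarr.groupby(brandarr["Close Date"].dt.year)["ARR (USD)"].sum()

# The year is now the index, not a column:
assert sumyr.index.tolist() == [2017, 2018]
assert sumyr.tolist() == [300.0, 50.0]

# trace = [go.Bar(x=sumyr.index.tolist(), y=sumyr.tolist())]  # plotly usage
```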
Adding values to non zero elements in a Sparse Matrix I have a sparse matrix in which I want to increment all the values of non-zero elements by one. However, I cannot figure it out. Is there a way to do it using standard packages in python? Any help will be appreciated. | I cannot comment on its performance, but you can do (SciPy 1.1.0):

>>> from scipy.sparse import csr_matrix
>>> a = csr_matrix([[0, 2, 0], [1, 0, 0]])
>>> print(a)
  (0, 1)    2
  (1, 0)    1
>>> a[a.nonzero()] = a[a.nonzero()] + 1
>>> print(a)
  (0, 1)    3
  (1, 0)    2
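An alternative sketch: for formats such as CSR/COO, the stored non-zero values live in the matrix's .data attribute, so they can all be incremented in one vectorized step (calling eliminate_zeros() first in case the matrix carries explicitly stored zeros):

```python
import numpy as np
from scipy.sparse import csr_matrix

a = csr_matrix([[0, 2, 0], [1, 0, 0]])
a.eliminate_zeros()   # ensure .data holds only genuine non-zeros
a.data += 1           # increments only the stored (non-zero) entries

expected = np.array([[0, 3, 0], [2, 0, 0]])
assert (a.toarray() == expected).all()
```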
Websocket message handler in Python I have successfully subscribed to a websocket and am receiving data. Like:Received '[0, 'hb']'Received '[1528, 'hb']'Received '[1528, [6613.2, 21.29175815, 6613.3, 37.02985217, 81.6, 0.0125, 6611.6, 33023.06141807, 6826.41966538, 6491]]'Received '[1528, 'hb']'Received '[0, 'hb']'Now I want the individual values in python as variable memory.For example:ChanID = 1528Price = 6613.2I need this module in Python: https://pypi.org/project/websocket-client/Many thanks for your helpHere is the code:import jsonimport hmac, hashlibfrom time import time, sleepfrom websocket import create_connectionurl = 'wss://api.bitfinex.com/ws/2'key = ""secret = ""ws = create_connection(url)class Bitfinex(): def __init__(self, secret, key): self.secret = secret self.key = key self.url = url self.nonce = str(int(time.time() * 1000000)) self.signature = hmac.new(str.encode(self.secret), bytes('AUTH' + self.nonce, 'utf8'), hashlib.sha384).hexdigest() self.payload = {'apiKey': self.key, 'event': 'auth', 'authPayload': 'AUTH' + self.nonce, 'authNonce': self.nonce, 'authSig': self.signature} ws.send(json.dumps(self.payload)) def get_chanid(self, symbol): get_chanid = { 'event': "subscribe", 'channel': "ticker", 'symbol': "t" + symbol, } ws.send(json.dumps(get_chanid))Bitfinex.__init__(Bitfinex, secret, key)sleep(1)Bitfinex.get_chanid(Bitfinex, 'BTCUSD')sleep(1) | this may be helpimport jsonimport hmacimport hashlibimport timefrom websocket import create_connectionkey = ""secret = ""class Bitfinex(object): def __init__(self, secret, key): self.secret = secret self.key = key self.url = 'wss://api.bitfinex.com/ws/2' self.nonce = str(int(time.time() * 1000000)) self.signature = hmac.new(str.encode(self.secret), bytes('AUTH' + self.nonce, 'utf8'), hashlib.sha384).hexdigest() self.payload = {'apiKey': self.key, 'event': 'auth', 'authPayload': 'AUTH' + self.nonce, 'authNonce': self.nonce, 'authSig': self.signature} self.ws = create_connection(url) def get_chanid(self, 
symbol): self.ws.send(json.dumps(self.payload)) get_chanid = { 'event': "subscribe", 'channel': "ticker", 'symbol': "t" + symbol, } self.ws.send(json.dumps(get_chanid)) while True: data = self.ws.recv() print(data)bit = Bitfinex(secret, key)bit.get_chanid('BTCUSD') |
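Neither snippet above actually decodes the received frames into variables. A sketch of pulling ChanID and Price out of one received string with the standard json module (treating the first ticker field as the price to match the numbers in the question; check the exact field layout against the Bitfinex channel documentation):

```python
import json

raw = "[1528, [6613.2, 21.29175815, 6613.3, 37.02985217, 81.6, 0.0125, 6611.6, 33023.06141807, 6826.41966538, 6491]]"
msg = json.loads(raw)

chan_id = msg[0]
if isinstance(msg[1], list):   # ticker update frame
    price = msg[1][0]
else:                          # e.g. [1528, "hb"] heartbeat frame
    price = None

assert chan_id == 1528
assert price == 6613.2
```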
Is Python byte-code interpreter independent? This is an obvious question that I haven't been able to find a concrete answer to. Are the Python byte-code and Python code itself interpreter independent? Meaning by this: if I take a CPython, PyPy, Jython, IronPython, Skulpt, etc. interpreter and attempt to run the same piece of Python code or bytecode, will it run correctly? (Provided that they implement the same language version, and use modules strictly written in Python or standard modules.)

If so, is there a benchmark, or a place where I can compare performance across many interpreters? I've been playing for a while with CPython, and now I want to explore new interpreters.

And also a side question: what are the uses for the other implementations of Python? Skulpt I get it, browsers, but the rest? Is there a specific industry or application that requires a different interpreter (which)? | If so, is there a benchmark, or a place where I can compare performance across many interpreters?

speed.pypy.org compares PyPy to CPython.
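One concrete way to see that compiled bytecode, unlike source, is tied to a specific interpreter build: CPython stamps every .pyc file with a version-specific magic number, and other implementations (PyPy, Jython, IronPython) use different internal representations entirely. A small sketch:

```python
import importlib.util

# CPython's .pyc magic number changes between interpreter versions;
# a mismatch forces recompilation from source.
magic = importlib.util.MAGIC_NUMBER
assert isinstance(magic, bytes)
assert len(magic) == 4
print(magic)
```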
How can i ignore the characters between brackets? Example:The hardcoded input in the system:Welcome to work {sarah} have a great {monday}! The one i get from an api call might differ by the day of the week or the name example:Welcome to work Roy have a great Tuesday!I want to compare these 2 lines and give an error if anything but the terms in brackets doesn't match.The way I started is by using assert which is the exact function I need then tested with ignoring a sentence if it starts with { by using .startswith() but I haven't been successful working my way in specifics between the brackets that I don't want them checked. | Regular expressions are good for matching text.Convert your template into a regular expression, using a regular expression to match the {} tags:>>> import re>>> template = 'Welcome to work {sarah} have a great {monday}!'>>> pattern = re.sub('{[^}]*}', '(.*)', template)>>> pattern'Welcome to work (.*) have a great (.*)!'To make sure the matching halts at the end of the pattern, put a $:>>> pattern += '$'Then match your string against the pattern:>>> match = re.match(pattern, 'Welcome to work Roy have a great Tuesday!')>>> match.groups()('Roy', 'Tuesday')If you try matching a non-matching string you get nothing:>>> match = re.match(pattern, 'I wandered lonely as a cloud')>>> match is NoneTrueIf the start of the string matches but the end doesn't, the $ makes sure it doesn't match. The $ says "end here":>>> match = re.match(pattern, 'Welcome to work Roy have a great one! <ignored>')>>> match is NoneTrueedit: also you might want to escape your input in case anyone's playing silly beggars. |
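Following the answer's own note about escaping: if the template may contain regex metacharacters outside the braces, one sketch is to escape the literal text first and then turn each escaped {...} tag back into a capture group (this relies on re.escape backslash-escaping the braces, which it does):

```python
import re

template = 'Welcome to work {sarah} have a great {monday}!'

# Escape the literal text so metacharacters match themselves, then
# replace each escaped \{...\} tag with a capture group:
pattern = re.sub(r'\\\{[^{}]*\\\}', '(.*)', re.escape(template)) + '$'

match = re.match(pattern, 'Welcome to work Roy have a great Tuesday!')
assert match.groups() == ('Roy', 'Tuesday')
assert re.match(pattern, 'I wandered lonely as a cloud') is None
```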
How do I retrieve all the elements from an xml to a pandas DataFrame? PYTHON this is my fist time asking a question, thanks in advance!So, I am trying to process hundreds of XML files which have a very particular format (python script, xml outputs, pandas output, and original XML below)I was able to capture a specific part of the XML,by stripping the CDATA tag, which is awesome, but now I need to pass the “detalles” items in the extracted XML to a pandas data frame. I have tried many approaches and checked on different questions, and I still get a NoneType, with empty cells.Any ideas on how I can go about capturing All the data? Thanks!This is my code:from lxml import etreeimport pandas as pdroot= etree.parse(r'Factura 2.xml')root2 = etree.XML(etree.tostring(root)) invoice = root2[3]print(invoice.text)#i see here under the tag deatlles , taht the invoice has items in itnew_xml = invoice.textnew_xml= new_xml.encode()#forces to encode XML to default encodingroott=etree.XML(new_xml)#pass data to pandas dataframedata = []cols = []for i, child in enumerate(roott): data.append([subchild.text for subchild in child]) cols.append(child.tag)df = pd.DataFrame(data).T # Write in DF and transpose itdf.columns = cols # Update column namesdf.to_excel('table.xlsx')print(df)This is the output:<?xml version="1.0" encoding="UTF-8" standalone="no"?><factura id="comprobante" version="2.1.0"> <infoTributaria> <ambiente>2</ambiente> <tipoEmision>1</tipoEmision> <razonSocial>INVERNEG S.A.</razonSocial> <ruc>0990658498001</ruc> <claveAcceso>2201202101099065849800120030120000802950008029516</claveAcceso> <codDoc>01</codDoc> <estab>003</estab> <ptoEmi>012</ptoEmi> <secuencial>000080295</secuencial> <dirMatriz>AV. DE LAS AMERICAS 807 Y CALLE SEGUNDA</dirMatriz> </infoTributaria> <infoFactura> <fechaEmision>22/01/2021</fechaEmision> <dirEstablecimiento>AV. 
10 DE AGOSTO # 132 Y DE LOS CEREZOS</dirEstablecimiento> <contribuyenteEspecial>136</contribuyenteEspecial> <obligadoContabilidad>SI</obligadoContabilidad> <tipoIdentificacionComprador>04</tipoIdentificacionComprador> <razonSocialComprador>SANTOS ANDINO JOSE RODRIGO</razonSocialComprador> <identificacionComprador>1704484185001</identificacionComprador> <direccionComprador>AV. MARISCAL SUCRE S8-493 Y JOSE MENDOZA</direccionComprador> <totalSinImpuestos>84.15</totalSinImpuestos> <totalDescuento>0</totalDescuento> <totalConImpuestos> <totalImpuesto> <codigo>2</codigo> <codigoPorcentaje>2</codigoPorcentaje> <descuentoAdicional>0.00</descuentoAdicional> <baseImponible>84.15</baseImponible> <tarifa>12.00</tarifa> <valor>10.10</valor> </totalImpuesto> </totalConImpuestos> <propina>0.00</propina> <importeTotal>94.25</importeTotal> <moneda>DOLAR</moneda> <pagos> <pago> <formaPago>20</formaPago> <total>94.25</total> <plazo>30</plazo> <unidadTiempo>Dias</unidadTiempo> </pago> </pagos> </infoFactura> <detalles> <detalle> <codigoPrincipal>SH6607-XPL</codigoPrincipal> <descripcion>20K KM SYNTHETIC LF PH2876 Ford Mazda.</descripcion> <cantidad>34.00</cantidad> <precioUnitario>2.4750</precioUnitario> <descuento>0.00</descuento> <precioTotalSinImpuesto>84.15</precioTotalSinImpuesto> <impuestos> <impuesto> <codigo>2</codigo> <codigoPorcentaje>2</codigoPorcentaje> <tarifa>12.00</tarifa> <baseImponible>84.15</baseImponible> <valor>10.10</valor> </impuesto> </impuestos> </detalle> </detalles> <infoAdicional> <campoAdicional nombre="emailCliente">motozone25@gmail.com</campoAdicional> <campoAdicional nombre="OrdenCompra">NN</campoAdicional> <campoAdicional nombre="CodSociedad">Dynamics</campoAdicional> <campoAdicional nombre="CodInternoSAP">ivn</campoAdicional> </infoAdicional><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:etsi="http://uri.etsi.org/01903/v1.3.2#" Id="Signature260661"><ds:SignedInfo Id="Signature-SignedInfo959896"><ds:CanonicalizationMethod 
Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference Id="SignedPropertiesID72112" Type="http://uri.etsi.org/01903#SignedProperties" URI="#Signature260661-SignedProperties267295"><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>2XMqZGYiZj19+ASI+0cw/ZpY32U=</ds:DigestValue></ds:Reference><ds:Reference URI="#Certificate1621720"><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>7ypTHELgqKKlA36P9wVJ8tB+LLY=</ds:DigestValue></ds:Reference><ds:Reference Id="Reference-ID-24390" URI="#comprobante"><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>IMXyzVuehrGVc8DIwS/O7z+yiEs=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue Id="SignatureValue783577">e7j105IptFmzcpMNYsbkrByTaMnV1XPmnyJ69dXCCfifwooEKpHxiNDlimPYcfQV7ZpLJ4V5g4H1CtobnB9U/GgKQF1uQz6uCFzyvyxp3P6TBg5iJ2Tbv4txCWu7OZBlQmbeilqVOkV15KAbVPZlqdxJXkJPvqRxxOVoPzidRbGWCZR9Q19lNdNEV8yHz4AtkpMWl3JtRi1k7n4aRPFDl1PC7nhcLWIuDXuDsImgREjbGzY+PBoBw48DlSyXM/eABSBtwuZESSmYcC9k8ZWmD59VuFUqz1bFgb4LxhMTWjzNlLf2CmbplKnqGZfaE6Kp+h0tJi6+7PFdlhm+dGNaCg==</ds:SignatureValue><ds:KeyInfo 
Id="Certificate1621720"><ds:X509Data><ds:X509Certificate>MIIKFzCCB/+gAwIBAgIEW2EYeTANBgkqhkiG9w0BAQsFADCBoTELMAkGA1UEBhMCRUMxIjAgBgNVBAoTGUJBTkNPIENFTlRSQUwgREVMIEVDVUFET1IxNzA1BgNVBAsTLkVOVElEQUQgREUgQ0VSVElGSUNBQ0lPTiBERSBJTkZPUk1BQ0lPTi1FQ0lCQ0UxDjAMBgNVBAcTBVFVSVRPMSUwIwYDVQQDExxBQyBCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMB4XDTE5MDkyMzEzNDkyNFoXDTIxMDkyMzE0MTkyNFowgbYxCzAJBgNVBAYTAkVDMSIwIAYDVQQKExlCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMTcwNQYDVQQLEy5FTlRJREFEIERFIENFUlRJRklDQUNJT04gREUgSU5GT1JNQUNJT04tRUNJQkNFMQ4wDAYDVQQHEwVRVUlUTzE6MBEGA1UEBRMKMDAwMDA5MTg5NjAlBgNVBAMTHkpVU1RPIEVOUklRVUUgR09OWkFMRVogQUxNRUlEQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMqyjftcUeinEL0FI3Sc2um7sr2dhfuSdOhPgZQPwr64/0Du9Uo475cqS4oD3QyW8m4HK2OFsqiJr7Pp9DmqyuONYX6nWrq8sdlRKoSnML2MDGgmHvEjBc/ezq/04vpcT7i8BWXQoe2Pk3mI2mPIWoimv91eg5euPYvvUX+cF44kNE3KSBGsPezFyOB1FglFQxG2wlm8s6Yl4TJqvjo6NpTErpesugXntNEq2tlPoOtPt4cw9BQ/buh7oDIP1HA0ilTioXkNqPY88elJYheuKJI/lPWM2+fBeXNVjTV7SIqJrPM5R+A8GLlWDYgevOle5W9pwLvbx3r1+KYToqoSsx8CAwEAAaOCBT4wggU6MAsGA1UdDwQEAwIHgDBnBgNVHSAEYDBeMFwGCysGAQQBgqg7AgIBME0wSwYIKwYBBQUHAgEWP2h0dHA6Ly93d3cuZWNpLmJjZS5lYy9wb2xpdGljYS1jZXJ0aWZpY2Fkby9wZXJzb25hLWp1cmlkaWNhLnBkZjCBkQYIKwYBBQUHAQEEgYQwgYEwPgYIKwYBBQUHMAGGMmh0dHA6Ly9vY3NwLmVjaS5iY2UuZWMvZWpiY2EvcHVibGljd2ViL3N0YXR1cy9vY3NwMD8GCCsGAQUFBzABhjNodHRwOi8vb2NzcDEuZWNpLmJjZS5lYy9lamJjYS9wdWJsaWN3ZWIvc3RhdHVzL29jc3AwGwYKKwYBBAGCqDsDCgQNEwtJTlZFUk5FRyBTQTAdBgorBgEEAYKoOwMLBA8TDTA5OTA2NTg0OTgwMDEwGgYKKwYBBAGCqDsDAQQMEwowOTA1MzMxMTc5MB0GCisGAQQBgqg7AwIEDxMNSlVTVE8gRU5SSVFVRTAYBgorBgEEAYKoOwMDBAoTCEdPTlpBTEVaMBcGCisGAQQBgqg7AwQECRMHQUxNRUlEQTAaBgorBgEEAYKoOwMFBAwTClBSRVNJREVOVEUwOgYKKwYBBAGCqDsDBwQsEypBViBERSBMQVMgIEFNRVJJQ0FTICAgODA3IFkgQ0FMTEUgIFNFR1VOREEwGQYKKwYBBAGCqDsDCAQLEwkwNDI2OTA4MDAwGQYKKwYBBAGCqDsDCQQLEwlHdWF5YXF1aWwwFwYKKwYBBAGCqDsDDAQJEwdFQ1VBRE9SMB0GCisGAQQBgqg7AzIEDxMNMDk5MDY1ODQ5ODAwMTAgBgorBgEEAYKoOwMzBBITEFNPRlRXQVJFLUFSQ0hJVk8wJgYDVR0RBB8wHYEbanVzdG8uZ29uemFsZXpAaW52ZXJuZWcuY29tMIIB3wYDVR0fBIIB1jCCAdIwggHOoIIByqCCAcaGgdV
sZGFwOi8vYmNlcWxkYXBzdWJwMS5iY2UuZWMvY249Q1JMODQyLGNuPUFDJTIwQkFOQ08lMjBDRU5UUkFMJTIwREVMJTIwRUNVQURPUixsPVFVSVRPLG91PUVOVElEQUQlMjBERSUyMENFUlRJRklDQUNJT04lMjBERSUyMElORk9STUFDSU9OLUVDSUJDRSxvPUJBTkNPJTIwQ0VOVFJBTCUyMERFTCUyMEVDVUFET1IsYz1FQz9jZXJ0aWZpY2F0ZVJldm9jYXRpb25MaXN0P2Jhc2WGNGh0dHA6Ly93d3cuZWNpLmJjZS5lYy9DUkwvZWNpX2JjZV9lY19jcmxmaWxlY29tYi5jcmykgbUwgbIxCzAJBgNVBAYTAkVDMSIwIAYDVQQKExlCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMTcwNQYDVQQLEy5FTlRJREFEIERFIENFUlRJRklDQUNJT04gREUgSU5GT1JNQUNJT04tRUNJQkNFMQ4wDAYDVQQHEwVRVUlUTzElMCMGA1UEAxMcQUMgQkFOQ08gQ0VOVFJBTCBERUwgRUNVQURPUjEPMA0GA1UEAxMGQ1JMODQyMCsGA1UdEAQkMCKADzIwMTkwOTIzMTM0OTI0WoEPMjAyMTA5MjMxNDE5MjRaMB8GA1UdIwQYMBaAFEii3yMfHfgsUXqMA81JMqUJwZSrMB0GA1UdDgQWBBRo1dATwJqFbr5m83H1ebKUyJ296jAJBgNVHRMEAjAAMBkGCSqGSIb2fQdBAAQMMAobBFY4LjEDAgSwMA0GCSqGSIb3DQEBCwUAA4ICAQA5UfSwQYbsGAz9Ygq6AoBVFBvzrbG/ebqTM7DCnPh9C6vNEgZ2LqfWENb05h0AdP+6lhVz6RXBhMKnoh9bfJkTDbBj6SOxOQkiVueUgrTHJOm45sTW2Rd6Sv/My7wleKR6muSWGOSILXvp3zxPjHklMfTRMsAYDpzRY0OhpOzKreJWXeI/BxAxrPW/D18BjwojKjeuSsNd8PSMmye8ACJtZ05C6cZcljtM0Fu3YGRCW5rLR2U79OtKq7FFSGyPwXdzK5b4E0WgbHcEMmkYh7n0IWxdhOyzfHdMGE+5NHef07/EWRKgyadtw6/TR4bcoXBBPyysvzmySx0iiAw0OGhLl86vxAC24Tj705j2LMYbIPrzUUuYQEpJ+FCwF6/n/DxYgjwURCIEq6GSSWRAdOXUVWgHoNfGRQ9I6K8BsFsI6PKHZZ56SCwq/RvWwlpe2r622IunN9QiMgMt1WeYmVK4EzSiOh6Vr2tUyyYE3F9n2s+FEwXCtV/lacFyRyoPopQo53Sj+BjanHAZ6NmMAqpDUgc8ZhSCXQ3U0OZlZGtxAjxtkoSPbdqMj+YeL4Ummo+I85Cjcw4sBYSLjaPeQzEp5xYVI9Zxec4j3w6dygHv3z4rzjX7Hp5FT3gorEGgoB7Chwf6/O/tOhNJIHnbaAIJCeOVwlLyRJUAxRKagkcA8w==</ds:X509Certificate></ds:X509Data><ds:KeyValue><ds:RSAKeyValue><ds:Modulus>yrKN+1xR6KcQvQUjdJza6buyvZ2F+5J06E+BlA/Cvrj/QO71SjjvlypLigPdDJbybgcrY4WyqImvs+n0OarK441hfqdauryx2VEqhKcwvYwMaCYe8SMFz97Or/Ti+lxPuLwFZdCh7Y+TeYjaY8haiKa/3V6Dl649i+9Rf5wXjiQ0TcpIEaw97MXI4HUWCUVDEbbCWbyzpiXhMmq+Ojo2lMSul6y6Bee00Sra2U+g60+3hzD0FD9u6HugMg/UcDSKVOKheQ2o9jzx6UliF64okj+U9Yzb58F5c1WNNXtIioms8zlH4DwYuVYNiB686V7lb2nAu9vHevX4phOiqhKzHw==</ds:Modulus><ds:Exponent>AQAB</ds:Exponent></ds:RSAKeyValue></ds:KeyVa
lue></ds:KeyInfo><ds:Object Id="Signature260661-Object151833"><etsi:QualifyingProperties Target="#Signature260661"><etsi:SignedProperties Id="Signature260661-SignedProperties267295"><etsi:SignedSignatureProperties><etsi:SigningTime>2021-01-22T17:39:56-05:00</etsi:SigningTime><etsi:SigningCertificate><etsi:Cert><etsi:CertDigest><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>m1xycXw93GXwQi44F/n8Fr4r2TM=</ds:DigestValue></etsi:CertDigest><etsi:IssuerSerial><ds:X509IssuerName>CN=AC BANCO CENTRAL DEL ECUADOR,L=QUITO,OU=ENTIDAD DE CERTIFICACION DE INFORMACION-ECIBCE,O=BANCO CENTRAL DEL ECUADOR,C=EC</ds:X509IssuerName><ds:X509SerialNumber>1533089913</ds:X509SerialNumber></etsi:IssuerSerial></etsi:Cert></etsi:SigningCertificate></etsi:SignedSignatureProperties><etsi:SignedDataObjectProperties><etsi:DataObjectFormat ObjectReference="#Reference-ID-24390"><etsi:Description>comprobante</etsi:Description><etsi:MimeType>text/xml</etsi:MimeType></etsi:DataObjectFormat></etsi:SignedDataObjectProperties></etsi:SignedProperties></etsi:QualifyingProperties></ds:Object></ds:Signature></factura> infoTributaria ... {http://www.w3.org/2000/09/xmldsig#}Signature0 2 ... \n1 1 ... \ne7j105IptFmzcpMNYsbkrByTaMnV1XPmnyJ69dXCCfif...2 INVERNEG S.A. ... \n3 0990658498001 ... None4 2201202101099065849800120030120000802950008029516 ... None5 01 ... None6 003 ... None7 012 ... None8 000080295 ... None9 AV. DE LAS AMERICAS 807 Y CALLE SEGUNDA ... None10 None ... None11 None ... None12 None ... None13 None ... None14 None ... 
None[15 rows x 5 columns]>>> ORIGINAL XML<autorizacion><estado>PENDIENTE</estado><numeroAutorizacion>2201202101099065849800120030120000802950008029516</numeroAutorizacion><ambiente>PRODUCCIÓN</ambiente><comprobante><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?><factura id="comprobante" version="2.1.0"> <infoTributaria> <ambiente>2</ambiente> <tipoEmision>1</tipoEmision> <razonSocial>INVERNEG S.A.</razonSocial> <ruc>0990658498001</ruc> <claveAcceso>2201202101099065849800120030120000802950008029516</claveAcceso> <codDoc>01</codDoc> <estab>003</estab> <ptoEmi>012</ptoEmi> <secuencial>000080295</secuencial> <dirMatriz>AV. DE LAS AMERICAS 807 Y CALLE SEGUNDA</dirMatriz> </infoTributaria> <infoFactura> <fechaEmision>22/01/2021</fechaEmision> <dirEstablecimiento>AV. 10 DE AGOSTO # 132 Y DE LOS CEREZOS</dirEstablecimiento> <contribuyenteEspecial>136</contribuyenteEspecial> <obligadoContabilidad>SI</obligadoContabilidad> <tipoIdentificacionComprador>04</tipoIdentificacionComprador> <razonSocialComprador>SANTOS ANDINO JOSE RODRIGO</razonSocialComprador> <identificacionComprador>1704484185001</identificacionComprador> <direccionComprador>AV. 
MARISCAL SUCRE S8-493 Y JOSE MENDOZA</direccionComprador> <totalSinImpuestos>84.15</totalSinImpuestos> <totalDescuento>0</totalDescuento> <totalConImpuestos> <totalImpuesto> <codigo>2</codigo> <codigoPorcentaje>2</codigoPorcentaje> <descuentoAdicional>0.00</descuentoAdicional> <baseImponible>84.15</baseImponible> <tarifa>12.00</tarifa> <valor>10.10</valor> </totalImpuesto> </totalConImpuestos> <propina>0.00</propina> <importeTotal>94.25</importeTotal> <moneda>DOLAR</moneda> <pagos> <pago> <formaPago>20</formaPago> <total>94.25</total> <plazo>30</plazo> <unidadTiempo>Dias</unidadTiempo> </pago> </pagos> </infoFactura> <detalles> <detalle> <codigoPrincipal>SH6607-XPL</codigoPrincipal> <descripcion>20K KM SYNTHETIC LF PH2876 Ford Mazda.</descripcion> <cantidad>34.00</cantidad> <precioUnitario>2.4750</precioUnitario> <descuento>0.00</descuento> <precioTotalSinImpuesto>84.15</precioTotalSinImpuesto> <impuestos> <impuesto> <codigo>2</codigo> <codigoPorcentaje>2</codigoPorcentaje> <tarifa>12.00</tarifa> <baseImponible>84.15</baseImponible> <valor>10.10</valor> </impuesto> </impuestos> </detalle> </detalles> <infoAdicional> <campoAdicional nombre="emailCliente">motozone25@gmail.com</campoAdicional> <campoAdicional nombre="OrdenCompra">NN</campoAdicional> <campoAdicional nombre="CodSociedad">Dynamics</campoAdicional> <campoAdicional nombre="CodInternoSAP">ivn</campoAdicional> </infoAdicional><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:etsi="http://uri.etsi.org/01903/v1.3.2#" Id="Signature260661"><ds:SignedInfo Id="Signature-SignedInfo959896"><ds:CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference Id="SignedPropertiesID72112" Type="http://uri.etsi.org/01903#SignedProperties" URI="#Signature260661-SignedProperties267295"><ds:DigestMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>2XMqZGYiZj19+ASI+0cw/ZpY32U=</ds:DigestValue></ds:Reference><ds:Reference URI="#Certificate1621720"><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>7ypTHELgqKKlA36P9wVJ8tB+LLY=</ds:DigestValue></ds:Reference><ds:Reference Id="Reference-ID-24390" URI="#comprobante"><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>IMXyzVuehrGVc8DIwS/O7z+yiEs=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue Id="SignatureValue783577">e7j105IptFmzcpMNYsbkrByTaMnV1XPmnyJ69dXCCfifwooEKpHxiNDlimPYcfQV7ZpLJ4V5g4H1CtobnB9U/GgKQF1uQz6uCFzyvyxp3P6TBg5iJ2Tbv4txCWu7OZBlQmbeilqVOkV15KAbVPZlqdxJXkJPvqRxxOVoPzidRbGWCZR9Q19lNdNEV8yHz4AtkpMWl3JtRi1k7n4aRPFDl1PC7nhcLWIuDXuDsImgREjbGzY+PBoBw48DlSyXM/eABSBtwuZESSmYcC9k8ZWmD59VuFUqz1bFgb4LxhMTWjzNlLf2CmbplKnqGZfaE6Kp+h0tJi6+7PFdlhm+dGNaCg==</ds:SignatureValue><ds:KeyInfo 
Id="Certificate1621720"><ds:X509Data><ds:X509Certificate>MIIKFzCCB/+gAwIBAgIEW2EYeTANBgkqhkiG9w0BAQsFADCBoTELMAkGA1UEBhMCRUMxIjAgBgNVBAoTGUJBTkNPIENFTlRSQUwgREVMIEVDVUFET1IxNzA1BgNVBAsTLkVOVElEQUQgREUgQ0VSVElGSUNBQ0lPTiBERSBJTkZPUk1BQ0lPTi1FQ0lCQ0UxDjAMBgNVBAcTBVFVSVRPMSUwIwYDVQQDExxBQyBCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMB4XDTE5MDkyMzEzNDkyNFoXDTIxMDkyMzE0MTkyNFowgbYxCzAJBgNVBAYTAkVDMSIwIAYDVQQKExlCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMTcwNQYDVQQLEy5FTlRJREFEIERFIENFUlRJRklDQUNJT04gREUgSU5GT1JNQUNJT04tRUNJQkNFMQ4wDAYDVQQHEwVRVUlUTzE6MBEGA1UEBRMKMDAwMDA5MTg5NjAlBgNVBAMTHkpVU1RPIEVOUklRVUUgR09OWkFMRVogQUxNRUlEQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMqyjftcUeinEL0FI3Sc2um7sr2dhfuSdOhPgZQPwr64/0Du9Uo475cqS4oD3QyW8m4HK2OFsqiJr7Pp9DmqyuONYX6nWrq8sdlRKoSnML2MDGgmHvEjBc/ezq/04vpcT7i8BWXQoe2Pk3mI2mPIWoimv91eg5euPYvvUX+cF44kNE3KSBGsPezFyOB1FglFQxG2wlm8s6Yl4TJqvjo6NpTErpesugXntNEq2tlPoOtPt4cw9BQ/buh7oDIP1HA0ilTioXkNqPY88elJYheuKJI/lPWM2+fBeXNVjTV7SIqJrPM5R+A8GLlWDYgevOle5W9pwLvbx3r1+KYToqoSsx8CAwEAAaOCBT4wggU6MAsGA1UdDwQEAwIHgDBnBgNVHSAEYDBeMFwGCysGAQQBgqg7AgIBME0wSwYIKwYBBQUHAgEWP2h0dHA6Ly93d3cuZWNpLmJjZS5lYy9wb2xpdGljYS1jZXJ0aWZpY2Fkby9wZXJzb25hLWp1cmlkaWNhLnBkZjCBkQYIKwYBBQUHAQEEgYQwgYEwPgYIKwYBBQUHMAGGMmh0dHA6Ly9vY3NwLmVjaS5iY2UuZWMvZWpiY2EvcHVibGljd2ViL3N0YXR1cy9vY3NwMD8GCCsGAQUFBzABhjNodHRwOi8vb2NzcDEuZWNpLmJjZS5lYy9lamJjYS9wdWJsaWN3ZWIvc3RhdHVzL29jc3AwGwYKKwYBBAGCqDsDCgQNEwtJTlZFUk5FRyBTQTAdBgorBgEEAYKoOwMLBA8TDTA5OTA2NTg0OTgwMDEwGgYKKwYBBAGCqDsDAQQMEwowOTA1MzMxMTc5MB0GCisGAQQBgqg7AwIEDxMNSlVTVE8gRU5SSVFVRTAYBgorBgEEAYKoOwMDBAoTCEdPTlpBTEVaMBcGCisGAQQBgqg7AwQECRMHQUxNRUlEQTAaBgorBgEEAYKoOwMFBAwTClBSRVNJREVOVEUwOgYKKwYBBAGCqDsDBwQsEypBViBERSBMQVMgIEFNRVJJQ0FTICAgODA3IFkgQ0FMTEUgIFNFR1VOREEwGQYKKwYBBAGCqDsDCAQLEwkwNDI2OTA4MDAwGQYKKwYBBAGCqDsDCQQLEwlHdWF5YXF1aWwwFwYKKwYBBAGCqDsDDAQJEwdFQ1VBRE9SMB0GCisGAQQBgqg7AzIEDxMNMDk5MDY1ODQ5ODAwMTAgBgorBgEEAYKoOwMzBBITEFNPRlRXQVJFLUFSQ0hJVk8wJgYDVR0RBB8wHYEbanVzdG8uZ29uemFsZXpAaW52ZXJuZWcuY29tMIIB3wYDVR0fBIIB1jCCAdIwggHOoIIByqCCAcaGgdV
sZGFwOi8vYmNlcWxkYXBzdWJwMS5iY2UuZWMvY249Q1JMODQyLGNuPUFDJTIwQkFOQ08lMjBDRU5UUkFMJTIwREVMJTIwRUNVQURPUixsPVFVSVRPLG91PUVOVElEQUQlMjBERSUyMENFUlRJRklDQUNJT04lMjBERSUyMElORk9STUFDSU9OLUVDSUJDRSxvPUJBTkNPJTIwQ0VOVFJBTCUyMERFTCUyMEVDVUFET1IsYz1FQz9jZXJ0aWZpY2F0ZVJldm9jYXRpb25MaXN0P2Jhc2WGNGh0dHA6Ly93d3cuZWNpLmJjZS5lYy9DUkwvZWNpX2JjZV9lY19jcmxmaWxlY29tYi5jcmykgbUwgbIxCzAJBgNVBAYTAkVDMSIwIAYDVQQKExlCQU5DTyBDRU5UUkFMIERFTCBFQ1VBRE9SMTcwNQYDVQQLEy5FTlRJREFEIERFIENFUlRJRklDQUNJT04gREUgSU5GT1JNQUNJT04tRUNJQkNFMQ4wDAYDVQQHEwVRVUlUTzElMCMGA1UEAxMcQUMgQkFOQ08gQ0VOVFJBTCBERUwgRUNVQURPUjEPMA0GA1UEAxMGQ1JMODQyMCsGA1UdEAQkMCKADzIwMTkwOTIzMTM0OTI0WoEPMjAyMTA5MjMxNDE5MjRaMB8GA1UdIwQYMBaAFEii3yMfHfgsUXqMA81JMqUJwZSrMB0GA1UdDgQWBBRo1dATwJqFbr5m83H1ebKUyJ296jAJBgNVHRMEAjAAMBkGCSqGSIb2fQdBAAQMMAobBFY4LjEDAgSwMA0GCSqGSIb3DQEBCwUAA4ICAQA5UfSwQYbsGAz9Ygq6AoBVFBvzrbG/ebqTM7DCnPh9C6vNEgZ2LqfWENb05h0AdP+6lhVz6RXBhMKnoh9bfJkTDbBj6SOxOQkiVueUgrTHJOm45sTW2Rd6Sv/My7wleKR6muSWGOSILXvp3zxPjHklMfTRMsAYDpzRY0OhpOzKreJWXeI/BxAxrPW/D18BjwojKjeuSsNd8PSMmye8ACJtZ05C6cZcljtM0Fu3YGRCW5rLR2U79OtKq7FFSGyPwXdzK5b4E0WgbHcEMmkYh7n0IWxdhOyzfHdMGE+5NHef07/EWRKgyadtw6/TR4bcoXBBPyysvzmySx0iiAw0OGhLl86vxAC24Tj705j2LMYbIPrzUUuYQEpJ+FCwF6/n/DxYgjwURCIEq6GSSWRAdOXUVWgHoNfGRQ9I6K8BsFsI6PKHZZ56SCwq/RvWwlpe2r622IunN9QiMgMt1WeYmVK4EzSiOh6Vr2tUyyYE3F9n2s+FEwXCtV/lacFyRyoPopQo53Sj+BjanHAZ6NmMAqpDUgc8ZhSCXQ3U0OZlZGtxAjxtkoSPbdqMj+YeL4Ummo+I85Cjcw4sBYSLjaPeQzEp5xYVI9Zxec4j3w6dygHv3z4rzjX7Hp5FT3gorEGgoB7Chwf6/O/tOhNJIHnbaAIJCeOVwlLyRJUAxRKagkcA8w==</ds:X509Certificate></ds:X509Data><ds:KeyValue><ds:RSAKeyValue><ds:Modulus>yrKN+1xR6KcQvQUjdJza6buyvZ2F+5J06E+BlA/Cvrj/QO71SjjvlypLigPdDJbybgcrY4WyqImvs+n0OarK441hfqdauryx2VEqhKcwvYwMaCYe8SMFz97Or/Ti+lxPuLwFZdCh7Y+TeYjaY8haiKa/3V6Dl649i+9Rf5wXjiQ0TcpIEaw97MXI4HUWCUVDEbbCWbyzpiXhMmq+Ojo2lMSul6y6Bee00Sra2U+g60+3hzD0FD9u6HugMg/UcDSKVOKheQ2o9jzx6UliF64okj+U9Yzb58F5c1WNNXtIioms8zlH4DwYuVYNiB686V7lb2nAu9vHevX4phOiqhKzHw==</ds:Modulus><ds:Exponent>AQAB</ds:Exponent></ds:RSAKeyValue></ds:KeyVa
lue></ds:KeyInfo><ds:Object Id="Signature260661-Object151833"><etsi:QualifyingProperties Target="#Signature260661"><etsi:SignedProperties Id="Signature260661-SignedProperties267295"><etsi:SignedSignatureProperties><etsi:SigningTime>2021-01-22T17:39:56-05:00</etsi:SigningTime><etsi:SigningCertificate><etsi:Cert><etsi:CertDigest><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>m1xycXw93GXwQi44F/n8Fr4r2TM=</ds:DigestValue></etsi:CertDigest><etsi:IssuerSerial><ds:X509IssuerName>CN=AC BANCO CENTRAL DEL ECUADOR,L=QUITO,OU=ENTIDAD DE CERTIFICACION DE INFORMACION-ECIBCE,O=BANCO CENTRAL DEL ECUADOR,C=EC</ds:X509IssuerName><ds:X509SerialNumber>1533089913</ds:X509SerialNumber></etsi:IssuerSerial></etsi:Cert></etsi:SigningCertificate></etsi:SignedSignatureProperties><etsi:SignedDataObjectProperties><etsi:DataObjectFormat ObjectReference="#Reference-ID-24390"><etsi:Description>comprobante</etsi:Description><etsi:MimeType>text/xml</etsi:MimeType></etsi:DataObjectFormat></etsi:SignedDataObjectProperties></etsi:SignedProperties></etsi:QualifyingProperties></ds:Object></ds:Signature></factura>]]></comprobante></autorizacion> | It is only partial solution.EDIT:I added version which uses recursion to get all children, subchildren, etc.So now it seems full solution.First: find('detalle') search direct child 'detalle' but not nested in other nodes - ie. in 'detalles'. You need xpath with .// to search in all nodes.detalle = tree.find('.//detalle')Second: subchild.text for subchild in child search text in direct children but it doesn't get direct text - ie. in <codigoPrincipal> - and it doesn't get text from nested child - ie. in <impuestos>.For <codigoPrincipal> you has to get directly child.text but for <impuestos>you would have to create nested loop or function with recursion. 
And this loop/recursion is not resolved in my solution yet.

Code:

from lxml import etree
import pandas as pd

tree = etree.parse('Factura 2.xml')
#root = tree.getroot()  # I don't need it. I can use directly `tree.find()`
invoice = tree.find('comprobante')  # instead of `root[3]`
tree = etree.XML(invoice.text.encode())

detalle = tree.find('.//detalle')  # need `.//` to get nested element

# pass data to pandas dataframe (PEP8: one space after `#`)
data = dict()

print('--- children ---')

for child in detalle:
    column = child.tag
    text = child.text.strip()
    data[column] = [text]
    print(column, '=', text)

print('--- DataFrame ---')

df = pd.DataFrame(data).T  # Write in DF and transpose it (PEP8: two spaces before `#`)
#df.to_excel('table.xlsx')
print(df)

Result:

--- children ---
codigoPrincipal = SH6607-XPL
descripcion = 20K KM SYNTHETIC LF PH2876 Ford Mazda.
cantidad = 34.00
precioUnitario = 2.4750
descuento = 0.00
precioTotalSinImpuesto = 84.15
impuestos =

--- DataFrame ---
                                                     0
codigoPrincipal                             SH6607-XPL
descripcion     20K KM SYNTHETIC LF PH2876 Ford Mazda.
cantidad                                         34.00
precioUnitario                                  2.4750
descuento                                         0.00
precioTotalSinImpuesto                           84.15
impuestos

It didn't check nested nodes in impuestos.

EDIT: Version which uses recursion to get all children and subchildren and subsubchildren, etc.

from lxml import etree
import pandas as pd

# --- functions ---

def get_all_children(node):
    data = dict()
    for child in node:
        tag = child.tag
        text = child.text
        if text:
            text = text.strip()
        if text:
            data[tag] = [text]
        data.update(get_all_children(child))
    return data

# --- main ---

tree = etree.parse('Factura 2.xml')
#root = tree.getroot()
invoice = tree.find('comprobante')
tree = etree.XML(invoice.text.encode())

detalle = tree.find('.//detalle')  # need `.//` to get nested element

print('--- children ---')

# data = get_all_children(tree)
data = get_all_children(detalle)

for key, value in data.items():
    print(key, '=', value)

print('--- DataFrame ---')

df = pd.DataFrame(data).T
print(df)

Result for get_all_children(detalle):

--- children ---
codigoPrincipal = ['SH6607-XPL']
descripcion = ['20K KM SYNTHETIC LF PH2876 Ford Mazda.']
cantidad = ['34.00']
precioUnitario = ['2.4750']
descuento = ['0.00']
precioTotalSinImpuesto = ['84.15']
codigo = ['2']
codigoPorcentaje = ['2']
tarifa = ['12.00']
baseImponible = ['84.15']
valor = ['10.10']

--- DataFrame ---
                                                     0
codigoPrincipal                             SH6607-XPL
descripcion     20K KM SYNTHETIC LF PH2876 Ford Mazda.
cantidad                                         34.00
precioUnitario                                  2.4750
descuento                                         0.00
precioTotalSinImpuesto                           84.15
codigo                                               2
codigoPorcentaje                                     2
tarifa                                           12.00
baseImponible                                    84.15
valor                                            10.10

And result for get_all_children(tree):

--- DataFrame ---
                                                                          0
ambiente                                                                  2
tipoEmision                                                               1
razonSocial                                                   INVERNEG S.A.
ruc                                                           0990658498001
claveAcceso                       2201202101099065849800120030120000802950008029516
codDoc                                                                   01
estab                                                                   003
ptoEmi                                                                  012
secuencial                                                        000080295
dirMatriz                           AV. DE LAS AMERICAS 807 Y CALLE SEGUNDA
fechaEmision                                                     22/01/2021
dirEstablecimiento                  AV. 10 DE AGOSTO # 132 Y DE LOS CEREZOS
contribuyenteEspecial                                                   136
obligadoContabilidad                                                     SI
tipoIdentificacionComprador                                              04
razonSocialComprador                             SANTOS ANDINO JOSE RODRIGO
identificacionComprador                                       1704484185001
direccionComprador                 AV. MARISCAL SUCRE S8-493 Y JOSE MENDOZA
totalSinImpuestos                                                     84.15
totalDescuento                                                            0
codigo                                                                    2
codigoPorcentaje                                                          2
descuentoAdicional                                                     0.00
baseImponible                                                         84.15
tarifa                                                                12.00
valor                                                                 10.10
propina                                                                0.00
importeTotal                                                          94.25
moneda                                                                DOLAR
formaPago                                                                20
total                                                                 94.25
plazo                                                                    30
unidadTiempo                                                           Dias
codigoPrincipal                                                  SH6607-XPL
descripcion                          20K KM SYNTHETIC LF PH2876 Ford Mazda.
cantidad                                                              34.00
precioUnitario                                                       2.4750
descuento                                                              0.00
precioTotalSinImpuesto                                                84.15
campoAdicional                                                          ivn
{http://www.w3.org/2000/09/xmldsig#}DigestValue        m1xycXw93GXwQi44F/n8Fr4r2TM=
{http://www.w3.org/2000/09/xmldsig#}SignatureValue     e7j105IptFmzcpMNYsbkrByTaMnV1XPmnyJ69dXCCfifwo...
{http://www.w3.org/2000/09/xmldsig#}X509Certifi...     MIIKFzCCB/+gAwIBAgIEW2EYeTANBgkqhkiG9w0BAQsFAD...
{http://www.w3.org/2000/09/xmldsig#}Modulus            yrKN+1xR6KcQvQUjdJza6buyvZ2F+5J06E+BlA/Cvrj/QO...
{http://www.w3.org/2000/09/xmldsig#}Exponent           AQAB
{http://uri.etsi.org/01903/v1.3.2#}SigningTime         2021-01-22T17:39:56-05:00
{http://www.w3.org/2000/09/xmldsig#}X509IssuerName     CN=AC BANCO CENTRAL DEL ECUADOR,L=QUITO,OU=ENT...
{http://www.w3.org/2000/09/xmldsig#}X509SerialN...     1533089913
{http://uri.etsi.org/01903/v1.3.2#}Description         comprobante
{http://uri.etsi.org/01903/v1.3.2#}MimeType            text/xml
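The CDATA-unwrapping plus .//detalle lookup shown above can also be reproduced with only the standard library, which is handy when lxml isn't available. A minimal, self-contained sketch using a toy XML in place of the full invoice (the list of dicts it builds is exactly what pd.DataFrame(rows) would accept):

```python
import xml.etree.ElementTree as ET

# toy stand-in for the real 'Factura 2.xml' (not the full invoice)
outer = ('<autorizacion><estado>PENDIENTE</estado>'
         '<comprobante><![CDATA[<factura><detalles><detalle>'
         '<codigoPrincipal>SH6607-XPL</codigoPrincipal>'
         '<cantidad>34.00</cantidad>'
         '</detalle></detalles></factura>]]></comprobante></autorizacion>')

# the CDATA body is just the text of <comprobante>; parse it a second time
inner = ET.fromstring(ET.fromstring(outer).find('comprobante').text)

# one dict per <detalle>, ready for pd.DataFrame(rows)
rows = [{child.tag: child.text for child in det}
        for det in inner.iterfind('.//detalle')]
print(rows)  # [{'codigoPrincipal': 'SH6607-XPL', 'cantidad': '34.00'}]
```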
Plotly: How to display different color/line segments on a line chart for specified condition? I'm trying to plot a line chart that differentiates the line color and the line itself (i.e. using a dash or dotted line) based on a specific condition. The figure isn't showing the lines/change in line color, but I could see the change in the color of data points on the plot when I hovered over the chart and also in the legend.

fig = px.line(df2, x=df2['Time_Stamp'], y=df2['Blob_1_Prob_48H'])
count = 0
for j, y in enumerate(df2.Blob_1_Prob_48H):
    if count == 0:
        count += 1
        fig.add_traces(go.Scatter(
            x = [df2['Time_Stamp'][j]],
            y = [df2['Blob_1_Prob_48H'][j]],
            mode = "lines+markers",
            name = "48H",
            text = df2.Name,
            hovertemplate="Prediction Percentage: %{y}<br>Date &Time: %{x}<br>Name: %{text}<extra></extra>"))
    else:
        if df2['Blob_1_Prob_48H'][j] < df2['Blob_1_Prob_48H'][j-1]:
            fig.add_traces(go.Scatter(
                x = [df2['Time_Stamp'][j]],
                y = [df2['Blob_1_Prob_48H'][j]],
                mode = 'lines',
                name = "48H",
                line = dict(color='red', width=5, dash='dot'),
                text = df2.Name,
                hovertemplate="Prediction Percentage: %{y}<br>Date &Time: %{x}<br>Name: %{text}<extra></extra>"))

fig.update_layout(title="ALEX(S)", xaxis_title='Date & time', yaxis_title='Blob percentage',
                  yaxis_range=[0,105], showlegend=True, width=1200, height=600)
fig.show()

(image) | There are certainly better ways to do this, but here is one code where I separate a list into many increasing and decreasing curves and then plot them with different styles:

import plotly.graph_objects as go
import random

xs = [i for i in range(100)]
ys = [random.randint(1,30) for i in range(100)]

# Temporary lists to store each consecutive increasing or decreasing values
x_inc = []
y_inc = []
x_dec = []
y_dec = []

# Lists to gather all increasing and decreasing curves
xs_inc = []
ys_inc = []
xs_dec = []
ys_dec = []

# Indicate if it is the first point of new list
first_inc = True
first_dec = True

# Looping over original values
for i in range(len(ys)-1):
    if ys[i+1] < ys[i]:
        if first_inc:
            # If it is the first, add initial point...
            x_dec.append(xs[i])
            y_dec.append(ys[i])
            if y_inc:  # ... and add other to list, if not empty
                xs_inc.append(x_inc)
                ys_inc.append(y_inc)
        x_dec.append(xs[i+1])
        y_dec.append(ys[i+1])
        first_inc = False
        # Restarting all
        x_inc = []
        y_inc = []
        first_dec = True
    else:
        if first_dec:
            # If it is the first, add initial point...
            x_inc.append(xs[i])
            y_inc.append(ys[i])
            if y_dec:  # ... and add other to list, if not empty
                xs_dec.append(x_dec)
                ys_dec.append(y_dec)
        x_inc.append(xs[i+1])
        y_inc.append(ys[i+1])
        first_dec = False
        # Restarting all
        x_dec = []
        y_dec = []
        first_inc = True

fig = go.Figure()

# Plotting all increasing curves
for i, (x, y) in enumerate(zip(xs_inc, ys_inc)):
    fig.add_trace(go.Scatter(x=x, y=y,
                             name = 'Increase',
                             legendgroup = 'increase',  # Group them together
                             showlegend=(True if i == 0 else False),  # Only add legend to the first
                             line = dict(color='blue', width=3, dash='dash'),
                             mode="lines",
                             ))

# Plotting all decreasing curves
for i, (x, y) in enumerate(zip(xs_dec, ys_dec)):
    fig.add_trace(go.Scatter(x=x, y=y,
                             name = 'Decrease',
                             legendgroup = 'decrease',  # Group them together
                             showlegend=(True if i == 0 else False),  # Only add legend to the first
                             line = dict(color='red', width=3, dash='dot'),
                             mode="lines",
                             ))

fig.show()

Output: (figure)
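The bookkeeping in that loop (first_inc, first_dec, the restarts) can be condensed into one helper that returns the segments directly. This is my own sketch of the same splitting idea, not the answer's code; each returned (is_decreasing, x, y) triple can then be handed to go.Scatter with the color and dash chosen by the flag, and consecutive segments share a boundary point so the curves stay connected:

```python
def split_monotonic(xs, ys):
    """Split a polyline into maximal runs where y is decreasing or
    non-decreasing; adjacent runs share their boundary point."""
    if len(ys) < 2:
        return [(False, list(xs), list(ys))]
    segments = []
    start = 0
    current = ys[1] < ys[0]           # True while the run is decreasing
    for i in range(2, len(ys)):
        direction = ys[i] < ys[i - 1]
        if direction != current:
            segments.append((current, xs[start:i], ys[start:i]))
            start = i - 1             # reuse the boundary point
            current = direction
    segments.append((current, xs[start:], ys[start:]))
    return segments

segs = split_monotonic([0, 1, 2, 3], [1, 3, 2, 4])
print(segs)
# [(False, [0, 1], [1, 3]), (True, [1, 2], [3, 2]), (False, [2, 3], [2, 4])]
```

Plotting then reduces to one loop: for is_dec, x, y in segs, add a go.Scatter(x=x, y=y, line=dict(color='red' if is_dec else 'blue', dash='dot' if is_dec else 'dash'), mode='lines').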
Python pandas.read_csv split column into multiple new columns using comma to separate I've used pandas.read_csv to load in a file. I've stored the file into a variable. The first column is a series of numbers separated by a comma (,). I want to split these numbers and put each number in a new column. I can't seem to find the right functionality for pandas.DataFrame.

Side Note: I would prefer a different library for loading in my file, but pandas provides some other functionality which I need.

My Code:

Data = pandas.read_csv(pathFile, header=None)

doing: print Data gives me:

                           0  1     2  ...
0  [2014, 8, 26, 5, 30, 0.0]  0  0.25  ...

(as you can see it's a date)

Question: How to split/separate each number and save it in a new array? P.S. I'm trying to achieve the same thing the MATLAB method datevec() does. | If the CSV data looks like

"[2014, 8, 26, 5, 30, 0.0]",0,0.25

then

import pandas as pd
import json

df = pd.read_csv('data', header=None)
dates, df = df[0], df.iloc[:, 1:]
df = pd.concat([df, dates.apply(lambda x: pd.Series(json.loads(x)))],
               axis=1, ignore_index=True)
print(df)

yields

   0     1     2  3   4  5   6  7
0  0  0.25  2014  8  26  5  30  0

with the values parsed as numeric values.

How it works:

dates, df = df[0], df.iloc[:, 1:]

peels off the first column, and reassigns df to the rest of the DataFrame:

In [217]: dates
Out[217]:
0    [2014, 8, 26, 5, 30, 0.0]
Name: 0, dtype: object

dates contains strings:

In [218]: dates.iloc[0]
Out[218]: '[2014, 8, 26, 5, 30, 0.0]'

We can convert these to a list using json.loads:

In [219]: import json
In [220]: json.loads(dates.iloc[0])
Out[220]: [2014, 8, 26, 5, 30, 0.0]
In [221]: type(json.loads(dates.iloc[0]))
Out[221]: list

We can do this for each row of dates by using apply:

In [222]: dates.apply(lambda x: pd.Series(json.loads(x)))
Out[222]:
      0  1   2  3   4  5
0  2014  8  26  5  30  0

By making lambda, above, return a Series, apply will return a DataFrame, with the index of the Series becoming the column index of the DataFrame. Now we can use pd.concat to concatenate this DataFrame with df:

In [228]: df = pd.concat([df, dates.apply(lambda x: pd.Series(json.loads(x)))], axis=1, ignore_index=True)
In [229]: df
Out[229]:
   0     1     2  3   4  5   6  7
0  0  0.25  2014  8  26  5  30  0

In [230]: df.dtypes
Out[230]:
0      int64
1    float64
2    float64
3    float64
4    float64
5    float64
6    float64
7    float64
dtype: object
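As a side note, json.loads only works here because the bracketed strings happen to be valid JSON. The standard library's ast.literal_eval parses the same Python-style lists and also tolerates things JSON doesn't (single quotes, tuples); a minimal sketch of the same per-cell conversion:

```python
import ast

row = '[2014, 8, 26, 5, 30, 0.0]'   # one cell from the first column
values = ast.literal_eval(row)      # safely evaluates the literal list
print(values)  # [2014, 8, 26, 5, 30, 0.0]

# In the answer's pipeline you would swap it in like:
#   dates.apply(lambda x: pd.Series(ast.literal_eval(x)))
```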
What does an empty parenthesis "()" in logging config dictionary mean? I am playing around with the Python Logstash Formatter and in its wiki it recommended setting the following option for the formatter:

"formatters": {
    "logstash": {
        "()": "logstash_formatter.LogstashFormatter"
    }
}

This is working for me, but I'm unsure of what the empty parentheses are for, or what exactly logstash_formatter.LogstashFormatter is being set to in this example. Can someone explain to me what the empty parentheses mean here in relation to the Python logger? It almost seems like it would be an empty tuple, except I can't fathom how setting an empty tuple to a class would work. | If you check out the Python docs for logging, you'll see this:

    Objects to be configured are described by dictionaries which detail their configuration. In some places, the logging system will be able to infer from the context how an object is to be instantiated, but when a user-defined object is to be instantiated, the system will not know how to do this. In order to provide complete flexibility for user-defined object instantiation, the user needs to provide a 'factory' - a callable which is called with a configuration dictionary and which returns the instantiated object. This is signalled by an absolute import path to the factory being made available under the special key '()'.

Basically what it means is that logstash_formatter.LogstashFormatter is the factory that is going to create a new formatter. So when the logging framework would like to create a formatter, it's going to make sure to import logstash_formatter and then do something like logstash_formatter.LogstashFormatter(*args, **kwargs). Indeed, if you use the source, Luke, you can see that the value is extracted, then resolved/imported, and the created factory is used here:

if '()' in config:
    factory = config['()']  # for use in exception handler

And later the factory is called with kwargs:

result = factory(**kwargs)
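To see the key in action without Logstash, here is a self-contained sketch: UpperFormatter is a toy factory standing in for logstash_formatter.LogstashFormatter, and dictConfig instantiates it through the '()' key (which accepts a callable as well as a dotted-path string):

```python
import logging
import logging.config

class UpperFormatter(logging.Formatter):
    # toy stand-in for LogstashFormatter: upper-cases the message
    def format(self, record):
        record.msg = str(record.msg).upper()
        return super().format(record)

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        # '()' names the factory dictConfig should call; the remaining
        # keys ('format') are passed to it as keyword arguments
        'custom': {'()': UpperFormatter, 'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'custom'},
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
})

# The root handler's formatter really is our custom class:
fmt = logging.getLogger().handlers[0].formatter
record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'hello', None, None)
line = fmt.format(record)
print(line)  # INFO HELLO
```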
How to combine 3 tuples together in Python I tried to count the characters combined in the list. In my script, I can count the tuples, but I don't know how I can count the total of characters in my list. This is the script I have used:

def count(num):
    character = 0
    for i in num:
        character += 1
    return character

count(('1234', '1234', '1234'))

My answer is 3. I want to show 12. How can I do that? | I'd prefer James' method since it feels more idiomatic, but this is what a functional approach would look like:

def count(num):
    return sum(map(len, num))

If you want this to work by calling count('1234', '1234', '1234') instead of count(('1234', '1234', '1234')), you can define the function with def count(*num). Finally, if you want both of those calls to work (and you're sure the atomic type is only ever a string), you can do this:

def count(*num):
    return sum(len(n) if isinstance(n, str) else count(*n) for n in num)
How to parse a 'JSON string' file in Python? I am working on something that is quite similar to this topic. I downloaded a file which seems to be a JSON file, but when I open it in Notepad, I found that it is a very long list of dictionaries. The file essentially looks like this:

[{'time':1, 'value':100},{'time':2, 'value':105},{'time':3, 'value':120}]

I tried to load this 'JSON file' into Python like this:

import json

with open('data.json') as data_file:
    data = json.loads(data_file)

but got an error:

TypeError: expected string or buffer

How can I load this file correctly into Python? I would like to iterate through each row to extract all the 'values'. Thanks! | Use json.load:

with open('data.json') as data_file:
    data = json.load(data_file)

The primary difference between json.load and json.loads is that json.load accepts a file (or file-like object) to read and load JSON from, whereas json.loads loads JSON from a string.
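The difference is easy to demonstrate without touching the disk. One caveat about the sample above: strict JSON requires double quotes, so a file that literally contains single-quoted dicts would need ast.literal_eval instead; the sketch below uses valid JSON:

```python
import io
import json

text = '[{"time": 1, "value": 100}, {"time": 2, "value": 105}]'

data = json.loads(text)               # loads: takes a string
same = json.load(io.StringIO(text))   # load: takes a file-like object
assert data == same

# iterating through each row to extract the 'values'
values = [row['value'] for row in data]
print(values)  # [100, 105]
```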
Transform a list The input: lst=[ ' 22774080.570 7 1762178.392 7 1346501.808 8 22774088.434 8\n', ' 20194290.688 8 -2867460.044 8 -2213132.457 9 20194298.629 9\n']The desired output:['22774080.570 7','none','1762178.392 7','1346501.808 8','22774088.434 8'],['20194290.688 8','none','-2867460.044 8', .... ] | I will not answer your question right away because it is not how stackoverflow works but I will give you some hints.HINTSFirst, you can iterate over each line of your first list lst using a for loop.Then, you need to split every numbers of this line, luckily for you, it took always 16 characters. In python, there is severals way to do so. A good training for you will be to use the range function in a for loop. Then use slicing in python strings.Now, it could remains some extra spaces in the beginning of every numbers. A good way to remove them is to use the strip() function. Last thing you need to do is to append every clean strings to a new list (e.g. result)ENDI will edit my answer with a working function as soon as you edit yours with a proper try. Good luck!EDIT Since my namesake @Max Chesterfield already gave an answer, I'll give mine. Instead of using lists of list, you can use a generator to match exactly the desired outputdef extract_line(): for line in lst: result = [] for i in range(0, len(line) - 1, 16): numbers = line[i:i + 16].strip() result.append(numbers if numbers else None) yield resultfor result in extract_line(): print(result)Will output:['22774080.570 7', None, '1762178.392 7', '1346501.808 8', '22774088.434 8']['20194290.688 8', None, '-2867460.044 8', '-2213132.457 9', '20194298.629 9'] |
Why doesn't .strip() remove whitespaces? I have a function that begins like this:def solve_eq(string1): string1.strip(' ') return string1I'm inputting the string '1 + 2 * 3 ** 4' but the return statement is not stripping the spaces at all and I can't figure out why. I've even tried .replace() with no luck. | strip does not remove whitespace everywhere, only at the beginning and end. Try this:def solve_eq(string1): return string1.replace(' ', '')This can also be achieved using regex:import rea_string = re.sub(' +', '', a_string) |
how to get desired output in python from following output I am getting this output as pasted below .[{'accel-world-infinite-burst-2016': 'https://yts.mx/torrent/download/92E58C7C69D015DA528D8D7F22844BF49D702DFC'}, {'accel-world-infinite-burst-2016': 'https://yts.mx/torrent/download/3086E306E7CB623F377B6F99261F82CC8BB57115'}, {'accel-world-infinite-burst-2016': 'https://yifysubtitles.org/movie-imdb/tt5923132'}, {'anna-to-the-infinite-power-1983': 'https://yts.mx/torrent/download/E92B664EE87663D7E5EC8E9FEED574C586A95A62'}, {'anna-to-the-infinite-power-1983': 'https://yts.mx/torrent/download/4F6F194996AC29924DB7596FB646C368C4E4224B'}, {'anna-to-the-infinite-power-1983': 'https://yts.mx/movies/anna-to-the-infinite-power-1983/request-subtitle'}, {'infinite-2021': 'https://yts.mx/torrent/download/304DB2FEC8901E996B066B74E5D5C010D2F818B4'}, {'infinite-2021': 'https://yts.mx/torrent/download/1320D6D3B332399B2F4865F36823731ABD1444C0'}, {'infinite-2021': 'https://yts.mx/torrent/download/45821E5B2E339382E7EAEFB2D89967BB2C9835F6'}, {'infinite-2021': 'https://yifysubtitles.org/movie-imdb/tt6654210'}, {'infinite-potential-the-life-ideas-of-david-bohm-2020': 'https://yts.mx/torrent/download/47EB04FBC7DC37358F86A5BFC115A0361F019B5B'}, {'infinite-potential-the-life-ideas-of-david-bohm-2020': 'https://yts.mx/torrent/download/88223BEAA09D0A3D8FB7EEA62BA9C5EB5FDE9282'}, {'infinite-potential-the-life-ideas-of-david-bohm-2020': 'https://yts.mx/movies/infinite-potential-the-life-ideas-of-david-bohm-2020/request-subtitle'}, {'the-infinite-man-2014': 'https://yts.mx/torrent/download/0E2ACFF422AF4F62877F59EAE4EF93C0B3623828'}, {'the-infinite-man-2014': 'https://yts.mx/torrent/download/52437F80F6BDB6FD326A179FC8A63003832F5896'}, {'the-infinite-man-2014': 'https://yifysubtitles.org/movie-imdb/tt2553424'}, {'nick-and-norahs-infinite-playlist-2008': 'https://yts.mx/torrent/download/DA101D139EE3668EEC9EC5B855B446A39C6C5681'}, {'nick-and-norahs-infinite-playlist-2008': 
'https://yts.mx/torrent/download/8759CD554E8BB6CFFCFCE529230252AC3A22D4D4'}, {'nick-and-norahs-infinite-playlist-2008': 'https://yifysubtitles.org/movie-imdb/tt0981227'}]As you can see each movie have multiple links and for each link movie name is repeating .I want all links related to same movie must appeared as same object e.g[{accel-world-infinite-burst-2016:{link1,link2,link3,link4},........]for item in li: # print(item.partition("movies/")[2]) movieName["Movies"].append(item.partition("movies/")[2]) req=requests.get(item) s=soup(req.text,"html.parser") m=s.find_all("p",{"class":"hidden-xs hidden-sm"}) # print(m[0]) for a in m[0].find_all('a', href=True): # movieName['Movies'][item.partition("movies/")[2]]=(a['href']) downloadLinks.append ( {item.partition("movies/")[2]:a['href'] }) | you can try this,# input = your list of dictotp_dict = {}for l in input: for key, value in l.items(): if key not in otp_dict: otp_dict[key] = list([value]) else: otp_dict[key].append(value)print(otp_dict)otp: {'accel-world-infinite-burst-2016':[link1,link2],...}output is dict containing list of links if you want set as you mentioned in your desired op try thisfor l in input: for key, value in l.items(): if key not in otp_dict: otp_dict[key] = set([value]) else: otp_dict[key].add(value)otp: {'accel-world-infinite-burst-2016':{link1,link2},...} |
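The same accumulation can be written more compactly with collections.defaultdict, which creates the per-movie container on first access. A sketch using shortened placeholder links in place of the scraped URLs:

```python
from collections import defaultdict

# Shortened stand-ins for the scraped {slug: link} dicts
records = [
    {'infinite-2021': 'link1'},
    {'infinite-2021': 'link2'},
    {'the-infinite-man-2014': 'link3'},
]

grouped = defaultdict(set)  # swap set for list to keep duplicates and order
for record in records:
    for movie, link in record.items():
        grouped[movie].add(link)

print(dict(grouped))
```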
I have to create a function that returns True or False if a list is sorted I have to create a function called isSorted() and test if a list is sorted or not in Python 3.7. Said function has to return either True or False for whatever case may occur.This is my codedef isSorted(newList): for x in newList: if newList[0] <= x: return True else: return False def main (): newList = [1, 2, 3] print(isSorted(newList)) newList = [1] print(isSorted(newList)) newList = list(range(10)) print(isSorted(newList)) newList[9] = 3 print(isSorted(newList))if __name__ == '__main__': main()I specifically need the newList[9] = 3 line to return False, but it keeps returning True. Can anyone explain why? | Two issues:(1) You should compare each element with the previous one, not with the first element.(2) You immediately return True if the first check succeeds in the loop. Your code doesn't even process the 9th element.A fixed implementation could be:def isSorted(newList): for i in range(len(newList) - 1): if newList[i] > newList[i + 1]: return False return TrueYour test then should print:TrueTrueTrueFalse |
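For reference, the fixed loop can also be collapsed into a one-liner with all() and zip(), which short-circuits on the first out-of-order pair:

```python
def isSorted(newList):
    # pair each element with its successor; True only if no pair decreases
    return all(a <= b for a, b in zip(newList, newList[1:]))

print(isSorted([1, 2, 3]))        # True
print(isSorted([1]))              # True
print(isSorted(list(range(10))))  # True
nl = list(range(10))
nl[9] = 3
print(isSorted(nl))               # False
```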
Choose Python version for egg installation or install parallel versions of site-package Via fink install I put the following Python version on my Mac OS X computer:python2.3,python2.4,python2.5,python2.6.Further, python is alias for python2.6 on my system.I want to install an egg, e.g. easy_install networkx-0.36-py2.5.egg, where I have to use python 2.5 instead of version 2.6. Is this possible without changing the python alias?Can you tell me, whether and how I can install networkx-0.36-py2.5 and networkx-1.0rc1-py2.6 in parallel?How can I install a site-package in a way, that it is available for different Python versions? | easy_install is part of the setuptools package. Fink has separate setuptools packages for python 2.5 and python 2.6:fink install setuptools-py25 setuptools-py26You can then download and install networkx to both versions:/sw/bin/easy_install-2.5 networkx/sw/bin/easy_install-2.6 networkxIf you need a particular version of the package:/sw/bin/easy_install-2.5 networkx==0.36/sw/bin/easy_install-2.6 networkx==0.36 |
How do I encode WSGI output in UTF-8? I want to send an HTML page to the web browser encoded as UTF-8. However the following example fails:from wsgiref.simple_server import make_serverdef app(environ, start_response): output = "<html><body><p>Räksmörgås</p></body></html>".encode('utf-8') start_response('200 OK', [ ('Content-Type', 'text/html'), ('Content-Length', str(len(output))), ]) return outputport = 8000httpd = make_server('', port, app)print("Serving on", port)httpd.serve_forever()Here's the traceback:Serving on 8000Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/wsgiref/handlers.py", line 75, in run self.finish_response() File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/wsgiref/handlers.py", line 116, in finish_response self.write(data) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/wsgiref/handlers.py", line 202, in write "write() argument must be a string or bytes"If I remove the encoding and simply return the python 3 unicode string, the wsgiref server seems to encode in whatever charset the browser specifies in the request header. However I'd like to have this control myself as I doubt I can expect all WSGI servers to do the same. What should I do to return a UTF-8 encoded HTML page?Thanks! | You need to return the page as a list:def app(environ, start_response): output = "<html><body><p>Räksmörgås</p></body></html>".encode('utf-8') start_response('200 OK', [ ('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', str(len(output))) ]) return [output]WSGI is designed that way so that you could just yield the HTML (either complete or in parts). |
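A quick way to check the handler without starting a server is to call the app directly with a stub start_response (a sketch for testing only, not part of wsgiref):

```python
def app(environ, start_response):
    output = "<html><body><p>Räksmörgås</p></body></html>".encode('utf-8')
    start_response('200 OK', [
        ('Content-Type', 'text/html; charset=utf-8'),
        ('Content-Length', str(len(output))),
    ])
    return [output]  # WSGI bodies must be iterables of bytes

captured = {}

def fake_start_response(status, headers):
    captured['status'] = status
    captured['headers'] = dict(headers)

body = b''.join(app({}, fake_start_response))
print(captured['status'])  # 200 OK
print(body.decode('utf-8'))
```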
Remove duplicates from subsequences of the list For part of log parser I need to filter occurrences of baud rate in the log. First I get all occurrences using re.findall, then I'm trying to remove duplicates in subsequences in its result. Results are like [10000,10000,10000,10000,0,0,0,10000,10000], the list can contain several hundreds of values. So the first baud rate was 10000, then 0, then again 10000.I need to see how the baud rate changed, so I can't use set, as it will lose information of baud rate switching points. So, once again input: [10000,10000,10000,10000,0,0,0,10000,10000]Desired output: [10000,0,10000]What I have made already: m = [10000,10000,10000,10000,0,0,0,10000,10000] n = []for i,v in enumerate(m): if i == 0: n.append(v) n_index = 0 else: if v != n[n_index]: n.append(v) n_index = n_index + 1it works, but it doesn't seem pythonic enough to me. Please advise: is there some more efficient way possible, or do I even not need to invent the wheel again? | Use itertools.groupby:>>> rates = [10000,10000,10000,10000,0,0,0,10000,10000]>>> from itertools import groupby>>> [e for e, g in groupby(rates)][10000, 0, 10000]Explanation: If no key function is given, then the elements are just grouped by identity, i.e. groups of consecutive equal elements are collapsed. The result is an iterator of key-elements and the groups (in this case, just repetitions of the key element). We need just the keys.Update: Using IPython's %timeit magic command and a list of 100,000 random baud rates, itertools.groupby seems to be about as fast as the "compare to previous element loop" solutions, and a good deal shorter. |
Beautiful Soup: Parsing only one element I keep running into walls, but feel like I'm close here.HTML block being harvested:div class="details"> <div class="price"> <h3>From</h3> <strike data-round="true" data-currency="USD" data-price="148.00" title="US$148 ">€136</strike> <span data-round="true" data-currency="USD" data-price="136.00" title="US$136 ">€125</span></div>I would like to parse out the "US$136" value alone (span data). Here is my logic so far, which captures both 'span data' and 'strike-data:price = item.find_all("div", {"class": "price"}) price_final = (price[0].text.strip()[8:]) print(price_final)Any feedback is appreciated:) | price in your case is a ResultSet - list of div tags having price class. Now you need to locate a span tag inside every result (assuming there are multiple prices you want to match):prices = item.find_all("div", {"class": "price"})for price in prices: price_final = price.span.text.strip() print(price_final)If there is only once price you need to find:soup.find("div", {"class": "price"}).span.get_text()or with a CSS selector:soup.select_one("div.details div.price span").get_text()Note that, if you want to use select_one(), install the latest beautifulsoup4 package:pip install --upgrade beautifulsoup4 |
Multiple initialisation with given input parameter I am trying to make my class to be executed a given number of times according to passed parameter. My class converts dictionary into some desired data. Let me show You a quick example of what i mean exactly:First class A just offers common methods for derivative classes.class A(object): @staticmethod def do_magic(obj): # ... print(''.obj)Class B, E (skipped in example) initialise objects and provide b_stuff and e_stuff.class B(A): def __init__(self, mydict): # self.b_stuff = [...]Now Class C comes into play. It basically transforms given dictionary mydict which has a key-value pair describing number of iterations 'iters': 'val'class C(B, E): def __init__(self, mydict): B.__init__(self, mydict) # access to b_stuff E.__init__(self, mydict) # access to e_stuff self.iters = int(self.mydict['iters']) # desired nb of initialisations/executions self.c_stuff = [b_stuff, e_stuff, other] def do_magic(self): A.do_magic(self.c_stuff) Now the problem is, that once calling a class from external loop later in code i don't know how to make my class to do_magic mydict['iters'] number of times. for d in list_of_dicts: C(d).do_magic()One of solutions could be that i'll initialise this class just in external loop above, like that: for d in list_of_dicts: for i in range(int(d['iters'])) C(d).do_magic()But it feels far away from OOP style. To be 100% sure that You folks follow me, ill show here actual expected behaviour:d - input dictionaryd['iters'] = 2ExecuteC(d).do_magic()Actual result:>>> 'XXX YYY RANDOM_NUMBER1 .... ' Expected result:>>> 'XXX YYY RANDOM_NUMBER1 .... ' 'XXX YYY RANDOM_NUMBER2 .... 
' # random number from class B re-initialised!!As exploring stack i've found out some hints to refer __new__ method but still no clue if this is a right path and how :) Edit:As @Peter-wood suggested - adding to a question: I'd like to point out that both attributes b_stuff and e_stuff once initialised - produce unique strings that are part of c_stuff and are about to be updated with each do_magic() call. This method basically does ''.join(obj) (see class A) | class C(B, E): def do_magic(self): for _ in range(self.iters): A.do_magic(self.c_stuff) |
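A stripped-down sketch of that pattern with a hypothetical minimal class (not the real A/B/E hierarchy), showing do_magic running mydict['iters'] times so that per-iteration state can be regenerated inside the loop:

```python
class Magic:
    def __init__(self, mydict):
        self.iters = int(mydict['iters'])
        self.stuff = mydict['stuff']

    def do_magic(self):
        lines = []
        for i in range(self.iters):
            # regenerate any per-iteration values (e.g. random numbers) here
            lines.append('{} RUN{}'.format(' '.join(self.stuff), i + 1))
        return lines

for line in Magic({'iters': 2, 'stuff': ['XXX', 'YYY']}).do_magic():
    print(line)
```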
What does clf mean in machine learning? When doing fitting, I always come across code likeclf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)(from http://scikit-learn.org/stable/modules/cross_validation.html#k-fold)What does clf stand for? I googled around but didn't find any clues. | In the scikit-learn tutorial, it's short for classifier.: We call our estimator instance clf, as it is a classifier. |
Iterate through data frame to generate random number in python Starting with this dataframe I want to generate 100 random numbers using the hmean column for loc and the hstd column for scale I am starting with a data frame that I change to an array. I want to iterate through the entire data frame and produce the following output.My code below will only return the answer for row zero. Name amax hmean hstd amin0 Bill 22.924545 22.515861 0.375822 22.1100001 Bob 26.118182 24.713880 0.721507 23.7384002 Becky 23.178606 22.722464 0.454028 22.096752This code provides one row of output, instead of threefrom scipy import statsimport pandas as pddef h2f(df, n): for index, row in df.iterrows(): list1 = [] nr = df.as_matrix() ff = stats.norm.rvs(loc=nr[index,2], scale=nr[index,3], size = n) list1.append(ff) return list1df2 = h2f(data, 100)pd.DataFrame(df2)This is the output of my code0 1 2 3 4 ... 99 100 0 22.723833 22.208324 22.280701 22.416486 22.620035 22.55817 This is the desired output0 1 2 3 ... 99 100 0 22.723833 22.208324 22.280701 22.416486 22.620035 1 21.585776 22.190145 22.206638 21.927285 22.5618822 22.357906 22.680952 21.4789 22.641407 22.341165 | Dedent return list1 so it is not in the for-loop. Otherwise, the function returns after only one pass through the loop.Also move list1 = [] outside the for-loop so list1 does not get re-initialized with every pass through the loop:import iofrom scipy import statsimport pandas as pddef h2f(df, n): list1 = [] for index, row in df.iterrows(): mean, std = row['hmean'], row['hstd'] ff = stats.norm.rvs(loc=mean, scale=std, size=n) list1.append(ff) return list1content = '''\ Name amax hmean hstd amin0 Bill 22.924545 22.515861 0.375822 22.1100001 Bob 26.118182 24.713880 0.721507 23.7384002 Becky 23.178606 22.722464 0.454028 22.096752'''df = pd.read_table(io.StringIO(content), sep='\s+')df2 = pd.DataFrame(h2f(df, 100))print(df2)PS.
It is inefficient to call nr = df.as_matrix() with each pass through the loop. Since nr never changes, call it at most once, before entering the for-loop. Even better, just use row['hmean'] and row['hstd'] to obtain the desired numbers.
Using QT (PySide) to get user input with QInputDialog I did a small script on python to do some stuff, and I want to ask user input first. This is my current code:import sysfrom PySide import QtGuiapp = QtGui.QApplication(sys.argv)gui = QtGui.QWidget()text, ok = QtGui.QInputDialog.getText(gui, "question", """please put the thing I need from you""")print(text, ok)if ok: app.exit()else: app.exit()app.exec_()print ("I'm aliveeeee'")The dialog pop-ups exactly as I want, but app.exec_() never ends so the rest of the code is never executed (and the process never finish) I tried to kill it with app.exit(), app.quit(), I also try to show() and close() the QWidget, but nothing is working.If I do gui.show() before calling the QInputDialog and then close the widget manually, the app closes successfully. However, this is not the behavior I want.Can you guide me on which is the best way to close the exec loop after I got my data?PD: This is going to be a windows app (with py2exe) and using the shell is not an option. | Just don't call app.exec_()The problem here is that this is a toy example. In real life, usually you will show some UI and then call app.exec() to let the user interact with it. |
Torrent tracker proxy I'm trying to implement a scipt on OpenShift, which works to bypass a very basic firewall in my college.The aim is that I add the OpenShift address to the tracker list in any torrent I am running.The client requests the script for peers.The script accepts the peer list request and then asks for the list itself from a valid tracker. For testing purposes I have hardcoded this into the script as the tracker works for the test torrent without the firewall.The response is passed back to the torrent client on my computer.MyPC <==> Openshift <==> TrackerThis code is not working for some reason. I followed the flask quick start guide and the OpenShift getting started guide . I am new to networking so please help me out.This is the routes.py file:#!/usr/bin/pythonimport os,urllib2from flask import Flaskfrom flask import requestapp=Flask(__name__)app.config['PROPAGATE_EXCEPTIONS']=True@app.route('/announce/')def tormirror_test(): q=request.query_string u=urllib2.urlopen("http://exodus.desync.com:6969/announce?"+str(q)) return u@app.route("/<name>")def insulter(name): return "this is a test code====="+nameif __name__ == "__main__": app.run() | I think part of it is that your university may be blocking the connection back to your computer from OpenShift. My guess is your university blocks incoming connections on port 6969 Just putting it here so you can mark it answered |
Comparing Difference between Values within Python List Let's say I have a list of integers:list = [1,2,3,5,6,7,10,11,12]And I'd like to divide the list in to three separate lists, with the split occurring between consecutive integers with a difference >=2, which would give melist1 = [1, 2, 3]list2 = [5, 6, 7]list3 = [10, 11, 12]Is there a straightforward way to do this in Python? I would like to do this in order to analyze data from a psychology experiment, where I have a list of timestamped response and want to cluster responses based on how far apart they are | Take a look at this StackOverflow question. The answers there show you how to divide a list into sublists of consecutive integers.From the accepted answer there:>>> data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]>>> for k, g in groupby(enumerate(data), lambda (i,x):i-x):... print map(itemgetter(1), g)...[1][4, 5, 6][10][15, 16, 17, 18][22][25, 26, 27, 28]The answer doesn't offer any explanation of what's going on here, so I'll explain. First, it assumes that data is sorted in ascending order. Enumerating data then gives a list of index, value pairs. It then uses the index minus the value as a key for grouping the items. Take a look at what this does for your list:>>> myList = [1,2,3,5,6,7,10,11,12]>>> [i - x for i, x in enumerate(myList)][-1, -1, -1, -2, -2, -2, -4, -4, -4]As you can see, consecutive values end up having the same grouping key. This is becauseif data[i] + 1 == data[i+1]:then data[i] - i == data[i] + 1 - 1 - i == data[i+1] - (i + 1)FYI, groupby comes from itertools and itemgetter comes from operator. So add these lines to your imports:from itertools import groupbyfrom operator import itemgetterJust be aware that this solution will only work if data is sorted and does not contain any duplicates. 
Of course, it's fairly straightforward to turn a list into a sorted set:>>> myList = [1, 1, 3, 5, 6, 4, 10, 12, 11, 1, 2]>>> myList = list(sorted(set(myList)))>>> print myList[1, 2, 3, 4, 5, 6, 10, 11, 12] |
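Note that the quoted snippet is Python 2 (tuple-unpacking lambda, print statement). A Python 3 equivalent of the same index-minus-value idea:

```python
from itertools import groupby

data = [1, 2, 3, 5, 6, 7, 10, 11, 12]

clusters = []
# consecutive values share the same (index - value) key, so each run of
# consecutive integers forms one group
for _, group in groupby(enumerate(data), key=lambda pair: pair[0] - pair[1]):
    clusters.append([value for _, value in group])

print(clusters)  # [[1, 2, 3], [5, 6, 7], [10, 11, 12]]
```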
Automate firefox with python? Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions? | You could try selenium. |
How to group and transform in pandas Now I have the below dataframe A B C1 1 1 1 2 11 3 22 4 22 5 22 6 3I would like to group by df.A and sum df.B. But I would also like to transform C to the first element of each group.So I would like to get the results below.A B C1 6 12 15 2How can I retain df.C and transform it to the first element of each group?I tried df.groupby(A)[B].sum() but I couldn't figure out the next step... | You can use agg and pass a dict of funcs to perform on the cols of interest:In [115]:df.groupby('A').agg({'B':'sum','C':'first'}).reset_index()Out[115]: A C B0 1 1 61 2 2 15The dict has the col name and the func to perform on each col, here we can pass the string name of the func for sum and first.To reorder the cols you can use label-based indexing with .loc (the older .ix indexer is deprecated):In [116]:df.groupby('A').agg({'B':'sum','C':'first'}).reset_index().loc[:, df.columns]Out[116]: A B C0 1 6 11 2 15 2
Filling a shape with color in python turtle I'm trying to fill a shape with a color but when I run it, it does not show.Am I not supposed to use classes for this? I am not proficient with python-3 and still learning how to use classesimport turtlet=turtle.Turtle()t.speed(0)class Star(turtle.Turtle): def __init__(self, x=0, y=0): turtle.Turtle.__init__(self) self.shape("") self.color("")#Creates the star shape def shape(self, x=0, y=0): self.fillcolor("red") for i in range(9): self.begin_fill() self.left(90) self.forward(90) self.right(130) self.forward(90) self.end_fill()#I was hoping this would fill the inside def octagon(self, x=0.0, y=0.0): turtle.Turtle.__init__(self) def octa(self): self.fillcolor("green") self.begin_fill() self.left(25) for x in range(9): self.forward(77) self.right(40)#doesn't run with out thisa=Star() | Issues with your program: you create and set the speed of a turtle that you don't actually use; turtle.py already has a shape() method so don't override it to mean something else, pick a new name; you don't want the begin_fill() and end_fill() inside the loop but rather surrounding the loop; you call your own shape() method with invalid arguments.The following rework of your code addresses the above issues:from turtle import Turtle, Screenclass Star(Turtle): def __init__(self, x=0, y=0): super().__init__(visible=False) self.speed('fastest') self.draw_star(x, y) def draw_star(self, x=0, y=0): """ Creates the star shape """ self.penup() self.setposition(x, y) self.pendown() self.fillcolor("red") self.begin_fill() for _ in range(9): self.left(90) self.forward(90) self.right(130) self.forward(90) self.end_fill()t = Star()screen = Screen()screen.exitonclick() |
iterate over two numpy arrays return 1d array I often have a function that returns a single value such as a maximum or integral. I then would like to iterate over another parameter. Here is a trivial example using a parabolic. I don't think its broadcasting since I only want the 1D array. In this case its maximums. A real world example is the maximum power point of a solar cell as a function of light intensity but the principle is the same as this example. import numpy as npx = np.linspace(-1,1) # sometimes this is read from fileparameters = np.array([1,12,3,5,6]) maximums = np.zeros_like(parameters)for idx, parameter in enumerate(parameters): y = -x**2 + parameter maximums[idx] = np.max(y) # after I have the maximum I don't need the rest of the data.print(maximums)What is the best way to do this in Python/Numpy? I know one simplification is to make the function a def and then use np.vectorize but my understanding is it doesn't make the code any faster. | Extend one of those arrays to 2D and then let broadcasting do those outer additions in a vectorized way -maximums = (-x**2 + parameters[:,None]).max(1).astype(parameters.dtype)Alternatively, with the explicit use of the outer addition method -np.add.outer(parameters, -x**2).max(1).astype(parameters.dtype) |
Python: change something inside a string respecting length constrains I am looking for a smarter solution to do what my code does.I have to explore a text file. There are many Materials inside this text, and I want to change some material properties by replacing these values with new ones.This is the Material structure: Material, PLASTERBOARD-1, !- Name MediumSmooth, !- Roughness 0.01200, !- Thickness {m} 0.16000, !- Conductivity {W/m-K} 950.000, !- Density {kg/m3} 840.00, !- Specific Heat {J/kg-K} 0.900000, !- Thermal Absorptance 0.600000, !- Solar Absorptance 0.600000; !- Visible AbsorptanceThis is my actual code:from tempfile import mkstempfrom shutil import movefrom os import fdopen, removedef update_capacity(idf_file_path, material, capacity): check = False #Create temp file fh, abs_path = mkstemp() with fdopen(fh,'w') as new_file: with open(idf_file_path) as old_file: for line in old_file: if material in line: check = True if check: if '!- Conductivity {W/m-K}' in line: line = ' '+str(capacity)+ ',' i = 26 - len(str(capacity)) while(i>0): line +=' ' i-=1 line += '!- Conductivity {W/m-K}\n' line.strip('(') check= False new_file.write(line) #Remove original file old_file.close() new_file.close() remove(idf_file_path) #Move new file move(abs_path, idf_file_path) return idf_file_pathmat = 'FIBERGLASS QUILT-1'cap = 2,0idf_file = 'D:\\users\\f35943c\\Downloads\\Exercise1A.idf'update_capacity(idf_file, mat, cap)From if material in line to new_file.write(line) is where I want to optimize my code. Furthermore, strip does not work as I wanted, because I can't remove the parenthesis inside the line.This is my benchmark string " 0.16000, !- Conductivity {W/m-K}" and I have to respect the numbers of characters before the !- Conductivity {W/m-K}Can someone bring me to a smarter solution? 
| Temporary Solution: if material in line: check = True if check: if '!- Conductivity {W/m-K}' in line: line = ' '+ str(capacity).ljust(25) + '!- Conductivity {W/m-K}\n' check= False new_file.write(line)If someone has a better idea, it would be welcome!I also found this usefull link about formatting string Python 3.x : https://www.digitalocean.com/community/tutorials/how-to-use-string-formatters-in-python-3 |
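For reference, the same fixed-width padding can be written with a format spec, matching the 25-character field used above. Keeping the literal '!- Conductivity {W/m-K}' part out of the template is deliberate: its braces would be misread as a replacement field by str.format or an f-string, which is why ljust or plain concatenation is the simpler choice here:

```python
capacity = 2.0

# '<25' left-aligns in a 25-character field, equivalent to str.ljust(25)
value_field = '{:<25}'.format(str(capacity) + ',')
line = '    ' + value_field + '!- Conductivity {W/m-K}\n'

print(line, end='')
```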
Parallel processing of large dataframes I have a large Pandas DataFrame with columns a and b (kind of coordinates in floats) and column c (values), which has to be binned and summarized over a certain interval of steps in the columns a and b. The order of the result is relevant, since a and b simulate coordinates where samples have been taken with the value c. In the next step, the results would be reshaped to an image and processed further.This can be solved using nested loops (see below), however, it obviously does not scale well with larger datasets or smaller step sizes.Example:import pandas as pdimport timeimport numpy as npa = np.random.random(int(10E7))b = np.random.random(int(10E7))c = np.random.random(int(10E7))df = pd.DataFrame({'a':a, 'b':b, 'values': c})stepsize = 0.1means = []T1 = time.time()for j in np.arange(0,1,stepsize): for i in np.arange(0,1,stepsize): selection = df[ (df.a > i) & (df.a <= i+stepsize) & (df.b > j) & (df.b <= j+stepsize) ] means.append(selection['values'].mean()) T2 = time.time() I have been wondering, how this can be resolved using multiprocessing (or multithreading?).Therefore, I have set up the following code, though I am stuck: I am not sure, if the meanAB code is setup correctly and if the multiprocessing has been initiated correctly.stepsize = 0.01 # smaller stepsizeA_vals = list(np.arange(0,1,stepsize))B_vals = list(np.arange(0,1,stepsize))def meanByAB(A,B): # loaded df and stepsize globally. Does it make sense?! selection = df[ (df.a > A) & (df.a <= A+stepsize) & (df.b > B) & (df.b <= B+stepsize) ] mean = np.mean(selection['values'])if __name__ == '__main__': T1 = time.time() p = mp.Process(target=meanByAB,args=(A_values,B_values,) # tried many things here. yields ValueError p.start() p.join() T2 = time.time() | This is my solution. The results are not ordered, because I don't understand which is the right sorting to apply. 
However, each element of the list of results is a tuple composed of: (mean, A, B) where A and B are the values you specify. A posteriori, you can sort the values based on A and B, or by mean if you need it:import pandas as pdimport numpy as npfrom threading import Threadfrom queue import Queueimport timedef worker(df, q_input, q_output, stepsize): while True: A = q_input.get() if A == 'DONE': break for i, B in enumerate(np.arange(0,1,stepsize)): selection = df[ (df.a > A) & (df.a <= A+stepsize) & (df.b > B) & (df.b <= B+stepsize) ] mean = np.mean(selection['values']) q_output.put((mean, A, B))def dump_queue(queue): result = [] while not queue.empty(): result.append(queue.get(timeout=0.01)) return resultif __name__ == '__main__': start_time = time.time() start_initialization = time.time() NUM_THREADS = 4 a = np.random.random(int(10E7)) b = np.random.random(int(10E7)) c = np.random.random(int(10E7)) df = pd.DataFrame({'a':a, 'b':b, 'values': c}) stepsize = 0.1 means = [] q_input = Queue() for j in np.arange(0,1,stepsize): q_input.put(j) for _ in range(NUM_THREADS): q_input.put('DONE') q_output = Queue() print(f"Initialization terminates in {time.time() - start_initialization}") processes = [Thread(target=worker, args=(df, q_input, q_output, stepsize), name=f"Worker-{i}") for i in range(NUM_THREADS)] for p in processes: p.start() for p in processes: p.join() print(f"Finished in {time.time() - start_time}") result = dump_queue(q_output) print(result) print(len(result))EDITThe edited version uses multi-threading instead of multiprocessing: multi-threading is lighter than multiprocessing, so I think it should be the best choice in this problem
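For comparison, the queue-and-sentinel bookkeeping can be delegated to the stdlib concurrent.futures pool, whose map() also preserves input order. Sketched here with a toy cell function standing in for the DataFrame selection; note that for CPU-bound pandas work, swapping in ProcessPoolExecutor may be the better fit because of the GIL:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def cell_mean(cell):
    a_lo, b_lo = cell
    # placeholder for the df[(df.a > a_lo) & ...]['values'].mean() selection
    return (a_lo + b_lo, a_lo, b_lo)

cells = list(product([0.0, 0.5], [0.0, 0.5]))  # all (A, B) bin corners

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cell_mean, cells))  # ordered like `cells`

print(results)
```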
"AttributeError: 'function' object has no attribute 'get'" in SQLAlchemy ORM Object contructor - Flask EDIT Found my error! Leaving problem description as is, but appending answer bellow.In my registration function, I want to create a new User object.I've defined a User Table like this:class User(_USERDB.Model, UserMixin): """ User defining Data """ __tablename__ = "users" __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True) mail = Column(Text, unique=True, nullable=False) pw = Column(Text, nullable=False) date_of_creation = Column(DateTime(timezone=True), default=datetime.now) # the date the user is created settings = relationship("UserSettingProfile", back_populates="user", passive_deletes=True) admin = Column(Boolean, default=False, nullable=False) world_id = Column(Integer, nullable=True) def __dict__(self): return { "id": self.id, "mail": self.mail, "date_of_creation": self.date_of_creation, "admin": self.admin, "world_id": self.world_id }If I now use the constructor as in other tutorials (TechWithTim - Flask Bog tutorial)new_user = User(mail=mail, pw=pw_hash, admin=admin)I get the error from the Title"AttributeError: 'function' object has no attribute 'get'"I've already tried stepping through the debugger to spot where this comes from, but it's not much more helpful than the stack trace. 
All I did was validate that the stack trace, is the stack trace (not very helpful indeed)Traceback (most recent call last): File "E:\project\venv\Lib\site-packages\flask\app.py", line 2091, in __call__ return self.wsgi_app(environ, start_response) File "E:\project\venv\Lib\site-packages\flask\app.py", line 2076, in wsgi_app response = self.handle_exception(e) File "E:\project\venv\Lib\site-packages\flask\app.py", line 2073, in wsgi_app response = self.full_dispatch_request() File "E:\project\venv\Lib\site-packages\flask\app.py", line 1518, in full_dispatch_request rv = self.handle_user_exception(e) File "E:\project\venv\Lib\site-packages\flask\app.py", line 1516, in full_dispatch_request rv = self.dispatch_request() File "E:\project\venv\Lib\site-packages\flask\app.py", line 1502, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File "E:\project\web_interface\routes\api_routing.py", line 135, in register new_user = User(mail=mail, pw=pw_hash, admin=admin) File "<string>", line 4, in __init__ File "E:\project\venv\Lib\site-packages\sqlalchemy\orm\state.py", line 479, in _initialize_instance with util.safe_reraise(): File "E:\project\venv\Lib\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__ compat.raise_( File "E:\project\venv\Lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_ raise exception File "E:\project\venv\Lib\site-packages\sqlalchemy\orm\state.py", line 477, in _initialize_instance return manager.original_init(*mixed[1:], **kwargs) File "E:\project\venv\Lib\site-packages\sqlalchemy\orm\decl_base.py", line 1157, in _declarative_constructor setattr(self, k, kwargs[k]) File "E:\project\venv\Lib\site-packages\sqlalchemy\orm\attributes.py", line 459, in __set__ self.impl.set( File "E:\project\venv\Lib\site-packages\sqlalchemy\orm\attributes.py", line 1094, in set old = dict_.get(self.key, NO_VALUE)AttributeError: 'function' object has no attribute 'get'For completion's sake, here is 
my api_routing.py file:

from flask import Blueprint, request, jsonify
from database import User, UserSettingProfile

@api_routes.route("/register", methods=["POST"])
def register():
    response = {"message": ""}
    try:
        mail = request.values["mail"]
        pw1 = request.values["pw1"]
        pw2 = request.values["pw2"]
    except KeyError as e:
        response["message"] = f"{e=} | Missing argument. Expected: mail, password1, password2"
        return jsonify(response), 400
    admin = False
    pw_hash = hash_pw(pw1)
    print(f"{pw_hash=}\n{mail=}\n{admin=}")
    new_user = User(mail=mail, pw=pw_hash, admin=admin)
    print(new_user)
    new_user_settings = UserSettingProfile(user_id=new_user.id)
    _USERDB.session.add(new_user)
    _USERDB.session.add(new_user_settings)
    _USERDB.session.commit()
    login_user(new_user, remember=True)
    response["message"] = f"{mail=} registered and logged in successfully"
    return jsonify(response), 200

All the parameters that I pass into the User() constructor are valid and as expected:

pw_hash='$2b$14$6UpznQzJgw/zLZLGmjBkfOpm.D8iGXf/OsfqRkAVyzcZFM88kdos2'
mail='test_mail'
admin=False

After looking at other posts, I double-checked: the name "User" in the namespace indeed maps to the model class I defined. | Answer: The reason it fails is the __dict__ method; since removing it, everything works fine. Of course this leads to the next question: how to define custom dict functions for those classes. I couldn't find an answer to this but still want to offer a solution: define a custom function that takes the required object as a parameter and then puts the wanted fields into a dict. Not the most elegant solution IMO, but it works.
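A minimal, framework-free sketch of that workaround (the `to_dict` name and the reduced field set are illustrative, not from the original project). Defining a regular method such as `to_dict` leaves the instance's real `__dict__` attribute untouched, which is exactly what SQLAlchemy's attribute instrumentation reads and writes during `__init__`:

```python
class User:
    def __init__(self, mail, admin=False):
        self.mail = mail
        self.admin = admin

    def to_dict(self):
        # A regular method instead of overriding __dict__, so normal
        # attribute machinery (and an ORM's instrumentation) keeps working.
        return {"mail": self.mail, "admin": self.admin}

u = User("test_mail")
u.to_dict()  # {'mail': 'test_mail', 'admin': False}
```

The same pattern drops straight onto the declarative model: add `to_dict` as a method and call `new_user.to_dict()` wherever the serialized form is needed.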
TypeError: Can't convert 'list' object to str implicitly - Python The following is code to get the emails into stdin, do some filtering to extract the ticket-id from the email, and then send it to Telegram. But when I run the code it returns the following error: TypeError: Can't convert 'list' object to str implicitly. I have made tikid into str(tikid); then I receive no issues, but no messages are getting sent to the telegram function. The code is:

import re
import sys
import requests

def telegram_bot_sendtext(bot_message):
    bot_token = 'mytoken_id_here'
    bot_chatID = 'my_chatid_here'
    send_text = 'https://api.telegram.org/bot' + bot_token + '/sendMessage?chat_id=' + bot_chatID + '&parse_mode=Markdown&text=' + bot_message
    response = requests.get(send_text)
    return response.json()

for myline in sys.stdin:
    tikid = re.findall("ticket/\d+", myline)
    if (len(tikid)) > 0:
        tikid = output.replace("ticket/", "")
        print(tikid)

telegram_bot_sendtext(tikid)

Email-Content

--===============7235995302665768439==
Content-Type: text/html; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

<div dir="rtl"><html><head> Ticket generated .... 
</head><p>&nbsp;</p><p><a href="http://example.com/show/ticket/1735" target="_blank">http://example.com/show/ticket/1735</a></p></body></html></div>
--===============7235995302665768439==--

OUTPUT

root@server:~# cat email.txt | /usr/bin/python3.5 /usr/local/scrip/telegram-piper
1735
1735
Traceback (most recent call last):
  File "/usr/local/scripts/telegram-piper.py", line 22, in <module>
    telegram_bot_sendtext(tikid)
  File "/usr/local/scripts/telegram-piper.py", line 9, in telegram_bot_sendtext
    send_text = 'https://api.telegram.org/bot' + bot_token + '/sendMessage?chat_id=' + bot_chatID + '&parse_mode=Markdown&text=' + bot_message
TypeError: Can't convert 'list' object to str implicitly | Solved the issue. By using ''.join:

import re
import sys
import requests

def telegram_bot_sendtext(bot_message):
    bot_token = 'mytoken_id_here'
    bot_chatID = 'my_chatid_here'
    send_text = 'https://api.telegram.org/bot' + bot_token + '/sendMessage?chat_id=' + bot_chatID + '&parse_mode=Markdown&text=' + bot_message
    response = requests.get(send_text)
    return response.json()

for myline in sys.stdin:
    tikid = re.findall("ticket/\d+", myline)
    if len(tikid) > 0:
        output = str(tikid[0])
        tikid = output.replace("ticket/", "")
        new_value = ''.join(tikid)
        telegram_bot_sendtext(new_value)
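The underlying cause is that re.findall returns a list, so concatenating it directly into the URL string fails. A small sketch of just the extraction step (the function name extract_ticket_ids is illustrative): using a capture group means each result is already a plain string with the ticket/ prefix stripped, so no .replace() or later join is needed:

```python
import re

def extract_ticket_ids(lines):
    # The capture group returns only the digits, so every element of the
    # result is a plain string, safe to concatenate into a URL.
    ids = []
    for line in lines:
        ids.extend(re.findall(r"ticket/(\d+)", line))
    return ids

sample = ['<a href="http://example.com/show/ticket/1735">link</a>']
extract_ticket_ids(sample)  # ['1735']
```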
Efficient pairwise correlation for two matrices of features In Python I need to find the pairwise correlation between all features in a matrix A and all features in a matrix B. In particular, I am interested in finding the strongest Pearson correlation that a given feature in A has across all features in B. I do not care whether the strongest correlation is positive or negative. I've done an inefficient implementation using two loops and scipy below. However, I'd like to use np.corrcoef or another similar method to compute it efficiently. Matrix A has shape 40000x400 and B has shape 40000x1440. My attempt at doing it efficiently can be seen below as the method find_max_absolute_corr(A,B). However, it fails with the following error:

ValueError: all the input array dimensions except for the concatenation axis must match exactly.

import numpy as np
from scipy.stats import pearsonr

def find_max_absolute_corr(A, B):
    """
    Finds for each feature in `A` the highest Pearson
    correlation across all features in `B`.
    """
    max_corr_A = np.zeros((A.shape[1]))
    for A_col in range(A.shape[1]):
        print "Calculating {}/{}.".format(A_col+1, A.shape[1])
        metric = A[:,A_col]
        pearson = np.corrcoef(B, metric, rowvar=0)
        # takes negative correlations into account as well
        min_p = min(pearson)
        max_p = max(pearson)
        max_corr_A[A_col] = max_absolute(min_p, max_p)
    return max_corr_A

def max_absolute(min_p, max_p):
    if np.isnan(min_p) or np.isnan(max_p):
        raise ValueError("NaN correlation.")
    if abs(max_p) > abs(min_p):
        return max_p
    else:
        return min_p

if __name__ == '__main__':
    A = np.array(
        [[10, 8.04, 9.14, 7.46],
         [8, 6.95, 8.14, 6.77],
         [13, 7.58, 8.74, 12.74],
         [9, 8.81, 8.77, 7.11],
         [11, 8.33, 9.26, 7.81]])
    B = np.array(
        [[-14, -9.96, 8.10, 8.84, 8, 7.04],
         [-6, -7.24, 6.13, 6.08, 5, 5.25],
         [-4, -4.26, 3.10, 5.39, 8, 5.56],
         [-12, -10.84, 9.13, 8.15, 5, 7.91],
         [-7, -4.82, 7.26, 6.42, 8, 6.89]])

    # simple, inefficient method
    for A_col in range(A.shape[1]):
        high_corr = 0
        for B_col in range(B.shape[1]):
            corr,_ = pearsonr(A[:,A_col], B[:,B_col])
            high_corr = max_absolute(high_corr, corr)
        print high_corr
        # -0.161314601631
        # 0.956781516149
        # 0.621071009239
        # -0.421539304112

    # efficient method
    max_corr_A = find_max_absolute_corr(A, B)
    print max_corr_A
    # [-0.161314601631,
    #  0.956781516149,
    #  0.621071009239,
    #  -0.421539304112] | Seems scipy.stats.pearsonr follows this definition of the Pearson correlation coefficient formula, applied on column-wise pairs from A & B. Based on that formula, you can vectorize easily, as the pairwise computations of columns from A and B are independent of each other. Here's one vectorized solution using broadcasting -

# Get number of rows in either A or B
N = B.shape[0]

# Store column-wise sums of A and B, as they would be used at few places
sA = A.sum(0)
sB = B.sum(0)

# Basically there are four parts in the formula. We would compute them one-by-one
p1 = N*np.einsum('ij,ik->kj',A,B)
p2 = sA*sB[:,None]
p3 = N*((B**2).sum(0)) - (sB**2)
p4 = N*((A**2).sum(0)) - (sA**2)

# Finally compute Pearson Correlation Coefficient as 2D array
pcorr = ((p1 - p2)/np.sqrt(p4*p3[:,None]))

# Get the element corresponding to absolute argmax along the columns
out = pcorr[np.nanargmax(np.abs(pcorr),axis=0),np.arange(pcorr.shape[1])]

Sample run -

1) Inputs :

In [12]: A
Out[12]:
array([[ 10.  ,   8.04,   9.14,   7.46],
       [  8.  ,   6.95,   8.14,   6.77],
       [ 13.  ,   7.58,   8.74,  12.74],
       [  9.  ,   8.81,   8.77,   7.11],
       [ 11.  ,   8.33,   9.26,   7.81]])

In [13]: B
Out[13]:
array([[-14.  ,  -9.96,   8.1 ,   8.84,   8.  ,   7.04],
       [ -6.  ,  -7.24,   6.13,   6.08,   5.  ,   5.25],
       [ -4.  ,  -4.26,   3.1 ,   5.39,   8.  ,   5.56],
       [-12.  , -10.84,   9.13,   8.15,   5.  ,   7.91],
       [ -7.  ,  -4.82,   7.26,   6.42,   8.  ,   6.89]])

2) Original loopy code run -

In [14]: high_corr_out = np.zeros(A.shape[1])
    ...: for A_col in range(A.shape[1]):
    ...:     high_corr = 0
    ...:     for B_col in range(B.shape[1]):
    ...:         corr,_ = pearsonr(A[:,A_col], B[:,B_col])
    ...:         high_corr = max_absolute(high_corr, corr)
    ...:     high_corr_out[A_col] = high_corr
    ...:

In [15]: high_corr_out
Out[15]: array([ 0.8067843 ,  0.95678152,  0.74016181, -0.85127779])

3) Proposed code run -

In [16]: N = B.shape[0]
    ...: sA = A.sum(0)
    ...: sB = B.sum(0)
    ...: p1 = N*np.einsum('ij,ik->kj',A,B)
    ...: p2 = sA*sB[:,None]
    ...: p3 = N*((B**2).sum(0)) - (sB**2)
    ...: p4 = N*((A**2).sum(0)) - (sA**2)
    ...: pcorr = ((p1 - p2)/np.sqrt(p4*p3[:,None]))
    ...: out = pcorr[np.nanargmax(np.abs(pcorr),axis=0),np.arange(pcorr.shape[1])]
    ...:

In [17]: pcorr # Pearson Correlation Coefficient array
Out[17]:
array([[ 0.41895565, -0.5910935 , -0.40465987,  0.5818286 ],
       [ 0.66609445, -0.41950457,  0.02450215,  0.64028344],
       [-0.64953314,  0.65669916,  0.30836196, -0.85127779],
       [-0.41917583,  0.59043266,  0.40364532, -0.58144102],
       [ 0.8067843 ,  0.07947386,  0.74016181,  0.53165395],
       [-0.1613146 ,  0.95678152,  0.62107101, -0.4215393 ]])

In [18]: out # elements corresponding to absolute argmax along columns
Out[18]: array([ 0.8067843 ,  0.95678152,  0.74016181, -0.85127779])

Runtime tests -

In [36]: A = np.random.rand(4000,40)

In [37]: B = np.random.rand(4000,144)

In [38]: np.allclose(org_app(A,B),proposed_app(A,B))
Out[38]: True

In [39]: %timeit org_app(A,B) # Original approach
1 loops, best of 3: 1.35 s per loop

In [40]: %timeit proposed_app(A,B) # Proposed vectorized approach
10 loops, best of 3: 39.1 ms per loop
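The same coefficients can also be obtained by column-standardising both matrices first, which reads closer to the textbook definition of Pearson's r. This is a sketch (not from the original answer) producing the same (B_cols, A_cols) array as pcorr above:

```python
import numpy as np

def pairwise_pearson(A, B):
    # Standardise each column to zero mean and unit variance; a single
    # matrix product then gives every column-pair correlation at once.
    Az = (A - A.mean(0)) / A.std(0)
    Bz = (B - B.mean(0)) / B.std(0)
    return Bz.T.dot(Az) / A.shape[0]   # shape: (B_cols, A_cols)

A = np.random.rand(100, 4)
B = np.random.rand(100, 6)
pcorr = pairwise_pearson(A, B)
```

For the 40000x400 and 40000x1440 inputs in the question this allocates two standardised copies, so the einsum formulation above may still be preferable when memory is tight.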
Stanford NER with python NLTK fails with strings containing multiple "!!"s? Suppose this is my filecontent: When they are over 45 years old!! It would definitely help Michael Jordan.

Below is my code for tagging sentences:

st = NERTagger('stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz', 'stanford-ner/stanford-ner.jar')
tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(filecontent)]
taggedsents = st.tag_sents(tokenized_sents)

I would expect both tokenized_sents and taggedsents to contain the same number of sentences. But here is what they contain:

for ts in tokenized_sents:
    print "tok ", ts
for ts in taggedsents:
    print "tagged ", ts

>> tok ['When', 'they', 'are', 'over', '45', 'years', 'old', '!', '!']
>> tok ['It', 'would', 'definitely', 'help', '.']
>> tagged [(u'When', u'O'), (u'they', u'O'), (u'are', u'O'), (u'over', u'O'), (u'45', u'O'), (u'years', u'O'), (u'old', u'O'), (u'!', u'O')]
>> tagged [(u'!', u'O')]
>> tagged [(u'It', u'O'), (u'would', u'O'), (u'definitely', u'O'), (u'help', u'O'), (u'Michael', u'PERSON'), (u'Jordan', u'PERSON'), (u'.', u'O')]

This is due to having double "!" at the end of the supposed first sentence. Do I have to remove double "!"s before using st.tag_sents()? How should I resolve this? | If you follow my solution from the other question, instead of using nltk you will get JSON that properly splits this text into two sentences. Link to previous question: how to speed up NE recognition with stanford NER with python nltk
Sqlite Database Access : No such table (Within Django, no models) I have a Django & Docker server running on my computer, and I have created a database with code from outside this server. I am trying to access this database ('test.sqlite3') within the server. I made sure the path was correct and that the file name was correct as well. When I open the database with DB Browser, I can see the tables and all my data. But I still get the following error text: OperationalError no such table: NAMEOFTABLE. When I use the exact same code from another Python IDE (Spyder) it works fine. I'm guessing there's something weird going on with Django? Here is some of the code:

conn = sqlite3.connect("../test.sqlite3")
c = conn.cursor()
c.execute("SELECT firstName, lastName FROM RESOURCES")
conn.close()

(Yes, I have also tried using the absolute path and I get the same error.) Also to be noted: I get this same error when I try to create the database file & table from within the Django code (the path should then be the same, but I still get the error in this case). Update: it seems I have a problem with my path, because I can't even open a text file with Python and its absolute path. So if anyone has any idea why, that'd be great.

try:
    f = open("/Users/XXXXX/OneDrive/XXXXX/XXXX/Autres/argon-dashboard-django/toto.txt")
    # Do something with the file
except IOError:
    q = "File not accessible"
finally:
    f.close()

This always returns the error 'f referenced before assignment' and q = "File not accessible", which means I can't even find the text file. | Had a similar issue, possibly something about leaving Django model metadata files outside of the image. I needed to synchronize the model with the DB using a run-syncdb:

RUN ["python", "manage.py", "migrate"]
RUN ["python", "manage.py", "migrate", "--run-syncdb"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
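A related pitfall worth ruling out: a bare relative path like "../test.sqlite3" resolves against the working directory of the process, and inside a container that is rarely the directory the source file lives in. A small framework-agnostic sketch (the helper name is illustrative) that resolves the path against an explicit anchor directory; in a real module the anchor would typically be os.path.dirname(os.path.abspath(__file__)):

```python
import os
import sqlite3

def connect_anchored(rel_path, anchor):
    # Resolve rel_path against a known anchor directory instead of
    # whatever the process's current working directory happens to be.
    db_path = os.path.abspath(os.path.join(anchor, rel_path))
    return sqlite3.connect(db_path)
```

If the path is wrong, sqlite3.connect silently creates a fresh empty database at that location, which is exactly what produces "no such table" later.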
Trouble accessing data multiple objects in JSON with Python I am trying to parse through a web-service and retrieve certain records, however I am consistently receiving a KeyError for responses with more than one object. These records are returned in intervals, so sometimes I might receive one record and others I might receive 300. If I receive one record, the logic of my code works, if there are multiple items then the code doesn't work. Here is an example of an output with one object.{ "status": { "code": 311, "message": "Service Request Successfully Queried.", "cause": "" }, "Response": { "NumOutputObjects": "1", "ListOfServiceRequest": { "ServiceRequest": [ { "AddressVerified": "Y", "SRNumber": "1-13967451", "SRType": "Service Not Complete", "CreatedDate": "05/08/2015 10:00:38", "UpdatedDate": "05/08/2015 10:00:49", "IntegrationId": "05082015100148678", "Status": "Open", "CreatedByUserLogin": "PROXYE", "UpdatedByUserLogin": "PROXYE", "Anonymous": "N", "Zipcode": "90032", "Latitude": "34.0843242531", "Longitude": "-118.171015007", "CustomerAccessNumber": "", "LADWPAccountNo": "", "NewContactFirstName": "jj", "NewContactLastName": "rambo", "NewContactPhone": "", "NewContactEmail": "", "ParentSRNumber": "1-10552271", "Priority": "Normal", "Language": "", "ReasonCode": "", "ServiceDate": "", "Source": "", "ClosedDate": "", "Email": "", "FirstName": "", "HomePhone": "", "LastName": "", "LoginUser": "", "ResolutionCode": "", "SRUnitNumber": "", "MobilOS": "", "SRAddress": "5163 E TEMPLETON ST, 90032", "SRAddressName": "", "SRAreaPlanningCommission": "East Los Angeles APC", "SRCommunityPoliceStation": "CENTRAL BUREAU", "SRCouncilDistrictMember": "Jose Huizar", "SRCouncilDistrictNo": "14", "SRDirection": "E", "SRNeighborhoodCouncilId": "48", "SRNeighborhoodCouncilName": "LA-32 NC", "SRStreetName": "TEMPLETON", "SRSuffix": "ST", "SRTBColumn": "F", "SRTBMapGridPage": "595", "SRTBRow": "6", "SRXCoordinate": "6509897", "SRYCoordinate": "1853117", "AssignTo": "NC", 
"Assignee": "NC eWaste Supervisor 01", "Owner": "BOS", "ParentSRStatus": "Open", "ParentSRType": "Electronic Waste", "ParentSRLinkDate": "05/08/2015 10:00:39", "ParentSRLinkUser": "PROXYE", "SRAreaPlanningCommissionId": "5", "SRCommunityPoliceStationAPREC": "HOLLENBECK", "SRCommunityPoliceStationPREC": "4", "SRCrossStreet": "", "ActionTaken": "", "SRCity": "", "RescheduleCounter": "", "SRHouseNumber": "5163", "SourceofRequestCouncil": "", "CCBPremiseType": "", "ContainerBlackCount": "", "ContainerBrownCount": "", "SRIntersectionDirection": "", "SRApproximateAddress": "N", "ContainerGreenCount": "", "OtherBureauName": "", "AssigneeName": "", "AssigneeOrganization": "E-Waste, NC", "AnotherBureauEmailId": "", "ListOfLa311BarricadeRemoval": {}, "ListOfLa311BulkyItem": {}, "ListOfLa311DeadAnimalRemoval": {}, "ListOfLa311GraffitiRemoval": {}, "ListOfLa311InformationOnly": {}, "ListOfLa311MultipleStreetlightIssue": {}, "ListOfLa311SingleStreetlightIssue": {}, "ListOfLa311SrPhotoId": { "La311SrPhotoId": [] }, "ListOfLa311BusPadLanding": {}, "ListOfLa311CurbRepair": {}, "ListOfLa311Flooding": {}, "ListOfLa311GeneralStreetInspection": {}, "ListOfLa311GuardWarningRailMaintenance": {}, "ListOfLa311GutterRepair": {}, "ListOfLa311LandMudSlide": {}, "ListOfLa311Pothole": {}, "ListOfLa311Resurfacing": {}, "ListOfLa311SidewalkRepair": {}, "ListOfLa311StreetSweeping": {}, "ListOfLa311BeesOrBeehive": {}, "ListOfLa311MedianIslandMaintenance": {}, "ListOfLa311OvergrownVegetationPlants": {}, "ListOfLa311PalmFrondsDown": {}, "ListOfLa311StreetTreeInspection": {}, "ListOfLa311StreetTreeViolations": {}, "ListOfLa311TreeEmergency": {}, "ListOfLa311TreeObstruction": {}, "ListOfLa311TreePermits": {}, "ListOfLa311BrushItemsPickup": {}, "ListOfLa311Containers": {}, "ListOfLa311ElectronicWaste": {}, "ListOfLa311IllegalDumpingPickup": {}, "ListOfLa311ManualPickup": {}, "ListOfLa311MetalHouseholdAppliancesPickup": {}, "ListOfLa311MoveInMoveOut": {}, "ListOfLa311HomelessEncampment": {}, 
"ListOfLa311IllegalAutoRepair": {}, "ListOfLa311IllegalConstruction": {}, "ListOfLa311IllegalConstructionFence": {}, "ListOfLa311IllegalDischargeOfWater": {}, "ListOfLa311IllegalDumpingInProgress": {}, "ListOfLa311IllegalExcavation": {}, "ListOfLa311IllegalSignRemoval": {}, "ListOfLa311IllegalVending": {}, "ListOfLa311LeafBlowerViolation": {}, "ListOfLa311NewsRackViolation": {}, "ListOfLa311Obstructions": {}, "ListOfLa311TablesAndChairsObstructing": {}, "ListOfLa311GisLayer": { "La311GisLayer": [ { "A_Call_No": "", "Area": "0", "Day": "MONDAY", "DirectionSuffix": "", "DistrictAbbr": "", "DistrictName": "NC", "DistrictNumber": "", "DistrictOffice": "", "Fraction": "", "R_Call_No": "", "SectionId": "", "ShortDay": "", "StreetFrom": "", "StreetTo": "", "StreetLightId": "", "StreetLightStatus": "", "Type": "GIS", "Y_Call_No": "", "Name": "05082015100148678100", "CommunityPlanningArea": "", "LastUpdatedBy": "", "BOSRadioHolderName": "" } ] }, "ListOfLa311ServiceRequestNotes": { "La311ServiceRequestNotes": [ { "CreatedDate": "05/08/2015 10:00:39", "Comment": "Materials have been out in a normal collection area, unsure why driver missed the e-waste items.", "CreatedByUser": "PROXYE", "IsSrNoAvailable": "", "CommentType": "External", "Notification": "N", "FeedbackSRType": "", "IntegrationId": "050820151001486782", "Date1": "", "Date2": "", "Date3": "", "Text1": "", "AnotherBureau": "", "EmailAddress": "", "ListOfLa311SrNotesAuditTrail": {} }, { "CreatedDate": "05/08/2015 10:00:39", "Comment": "", "CreatedByUser": "PROXYE", "IsSrNoAvailable": "", "CommentType": "Address Comments", "Notification": "N", "FeedbackSRType": "", "IntegrationId": "050820151001486781", "Date1": "", "Date2": "", "Date3": "", "Text1": "", "AnotherBureau": "", "EmailAddress": "", "ListOfLa311SrNotesAuditTrail": {} } ] }, "ListOfLa311SubscribeDuplicateSr": {}, "ListOfChildServiceRequest": {}, "ListOfLa311BillingCsscAdjustment": {}, "ListOfLa311BillingEccAdjustment": {}, 
"ListOfLa311BillingRsscAdjustment": {}, "ListOfLa311BillingRsscExemption": {}, "ListOfLa311SanitationBillingBif": {}, "ListOfLa311SanitationBillingCssc": {}, "ListOfLa311SanitationBillingEcc": {}, "ListOfLa311SanitationBillingLifeline": {}, "ListOfLa311SanitationBillingRssc": {}, "ListOfLa311SanitationBillingSrf": {}, "ListOfLa311DocumentLog": {}, "ListOfAuditTrailItem2": { "AuditTrailItem2": [ { "Date": "05/08/2015 10:00:49", "EmployeeLogin": "SADMIN", "Field": "Assignee", "NewValue": "NC eWaste Supervisor 01", "OldValue": "" } ] }, "ListOfLa311GenericBc": { "La311GenericBc": [ { "ATTRIB_08": "", "NAME": "05082015100148678100", "PAR_ROW_ID": "1-8BDCR", "ROW_ID": "1-8BOCG", "TYPE": "GIS", "ATTRIB_16": "", "ListOfLa311GenericbcAuditTrail": {} }, { "ATTRIB_08": "", "NAME": "05082015100148678", "PAR_ROW_ID": "1-8BDCR", "ROW_ID": "1-8BOCJ", "TYPE": "Service Not Complete", "ATTRIB_16": "", "ListOfLa311GenericbcAuditTrail": {} } ] }, "ListOfLa311ServiceNotComplete": { "La311ServiceNotComplete": [ { "ContainerLocation": "", "ContainerType": "", "DriverFirstName": "", "DriverLastName": "", "MissedCollectionService": "Electronic Waste", "OtherServiceMissedReason": "", "ServiceDateRendered": "", "ServiceMissedReason": "I'm not sure", "TruckNo": "", "Type": "Service Not Complete", "WireBasketLocation": "", "LastUpdatedBy": "", "Name": "05082015100148678" } ] }, "ListOfLa311Other": {}, "ListOfLa311WeedAbatementForPrivateParcels": {}, "ListOfLa311SanitationBillingInquiry": {} } ] } }}The code is below; data2 = jsonpickle.decode((f2.read()))Start = datetime.datetime.now()data2 = jsonpickle.encode(data2)url2 = "myURL"headers2 = {'Content-type': 'text/plain', 'Accept': '/'}r2 = requests.post(url2, data=data2, headers=headers2)decoded2 = json.loads(r2.text)try: r2except requests.exceptions.ConnectTimeout as e: print "Too slow Mojo!"items = []for sr in decoded2['Response']['ListOfServiceRequest']['ServiceRequest']: SRAddress = sr['SRAddress'] Latitude = sr['Latitude'] Longitude = 
sr['Longitude'] ReasonCode = sr['ReasonCode'] SRNumber = sr['SRNumber'] FirstName = sr['FirstName'] LastName = sr['LastName'] ResolutionCode = sr['ResolutionCode'] HomePhone = sr['HomePhone'] CreatedDate = sr['CreatedDate'] UpdatedDate = sr['UpdatedDate'] CreatedDate = datetime.datetime.strptime(CreatedDate, "%m/%d/%Y %H:%M:%S") UpdatedDate = datetime.datetime.strptime(UpdatedDate, "%m/%d/%Y %H:%M:%S") print SRAddress print SRNumberItemInfo = " "for ew in sr["ListOfLa311ServiceRequestNotes"][u"La311ServiceRequestNotes"]:Comment = ew['Comment']print CommentOutput Materials have been out in a normal collection area, unsure why driver missed the e-waste items.If I use the logic above for a response with more than one object returned I receive a KeyError value and am unable to access the array that I want to parse.Example of code with multiple objects returned;Output when I use; I receive a key error if I attempt to do something along the lines of CommodityType = sr['ListOfLa311ElectronicWaste']['ElectronicWasteType'] belowfor sr in decoded2['Response']['ListOfServiceRequest']['ServiceRequest']: CommodityType = sr['ListOfLa311ElectronicWaste'] # ItemType = sr['ElectronicWestType'] # DriverFirstName = sr ['DriverFirstName'] # DriverLastName = sr ['DriverLastName'] # ItemCount = sr['ItemCount'] # ItemInfo += '{0}, {1}, '.format(ItemType, ItemCount) # ParentNumber = sr['Name'] # print CommodityType{u'La311ElectronicWaste': [{u'IllegallyDumped': u'N', u'OtherElectronicWestType': u'hash', u'ItemCount': u'5', u'Name': u'6a31f058-ece1-4e7d-b682-7d9052a512f4', u'MobileHomeSpace': u'', u'DriverLastName': u'', u'ActiveStatus': u'Y', u'DriverFirstName': u'', u'LastUpdatedBy': u'', u'GatedCommunityMultifamilyDwelling': u'Outside the main gate', u'IllegalDumpCollectionLoc': u'', u'ElectronicWestType': u'Other', u'CollectionLocation': u'Gated Community', u'Type': u'Electronic Waste', u'ServiceDateRendered': u'', u'TruckNo': u''}]}{u'La311ElectronicWaste': [{u'IllegallyDumped': u'Y', 
u'OtherElectronicWestType': u'', u'ItemCount': u'5', u'Name': u'3f4d9d20-a712-4be3-822f-e6a45219c1cf', u'MobileHomeSpace': u'', u'DriverLastName': u'', u'ActiveStatus': u'Y', u'DriverFirstName': u'', u'LastUpdatedBy': u'', u'GatedCommunityMultifamilyDwelling': u'', u'IllegalDumpCollectionLoc': u'Cul De Sac', u'ElectronicWestType': u'Electronic Equipment', u'CollectionLocation': u'Alley', u'Type': u'Electronic Waste', u'ServiceDateRendered': u'', u'TruckNo': u''}]}

How do I handle multiple outputs the same way that I have handled the single output? | I solved this; this thread was a huge reference: Decoding nested JSON with multiple 'for' loops

for sr in ElectronicType:
    for illegaldump in ElectronicType['La311ElectronicWaste']:
        illegalewaste = illegaldump['IllegallyDumped']
        print illegalewaste

Output:

Y
N
N
N
N
N
Y
Y
N
Y
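A defensive sketch of the same traversal (the sample dict here is a trimmed, hypothetical version of the service payload): chaining dict.get with empty defaults means service requests that lack the electronic-waste block are simply skipped instead of raising the KeyError seen in the question:

```python
decoded = {
    "Response": {"ListOfServiceRequest": {"ServiceRequest": [
        {"SRNumber": "1-1", "ListOfLa311ElectronicWaste": {
            "La311ElectronicWaste": [{"IllegallyDumped": "N"}]}},
        {"SRNumber": "1-2", "ListOfLa311ElectronicWaste": {}},  # no items
    ]}}
}

flags = []
for sr in decoded["Response"]["ListOfServiceRequest"]["ServiceRequest"]:
    # .get(..., {}) / .get(..., []) tolerate SRs without an e-waste block
    for item in sr.get("ListOfLa311ElectronicWaste", {}).get("La311ElectronicWaste", []):
        flags.append(item["IllegallyDumped"])

flags  # ['N']
```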
How to return a value from tuple Okay, so say I have a tuple like this:

values = ('1', 'cat', '2', 'bat', '3', 'rat', '4', 'hat', '5', 'sat')

What loop would I have to write to ask for a user input of an integer and have it return the word? I.e. the user inputs 1 and it returns cat, or the user inputs 123 and it returns catbatrat (I don't need the output to have spaces).

Edit: I know it would make more sense to use a dictionary, but in this instance I would not like to. My code:

values = ('1', 'cat', '2', 'bat', '3', 'rat', '4', 'hat', '5', 'sat')
message = raw_input("Enter a number for the corresponding word: ")
for char in message:
    print values | As you say, this is much better done via dictionary. But since you insist: you can look up the index for the number in the tuple, and then print the element at the next index:

for char in message:
    print values[values.index(char)+1],

With input '123', this will print:

cat bat rat

Note that this will fail horribly if any of the characters entered aren't present, since the index method will raise an exception. I'm wondering if you just want to know how to access individual tuple elements; you can index them just like you would a list. For example, given:

values = ('cat', 'rat', 'bat', 'hat')

You could just say values[0] to get 'cat', for example.
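A slightly more defensive sketch of the same idea (the helper name is illustrative): only look up characters that actually appear at the even, key-like positions of the tuple, and join the results without spaces as the question asks:

```python
values = ('1', 'cat', '2', 'bat', '3', 'rat', '4', 'hat', '5', 'sat')

def words_for(message):
    keys = values[::2]  # every other element acts as a "key"
    # skip unknown characters instead of letting index() raise ValueError
    return ''.join(values[values.index(ch) + 1] for ch in message if ch in keys)

words_for('123')  # 'catbatrat'
words_for('19x')  # unknown characters are skipped: 'cat'
```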
Text Detection of Labels using PyTesseract A label detection tool that automatically identifies and alphabetically sorts the images based on equipment number (19-V1083AI). I used the pytesseract library to convert the image to a string after the contours of the equipment label were identified. Although the code runs correctly, it never outputs the equipment number. It's my first time using the pytesseract library and the goodFeaturesToTrack function. Any help would be greatly appreciated!

Original Image

import numpy as np
import cv2
import imutils  # resize image
import pytesseract  # convert img to string
from matplotlib import pyplot as plt

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Read the image file
image = cv2.imread('Car Images/s3.JPG')

# Resize the image - change width to 500
image = imutils.resize(image, width=500)

# Display the original image
cv2.imshow("Original Image", image)
cv2.waitKey(0)

# RGB to Gray scale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)
cv2.waitKey(0)

# Noise removal with iterative bilateral filter (removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)

corners = cv2.goodFeaturesToTrack(gray, 60, 0.001, 10)
corners = np.int0(corners)

for i in corners:
    x, y = i.ravel()
    cv2.circle(image, (x, y), 0, 255, -1)

coord = np.where(np.all(image == (255, 0, 0), axis=-1))
plt.imshow(image)

# Use tesseract to convert image into string
text = pytesseract.image_to_string(image, lang='eng')
print("Equipment Number is:", text)

plt.show()

Output Image2

Note: It worked with one of the images but not for the others. Output Image2 | I found using a particular configuration option for PyTesseract will find your text -- and some noise. Here are the configuration options explained: https://stackoverflow.com/a/44632770/42346 For this task I chose: "Sparse text. Find as much text as possible in no particular order." Since there's more "text" returned by PyTesseract, you can use a regular expression to filter out the noise. This particular regular expression looks for two digits, a hyphen, five digits or characters, a space, and then two digits or characters. This can be adjusted to your equipment number format as necessary, but I'm reasonably confident this is a good solution because there's nothing else like this equipment number in the returned text.

import re
import cv2
import pytesseract

image = cv2.imread('Fv0oe.jpg')
text = pytesseract.image_to_string(image, lang='eng', config='--psm 11')

for line in text.split('\n'):
    if re.match(r'^\d{2}-\w{5} \w{2}$', line):
        print(line)

Result (with no image processing necessary):

19-V1083 AI
How do I package and add my Python app to my launchpad repository? I have a launchpad account and an activated PPA, but I have no idea how to package my app and upload it. I write programs with Python using Tkinter. Can someone explain? | You'll need to package your project into a .deb. Here's a good tutorial: https://wiki.debian.org/Python/Packaging And here is an example packaged app which has Tkinter as a dependency: http://packages.ubuntu.com/trusty/python-pil.imagetk Snippet from its control file:

Source: pillow
Section: python
Priority: optional
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
XSBC-Original-Maintainer: Matthias Klose <doko@debian.org>
Build-Depends: debhelper, tk-dev, dpkg-dev (>= 1.16.1~), python-all-dev (>= 2.7.3-11~), python-all-dbg, python-setuptools, python3-all-dev (>= 3.3), python3-all-dbg, python3-setuptools, python-tk, python-tk-dbg, python3-tk, python3-tk-dbg (>= 3.3), libsane-dev, libfreetype6-dev, libjpeg8-dev, zlib1g-dev, liblcms2-dev, libwebp-dev
Build-Conflicts: python-numarray
Standards-Version: 3.9.5
XS-Testsuite: autopkgtest
How to sort the top scores from an external text file with the names of the people who achieved those scores still linked? Python I have managed to write the scores and names of anybody who wins a simple dice game I created to an external text document. How would I sort the top scores from this document and display them on the console alongside the name that achieved that score?

Code used to write the score and name to the text document:

winners = open("winners.txt", "a")
winners.write(p1_username)  # writes the name
winners.write("\n")
winners.write(str(p1_total))  # writes the score

I tried entering the data into an array of tuples and sorting them according to the numerical value in each tuple, so I tried:

count = len(open("winners.txt").readlines( ))
winners_lst = [(None, None)] * count
f = open("winners.txt", "r")
lines2 = f.readlines()
winners_lst[0][0] = lines2[0]
winners_lst[0][0] = lines2[1]

but that returns: TypeError: 'tuple' object does not support item assignment

Edit: I am not asking why my solution didn't work, I am asking what I could do to make it work, or for an alternative solution. | First, the write operation should be cleaner:

with open('winners.txt', 'w') as f:
    for p1_username, p1_score in [('foo', 1), ('bar', 2), ('foobar', 0)]:
        print(f'{p1_username}\n{p1_score}', file=f)

Content of winners.txt:

foo
1
bar
2
foobar
0

Then, you can read this back into a list of tuples:

with open('winners.txt') as f:
    lines = f.read().splitlines()
winners_lst = [(a, b) for a, b in zip(lines[::2], lines[1::2])]

Content of winners_lst:

[('foo', '1'), ('bar', '2'), ('foobar', '0')]

After ingest, you may convert the scores into int:

winners_lst = [(a, int(b)) for a, b in winners_lst]

And sort by score descending (if such is your goal):

sorted(winners_lst, key=lambda ab: ab[1], reverse=True)
# out: [('bar', 2), ('foo', 1), ('foobar', 0)]
Trying to repeat the regex breaks the regex I have a working regex that matches ONE of the following lines:A punctuation from the following list [.,!?;]A word that is preceded by the beginning of the string or a space.Here's the regex in question ([.,!?;] *|(?<= |\A)[\-'’:\w]+)What I need it to do however is for it to match 3 instances of this. So, for example, the ideal end result would be something like this.Sample text: "This is a test. Test"Output"This" "is" "a""is" "a" "test""a" "test" ".""test" "." "Test"I've tried simply adding {3} to the end in the hopes of it matching 3 times. This however results in it matching nothing at all or the occasional odd character. The other possibility I've tried is just repeating the whole regex 3 times like so ([.,!?;] *|(?<= |\A)[\-'’:\w]+)([.,!?;] *|(?<= |\A)[\-'’:\w]+)([.,!?;] *|(?<= |\A)[\-'’:\w]+) which is horrible to look at but I hoped it would work. This had the odd effect of working, but only if at least one of the matches was one of the previously listed punctuation.Any insights would be appreciated.I'm using the new regex module found here so that I can have overlapping searches. | What is wrong with your approachThe ([.,!?;] *|(?<= |\A)[\-'’:\w]+) pattern matches a single "unit" (either a word or a single punctuation from the specified set [.,!?;] followed with 0+ spaces. Thus, when you fed this pattern to the regex.findall, it only could return just the chunk list ['This', 'is', 'a', 'test', '. ', 'Test'].SolutionYou can use a slightly different approach: match all words, and all chunks that are not words. 
Here is a demo (note that C'est and AUX-USB are treated as single "words"):>>> pat = r"((?:[^\w\s'-]+(?=\s|\b)|\b(?<!')\w+(?:['-]\w+)*))\s*((?1))\s*((?1))">>> results = regex.findall(pat, text, overlapped = True)>>> results[("C'est", 'un', 'test'), ('un', 'test', '....'), ('test', '....', 'aux-usb')]Here, the pattern has 3 capture groups, and the second and third one contain the same pattern as in Group 1 ((?1) is a subroutine call used in order to avoid repeating the same pattern used in Group 1). Group 2 and Group 3 can be separated with whitespaces (not necessarily, or the punctuation glued to a word would not be matched). Also, note the negative lookbehind (?<!') that will ensure that C'est is treated as a single entity.ExplanationThe pattern details:((?:[^\w\s'-]+(?=\s|\b)|\b(?<!')\w+(?:['-]\w+)*)) - Group 1 matching:(?:[^\w\s'-]+(?=\s|\b) - 1+ characters other than [a-zA-Z0-9_], whitespace, ' and - immediately followed with a whitespace or a word boundary| - or \b(?<!')\w+(?:['-]\w+)*) - 1+ word characters not preceded with a ' (due to (?<!')) and preceded with a word boundary (\b) and followed with 0+ sequences of - or ' followed with 1+ word characters.\s* - 0+ whitespaces((?1)) - Group 2 (same pattern as for Group 1)\s*((?1)) - see above |
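If the third-party regex module (and its overlapped matching) isn't available, the same triples can be produced with the stdlib: tokenize once, then slide a three-wide window over the token list. A minimal sketch — the token pattern here is a simplification of the one in the answer:

```python
import re

text = "This is a test. Test"
# one token per word or punctuation mark (simplified version of the pattern above)
tokens = re.findall(r"[.,!?;]|[\w'’-]+", text)
# overlapping windows of three consecutive tokens
triples = list(zip(tokens, tokens[1:], tokens[2:]))
print(triples)
# [('This', 'is', 'a'), ('is', 'a', 'test'), ('a', 'test', '.'), ('test', '.', 'Test')]
```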
Parameters undefined using lmfit I am trying to fit a curve to the equation below with the given data. The equation is Rate=k*Concentration^n. I am having trouble as the n when fitted is -6, which is not possible so I am trying to set a bound at min=0. However, I am getting a undefined term parameter errors. Any help would be great thanks. from IPython import get_ipython get_ipython().magic('reset -sf')import numpy as npfrom lmfit import Modelimport matplotlib.pyplot as plt# Homework Problem #2 Time = np.array ([0, 48, 76, 124, 204, 238, 289])Concentration =np.array ([19.04, 17.6, 16.9, 15.8, 14.41, 13.94, 13.37])# Rate Determination Rate=Concentration/Time # Model Definition def rateEq(Concentration, k, n): return k*(Concentration)**n# Model creationmodel=Model(rateEq)# Parametersparams = parameters()params.add(k=0.001)params.add(n=0.001)par.set(min=0)# Data Fit vresult=model.fit(Rate, params, Concentration=Concentration)# Print and Plot Results print(result.fit_report())result.plot_fit() | parameters is undefined because you do not define it anywhere. You use it as params = parameters(), probably implying a function call, but you do not define or import that function.... Similarly, par is undefined because you do not define it anywhere.You almost certainly wantfrom lmfit import Model, Parameters # explicitly import Parametersparams = Parameters() # note the capitalizationparams.add('k', value=0.001)params.add('n', value=0.001)what is less clear (because I cannot guess what par is)is whether you wantparams['k'].min = 0or params['n'].min = 0Also, just as a warning: Since your time[0] is 0, your Rate[0] will be infinite, which will cause trouble when running the fit. |
splitting list, extracting an element and adding it in python I am new in python.I have a list with seperator of "::" and it seems like that; 1::Erin Burkovich (2000)::Drama 2::Assassins (1995)::ThrillerI want to split them by "::" and extract the year from name and add it into the end of the line. Each movie has it own index.Desired list seems like; 1::Erin Burkovich:Drama::2000 2::Assasins:Thriller:1995I have below code:for i in movies: movie_id,movie_title,movie_genre=i.split("::") movie_year=((movie_title.split(" "))[-1]).replace("(","").replace(")","") movies.insert(-1, movie_year)but it doesn't work at all.Any help ?Thanks in advance. | You're having infinite loop, because when you add an item, your loop needs to iterate on more items, and then you're adding another item...You should create a new list with the result.Also, you can extract the list in a much easier way:movie_year = re.findall('\d+', '(2000)') |
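Putting the pieces together: a minimal sketch that builds a new list (rather than mutating movies while iterating over it, which is what caused the infinite loop), using re to pull the year out of the title. The two entries are the samples from the question:

```python
import re

movies = [
    "1::Erin Burkovich (2000)::Drama",
    "2::Assassins (1995)::Thriller",
]

reordered = []
for entry in movies:
    movie_id, title, genre = entry.split("::")
    year = re.findall(r"\d{4}", title)[-1]      # grab the year from the title
    title = re.sub(r"\s*\(\d{4}\)", "", title)  # drop "(year)" from the title
    reordered.append("::".join([movie_id, title, genre, year]))

print(reordered)
# ['1::Erin Burkovich::Drama::2000', '2::Assassins::Thriller::1995']
```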
Python Logical Operation I'm pretty new to python and I'm working on a web scraping project using the Scrapy library. I'm not using the built in domain restriction because I want to check if any of the links to pages outside the domain are dead. However, I still want to treat pages within the domain differently from those outside it and am trying to manually determine if a site is within the domain before parsing the response. Response URL: http://www.siteSection1.domainName.comIf Statement:if 'domainName.com' and ('siteSection1' or 'siteSection2' or 'siteSection3') in response.url: parsePageInDomain()The above statement is true (the page is parsed) if 'siteSection1' is the first to appear in the list of or's but it will not parse the page if the response url is the same but the if statement were the following:if 'domainName.com' and ('siteSection2' or 'siteSection1' or 'siteSection3') in response.url: parsePageInDomain()What am I doing wrong here? I haven't been able to think through what is going on with the logical operators very clearly and any guidance would be greatly appreciated. Thanks! | or doesn't work that way. Try any:if 'domainName.com' in response.url and any(name in response.url for name in ('siteSection1', 'siteSection2', 'siteSection3')):What's going on here is that or returns a logical or of its two arguments - x or y returns x if x evaluates to True, which for a string means it's not empty, or y if x does not evaluate to True. So ('siteSection1' or 'siteSection2' or 'siteSection3') evaluates to 'siteSection1' because 'siteSection1' is True when considered as a boolean.Moreover, you're also using and to combine your criteria. and returns its first argument if that argument evaluates to False, or its second if the first argument evaluates to True. Therefore, if x and y in z does not test to see whether both x and y are in z. in has higher precedence than and - and I had to look that up - so thattests if x and (y in z). 
Again, domainName.com evaluates as True, so this will return just y in z.any, conversely, is a built in function that takes an iterable of booleans and returns True or False - True if any of them are True, False otherwise. It stops its work as soon as it hits a True value, so it's efficient. I'm using a generator expression to tell it to keep checking your three different possible strings to see if any of them are in your response url. |
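The short-circuit behaviour described above is easy to verify interactively — a quick sketch:

```python
url = "http://www.siteSection1.domainName.com"

# `or` returns its first truthy operand, so the parenthesized group
# collapses to a single string before `in` is ever evaluated:
group = 'siteSection2' or 'siteSection1' or 'siteSection3'
print(group)  # siteSection2

# the fixed test from the answer:
matches = 'domainName.com' in url and any(
    name in url
    for name in ('siteSection1', 'siteSection2', 'siteSection3'))
print(matches)  # True
```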
Set timezone to EST in Google App Engine (Python) Can anyone advise how I change the timezone for my google app engine application? It's running python, I need to set the timezone so all datetime.now() etc work on EST timezone instead of the default?Thanks! | Have a look at http://timezones.appspot.com/ You cannot make datetime.now() use a custom time zone directly (App Engine clocks run in UTC), but you can convert the returned times to your zone as needed.
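The answer dates from the old App Engine/Python 2 era, where pytz was the usual tool; on a modern Python (3.9+) the same conversion can be done with the stdlib zoneinfo module — a sketch:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# work in UTC internally, convert only for display
utc_now = datetime.now(timezone.utc)
eastern = utc_now.astimezone(ZoneInfo("America/New_York"))
print(eastern.isoformat())  # offset is -05:00 (EST) or -04:00 (EDT)
```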
Django counter in loop to index list I'm passing two lists to a template. Normally if I was iterating over a list I would do something like this{% for i in list %}but I have two lists that I need to access in parallel, ie. the nth item in one list corresponds to the nth item in the other list. My thought was to loop over one list and access an item in the other list using forloop.counter0 but I can't figure out the syntax to get that to work.Thanks | You can't. The simple way is to preprocess your data into a zipped list, like this. In your view:x = [1, 2, 3]y = [4, 5, 6]zipped = list(zip(x, y)) # list() so the template can iterate it more than once (zip is a one-shot iterator in Python 3)Then in your template:{% for x, y in zipped %} {{ x }} - {{ y }}{% endfor %}
SegmentNotFoundException in AWS Xray with Lambda I am trying to write a Lambda function to copy files from one s3 bucket to another integrated with AWS Xray. Below is the code for Lambda function. I am getting the error aws_xray_sdk.core.exceptions.exceptions.SegmentNotFoundException: cannot find the current segment/subsegment, please make sure you have a segment openI have included the Aws xray SDK in my deployment package. Also, begin segment and end segment are included in the code. Please give a solution to this error.import boto3from aws_xray_sdk.core import xray_recorderfrom aws_xray_sdk.core import patchpatch(['boto3'])client = boto3.client('s3')s3 = boto3.resource('s3')SourceBucket = 'bucket1'DestBucket = 'bucket2'list1=[];def lambda_handler(event, context): response = client.list_objects(Bucket=SourceBucket) if 'Contents' in response: for item in response['Contents']: list1.append(item['Key']); put_object_into_s3() for name in list1: copy_source = { 'Bucket': SourceBucket, 'Key': name } response = s3.meta.client.copy(copy_source, DestBucket, name) | the context management for a Lambda environment would never throw a SegmentNotFoundException. If there is no active segment/subsegment in thread local storage, it constructs a segment based on environment variables set in Lambda container. See https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/core/lambda_launcher.py#L79The lambda context management will be used when an environment variable LAMBDA_TASK_ROOT is set. Are you using some tool to run your Lambda function locally or have you enabled your Lambda function's active tracing? |
Mocking python subprocess.call function and capture its system exit code Writing test cases to handle successful and failed python subprocess calls, I need to capture subprocess.call returning code.Using python unittest.mock module, is it possible to patch the subprocess.call function and capture its real system exit code?Consider an external library with the following code:## <somemodule.py> fileimport subprocessdef run_hook(cmd): # ... subprocess.call(cmd, shell=True) sum_one = 1+1 return sum_oneI can't modify the function run_hook. It is not part of my code. But, the fact is that subprocess.call is being called among other statements.Here we have a snippet of code returning a forced system exit error code 1:## <somemodule.py> tests fileimport subprocessfrom somemodule import run_hooktry: from unittest.mock import patchexcept ImportError: from mock import patch@patch("subprocess.call", side_effect=subprocess.call)def test_run_hook_systemexit_not_0(call_mock): python_exec = sys.executable cmd_parts = [python_exec, "-c", "'exit(1)'"] # Forced error code 1 cmd = " ".join(cmd_parts) run_hook(cmd) call_mock.assert_called_once_with(cmd, shell=True) # I need to change the following assertion to # catch real return code "1" assert "1" == call_mock.return_value(), \ "Forced system exit(1) should return 1. Just for example purpose"How can I improve this test to capture the expected real value for any subprocess.call return code?For compatibility purposes, new subprocess.run (3.5+) can't be used. This library is still broadly used by python 2.7+ environments. | About subprocess.call, the documentation says: Run the command described by args. Wait for command to complete, then return the returncode attribute.All you need to do is to modify your run_hook function and return the exit code:def run_hook(cmd): # ... 
return subprocess.call(cmd, shell=True)This will simplify your test code.def test_run_hook_systemexit_not_0(): python_exec = sys.executable cmd = python_exec + " -c 'exit(1)'" assert run_hook(cmd) == 1My advice: use subprocess.run instead Edit If you want to check the exit code of subprocess.call you need to patch it with your own version, like this:import subprocess_original_call = subprocess.calldef assert_call(*args, **kwargs): assert _original_call(*args, **kwargs) == 0Then, you use assert_call as a side effect function for your patch:from unittest.mock import patch@patch('subprocess.call', side_effect=assert_call)def test(call): python_exec = sys.executable cmd = python_exec + " -c 'exit(1)'" run_hook(cmd)
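If run_hook itself can't be changed (as in the original question), the real exit status can still be captured by recording it inside the side_effect function. A self-contained sketch — run_hook here is a stand-in for the library function, using a list without shell=True for portability:

```python
import subprocess
import sys
from unittest.mock import patch

def run_hook(cmd):                    # stand-in for the unmodifiable function
    subprocess.call(cmd)
    return 1 + 1

_original_call = subprocess.call      # keep a reference before patching
captured = {}

def recording_call(*args, **kwargs):
    # forward to the real subprocess.call and remember its return code
    captured['returncode'] = _original_call(*args, **kwargs)
    return captured['returncode']

with patch('subprocess.call', side_effect=recording_call) as call_mock:
    run_hook([sys.executable, '-c', 'exit(1)'])

assert call_mock.call_count == 1
assert captured['returncode'] == 1    # the real exit status of the child
```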
Python matched the value on json response based on user input, and pass the value to variable please give the solution based on my code below, and also can you help me, so in order to enter the country name, user must see the country list first, i mean in first step user must input 'country' and then it gives response the 'c_list' like in the screenshot, and then user can input the country they choose, so it was like there was a step before you can input country name :)from nltk.tokenize import sent_tokenize, word_tokenizec_list = ['Australia', 'CHINA', 'Combodia', 'EUROPE', 'HONG KONG','INDIA', 'INDONESIA', 'JAPAN', 'KOREA', 'MALAYSIA', 'Myanmar', 'New Zealand', 'PHILIPPINES', 'Russia', 'senchilles', 'SINGAPORE', 'Sri Lanka', 'TAIWAN', 'TestCountry', 'THAILAND', 'UNITED KINGDOM', 'USA', 'Vietnam', 'XYTestCountry']r = input(" user input : ")wt = (word_tokenize(r))if any("country" in s for s in wt): reply = c_listelse: reply = "Sorry I cant answer that right now."print(reply)the result above is from api responsemy question is, how can i add code that can check if the country exist in the 'c_list' list when user reply any country listed above.Then if its exist i want to pass the country code on this json if matched with user reply, then i want to pass the country code to another api request and so on"DATA": { "data":[ { "countryId": "26", "countryCode" : "AU", "name" : "Australia" }, { "countryId": "17", "countryCode" : "ID", "name" : "Indonesia" } ] } | Here are some snippets to give you a good start.Say user response is rv holding country name. 
Check if country exists in 'c_list'.country_lower = rv.lower()is_in_country_list = country_lower in map(str.lower, c_list)Parse country code from JSON sample.countries = { "DATA": { "data": [{ "countryId": "26", "countryCode": "AU", "name": "Australia" }, { "countryId": "17", "countryCode": "ID", "name": "Indonesia" }] }}def take_first(predicate, iterable): for element in iterable: if predicate(element): yield element breakcountry_found = list( take_first( lambda e: e['name'].lower() == country_lower, countries['DATA']['data'] ))default_country_code = 'US'country_code = ( country_found[0]['countryCode'] if country_found else default_country_code) # note the [0]: take_first yields a list with at most one dict in it
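The take_first helper plus list() can also be collapsed into the built-in next(), which returns the matching dict directly (or a default) instead of a one-element list — a sketch using the same sample data:

```python
countries = {
    "DATA": {
        "data": [
            {"countryId": "26", "countryCode": "AU", "name": "Australia"},
            {"countryId": "17", "countryCode": "ID", "name": "Indonesia"},
        ]
    }
}

def lookup_code(reply, default="US"):
    reply = reply.lower()
    # next() stops at the first match, like take_first, but returns it directly
    found = next(
        (e for e in countries["DATA"]["data"] if e["name"].lower() == reply),
        None)
    return found["countryCode"] if found else default

print(lookup_code("INDONESIA"))   # ID
print(lookup_code("Atlantis"))    # US (falls back to the default)
```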
Downloading gensim models behind a proxy I am trying to download gensim pretrained word2vec models behind a proxy. I receive this error. urllib.error.URLError: urlopen error [Errno 11004] getaddrinfo failed for the following code import gensim.downloader as apiapi.info() I have already set proxy using set HTTPS_PROXY=https://username:xxxxxx@myproxy.com and have been successfully downloading packages using pip. Is there a way to add my proxy to gensim? | You can use command line on terminal to download instead of run code:export https_proxy=https://username:xxxxxx@myproxy.com python -m gensim.downloader --download text8 |
Python equivalent for nested c++ style for loop In c++ for (auto i = min; i < sqrt_max; i++ ) { for (auto j = i; i*j <= sqrt_max; j++) {I am trying to do the exact same thing in pythonfor i in enumerate(range(min, sqrt_max + 1)): for j in enumerate(range(min, i * j < sqrt_max + 1)):I get undefined name j | Do not use enumerate(..) in both your loops. enumerate takes something that returns a pair of index, element for each element in its argument.You can not use j the way you do because it is defined only within the for-loop body.What you want is something like below:for i in range(min, sqrt_max): # i will not take the value of sqrt_max. for j in range(i, 1 + sqrt_max//i): # i is defined here within range, but j is not. ... |
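One way to sanity-check the translation: a faithful while-loop port of the two C++ for-loops and the range-based version produce the same (i, j) pairs — a quick sketch:

```python
def cxx_style(lo, sqrt_max):
    # literal translation of the two C++ for-loops
    pairs = []
    i = lo
    while i < sqrt_max:
        j = i
        while i * j <= sqrt_max:
            pairs.append((i, j))
            j += 1
        i += 1
    return pairs

def range_style(lo, sqrt_max):
    # idiomatic version: for positive i, the bound i*j <= sqrt_max
    # is the same as j <= sqrt_max // i
    return [(i, j)
            for i in range(lo, sqrt_max)
            for j in range(i, sqrt_max // i + 1)]

assert cxx_style(2, 10) == range_style(2, 10) == [
    (2, 2), (2, 3), (2, 4), (2, 5), (3, 3)]
```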
how to get a uniform white balance on a set of images (timelapse) As I am realising a timelapse film, I've taken thousands of photos of a set and due to the different weather and light conditions the pictures are very differently exposed from sunshine to haze and rain.I am looking for a way to generalise the white balance of all the images in order to have a timelapse as smooth as possible using python.I already tried using defilcekr filter and correcting the white balance using a variable, but I couldn't find a way to generelise the parameters to all the images of different conditions.here are some examples of the images I have : | For a visually pleasing time lapse, there are many things you can try. I'd additionally recommend a temporal blur (motion blur) to even out local lighting changes and fast-moving objects. That also reproduces the impression of a normal video, which always has some non-zero exposure time, i.e. motion blur.This code approximately works in a linear space, tries to balance colors (color temperature) and brightness. All this isn't physically accurate but close to what's doable with the information given. One glaring defect: I'm not "tone-mapping" here, so anything that's too bright is simply clipped to white.The "Gray world" is just one possible way to estimate color balance. There are a bunch more, variously complex. Your original pictures may already be color-balanced okay...def gamma_decompress(im): return np.power(im, 2.2)def gamma_compress(im): return np.power(im, 1/2.2)def measure_gray_world(im): avg = np.mean(im, axis=(0, 1)) return avgfor im in images: im = gamma_decompress(im / 255) avg = measure_gray_world(im) # or im[height // 3:] im = im / avg * 0.3 # move mean down so picture isn't blown out im = gamma_compress(im) imshow(im)You could also only respect certain areas of the picture for the color balance... say the lower two thirds, mostly ignoring the sky. 
That'll make the picture look colder because most of it is adjusted to be more neutral, and the sky gets to be very blue. |
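The gray-world step in the answer boils down to "divide each channel by its mean, then rescale to a target brightness" — a dependency-free sketch on a toy list of RGB pixels (floats in [0, 1]):

```python
def gray_world(pixels, target=0.3):
    # pixels: list of (r, g, b) floats; returns a balanced copy whose
    # per-channel means all equal `target`
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    return [tuple(p[c] / avg[c] * target for c in range(3)) for p in pixels]

pixels = [(0.8, 0.4, 0.2), (0.4, 0.2, 0.1)]   # toy frame with a reddish cast
balanced = gray_world(pixels)
for c in range(3):
    mean_c = sum(p[c] for p in balanced) / len(balanced)
    assert abs(mean_c - 0.3) < 1e-9           # every channel is now neutral
```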
Setting Azure EnvironmentCredential() I am on an Azure VM with a dynamic IP adress. When I am logged in, I am able to retrieve secrets using the following python code without any issues;from azure.identity import DefaultAzureCredentialfrom azure.keyvault.secrets import SecretClientcredential = DefaultAzureCredential()secret_client = SecretClient(vault_url="https://xxxx/", credential=credential)secret = secret_client.get_secret("testSecret")I need to retrieve the secrets when the VM is on but when I am not logged to enable other processes to run. I noticed the code above was failing when I am logged off. The system admin gave me the AZURE_CLIENT_ID, AZURE_CLIENT_SECRET,AZURE_TENANT_ID and VAULT_URL for me to set them as EnvironmentCredentials.I set them in the CMD as follows;SETX AZURE_CLIENT_ID "pppp"SETX AZURE_CLIENT_SECRET "mmmm"SETX AZURE_TENANT_ID "kkkk"SETX VAULT_URL "xxxx"When I check the system environment settings, I can see they have been setI tried retrieving my secret using this code,from azure.keyvault.secrets import SecretClientVAULT_URL = os.environ["VAULT_URL"]credential = EnvironmentCredential()client = SecretClient(vault_url=VAULT_URL, credential=credential)password = client.get_secret("testSecret").valueI got this errorraise HttpResponseError(response=response, model=error)azure.core.exceptions.HttpResponseError: (Forbidden) The user, group or application 'pppp;iss=https://sts.windows.net/kkkk/' does not have secrets get permission on key vault 'name of my vault-vault;location=australiasoutheast'. 
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287QuestionThe system admin confirms the credentials issued are the service principal's correct details.How can correct this or what am I doing wrong?Is there a way for me to print DefaultAzureCredentials so that I set the same as EnvironmentCredential because I believe why I recover secrets when I am logged in is that the credentials are cached when I sign in?Your help will highly be appreciated. | How can correct this or what am I doing wrong?The error means your service principal does not have the correct secret permission in your keyvault -> Access policies, to solve the issue, add the application(service principal) mentioned in the error message to the Access policies with the Get secret permission in your keyvault in the azure portal. If it still not work, please try to set the environment variables in the System variables instead of User variables for xxx as shown in your screenshot.Is there a way for me to print DefaultAzureCredentials so that I set the same as EnvironmentCredential because I believe why I recover secrets when I am logged in is that the credentials are cached when I sign in?No need to do this, the DefaultAzureCredential attempts to authenticate via the following mechanisms in this order, see here. If you didn't set the environment variables before, it should use the managed identity of your VM to authenticate. |
Concatenate columns based on certain group I have dataframe df1 like this: Schema table Name temp0 schema1 table1 col1 INT(1,2) NOT NULL1 schema1 table1 col2 INT(3,2) NOT NULL2 schema1 table1 col3 SMALLINT(6,2) NULL3 schema1 table1 col4 SMALLINT(9,2) NULL4 schema2 table2 col6 CHAR(20,2) NULL5 schema2 table2 col7 CHAR(20,4) NULL6 schema2 table2 col8 CHAR(6,5) NULL7 schema2 table2 col9 CHAR(6,3) NULLIn this dataframe I have two different schemas and tables(table1 and table2). I want to build create table statement out of this.So, from the above dataframe I need a new dataframe which will have 2 rows (since 2 different tables in df1 ) and the value would bedf2: ddl_statement0 create table schema1.table1 (col1 INT(1,2) NOT NULL,col2 INT(3,2) NOT NULL,col3 SMALLINT(6,2) NULL,col4 SMALLINT(9,2) NULL)1 create table schema2.table2 (col6 CHAR(20,2) NULL,col7 CHAR(20,4) NULL,col8 CHAR(6,5) NULL,col9 CHAR(6,3) NULL)How can I achieve this without using a loop? | Use groupby with an f-string, after joining the Name and temp columns into one column definition:df['coldef'] = df['Name'] + ' ' + df['temp']df2 = df.groupby(['Schema', 'table'])['coldef'] \ .apply(lambda x: f"create table {x.name[0]}.{x.name[1]} ({', '.join(x)})") \ .reset_index(drop=True).to_frame('ddl_statement')Output:>>> df2 ddl_statement0 create table schema1.table1 (col1 INT(1,2) NOT NULL, col2 INT(3,2) NOT NULL, col3 SMALLINT(6,2) NULL, col4 SMALLINT(9,2) NULL)1 create table schema2.table2 (col6 CHAR(20,2) NULL, col7 CHAR(20,4) NULL, col8 CHAR(6,5) NULL, col9 CHAR(6,3) NULL)
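The same grouping can be sketched without pandas — a plain dict keyed on (schema, table) reproduces the DDL strings (column definitions abbreviated from the question's table):

```python
rows = [
    ("schema1", "table1", "col1", "INT(1,2) NOT NULL"),
    ("schema1", "table1", "col2", "INT(3,2) NOT NULL"),
    ("schema2", "table2", "col6", "CHAR(20,2) NULL"),
]

groups = {}
for schema, table, name, col_type in rows:
    # collect "name type" strings per (schema, table) key;
    # dict preserves insertion order in Python 3.7+
    groups.setdefault((schema, table), []).append(f"{name} {col_type}")

ddl = [f"create table {schema}.{table} ({', '.join(cols)})"
       for (schema, table), cols in groups.items()]
print(ddl[0])
# create table schema1.table1 (col1 INT(1,2) NOT NULL, col2 INT(3,2) NOT NULL)
```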
How to use the Anaconda environment on blender? I'm having problems to use some modes like numpy and pandas on blender, apparently the blender's python do not allow us to install packages using pip; so I thought that I could resolve this issue changing its environment to the Anaconda or something like that. I looked for solutions, but all I founded worked on windows but I use ubuntu. If someone can help me, I really appreciate it. | after struggling with this several times, and coming across it right now, I figured I'd share my solution.long story short: just install things into the default blender environment or install bpy into an anaconda environmentOn linux you may be able to follow Failxxx's answer, or using mklink in Windows maybe the same (I couldn't get it to work, got weird "compiler" errors when trying to import bpy in blender).You can also check here for various workarounds:https://docs.blender.org/api/current/info_tips_and_tricks.htmlIn the end, I've had success building bpy as a module inside of an anaconda environment (follow tutorial in link), or installing whatever packages I want inside of my Blender python environment.To do this.. find out where your Blender's python.exe is stored.. mine is here:C:\Program Files\Blender Foundation\Blender 3.2\3.2\python\binthen, open a cmd as admin (admin was extremely crucial for my machine?) in this directory and do something like:python -m pip install pandasthis will install the package into your Blender python environment. Sometimes I had to uninstall and reinstall with admin incase anything had been installed to C:\Users\%USERNAME%\AppData\Roaming - Python or Blender folders.if it's only a handful of packages, this is the best way in my opinion. Just know that this permanently alters the python environment that shipped with Blender.Proof :) : |
Check for password before accessing content in django I am trying to build a functionality where users have to enter the passcode to access the site.If you go to this site it will ask for a password (123) before showing you the content:https://www.protectedtext.com/djangoprojI want to do it without forms.py.URL OF THE PAGE --> TEMPLATE --> ASK FOR PASSCODE --> SHOW CONTENT IF PASSCODE MATCHESmodels.pyfrom django.db import models# Create your models here.class Text(models.Model): text = models.TextField(blank=True) slug = models.SlugField(unique=True) password = models.CharField(max_length=50) def __str__(self): return self.slugviews.pyfrom django.shortcuts import renderfrom .models import Textimport django.contrib.auth# Create your views here.def textview(request, slug): obj= Text.objects.get(slug=slug) return render(request, 'text/textpage.html', {'obj' : obj})def home(request): return render(request, 'text/index.html', {})I have tried creating a new template for password but I am still not getting that functionality.Thanks in advance | If I were you (and didn't want to use DRF), I would make something like this:

def check_password(password):
    # decorator factory: gate the view logic behind the given password
    def decorator(func):
        def wrapper(*args, **kwargs):
            if password == '123':
                return func(*args, **kwargs)  # show the page if the password is correct
            raise Exception('Access denied')  # else raise the exception
        return wrapper
    return decorator

def home(request):
    try:
        password = request.GET.get('password')  # password from the query string

        @check_password(password)
        def show_index():
            return render(request, 'text/index.html', {})

        return show_index()
    except Exception:
        print('Password was not given or it is incorrect, access denied')
        return render(request, 'text/401.html', {})
Update and replace values in columns based on conditions in Python I wish to update and replace values based on the dates within my dataframe, while removing data in other specific columns.Dataid date location status value1 value2CC 1/1/2022 ny new 12 1CC 4/1/2022 ny new 1 1CC 7/1/2022 ny new 1 1CC 10/1/2022 ny new 1 2CC 1/1/2023 ny ok 1 2CC 4/1/2023 ny ok 1 2CC 7/1/2023 ny ok 1 3CC 10/1/2023 ny ok 1 3BB 1/1/2022 ca new 1 3BB 4/1/2022 ca new 1 3BB 7/1/2022 ca new 1 3BB 10/1/2022 ca new 12 3BB 1/1/2023 ca new 2 3BB 4/1/2023 ca new 2 3BB 7/1/2023 ca new 2 3BB 10/1/2023 ca new 2 3Desiredid date location status value1 value2CC 1/1/2022 ny open CC 4/1/2022 ny open CC 7/1/2022 ny open CC 10/1/2022 ny new 1 2CC 1/1/2023 ny ok 1 2CC 4/1/2023 ny ok 1 2CC 7/1/2023 ny ok 1 3CC 10/1/2023 ny ok 1 3BB 1/1/2022 ca new 1 3BB 4/1/2022 ca new 1 3BB 7/1/2022 ca new 1 3BB 10/1/2022 ca new 12 3BB 1/1/2023 ca new 2 3BB 4/1/2023 ca new 2 3BB 7/1/2023 ca new 2 3BB 10/1/2023 ca new 2 3Doingdf.loc[(df.id == 'cc') & (df.date <= '07/01/2022'), 'status']= 'open'This labels all of the dates as open and does not remove the values in the other columns.Any suggestion is appreciated.Thank you for any suggestions. | Unfortunately, snapping a cell out of existence does not seem to work with Pandas. Similarly, Pandas expects a value for each cell of every column when setting up a dataframe.Therefore, nan (not a number) seems to be the exact placeholder appropriate for your case. In turn, consider, importing numpy as np and adding the line to set the respective entry to np.nandf.loc[(df.id == 'cc') & (df.date <= '07/01/2022'), 'value1']= np.nanFortunately,df.fillna("")prints the Pandas frame without showing those annoying NAN entries but leaves the cells 'empty' as you seem to desire.In addition, NumPy enables the use of aggregate functions to ignore nan values such as np.nanmean() as can be found here to not break computation on such tables. |
Can you import Python libraries with PL/Python in PostgreSQL? I was wondering if it was possible to use Python libraries inside PL/Python. What I want to do is remove one node in our setup. Right now we have a sensor publishing data to RabbitMQ using Mosquitto and MQTT.On the other side, we have PostgreSQL and we want to build a database. I know that we need something between RabbitMQ and PostgreSQL and we were thinking of Paho. But we were wondering if it was possible to run a script on PostgreSQL using plpython and using the library Paho there. So that would make onr less thing to execute "stand alone".Or maybe there are other alternatives? | Sure, you can import any module into PL/Python. The documentation states: PL/Python is only available as an “untrusted” language, meaning it does not offer any way of restricting what users can do in it and is therefore named plpythonu.Just make sure you don't use multi-threading inside PostgreSQL. |
Python/Pandas: How to Merge Data Frames and Reshape to Long Form? I have data in dataframes of the kind (names of columns and values are dummies):frame1 = AA BBDate_Time 2001 1 52002 2 62017 3 72018 4 8frame2 = AA BBDate_Time 2001 10 502002 20 602017 30 702018 40 80I would like to merge and reshape them into a long form dataframe, to be visualized with seaborn. Like this:frame = stn origin valueDate2001 AA f1 1 f2 10 BB f1 5 f2 50 ... ... 2018 AA f1 4 f2 40 BB f1 8 f2 80How can I do that? I don't have any code to show because the couple of half-hearted attempts I made went nowhere close what I want. | Well, Ravishankar pointed me in the right direction. With some searching I found (almost) how to do it, using concat with group keys and double stacking:foo = pds.concat(dict(f1 = frame1, f2 = frame2), axis=1)foo.stack().stack()Date 2001 AA f1 1 f2 10 BB f1 5 f2 502002 AA f1 2 f2 20 BB f1 6 f2 60However, this method will produce a series with multi-index, which is not appropriate in all situations.To create a dataframe with a single index (years), the following can be used:bar = foo.stack().stack().reset_index(level=[1,2]).Then the columns can be renamed according to need. |
Find snappy compressed files given an AWS S3 bucket with a lot of files in it, is there a way that I can filter out only the snappy compressed files among all of these? | Do they end with .sz? If they aren't marked in the filename, then one will have to inspect the start of each file, and check if they start with the stream identifier: 0xff 0x06 0x00 0x00 0x73 0x4e 0x61 0x50 0x70 0x59 |
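The check described in the answer can be sketched in a few lines: fetch the first ten bytes of each object (with S3 a ranged GetObject keeps this cheap; the S3 access itself is out of scope here) and compare them to the snappy framing-format stream identifier:

```python
# 0xff 0x06 0x00 0x00 followed by 0x73 0x4e 0x61 0x50 0x70 0x59 == b"sNaPpY"
SNAPPY_STREAM_MAGIC = bytes([0xFF, 0x06, 0x00, 0x00]) + b"sNaPpY"

def looks_like_snappy_stream(head: bytes) -> bool:
    # `head` should hold at least the first 10 bytes of the file
    return head.startswith(SNAPPY_STREAM_MAGIC)

print(looks_like_snappy_stream(SNAPPY_STREAM_MAGIC + b"payload"))  # True
print(looks_like_snappy_stream(b"\x1f\x8b\x08gzip-ish"))           # False
```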
Python gnupg "Type Error: a bytes-like object is required, not 'str'" I am just making a simple sign in agent that creates an account and locks it using the gnupg module. Unfortunately, I get this error TypeError: a bytes-like object is required, not 'str. I have tried all sorts of different ways to convert my password into bytes but nothing seems to work.Here is the line that is breaking private_export = self.gpg.export_keys(key, secret=True, passphrase=self.password)Help would be appreciated! | Did you try with any of this??:pass_in_bytes = bytes(self.password, 'utf-8')orpass_in_bytes = self.password.encode('utf-8')orpass_in_bytes = str.encode(self.password)Always ensure if it's byte representation. I'm not sure if the "key" param or the "passphrase" param must be the byte. |
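All three variants from the answer produce the same bytes object — a quick check:

```python
password = "hunter2"

a = bytes(password, "utf-8")
b = password.encode("utf-8")
c = str.encode(password)

# any of these can be passed where gnupg expects bytes
assert a == b == c == b"hunter2"
assert isinstance(a, bytes)
```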
QMessageBox add custom button and keep open I want to add a custom button to QMessagebox that opens up a matplotlib window, along with an Ok button for user to click when they want to close itI currently have it somewhat working, but I want the two buttons to do separate things and not open the window.I know I can just create a dialog window with the desired results, but I wanted to know how to with a QMessageBox.import sysfrom PyQt5 import QtCore, QtWidgetsdef main(): app = QtWidgets.QApplication(sys.argv) msgbox = QtWidgets.QMessageBox() msgbox.setWindowTitle("Information") msgbox.setText('Test') msgbox.addButton(QtWidgets.QMessageBox.Ok) msgbox.addButton('View Graphs', QtWidgets.QMessageBox.YesRole) bttn = msgbox.exec_() if bttn: print("Ok") else: print("View Graphs") sys.exit(app.exec_())if __name__ == "__main__": main()Desired result:Ok button - closes QMessageBoxView Graph button - opens matplotlib window and keeps QMessageBox open until user clicks Ok | A bit hacky IMO, but after you add the View Graphs button you could disconnect its clicked signal and reconnect it to your slot of choice, e.g.import sysfrom PyQt5 import QtCore, QtWidgetsdef show_graph(): print('Show Graph')def main(): app = QtWidgets.QApplication(sys.argv) msgbox = QtWidgets.QMessageBox() msgbox.setWindowTitle("Information") msgbox.setText('Test') msgbox.addButton(QtWidgets.QMessageBox.Ok) yes_button = msgbox.addButton('View Graphs', QtWidgets.QMessageBox.YesRole) yes_button.clicked.disconnect() yes_button.clicked.connect(show_graph) bttn = msgbox.exec_() if bttn: print("Ok") sys.exit(app.exec_())if __name__ == "__main__": main() |
Combine distinct list of dict with ansible I have two lists in Ansible:toto: - name: titi - name: tatatiti: - name: titi ack: trueIs it possible to combine these two lists by the name key to get the following:new_list: - name: titi ack: true - name: tataI found how to combine dicts and how to combine lists, but I don't know if I can do the following.Thanks. | Q: Is it possible to combine these two lists by the name key?A: Yes. It is possible with the filter selectattr. The tasks below- set_fact: new_list: "{{ new_list|default([]) + [ item| combine(titi|selectattr('name', 'match', item.name)| list) ] }}" loop: "{{ toto }}"- debug: var: new_listgive"new_list": [ { "ack": true, "name": "titi" }, { "name": "tata" }]
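What the selectattr/combine pipeline computes can be mirrored in plain Python, which may help when reading the Jinja2 expression (a sketch for illustration, not Ansible code):

```python
# Mirror of the loop: for each item in toto, overlay the entry in titi
# with the same "name" key (Ansible's combine() merges dicts the same way).
toto = [{"name": "titi"}, {"name": "tata"}]
titi = [{"name": "titi", "ack": True}]

new_list = []
for item in toto:
    merged = dict(item)
    for override in titi:
        if override["name"] == item["name"]:
            merged.update(override)
    new_list.append(merged)
```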
program that will send a magic packet to a certain ip I need to code a C, C++, .bat, or Python program. It should detect if there is a person trying to connect to a certain IP address (192.168.1.149) through a certain port (25570) from outside the router/firewall. The program will then send a magic packet to an IP address (192.168.1.149). The magic packet will then wake up the computer. Because the computer will be in sleep mode, the code will probably have to be on another computer in the network (I don't want this), maybe even in the router or DNS. I have a Windows 7 computer; it is the computer supposed to be in sleep mode. I need people who answer this question to give me a guide, as I am not a really good coder. Here is my router info:https://www.asus.com/Networking/RTN65U/Sorry, I have a really old DNS so I can't give you the specs.How would I do this?thx. | Here's some Python code I use on one of my network machines to wake up another. Not sure where I got it originally, but it works great. Hopefully you can adapt it for your purposes.#!/usr/bin/env pythonimport sockets=socket.socket(socket.AF_INET, socket.SOCK_DGRAM)s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)s.sendto('\xff'*6+'\x00\x11\x22\x33\x44\x55'*16, ('192.168.0.255', 9))print "Attempted to wake host"Note that the \x00\x11 etc. stuff is the MAC address of the machine to be awakened and should be replaced with your machine's MAC address (converted to hex bytes). The 192.168.0.255 is the broadcast address of the network and, again, should be replaced with yours.
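The snippet in the answer is Python 2 (str literals passed to sendto, print statement). A Python 3 version that builds the packet from a MAC address string might look like this sketch; the broadcast address and port are placeholders to adapt to your network:

```python
import socket

def build_magic_packet(mac):
    """A Wake-on-LAN magic packet: six 0xFF bytes followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="192.168.0.255", port=9):
    """Broadcast the packet on the local network (UDP port 9 is customary)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```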
How to assign values from a DataFrame using np.where I am trying to use np.where with 2 DataFrames, but I get an error saying that there's a problem with the column.Here is my code:sum_d = source.groupby('Country')['Deaths'].sum() sum_c = source.groupby('Country')['Confirmed'].sum() Deaths = pd.DataFrame(sum_d)Confirmed = pd.DataFrame(sum_c)frames = [Deaths, Confirmed] Grouped = pd.concat(frames, axis=1)Grouped.loc[:,'Mortality Rate Country'] = Grouped['Deaths']/Grouped['Confirmed']Up to here it works properly, and I get this result:Grouped.head() Deaths Confirmed Mortality Rate CountryCountry Afghanistan 1 40 0.025000Albania 2 89 0.022472Algeria 17 201 0.084577Andorra 1 113 0.008850Angola 0 2 0.000000source.head() Country Last Update Confirmed Deaths Recovered Province/State Hubei China 2020-03-22 09:43:06 67800 3144 59433 NaN Italy 2020-03-22 18:13:20 59138 5476 7024 NaN Spain 2020-03-22 23:13:18 28768 1772 2575 NaN Germany 2020-03-22 23:43:02 24873 94 266 NaN Iran 2020-03-22 14:13:06 21638 1685 7931 Then when I try to assign some values by comparing them, I get an error:source['Mortality Rate Country'] = np.where(source['Country'] == Grouped['Country'], Grouped['Mortality Rate Country'], source['Mortality Rate'])The error says: KeyError: 'Country'During handling of the above exception, another exception occurred:Any tips or ideas would be really appreciated.Thanks in advance | If there are unique countries:source['Mortality Rate Country'] = source['Deaths']/source['Confirmed']If there are duplicated countries:Your code can be simplified with GroupBy.transform, which creates new columns in the original data filled with the aggregated values:source['Deaths1'] = source.groupby('Country')['Deaths'].transform('sum') source['Confirmed1'] = source.groupby('Country')['Confirmed'].transform('sum') source['Mortality Rate Country'] = source['Deaths1']/source['Confirmed1']
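The transform approach from the answer, shown on a tiny stand-in for source (made-up numbers): transform('sum') returns a column aligned row-for-row with the original frame, so no merge back is needed.

```python
import pandas as pd

# Tiny stand-in for `source`, with a duplicated country to show the effect.
source = pd.DataFrame({
    "Country": ["China", "China", "Italy"],
    "Confirmed": [67800, 100, 59138],
    "Deaths": [3144, 6, 5476],
})

# Aggregate per country, but keep the original row shape.
source["Deaths1"] = source.groupby("Country")["Deaths"].transform("sum")
source["Confirmed1"] = source.groupby("Country")["Confirmed"].transform("sum")
source["Mortality Rate Country"] = source["Deaths1"] / source["Confirmed1"]
```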
Do add_roles and remove_roles "hit" the discord service if they wouldn't result in a change? Question about Member.add_roles and Member.remove_roles on the discord.py library.If I just...import discord, asyncioguild = discord.Guild()for member in guild.members ... await member.add_roles(desiredRole)But 80 of the 100 members have the desiredRole, does my bot send 100 commands to discord or 20?In other words, does Member.add_roles check a member's roles and decide if to send the command, or do I need to do that on my own in order not to run into limit issues if I cycle through 1000 users on 200 servers each, for example? | Yes. The underlying implementation is actually to create a list of the roles you should have and send that to Discord via member.edit. You can see the code for add_roles here.You can always perform this check locally yourself by looking at member.roles |
Filter many to many set from models instance I have a product model that has tags; the tags can be in multiple languages. When I get an instance of the product, I have a product.tags manager.I was wondering if there is a way to filter the tags connected to the product instance, so that when I pass it to the serializer, I would only get the tags in a single language in the serializer output.class Product(models.Model): ... tags = models.ManyToManyField(Tag) ...class Tag(models.Model): text = models.CharField(max_length=32) language = models.CharField(max_length=2)class ProductSerializer(serializers.ModelSerializer): tags = TagSerializer(many=True) ...I am able to filter them manually and then add them to the data response, like so:tags_query = product.tags.filter(language=lang)tag_serializer = TagSerializer(tags_query, many=True) but I was wondering if this can be done through the serializers? | No, you can't do that through the serializers.You can do like this:tags_query = product.tags.filter(language=lang)tag_serializer = TagSerializer(tags_query, many=True)
Python 3 DictWriter csv BytesIO TypeError I'm using Python 3 and trying to generate a csv on the fly.I want to ensure that I'm writing utf8, so I'm converting values of my list of dicts into byte stringsfield_order = ['field1', 'field2', 'field3', 'field4']stats = ... # list of dictsoutput = io.BytesIO()writer = csv.DictWriter(output, field_order)writer.writeheader()for stats in my_stats: writer.writerow({k: bytes(v, 'utf8') for k, v in stats.items()}) csv_output = output.getvalue()I'm getting an exception on the writer.writeheader() callTypeError: 'str' does not support the buffer interfaceThere doesn't seem to be any way to change writeheader to write bytes.What am I doing wrong? | The csv module operates on strings according to the documentation: Writer objects (DictWriter instances and objects returned by the writer() function) have the following public methods. A row must be a sequence of strings or numbers for Writer objects and a dictionary mapping fieldnames to strings or numbers (by passing them through str() first) for DictWriter objects. Note that complex numbers are written out surrounded by parens. This may cause some problems for other programs which read CSV files (assuming they support complex numbers at all).How about using io.StringIO, and encoding the result as UTF-8 afterwards:import csvimport iofield_order = ['field1', 'field2', 'field3', 'field4']my_stats = ...output = io.StringIO()writer = csv.DictWriter(output, field_order)writer.writeheader()for stats in my_stats: writer.writerow(stats)csv_output = output.getvalue().encode('utf-8')
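The StringIO-then-encode pattern from the answer, made concrete with dummy field names and a non-ASCII value to show the encoding step:

```python
import csv
import io

field_order = ["field1", "field2"]
my_stats = [{"field1": "a", "field2": "ü"}]  # non-ASCII on purpose

# Write str to a StringIO, as the csv module expects in Python 3...
output = io.StringIO()
writer = csv.DictWriter(output, field_order)
writer.writeheader()
for stats in my_stats:
    writer.writerow(stats)

# ...and encode to UTF-8 bytes only at the end.
csv_bytes = output.getvalue().encode("utf-8")
```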
Send Message If A Message Has A Key Word. Discord Py So I wanted the bot to reply with 'hi' every time someone mentions its name in a sentence.I wrote this but it's not working, could someone please help :)@client.eventasync def on_message(msg, ctx): if 'astro' in msg.content: await ctx.send('hi there!')error | on_message doesn't take a ctx argument:@client.eventasync def on_message(msg): if 'astro' in msg.content: channel = msg.channel await channel.send('Hi there!') |
How to get stacked barh-plot using column of pandas dataframe to stack the bars? I have two dataframes:The first dataframe df1:Name GroupAbc ABcd ACde BDef CThe second dataframe df2:Name GroupEfg AFgh BGhi CHij CFirst StepWhat I did was to create a new dataframe df_new which contains Group as a column and the length of df1 and df2 as another column:df_new = pd.DataFrame({ 'name': ['1', '2'], 'length': map(len, [df1, df2])})Second StepAnd then I used the following code to plot df_new as a barh-plot:fig, ax = plt.subplots() df_new.plot.barh(width=0.75,ax=ax,figsize=(12,8))ax.set_title('Title')ax.set_xlabel('Number')ax.set_ylabel('Group')ax.set_xlim([0,20])for i, v in enumerate(df_new['length']): ax.text(x = v + 1, y = i + .1, s = str(v), color='blue', fontweight='bold')plt.savefig(os.path.join('test.png'), dpi=400, format='png', bbox_inches='tight')This works out, but instead of getting a normal barh-plot with one continuous bar for each dataframe I want to get a barh-plot stacked by the Group-column from my original dataframes df1 and df2. What do I have to do differently in the first step where I create df_new and what in the second step where I plot df_new? | Create a group for each dataframe using the column Group, then use agg('count') to count the number of rows of each Group type in each group, rename the column (this makes the plot legend clearer), and put them inside a list. Afterwards, use pd.concat to concatenate both dataframes along the column axis (axis=1); this places the data in the desired format for the barh plot. 
All that remains to obtain the barh plot stacked by the Group column is to set the stacked parameter of the bar plot (stacked=True).df_list = []for idx, df in enumerate([df1, df2], 1): df_agg = df.groupby('Group').agg('count') df_agg.columns = [f'Dataframe-{idx}'] df_list.append(df_agg)dff = pd.concat(df_list, axis=1)fig, ax = plt.subplots()dff.plot.barh(width=0.75,ax=ax,figsize=(12,8), stacked=True)ax.set_title('Title')ax.set_xlabel('Number')ax.set_ylabel('Group')ax.set_xlim([0,dff.max().max()*2])plt.show()
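The data-preparation half of the answer can be checked without any plotting; with the question's df1/df2, dff ends up with one row per Group and one count column per dataframe:

```python
import pandas as pd

df1 = pd.DataFrame({"Name": ["Abc", "Bcd", "Cde", "Def"],
                    "Group": ["A", "A", "B", "C"]})
df2 = pd.DataFrame({"Name": ["Efg", "Fgh", "Ghi", "Hij"],
                    "Group": ["A", "B", "C", "C"]})

df_list = []
for idx, df in enumerate([df1, df2], 1):
    df_agg = df.groupby("Group").agg("count")   # rows per Group value
    df_agg.columns = [f"Dataframe-{idx}"]       # rename for the plot legend
    df_list.append(df_agg)

# Group as the index, one count column per dataframe -- the layout
# that dff.plot.barh(stacked=True) expects.
dff = pd.concat(df_list, axis=1)
```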