Working on a Kivy app for expense tracking... unable to align the GridLayout

I am working on an expense app for myself. In the second screen I want to move the "January" and "Year-to-Date" labels up close to "Available Balance", and move the section below them up as well. I have spent a few days on this but have been unable to find a solution. I was wondering if someone could help me with it.

from kivymd.app import MDApp
from kivy.lang.builder import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.scrollview import ScrollView
from kivymd.uix.label import MDLabel
from kivy.uix.gridlayout import GridLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.widget import Widget

screen_helper = """
<GoButton>:
    Button:
        font_size: 12
        text: "Search"
        background_color: app.theme_cls.primary_color
        size_hint: 0.1, 0.05
        pos_hint: {'x': 0.2, 'y': 0.7}
        #on_release: app.run_test()

<MonthYear@MDTextField>:
    font_size: 20
    hint_text: "Enter Month/Year"
    helper_text: "MM/YYYY"
    helper_text_mode: 'on_focus'
    size_hint: None, None
    width: 120

ScreenManager:
    MainPage:
    IndividualExpense:
    UploadScreen:

<MainPage>:
    name: 'main'
    BoxLayout:
        orientation: 'vertical'
        spacing: '8dp'
        MDToolbar:
            title: 'Home'
        MDLabel:
            text: " Expenses"
            font_style: 'Subtitle1'
            size_hint_y: None
            height: self.texture_size[1]
        ScrollView:
            MDList:
                OneLineListItem:
                    text: 'Cell phone'
                OneLineListItem:
                    text: 'Grocery'
        MDToolbar:
            left_action_items: [["home", lambda x: app.mainPageScreen()], ["file-table-outline", lambda x: app.IndivdualExpenseScreen()], ["view-compact", lambda x: app.listOfStocksScreen()],]

<IndividualExpense>:
    name: 'indExp'
    BoxLayout:
        orientation: 'vertical'
        spacing: '8dp'
        MDToolbar:
            title: 'Cell Phone'
        GridLayout:
            cols: 3
            FloatLayout:
                MonthYear:
                    pos_hint: {'x': 0.025, 'y': .7}
            FloatLayout:
                Button:
                    font_size: 14
                    size_hint: 0.4, 0.15
                    text: "Submit"
                    background_color: app.theme_cls.primary_color
                    pos_hint: {"x": 0.01, "top": 0.93}
        GridLayout:
            cols: 1
            size: root.width-300, root.height-300
            FloatLayout:
                MDLabel:
                    text: ' Available Balance'
                    pos_hint: {'x': 0.0, 'y': 1}
        GridLayout:
            cols: 2
            size: root.width-300, root.height-300
            FloatLayout:
                MDLabel:
                    text: ' January'
                    pos_hint: {'x': 0.0, 'y': 1}
            FloatLayout:
                MDLabel:
                    text: 'Year-to-Date'
                    pos_hint: {'x': 0.0, 'y': 1}
        GridLayout:
            cols: 1
            MDLabel:
                text: ' Monthly Budget: '
            MDLabel:
                text: ' MTD Expense: '
            MDLabel:
                text: ' YTD Budget: '
            MDLabel:
                text: ' YTD Expense: '
        MDToolbar:
            left_action_items: [["home", lambda x: app.mainPageScreen()], ["file-table-outline", lambda x: app.IndivdualExpenseScreen()], ["view-compact", lambda x: app.listOfStocksScreen()],]

<UploadScreen>:
    name: 'upload'
    MDLabel:
        text: 'Upload'
        halign: 'center'
    MDRectangleFlatButton:
        text: 'Back'
        pos_hint: {'center_x': 0.5, 'center_y': 0.1}
        on_press: root.manager.current = 'main'
"""

class MainPage(Screen):
    pass

class IndividualExpense(Screen):
    pass

class UploadScreen(Screen):
    pass

class GoButton(FloatLayout):
    pass

# Create the screen manager
sm = ScreenManager()
sm.add_widget(MainPage(name='main'))
sm.add_widget(IndividualExpense(name='indExp'))
sm.add_widget(UploadScreen(name='upload'))

class DemoApp(MDApp):
    def mainPageScreen(self):
        self.root.current = 'main'
        self.root.transition.direction = 'left'

    def IndivdualExpenseScreen(self):
        self.root.current = 'indExp'
        self.root.transition.direction = 'right'

    def build(self):
        screen = Builder.load_string(screen_helper)
        return screen

DemoApp().run()

See second screen.
Here is a modified version of the <IndividualExpense>: rule in your kv:

<IndividualExpense>:
    name: 'indExp'
    BoxLayout:
        orientation: 'vertical'
        spacing: '8dp'
        MDToolbar:
            title: 'Cell Phone'
        GridLayout:
            cols: 4
            size_hint_y: 0.15
            MonthYear:
            Widget:
                size_hint_x: 0.3
            Button:
                font_size: 14
                size_hint: 0.3, 0.15
                text: "Submit"
                background_color: app.theme_cls.primary_color
            Widget:
                size_hint_x: 0.1
        GridLayout:
            cols: 1
            size_hint_y: 0.1
            MDLabel:
                text: ' Available Balance'
                pos_hint: {'x': 0.0, 'y': 1}
        GridLayout:
            cols: 2
            size_hint_y: 0.8
            MDLabel:
                text: ' January'
                pos_hint: {'x': 0.0, 'top': 1}
                size_hint_y: 0.2
            MDLabel:
                text: 'Year-to-Date'
                pos_hint: {'x': 0.0, 'top': 1}
                size_hint_y: 0.2
        GridLayout:
            size_hint_y: 0.8
            cols: 1
            MDLabel:
                text: ' Monthly Budget: '
            MDLabel:
                text: ' MTD Expense: '
            MDLabel:
                text: ' YTD Budget: '
            MDLabel:
                text: ' YTD Expense: '
        MDToolbar:
            left_action_items: [["home", lambda x: app.mainPageScreen()], ["file-table-outline", lambda x: app.IndivdualExpenseScreen()], ["view-compact", lambda x: app.listOfStocksScreen()],]

I have removed several FloatLayouts that seemed to serve no purpose. I have also removed some size properties (they have no effect unless the size_hints are None). Since your base layout in this rule is a BoxLayout, each child of that BoxLayout is given a share of the vertical space based on its size_hint_y value. A similar approach applies to the GridLayouts.
Running a for loop for a higher number of iterations in Python

I have written a piece of code which I am trying to run on my local machine with 8 GB of RAM.

import numpy as np

tasks = ['A','B','C','D']
tasks_pass_prob = [0.7,0.1,0.5,0.3]
task_probs = tuple(zip(tasks,tasks_pass_prob))

N = 1000000
n = 1
results_dict = {}
for _ in range(N):
    for t,p in task_probs:
        res = np.random.binomial(n,p,N)
        results_dict[t]=res

For smaller values of N the code runs, but with a higher value of N the machine hangs. Is there a better way to restructure my for loop to run the code?
Actually, your code is not hanging; the computation is just so big that it takes a very long time to run. It is not an issue of RAM. And why did you use for _ in range(N)? Each call to np.random.binomial(n, p, N) already draws N samples at once, so the outer loop just repeats the same work N times. I suggest you write it like this:

import numpy as np

tasks = ['A','B','C','D']
tasks_pass_prob = [0.7,0.1,0.5,0.3]
task_probs = tuple(zip(tasks,tasks_pass_prob))

N = 1000000
n = 1
results_dict = {}
# for _ in range(N):  # not needed
for t, p in task_probs:
    res = np.random.binomial(n, p, N)
    results_dict[t] = res
    print(f"{res=}, {results_dict=}")
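As a self-contained sketch of the same fix: one vectorized binomial draw of size N per task replaces the N-iteration outer loop entirely. The seeded Generator here is an assumption added for reproducibility, not part of the original code.

```python
import numpy as np

# One binomial draw of size N per task; the seeded Generator is an assumption
# for reproducibility and is not in the original post.
rng = np.random.default_rng(0)

tasks = ['A', 'B', 'C', 'D']
tasks_pass_prob = [0.7, 0.1, 0.5, 0.3]
N = 1_000_000

# dict comprehension: each task maps to an array of N 0/1 samples
results_dict = {t: rng.binomial(1, p, N) for t, p in zip(tasks, tasks_pass_prob)}
print({t: arr.shape for t, arr in results_dict.items()})
```

This runs in a fraction of a second, versus repeating the same work a million times.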
Python code not selecting the correct dictionary

I am trying to loop through and select the two class_id values I want, and then compare their center_x and center_y values. If they are within a certain range, which I currently have set at 0.10, it should print that they are within range. However, when I run my code now and print out x_absolute_dif and y_absolute_dif, it just outputs 0.0, meaning the objects are not being selected properly. Any help would be appreciated.

Python code:

desired_id1 = 14
for thing in results:
    for object1 in thing["objects"]:
        if object1["class_id"] == desired_id1:
            specific_class = object1
            print("Correct Class")
            break

for _class in results:
    for object1 in _class["objects"]:
        relative_coordinates = object1["relative_coordinates"]
        center_x = relative_coordinates["center_x"]
        center_y = relative_coordinates["center_y"]
        # Do something with these values

desired_id2 = 15
for thing in results:
    for object2 in thing["objects"]:
        if object2["class_id"] == desired_id2:
            specific_class = object2
            print("Correct Class")
            break

for _class in results:
    for object2 in _class["objects"]:
        relative_coordinates = object2["relative_coordinates"]
        center_x = relative_coordinates["center_x"]
        center_y = relative_coordinates["center_y"]
        # Do something with these values

x_dif = object1["relative_coordinates"]["center_x"] - object2["relative_coordinates"]["center_x"]
x_absolute_dif = abs(x_dif)
print(x_absolute_dif)
if (x_absolute_dif <= 0.10):
    print("X-Cords Within Range")
    x_within_range = True
else:
    print("X-Cords Not Within Range")

y_dif = object1["relative_coordinates"]["center_y"] - object2["relative_coordinates"]["center_y"]
y_absolute_dif = abs(y_dif)
print(y_absolute_dif)
if (y_absolute_dif <= 0.10):
    print("Y-Cords Within Range")
    y_within_range = True
else:
    print("Y-Cords Not Within Range")

JSON file:

[
    {
        "frame_id": 1,
        "filename": "C:\\Yolo_v4\\darknet\\build\\darknet\\x64\\f047.png",
        "objects": [
            {
                "class_id": 14,
                "name": "d",
                "relative_coordinates": {
                    "center_x": 0.049905,
                    "center_y": 0.635935,
                    "width": 0.101077,
                    "height": 0.044067
                },
                "confidence": 0.966701
            },
            {
                "class_id": 15,
                "name": "e",
                "relative_coordinates": {
                    "center_x": 0.045943,
                    "center_y": 0.685398,
                    "width": 0.109195,
                    "height": 0.041489
                },
                "confidence": 0.923188
            }
        ]
    }
]
I think the problem with the code is that object1 and object2 are loop variables: outside the loop they hold the last element that was iterated over, not the one you selected. In your case the loops

for _class in results:
    for object1 in _class["objects"]:

and

for _class in results:
    for object2 in _class["objects"]:

will leave object1 and object2 equal to the last element in _class["objects"], not your desired object (the one found just before the break).

Edit:

desired_id1 = 14
obj1 = None
for thing in results:
    for object1 in thing["objects"]:
        if object1["class_id"] == desired_id1:
            specific_class = object1
            obj1 = object1
            print("Correct Class")
            break

for _class in results:
    for object1 in _class["objects"]:
        relative_coordinates = object1["relative_coordinates"]
        center_x = relative_coordinates["center_x"]
        center_y = relative_coordinates["center_y"]
        # Do something with these values

desired_id2 = 15
obj2 = None
for thing in results:
    for object2 in thing["objects"]:
        if object2["class_id"] == desired_id2:
            specific_class = object2
            obj2 = object2
            print("Correct Class")
            break

for _class in results:
    for object2 in _class["objects"]:
        relative_coordinates = object2["relative_coordinates"]
        center_x = relative_coordinates["center_x"]
        center_y = relative_coordinates["center_y"]
        # Do something with these values

x_dif = obj1["relative_coordinates"]["center_x"] - obj2["relative_coordinates"]["center_x"]
x_absolute_dif = abs(x_dif)
print(x_absolute_dif)
if (x_absolute_dif <= 0.10):
    print("X-Cords Within Range")
    x_within_range = True
else:
    print("X-Cords Not Within Range")

y_dif = obj1["relative_coordinates"]["center_y"] - obj2["relative_coordinates"]["center_y"]
y_absolute_dif = abs(y_dif)
print(y_absolute_dif)
if (y_absolute_dif <= 0.10):
    print("Y-Cords Within Range")
    y_within_range = True
else:
    print("Y-Cords Not Within Range")
flask-sqlalchemy: get the difference between 2 dates from the database

I want to calculate the difference between 2 dates from a MySQL database. I have a script like this:

@app.route('/', methods=['GET','POST'])
def show():
    dura = []
    dates_start = (MyTask.query.get('dates_start'))
    d1 = (MyTask.query.with_entities(MyTask.dates_start))
    d2 = (MyTask.query.with_entities(MyTask.dates_finish))
    dura = d2.date() - d1.date()
    return render_template('index.html', dura=dura.days)

The HTML template:

{% for output in dura %}
{{ output }}
{% endfor %}
<br>

When I run the script it returns an error like this:

AttributeError: 'BaseQuery' object has no attribute 'date'

How do I solve this?
Get the rows between two dates:

from datetime import date, timedelta

date1_ = date.today() - timedelta(days=10)  # 10 or another number
date2_ = date.today() - timedelta(days=1)   # 1 or another number

Model.query.filter(Model.date_column.between(date1_, date2_)).all()

Get the rows that differ between the two date cutoffs:

x = Model.query.filter(Model.date_column >= date1_).all()
y = Model.query.filter(Model.date_column >= date2_).all()
result_list = [item for item in x if item not in y]
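The AttributeError in the question comes from calling .date() on a Query object instead of on the row values the query returns. A minimal sketch of the intended per-row arithmetic, with hypothetical stand-in values for the MyTask.dates_start / MyTask.dates_finish columns:

```python
from datetime import date

# Placeholder rows standing in for the database results; the key point is
# that subtraction happens on each row's date objects, not on the Query.
rows = [(date(2021, 3, 1), date(2021, 3, 10)),
        (date(2021, 3, 5), date(2021, 3, 7))]

# date - date gives a timedelta; .days is what the template would render
dura = [(finish - start).days for start, finish in rows]
print(dura)  # [9, 2]
```

In the actual view you would iterate over MyTask.query.all() the same way and pass the resulting list to render_template.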
Matching pandas dataframe rows in a spreadsheet with xlwings

I am writing a script to:

1) import spreadsheets as pandas dataframes
2) export, sort and compile them into a single Excel spreadsheet via xlwings

My issue is that the inputs do not have exactly the same number and values of indexes. I am trying to ensure that every row matches up, so that the right values appear in the right rows for all dataframes in my final spreadsheet, and that rows that only exist in one dataframe are filled with zero values in the others.

As an example, I have put together the following script:

import pandas as pd
import xlwings as xw

df1 = pd.read_excel('test1.xlsx')
df2 = pd.read_excel('test2.xlsx')
print(df1)
print(df2)

   Header 1 Header 2  Header 3
0      Cat1        A       150
1      Cat1        A       200
2      Cat1        A       250
3      Cat2        A       300
4      Cat3        B       300
5      Cat3        B       350
6      Cat3        C         0
7      Cat4        C         0
8      Cat5        D        50

   Header 1 Header 2  Header 3
0      Cat1        A       150
1      Cat1        A       200
2      Cat1        A       250
3      Cat1        A       350
4      Cat2        A       300
5      Cat3        B       300
6      Cat3        B       350
7      Cat3        C         0
8      Cat5        D        50
9      Cat6        A       250
10     Cat6        B       250

sht = xw.Book().sheets[0]
sht.range('A1').value = df1
sht.range('E1').value = df2

In the end, the result does not line up all the Cat1 / A / numbers on the same rows, etc. Any idea? Thank you very much.
I honestly did not understand your issue; may you check and revise it? It does not quite make sense to me as written:

"My issue is that the inputs do not have the exact same number and values of indexes. I am trying to ensure that every row would be matching to show the right values in the right rows for all dataframes in my final spreadsheet, and that rows that only exist in one dataframe would be completed by zero values in the others."

However, I can help you with your first two requests; for both I would indeed use xlwings:

import xlwings as xw

1) import spreadsheets as pandas dataframes

df = pd.DataFrame(xw.Book(file_path).sheets['SheetName'].range((1,1), (10, 3)).value)

This creates a pandas.DataFrame object with the values found in your Excel file, in the sheet 'SheetName', from A1 to C10. Be aware that Excel (and xlwings) uses 1-based indexing, not 0-based, hence the first row is 1 and the first column is 1; in "normal" Python, the first would be 0.

2) export, sort and compile them into a single Excel spreadsheet via xlwings

Your code is quite correct here; the general API call would be (note the .value on the left-hand side):

xw.Book(file_path).sheets['SheetName'].range('A1').value = df

Make your request clearer so you can get more help.
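For the row-alignment part of the question, one hedged sketch (assuming an outer merge on the two key columns is the intended alignment; this is not from the original answer): merge the frames on the key columns with how='outer' so rows present in only one frame get 0 in the other's value column, then write the single merged frame to Excel.

```python
import pandas as pd

# Trimmed stand-ins for the question's frames; keys here are unique.
# With duplicate keys (e.g. three Cat1/A rows) you would first add a
# per-key counter, e.g. df.groupby(keys).cumcount(), to the merge key.
df1 = pd.DataFrame({'Header 1': ['Cat1', 'Cat2'],
                    'Header 2': ['A', 'A'],
                    'Header 3': [150, 300]})
df2 = pd.DataFrame({'Header 1': ['Cat1', 'Cat6'],
                    'Header 2': ['A', 'B'],
                    'Header 3': [150, 250]})

# Outer merge keeps rows from both sides; fillna(0) zeros the gaps.
merged = df1.merge(df2, on=['Header 1', 'Header 2'], how='outer',
                   suffixes=(' df1', ' df2')).fillna(0)
print(merged)
```

The merged frame can then be written with a single sht.range('A1').value = merged, so both sources land on matching rows.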
Can't make a dynamic dimension in a TensorFlow variable

I have the following code:

a = tf.placeholder(dtype = tf.float64, shape = (10, None))
b = tf.Variable(tf.random_normal((20, 10), dtype = tf.float64), dtype = tf.float64)
c = tf.matmul(b, a)
d = tf.shape(a)[1]
e = tf.Variable(tf.random_normal((d, d), dtype = tf.float64), dtype = tf.float64)

I want to set the dimension of e during execution, but I get an error. Isn't it possible?
No, it's not possible. TensorFlow doesn't allow a dynamic shape in a variable definition, because it can't allocate memory of arbitrary size during graph definition. So the dimensions of e must be known statically.
Is it possible to write Python code for a USB stick so that when you plug it in, it downloads all the files from the PC?

I was interested: is it possible to write code in Python for a USB stick so that when you plug the USB into a PC, it downloads every file from the PC without showing any message boxes like "Files are being downloaded"?
That was probably possible back in Windows XP; however, on anything newer than that the answer is NO, because a computer won't simply run arbitrary software without the user's permission.
Adding values to a key based on different lengths

I'm trying to add values to a key after making a dictionary. This is what I have so far:

movie_list = "movies.txt"
# using a file whose first line contains this order: Title, year, genre, director, actor
in_file = open(movie_list, 'r')
in_file.readline()

def list_maker(in_file):
    movie1 = str(input("Enter in a movie: "))
    movie2 = str(input("Enter in another movie: "))
    d = {}
    for line in in_file:
        l = line.split(",")
        title_year = (l[0], l[1])  # only then making the tuple ('Title', 'year')
        for i in range(4, len(l)):
            d = {title_year: l[i]}
            if movie1 or movie2 == l[0]:
                print(d.values())

The output I get is:

Enter in a movie: 13 B
Enter in another movie: 1920
{('13 B', '(2009)'): 'R. Madhavan'}
{('13 B', '(2009)'): 'Neetu Chandra'}
{('13 B', '(2009)'): 'Poonam Dhillon\n'}
{('1920', '(2008)'): 'Rajneesh Duggal'}
{('1920', '(2008)'): 'Adah Sharma'}
{('1920', '(2008)'): 'Anjori Alagh\n'}
{('1942 A Love Story', '(1994)'): 'Anil Kapoor'}
{('1942 A Love Story', '(1994)'): 'Manisha Koirala'}
{('1942 A Love Story', '(1994)'): 'Jackie Shroff\n'}

... and so on and so forth. I get the whole list of movies. How would I go about entering those two movies (any 2 movies) so that each key maps to the union of its values?

Example:

{('13 B', '(2009)'): 'R. Madhavan', 'Neetu Chandra', 'Poonam Dhillon'}
{('1920', '(2008)'): 'Rajneesh Duggal', 'Adah Sharma', 'Anjori Alagh'}
Sorry if the output isn't completely what you want, but here's how you should do it:

d = {}
for line in in_file:
    l = line.split(",")
    title_year = (l[0], l[1])
    people = []
    for i in range(4, len(l)):
        people.append(l[i])  # we append items to the list...
    d = {title_year: people}  # ...and then make the dict so that the list is in it
    if l[0] in (movie1, movie2):  # note: `if movie1 or movie2 == l[0]` is always truthy
        print(d.values())

Basically, what we are doing here is making a list, and then setting the list as the value for a key inside of the dict.
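An alternative sketch using collections.defaultdict, so every (title, year) key accumulates all of its actors rather than d being rebuilt on each iteration. The sample lines below are trimmed, hypothetical stand-ins for rows of movies.txt (genre and director fields are placeholders):

```python
from collections import defaultdict

# Hypothetical stand-ins for movies.txt rows: Title, year, genre, director, actors...
lines = [
    "13 B,(2009),Horror,Director1,R. Madhavan,Neetu Chandra,Poonam Dhillon",
    "1920,(2008),Horror,Director2,Rajneesh Duggal,Adah Sharma,Anjori Alagh",
]

d = defaultdict(list)
for line in lines:
    l = line.strip().split(",")
    # columns 4+ are the actors; extend the list stored under (title, year)
    d[(l[0], l[1])].extend(l[4:])

print(d[("13 B", "(2009)")])
```

Unlike rebuilding d each loop, this keeps every movie's actors in one dict, so looking up either of the two entered movies afterwards is a single key access.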
mypy "Optional[Dict[Any, Any]]" is not indexable inside standard filter, map

Given the following code:

from typing import Optional, Dict

def foo(b: bool) -> Optional[Dict]:
    return {} if b else None

def bar() -> None:
    d = foo(False)
    if not d:
        return
    filter(lambda x: d['k'], [])

mypy 0.770 fails with the following error on the last line of bar:

Value of type "Optional[Dict[Any, Any]]" is not indexable.

The same goes for map. Changing the line to use a list comprehension, or filter_ or map_ from pydash, resolves the error. Why does mypy throw an error when using the standard filter even though there is a type guard?
The type-narrowing that happens after an if or assert doesn't propagate down to inner scopes that bind that variable. The easy workaround is to define a new variable bound with the narrower type, e.g.:

def bar() -> None:
    d = foo(False)
    if not d:
        return
    d_exists = d
    filter(lambda x: d_exists['k'], [])

The reason that d isn't bound to the narrower type in the inner scope might be that there's no guarantee d won't get changed back to None in the outer scope, e.g.:

def bar() -> None:
    d = foo(False)
    if not d:
        return

    def f(x: str) -> str:
        assert d is not None  # this is totally safe, right?
        return d['k']  # mypy passes because of the assert

    d = None  # oh no!
    filter(f, [])

whereas if you bind a new variable, that assignment is not possible:

def bar() -> None:
    d = foo(False)
    if not d:
        return
    d_exists = d

    def f(x: str) -> str:
        # no assert needed because d_exists is not Optional
        return d_exists['k']

    d_exists = None  # error: Incompatible types in assignment
    filter(f, [])

In your particular example there's no runtime danger, because d is never reassigned before the filter result could be consumed, but mypy doesn't necessarily have an easy way of determining that the function you called isn't going to hang on to that lambda and evaluate it at a later time.
How to apply a bearer token with a GET?

I'm trying to access an API that requires a bearer token. I am able to get the bearer token, but I do not understand the next steps. The token is required in the header, but a GET only accepts two parameters, which would be my URL and parameters? I have been trying to mimic the company's JavaScript example.

https://imgur.com/A7RhWo9
https://portal.trafnet.com/rest/home/JavascriptExample
https://portal.trafnet.com/rest

# Urls
tokenurl = 'https://portal.trafnet.com/rest/token'

# Creds
user = 'test@test.com'
password = 'test'

# Fetch Bearer Token
tokenfetch = requests.post(tokenurl, data = {'grant_type':'password', 'username':user, 'password':password})
tokenval = tokenfetch.json()
mytoken = tokenval['access_token']

Below this line I obviously do not understand:

# DataParms
datefrom = '2019-09-01'
dateto = '2019-09-01'
sitecode = '01'
includeinternaloc = 'true'
datasummedbyday = 'false'

header = {'Authorization: Bearer %s' % mytoken}
params = {'SiteCode':sitecode, 'DateFrom':datefrom, 'DateTo':dateto, 'IncludeInternalLocations':includeinternaloc, 'DataSummedByDay':datasummedbyday}

response = requests.post(dataurl, params, header)
print(response)
print(response.json())

{'Message': "No HTTP resource was found that matches the request URI 'https://portal.trafnet.com/rest/api/traffic'."}
If I understand the problem correctly, you got a bearer token from the POST request and have to use the same token in the next GET API call, in the header. To pass the headers, use headers as a keyword argument to the requests.get method. Note also that the header must be a dict with a key and a value, not the single string your code builds (which actually creates a set).

One example of headers with a bearer token:

header = {'Authorization': 'Bearer <access_token>'}
requests.get(dataurl, params=params, headers=header)
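To see exactly what such a request would send without hitting the live API, a sketch building a requests PreparedRequest (the token value and the URL from the question are placeholders; nothing is transmitted):

```python
import requests

# Placeholder token standing in for the real access_token from the POST.
mytoken = "abc123"
header = {'Authorization': 'Bearer %s' % mytoken}
params = {'SiteCode': '01', 'DateFrom': '2019-09-01', 'DateTo': '2019-09-01'}

# Build and prepare the request instead of sending it, so we can inspect
# the final headers and URL that would go on the wire.
req = requests.Request('GET', 'https://portal.trafnet.com/rest/api/traffic',
                       params=params, headers=header)
prepared = req.prepare()

print(prepared.headers['Authorization'])  # Bearer abc123
print(prepared.url)
```

Inspecting prepared.headers and prepared.url like this is a quick way to debug both the Authorization header shape and the query string before pointing at the real endpoint.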
Python: Only return matching values in up to three lists, but ignore empty lists

I've created three lists of integers (a, b and c) and these may or may not contain any values. I want to create a new list (newList) based on these existing lists:

- If all three lists contain values, I want to populate newList with only the values that are common to every list (e.g. a = [1,2,3], b = [2,3,4], c = [3,4,5], then newList = [3]).
- If all of the lists are empty, I want newList to also be empty (e.g. a = [], b = [], c = [], then newList = []).
- If one or two of the lists are empty, I want to populate newList with the values that the non-empty lists share (e.g. a = [], b = [2,3,4], c = [3,4,5], then newList = [3,4]).

What I'm finding tricky is that any or none of the lists could be empty, meaning that I'm currently having to duplicate my code in different if statements. The below is what I have tried, but it looks really inefficient:

a = [1,2,3]
b = [2,3,4]
c = [3,4,5]
newList = []

if len(a) + len(b) + len(c) != 0:
    if len(a) > 0:
        if len(b) > 0:
            if len(c) > 0:
                # a, b and c all contain values, find common values
                newList = list(set(a) & set(b) & set(c))
            else:
                # a and b contain values, c is empty. Find common values in a and b.
                newList = list(set(a) & set(b))
        else:
            if len(c) > 0:
                # a and c contain values, b is empty. Find common values in a and c.
                newList = list(set(a) & set(c))
            else:
                # only a contains values. b and c are empty.
                newList = a
    else:
        if len(b) > 0:
            if len(c) > 0:
                # b and c contain values, a is empty. Find common values in b and c.
                newList = list(set(b) & set(c))
            else:
                # only b contains values. a and c are empty.
                newList = b
        else:
            if len(c) > 0:
                # only c contains values. a and b are empty.
                newList = c
else:
    # no lists contain values, leave newList as empty
    pass

I'd be really grateful for any improvements. Thanks in advance.
You could approach it like this, generalized, and using sets:

def inner_join_nonempty(*iterables):
    sets = (set(iterable) for iterable in iterables)
    nonempty_sets = [s for s in sets if s]
    return set.intersection(*nonempty_sets) if nonempty_sets else set()

Usage for your example:

>>> inner_join_nonempty(a, b, c)
{3}
Posting a default value with a serializer in DRF

I am attempting to post a default value. In plain English, this is how I want it to work:

- If the data has no "tag" field(s)
- Check to see if the tag "none" exists (for 'owner')
- If the tag "none" exists, create the m2m relation
- If the tag "none" doesn't exist, create the tag "none" (for 'owner')

My POST data will not contain the field tag in the JSON data being posted. This code works perfectly EXCEPT when there is no tag field; when there is no tag field, it tells me 'tag field is required'.

Example data being posted:

{title: "Testing"}

models.py:

class Tag(models.Model):
    name = models.CharField("Name", max_length=5000, blank=True)
    taglevel = models.IntegerField("Tag level", null=True, blank=True)
    owner = models.ForeignKey('auth.User', blank=True, null=True)

class Item(models.Model):
    title = models.CharField("Title", max_length=10000, blank=True)
    tag = models.ManyToManyField('Tag', blank=True)
    owner = models.ForeignKey('auth.User', blank=True, null=True)

Serializer:

class ItemSerializer(serializers.ModelSerializer):
    tag = TagSerializer(many=True, read_only=False)
    info = InfoSerializer(many=True, read_only=True)

    class Meta:
        model = Item
        ordering = ('-created',)
        fields = ('title', 'pk', 'tag')

    def create(self, validated_data):
        tags_data = validated_data.pop('tag')
        owner = self.context['request'].user
        item = Item.objects.create(owner=owner, **validated_data)
        for tag_data in tags_data:
            tag_data['owner'] = owner
            tag_qs = Tag.objects.filter(name__iexact=tag_data['name'])
            if not tag_data:
                Tag.objects.get_or_create(tag_name="None")
            if tag_qs.exists():
                tag = tag_qs.first()
            else:
                tag = Tag.objects.create(**tag_data)
            item.tag.add(tag)
        return item
Try setting default=None in the tag field of your ItemSerializer. Your ItemSerializer would then look like:

class ItemSerializer(serializers.ModelSerializer):
    tag = TagSerializer(default=None, many=True, read_only=False)
    info = InfoSerializer(many=True, read_only=True)
    ...
Python asking game

I'm making a question game (along the lines of 20 Questions), but I want my program to only ask each question once. I tried using enumerate to give each string in ques a value, then had an if statement saying if i = i: i != 1, hoping that would change the value of i to something else so that the question wouldn't repeat, but that didn't work. Any help would be appreciated; this is my first time programming a question game and I have high hopes for it once I can get it to a stable point.

import sys, random

keepGoing = True
ques = ['What does it eat?',
        'How big is it?',
        'What color is it?',
        'How many letters are in it?',
        'Does it have scales?',
        'Does it swim?',
        'How many legs does it have?'
        ]

ask = raw_input("Want to play a game?")
while keepGoing:
    if ask == "yes":
        nextQ = raw_input(random.choice(ques))
    else:
        keepGoing = False
Do something like this:

random.shuffle(ques)
for question in ques:
    # your code here, for each question

Note that random.shuffle shuffles the list in place and returns None, so you can't iterate over its return value directly. BTW, your code as written generates an endless loop once the player answers "yes", since ask is never updated inside the while. Try to reformulate it.
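As an alternative sketch, random.sample returns a new shuffled list (leaving the original untouched), which also guarantees each question is asked exactly once:

```python
import random

ques = ['What does it eat?', 'How big is it?', 'What color is it?']

# random.sample(ques, len(ques)) is a shuffled copy; iterating it asks
# each question exactly once with no repeats.
order = random.sample(ques, len(ques))
for question in order:
    print(question)
```

In the actual game, the print call would be replaced by the raw_input prompt from the question's code.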
Getting the same value for precision and recall (K-NN) using sklearn

Updated question: I did this, but I am getting the same result for both precision and recall. Is it because I am using average='binary'? When I use average='macro' I get this warning instead:

Test a custom review message
C:\Python27\lib\site-packages\sklearn\metrics\classification.py:976: DeprecationWarning: From version 0.18, binary input will not be handled specially when using averaged precision/recall/F-score. Please use average='binary' to report only the positive class performance.
'positive class performance.', DeprecationWarning)

Here is my updated code:

path = 'opinions.tsv'
data = pd.read_table(path, header=None, skiprows=1, names=['Sentiment','Review'])
X = data.Review
y = data.Sentiment

# Using CountVectorizer to convert text into tokens/features
vect = CountVectorizer(stop_words='english', ngram_range=(1,1), max_df=.80, min_df=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, test_size=0.2)

# Using training data to transform text into counts of features for each message
vect.fit(X_train)
X_train_dtm = vect.transform(X_train)
X_test_dtm = vect.transform(X_test)

# Accuracy using KNN model
KNN = KNeighborsClassifier(n_neighbors=3)
KNN.fit(X_train_dtm, y_train)
y_pred = KNN.predict(X_test_dtm)
print('\nK Nearest Neighbors (NN = 3)')

# Analysis
tokens_words = vect.get_feature_names()
print '\nAnalysis'
print 'Accuracy Score: %f %%' % (metrics.accuracy_score(y_test, y_pred)*100)
print "Precision Score: %f%%" % precision_score(y_test, y_pred, average='binary')
print "Recall Score: %f%%" % recall_score(y_test, y_pred, average='binary')

By using the code above I get the same value for precision and recall. Thank you for answering my question, much appreciated.
To calculate precision and recall metrics, you should import the corresponding methods from sklearn.metrics. As stated in the documentation, their parameters are 1-d arrays of true and predicted labels:

from sklearn.metrics import precision_score
from sklearn.metrics import recall_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

print('Calculating the metrics...')

precision_score(y_true, y_pred, average='macro')
>>> 0.22

recall_score(y_true, y_pred, average='macro')
>>> 0.33
Code under conditional statement is not executing despite condition being met

Maybe it's because I haven't coded for a few days, but I can't understand why this isn't working. The condition if i == enemy_spaceship_index is met once in the for loop, and yet the code beneath that conditional if statement is not executing. When I print out the list, it's just giving me seven 2s. What it should be doing is printing six 2s and a 3, where the position of the 3 in the list is determined by enemy_spaceship_index. Any help would be appreciated.

enemy_spaceship_index = randint(0, 6)
appearancesLeft = []
for i in range(7):
    if i == enemy_spaceship_index:
        appearancesLeft.append(3)
    elif i != enemy_spaceship_index:
        appearancesLeft.append(2)
print appearancesLeft
I think your indentation is messed up. Otherwise, it looks to me like your code works just fine...

>>> from random import randint
>>> for x in range(7):  # test all possible values
...     enemy_spaceship_index = x
...     appearancesLeft = []
...     for i in range(7):
...         if i == enemy_spaceship_index:
...             appearancesLeft.append(3)
...         elif i != enemy_spaceship_index:
...             appearancesLeft.append(2)
...         print appearancesLeft
...
[3]
[3, 2]
[3, 2, 2]
[3, 2, 2, 2]
[3, 2, 2, 2, 2]
[3, 2, 2, 2, 2, 2]
[3, 2, 2, 2, 2, 2, 2]
[2]
[2, 3]
[2, 3, 2]
[2, 3, 2, 2]
[2, 3, 2, 2, 2]
[2, 3, 2, 2, 2, 2]
[2, 3, 2, 2, 2, 2, 2]
[2]
[2, 2]
[2, 2, 3]
[2, 2, 3, 2]
[2, 2, 3, 2, 2]
[2, 2, 3, 2, 2, 2]
[2, 2, 3, 2, 2, 2, 2]
[2]
[2, 2]
[2, 2, 2]
[2, 2, 2, 3]
[2, 2, 2, 3, 2]
[2, 2, 2, 3, 2, 2]
[2, 2, 2, 3, 2, 2, 2]
[2]
[2, 2]
[2, 2, 2]
[2, 2, 2, 2]
[2, 2, 2, 2, 3]
[2, 2, 2, 2, 3, 2]
[2, 2, 2, 2, 3, 2, 2]
[2]
[2, 2]
[2, 2, 2]
[2, 2, 2, 2]
[2, 2, 2, 2, 2]
[2, 2, 2, 2, 2, 3]
[2, 2, 2, 2, 2, 3, 2]
[2]
[2, 2]
[2, 2, 2]
[2, 2, 2, 2]
[2, 2, 2, 2, 2]
[2, 2, 2, 2, 2, 2]
[2, 2, 2, 2, 2, 2, 3]
Python - Merge two lists with a simultaneous concatenation

ListA = [1,2,3]
ListB = [10,20,30]

I want to add the contents of the lists together (1+10, 2+20, 3+30), creating the following list:

ListC = [11,22,33]

Is there a function that merges lists specifically in this manner?
This works:

>>> ListA = [1,2,3]
>>> ListB = [10,20,30]
>>> list(map(sum, zip(ListA, ListB)))
[11, 22, 33]

All of the built-ins used above are explained here. Another solution would be to use a list comprehension. Depending on your taste, you could do this:

>>> [sum(x) for x in zip(ListA, ListB)]
[11, 22, 33]

or this:

>>> [x+y for x,y in zip(ListA, ListB)]
[11, 22, 33]
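For completeness, a third idiom not mentioned in the answer above: map with operator.add takes both lists directly, so no zip is needed (a minor stylistic alternative, not from the original answer):

```python
from operator import add

ListA = [1, 2, 3]
ListB = [10, 20, 30]

# map applies add pairwise across both lists in lockstep
ListC = list(map(add, ListA, ListB))
print(ListC)  # [11, 22, 33]
```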
TypeError when assigning a non-existent path to a string

This is my first post here, so I am open to criticism regarding the post itself in terms of layout etc. I am writing a script to automatically sort files. For that I have to ask for a source where the files are.

import os
import os.path
import shutil
import time

def main():
    src = get_source()
    dst = get_destination()
    sort_files_by_modification_time(src, dst)

def get_source():
    path_source = input("Where are the files?\n")
    if not os.path.exists(path_source):
        print("The system cannot find the path specified.")
        get_source()
    elif not os.listdir(path_source):
        print("The selected path is empty.")
        get_source()
    else:
        return path_source

# didn't want to simplify this bit, since in the line path_current = something
# the error occurs. It is a TypeError: expected str
def sort_files_by_modification_time(path_source):
    for x in range(0, len(os.listdir(path_source))):
        path_current = os.path.join(path_source, os.listdir(path_source)[x])

if __name__ == '__main__':
    main()

I tried to simplify it as much as possible. When I type in an existing path, everything works fine, e.g. actualpath\testfolder exists. If I were to "accidentally" type actualpath\testfolde without the r, the if not branch will call the method again. The same will happen if the folder is empty (which is checked with elif). So: if inside get_source() get_source gets called again, the variable src will be a NoneType (contain None). What am I doing wrong? Thanks for advice!
In the get_source function, when you recursively call get_source(), you haven't specified that it should return the result, so the statement has no effect. Add the return keyword in those two instances, like this:

def get_source():
    path_source = input("Where are the files?\n")
    if not os.path.exists(path_source):
        print("The system cannot find the path specified.")
        return get_source()
    elif not os.listdir(path_source):
        print("The selected path is empty.")
        return get_source()
    else:
        return path_source
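An alternative sketch (not from the original answer) that sidesteps the recursion entirely, and with it any chance of hitting the recursion limit on many repeated bad inputs: loop until a valid, non-empty path is entered.

```python
import os

def get_source():
    # Keep prompting until the path exists and is non-empty; each branch
    # mirrors the recursive version's checks.
    while True:
        path_source = input("Where are the files?\n")
        if not os.path.exists(path_source):
            print("The system cannot find the path specified.")
        elif not os.listdir(path_source):
            print("The selected path is empty.")
        else:
            return path_source
```

Behaviour matches the fixed recursive version; the while-loop form just makes the "ask again" intent explicit.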
Issue regarding a for loop based off an Access table

Essentially, what is supposed to happen is that the code takes from a database table some information containing IDs. When one of the input.text() elements is found in the database as one of the IDs, I expect it to run X, but instead it bypasses that and runs Y.

DBConnect = pyodbc.connect('Driver={Microsoft Access Driver (*.mdb, *.accdb)}; Dbq=C:\\A\\B\\C;')
DBSelect = DBConnect.cursor()
DBSelect.execute("select * from ...")
Row = DBSelect.fetchall()

Update = False
for field in Row:
    Appointment_ID = field[0]
    print(Appointment_ID)
    Selected_ID = self.ui.input.text()
    Selected_ID = str(Selected_ID)
    print(Selected_ID, "this is selected")
    print(Update)
    if Appointment_ID == Selected_ID:
        Update == True
        print(Update, "this is update")

if Update == True:
    run X
else:
    run Y

Here are the printouts produced on a run of this code when I input 141; as you can see, it never produces Update = True:

139
141 this is selected
False
140
141 this is selected
False
141
141 this is selected
False
False this is update
You have a bug: inside the loop, `Update == True` should be `Update = True`.
Making attributes that calculate with the value of other attributes

I'm writing a program where you can put in the home team, the away team, and the result of a game. I want the data of the teams to change according to this, and most of it does, but I can't make the "points", "goal difference" and "played" (games) change! This is the code I have written so far:

```python
class team:
    def __init__(self, name, wins, drawn, losses, goals_for, goals_against):
        self.name = name
        self.wins = int(wins)
        self.drawn = int(drawn)
        self.losses = int(losses)
        self.goals_for = int(goals_for)
        self.goals_against = int(goals_against)
        self.goals_difference = (self.goals_for - self.goals_against)
        self.points = ((self.wins * 3) + self.drawn)
        self.played = (self.wins + self.drawn + self.losses)

    def __repr__(self):
        return 'Name:{} P:{} W:{} D:{} L:{} GF:{} GA:{} GD:{} PTS:{}'.format(
            self.name, self.played, self.wins, self.drawn, self.losses,
            self.goals_for, self.goals_against, self.goals_difference, self.points)

detroit_red_wings = team("Detroit", 1, 0, 3, 4, 5)
los_angeles_kings = team("LA", 0, 1, 4, 3, 7)
toronto_maple_leafs = team("Toronto", 1, 2, 2, 3, 6)
teamlist = [detroit_red_wings, los_angeles_kings, toronto_maple_leafs]
print(teamlist)

class data_input:
    def home_team_input(self):
        home_team = input("Type in the home team: ")
        for i in teamlist:
            if i.name == home_team:
                return i

    def away_team_input(self):
        away_team = input("Type in the away team: ")
        for t in teamlist:
            if t.name == away_team:
                return t

    def result_input(self):
        goals_home_team = int(input("Type in the number of goals made by the home team: "))
        goals_away_team = int(input("Type in the number of goals made by the away team: "))
        return (goals_home_team, goals_away_team)

def adding_result():
    home_team = data_input.home_team_input()
    away_team = data_input.away_team_input()
    goals_home_team, goals_away_team = data_input.result_input()
    home_team.goals_for += goals_home_team
    home_team.goals_against += goals_away_team
    away_team.goals_for += goals_away_team
    away_team.goals_against += goals_home_team
    if goals_home_team > goals_away_team:
        home_team.wins += 1
        away_team.losses += 1
    if goals_home_team < goals_away_team:
        away_team.wins += 1
        home_team.losses += 1
    if goals_home_team == goals_away_team:
        home_team.drawn += 1
        away_team.drawn += 1

data_input = data_input()
adding_result()
print(teamlist)
```

I wrote the calculations for the derived attributes in the `__init__` method of the class `team`, and as you can see the points depend on the wins. This all works when I create the objects, but when I put in the result of a new game the points don't change (neither do `played` or `goals_difference`). This surprises me because the other attributes change when I type in the result of the game in the input function.
If you update your `team` class to make the calculated fields properties, the property functions will always return the correct result. You will also get an error if you try to set those properties, as they are not settable, i.e., they are the result of a calculation on other set data.

```python
class team:
    def __init__(self, name, wins, drawn, losses, goals_for, goals_against):
        self.name = name
        self.wins = int(wins)
        self.drawn = int(drawn)
        self.losses = int(losses)
        self.goals_for = int(goals_for)
        self.goals_against = int(goals_against)

    @property
    def goals_difference(self):
        return self.goals_for - self.goals_against

    @property
    def points(self):
        return self.wins * 3 + self.drawn

    @property
    def played(self):
        return self.wins + self.drawn + self.losses

    def __repr__(self):
        return 'Name:{} P:{} W:{} D:{} L:{} GF:{} GA:{} GD:{} PTS:{}'.format(
            self.name, self.played, self.wins, self.drawn, self.losses,
            self.goals_for, self.goals_against, self.goals_difference, self.points)
```

I would also consider making the W/L/D and GF/GA initializers tuples or dictionaries rather than passing 5 variables to the initializer.
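To make concrete why properties fix the stale values, here is a stripped-down sketch of the class (only the fields needed for `points` and `played`) showing the derived values tracking a later update:

```python
class Team:
    """Minimal version of the team class with derived stats as properties."""

    def __init__(self, name, wins, drawn, losses):
        self.name = name
        self.wins = wins
        self.drawn = drawn
        self.losses = losses

    @property
    def points(self):
        # recomputed on every access, so it can never go stale
        return self.wins * 3 + self.drawn

    @property
    def played(self):
        return self.wins + self.drawn + self.losses

detroit = Team("Detroit", 1, 0, 3)
print(detroit.points, detroit.played)  # 3 4
detroit.wins += 1                      # record another win
print(detroit.points, detroit.played)  # 6 5
```

With plain attributes computed in `__init__`, the second print would still show the original values, which is exactly the symptom described in the question.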
Tricky Python 3.5 CSV Puzzle - efficiently create 100 lists from a CSV file without referencing criteria each time

My project was to take 100 different colors and collect people's one-word reactions to them, so there are two columns, Color and Reaction, with about 600 cases. My goal is to take every reaction for the same color and merge them into some sort of list, called `[Color]_All_Reactions`. If there were just (say) three colors, I could simply iterate through every reaction associated with a specified color and add them to a list, but I have too many colors to do that for each one by hand. I need to write something that iterates through all reactions, creates an All_Reactions list for each unique color, and then appends the reactions associated with each unique color to that color's All_Reactions list. Here is a simple file with just 3 colors. I'm wondering if anyone can point me in the right direction.
Use a dictionary. Create a dictionary with the color as the key and the list of reactions as the value. This way, iterating over it will be a breeze.

Pro tip: use `collections.defaultdict` instead of a regular `dict`.
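A minimal sketch of that approach, assuming the CSV has `Color` and `Reaction` headers (column names guessed from the question; the sample data here is made up):

```python
import csv
from collections import defaultdict
from io import StringIO

# Stand-in for open("reactions.csv") -- a tiny CSV with two colors.
sample = "Color,Reaction\nred,angry\nblue,calm\nred,warm\n"

reactions = defaultdict(list)  # missing keys start as an empty list
for row in csv.DictReader(StringIO(sample)):
    reactions[row["Color"]].append(row["Reaction"])

print(dict(reactions))  # {'red': ['angry', 'warm'], 'blue': ['calm']}
```

One pass over the file builds every per-color list at once, with no need to name a variable per color.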
I cannot reach Entry's values from tkinter, the last one corrupts the others

I am a new Python user. I am trying to make a series of entries identified by labels and get the values inserted. The callback functions seem well done, but it's always the third entry's value that I reach. I am a Python/Linux user, version 2.7.6. It seems to be a lambda declaration issue; can you help me?

```python
import Tkinter as tk

class c1(tk.Frame):
    def __init__(self, master=None):
        r = 0
        tk.Frame.__init__(self, master)
        self.grid()
        self.master.title('Test three binds')
        self.master.geometry('300x200+400+400')
        self.ents = {}
        for i in ['aaa', 'bbb', 'ccc']:
            r += 1
            self.ents[i] = c2()
            self.ents[i].label = tk.Label(text=i)
            self.ents[i].label.grid(row=r, column=0)
            self.ents[i].entry = tk.Entry()
            self.ents[i].entry.grid(row=r, column=1)
            self.ents[i].val = tk.StringVar()
            self.ents[i].val.set(i)
            self.ents[i].entry["textvariable"] = self.ents[i].val
            self.ents[i].entry.bind('<Key-Return>', lambda X: self.verif(self.ents[i]))

    def verif(self, event):
        print event.val.get()

class c2:
    pass

mm = c1()
for ii in mm.ents:
    print mm.ents[ii].val.get()
mm.mainloop()
```
One problem is this line of code:

```python
self.ents[i].entry["textvariable"] = self.ents[i].val
```

The `textvariable` attribute must be set to an instance of `StringVar` (or one of the other tkinter variables), not the value of such an instance. You need to remove `.val`.

The other problem is that you need to bind (as in: create a closure over) the value of `i` at the time you create the lambda. You can do that like this:

```python
self.ents[i].entry.bind('<Key-Return>', lambda event, i=i: self.verif(self.ents[i]))
```

However, if all you need in `self.verif` is the value from the entry widget, you can reduce the complexity of your code by removing the use of the textvariable altogether, as well as the use of `lambda`, since the event contains a reference to the widget itself. Both textvariables and `lambda` are rarely truly required.

```python
self.ents[i].entry.bind('<Key-Return>', self.verif)
...
def verif(self, event):
    print event.widget.get()
```

On a final note, tkinter Entry widgets have built-in verification support. Take a look at this answer for more information: https://stackoverflow.com/a/4140988/7432
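The late-binding pitfall that the second fix addresses can be seen without tkinter at all:

```python
# All three lambdas close over the same loop variable, which ends as 'ccc'.
callbacks = [lambda: i for i in ("aaa", "bbb", "ccc")]
print([cb() for cb in callbacks])  # ['ccc', 'ccc', 'ccc']

# Binding i as a default argument captures each value at creation time.
callbacks = [lambda i=i: i for i in ("aaa", "bbb", "ccc")]
print([cb() for cb in callbacks])  # ['aaa', 'bbb', 'ccc']
```

This is exactly why every Return key press in the original code reaches the third entry.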
Python processes fail to start

I'm running the following code block in my application. While running it with Python 3.4 I get a 'python quit unexpectedly' popup on my screen. The data missing from the `aOut` file is for a bunch of iterations, and it is in chunks: say items 0-1000 in the list are not present and the others have the data. The other items run properly on their own without intervention. While using Python 2.7 the failures are for items ~3400-4400 in the list.

From logging I see that the `detect()` calls are not made for processes 0-1000, i.e. the `process.start()` calls don't trigger the `detect` method. I am doing this on macOS Sierra. What is happening here? Is there a better way to achieve my purpose?

```python
def detectInBatch(aList, aOut):
    # iterate through the objects
    processPool = []
    pthreadIndex = 0
    pIndex = 0
    manager = Manager()
    dict = manager.dict()
    outline = ""
    print("Threads: ", getMaxThreads())  # max threads is 20
    for key in aList:
        print("Key: %s, pIndex: %d" % (key.key, pIndex))
        processPool.append(Process(target=detect, args=(key.key, dict)))
        pthreadIndex = pthreadIndex + 1
        pIndex = pIndex + 1
        # print("Added for %d" % (pIndex))
        if pthreadIndex == getMaxThreads():
            print("ProcessPool size: %d" % len(processPool))
            for process in processPool:
                # print("Started")
                process.start()
            print("20 Processes started")
            for process in processPool:
                # print("Joined")
                process.join()
            print("20 Processes joined")
            for key in dict.keys():
                outline = outline + dict.get(key)
            dict.clear()
            pthreadIndex = 0
            processPool = []
    if pthreadIndex != 0:
        for process in processPool:
            # print("End Start")
            process.start()
        for process in processPool:
            # print("End done")
            process.join()
        for key in dict.keys():
            print("Dict: " + dict.get(key))
            outline = outline + dict.get(key)
    aOut.write(outline)
# end of detectInBatch
```
To avoid the 'unexpected quit', perhaps try to ignore the exception with:

```python
try:
    your_loop()
except:
    pass
```

Then, put in some logging to track the root cause.
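For the logging part, one way to instrument the workers is to wrap the body of `detect` so every start and failure is recorded. This is only a sketch: the body below is a placeholder, and only the names `detect` and the shared dict come from the question:

```python
import logging

# %(processName)s makes it easy to see which child process logged each line
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(processName)s %(levelname)s %(message)s",
)

def detect(key, shared_dict):
    """Placeholder for the real detect(); logs entry and any failure."""
    try:
        logging.debug("starting detect for %s", key)
        # ... real detection work would go here ...
        shared_dict[key] = "result for %s" % key
    except Exception:
        logging.exception("detect failed for %s", key)
        raise

results = {}
detect("sample-key", results)
print(results)  # {'sample-key': 'result for sample-key'}
```

If whole batches never log their "starting" line, the problem is in process creation (e.g. too many simultaneous `Process` objects) rather than in the detection logic itself.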
Need help converting a number and a base to binary?

So I wrote this code to take a base and a number inputted by the user, and it is supposed to print the number with that base converted to binary. It does do what it's supposed to do, but it asks me for the base 3 times and then gives me the correct number. Here's the code
So I believe this is a duplicate of this question; if so, the code from their answer will work, you just need to pass base 2 to their function:

```python
def numberToBase(number, base):
    if number == 0:
        return [0]
    digits = []
    while number:
        digits.append(int(number % base))
        number //= base
    return digits[::-1]
```

So in your case:

```python
print(numberToBase(int(input("What is your number?: ")), 2))
```

Or if you want a string representation, I would recommend this:

```python
def number_to_base(number, base):
    if number == 0:
        return "0"
    digits = []
    while number:
        digits.append(str(number % base))
        number //= base
    return "".join(digits[::-1])
```
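Worth noting that for base 2 specifically, Python's built-ins already cover this, so no custom function is needed:

```python
n = 13
print(format(n, "b"))  # '1101'   -- binary digits only
print(bin(n))          # '0b1101' -- with the 0b prefix
```

The custom function is still useful for arbitrary bases, since `format` only supports binary, octal, and hex directly.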
How do I close pop-up windows with Selenium in Python when I don't know when they will pop up?

I am trying to scrape historical weather data from this website: https://www.worldweatheronline.com/taoyuan-weather-history/tai-wan/tw.aspx

Using this code:

```python
driver.find_element_by_css_selector("input[type='date']").send_keys(
    str(for_weather.mm[i]) + str(for_weather.dd[i]) + for_weather.year[i].astype(str))
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.ID, 'ctl00_MainContentHolder_butShowPastWeather'))).click()
temp = driver.find_element_by_xpath('//div[@class="days-collapse-temp"]').get_attribute('innerHTML')
```

On some of the pages, a popup appears. I've seen help that shows how to choose and close popups, but in my case we don't know when they will show up: on some pages they appear, on others they don't. When they do show up, they prevent me from obtaining the data I want and stop the loop. The following is the error message (it has the characteristics of the popup):

    ElementClickInterceptedException: element click intercepted: Element <input type="submit" name="ctl00$MainContentHolder$butShowPastWeather" value="Get Weather" id="ctl00_MainContentHolder_butShowPastWeather" class="btn btn-success ml-2"> is not clickable at point (956, 559). Other element would receive the click: <div class="introjs-overlay" style="inset: 0px; position: fixed; cursor: pointer;"></div>
    (Session info: chrome=97.0.4692.71)

Thanks much!
Here's a Python Selenium solution that uses SeleniumBase. First `pip install seleniumbase`, then copy the example below into a Python file, e.g. `weather_test.py`. Then run it with pytest:

    pytest weather_test.py --block-ads

```python
from seleniumbase import BaseCase

class MyTestClass(BaseCase):
    def test_base(self):
        self.open("https://www.worldweatheronline.com/taoyuan-weather-history/tai-wan/tw.aspx")
        self.js_click("#ctl00_MainContentHolder_butShowPastWeather")
        for i in range(1, 32):
            date_string = "2021-12-%s" % i
            self.set_attribute("#datePicker input", "value", date_string)
            self.js_click('input[value="Get Weather"]')
            temp = self.get_attribute("div.days-collapse-temp", "innerHTML")
            temp = temp.split("</span>")[-1]
            print("%s : %s" % (date_string, temp))
```

That will get you the weather in that city for all days in December:

    2021-12-1 : 11°/13°c
    2021-12-2 : 11°/13°c
    2021-12-3 : 11°/13°c
    2021-12-4 : 11°/13°c
    2021-12-5 : 11°/13°c
    2021-12-6 : 11°/13°c
    2021-12-7 : 11°/13°c
    2021-12-8 : 11°/13°c
    2021-12-9 : 11°/13°c
    2021-12-10 : 17°/21°c
    2021-12-11 : 17°/25°c
    2021-12-12 : 17°/20°c
    2021-12-13 : 14°/15°c
    2021-12-14 : 15°/23°c
    2021-12-15 : 15°/28°c
    2021-12-16 : 17°/27°c
    2021-12-17 : 12°/18°c
    2021-12-18 : 11°/13°c
    2021-12-19 : 12°/18°c
    2021-12-20 : 15°/18°c
    2021-12-21 : 16°/18°c
    2021-12-22 : 17°/18°c
    2021-12-23 : 16°/19°c
    2021-12-24 : 16°/21°c
    2021-12-25 : 13°/14°c
    2021-12-26 : 9°/12°c
    2021-12-27 : 9°/11°c
    2021-12-28 : 10°/18°c
    2021-12-29 : 14°/17°c
    2021-12-30 : 12°/13°c
    2021-12-31 : 11°/14°c

The key difference is that it uses `js_click` to click on things instead of a regular click, which would give you an `ElementClickInterceptedException` if there was a pop-up.
How to save a dataframe as a CSV file in a FileField

I am trying to save a dataframe as a CSV file in a model object's FileField, but it is not saving it correctly; the file that gets saved contains some other language characters! Please tell me what I am doing wrong.

```python
new_df = df.to_csv(columns=['A', 'B'], index=False)
doc.csvfile.save(f'{doc.id}.csv', ContentFile(new_df))
```
Hello, you can try to save the CSV file with the code below. Note that `df.to_csv(...)` without a path already returns the full CSV text as a string, so it can be written to the buffer directly (passing the string to `csv.writer.writerow` would split it into individual characters):

```python
from io import StringIO
from django.core.files.base import ContentFile

csv_text = df.to_csv(columns=['A', 'B'], index=False)
csv_buffer = StringIO()
csv_buffer.write(csv_text)

csv_file = ContentFile(csv_buffer.getvalue().encode('utf-8'))
doc.csvfile.save('output.csv', csv_file)
```
Convert SVG image to PNG image with Python

I have used svglib with this code:

```python
from svglib.svglib import svg2rlg
from reportlab.graphics import renderPM

drawing = svg2rlg('E:/img/1926_S1_style_1_0_0.svg')
renderPM.drawToFile(drawing, 'image.jpg', fmt='jpg')
```

But the image I receive does not look like the original SVG. So what should I do to convert SVG to PNG in the right way?
Try using cairosvg:

```python
from cairosvg import svg2png

svg_code = """
    <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="#000" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
        <circle cx="12" cy="12" r="10"/>
        <line x1="12" y1="8" x2="12" y2="12"/>
        <line x1="12" y1="16" x2="12" y2="16"/>
    </svg>
"""

svg2png(bytestring=svg_code, write_to='output.png')
```

Answer by JWL.
Opening a document in a text widget (tkinter)

The example code opens a .txt file, but is there a way to open a Word document, preferably a .docx file?

```python
from Tkinter import *
import Pmw, sys

filename = "textfile.txt"
root = Tk()
top = Frame(root); top.pack(side='top')
text = Pmw.ScrolledText(top,
                        borderframe=5,
                        vscrollmode='dynamic',
                        hscrollmode='dynamic',
                        labelpos='n',
                        label_text='file %s' % filename,
                        text_width=40,
                        text_height=4,
                        text_wrap='none',
                        )
text.pack()
text.insert('end', open(filename, 'r').read())
Button(top, text='Quit', command=root.destroy).pack(pady=15)
root.mainloop()
```
No, the text widget can't display a .docx file. As the name implies, it is for displaying plain text.
How to parallelize a for loop with multiple functions inside - Python - AWS Glue

Here is my question. I have created several functions, and those functions will be run per UserID (that is the reason for the for loop below). It will be run in AWS Glue. I need to make this code scalable in Python / AWS Glue, working with millions of UserIDs. Since each UserID has 3000 records, 1 million users will be 3,000,000,000 records in total.

I thought about parallelizing the for loop below, and I read about Dask, Pandarallel, and some others. (I even tried increasing resources in AWS Glue, like worker types, but that did not work.) The problem is, I do not know how to implement those libraries when I have to run a for loop with several functions inside, and when the output of each function is the input of the following function.

Does anyone have a clue how I can parallelize that loop? (It could be using Python, Spark, PySpark, etc.) Thanks in advance.

```python
raw_data = pd.read_csv('C:{path}/tracking.csv')
raw_data["Time"] = pd.to_datetime(raw_data.Time)
raw_data = raw_data.sort_values(['UserId', 'Time'], ascending=[True, True])
listIdUser = raw_data['UserId'].unique().tolist()

poi = pd.DataFrame(columns=['PointsId', 'TimeInitial', 'TimeEnding', 'Lat', 'Lon', 'TotalTime', 'UserId'])

for i in listIdUser:
    source_data = raw_data[raw_data["UserId"] == i]
    source_data = source_data.reset_index().drop(["index"], axis=1)
    clean_data = single_outlier_detection(source_data)
    pairwise_distance = pairwise_distance_calculation(clean_data)
    data = consecutive_points(clean_data)
    dbscan_data = coordinates_data_preparation(data)
    distances = distances_estimation(dbscan_data)
    threshold = dbscan_epsilon_estimation(distances, pairwise_distance)
    cluster_df = dbscan_model_run(threshold, dbscan_data)
    points_stay = places_of_interest(cluster_df)
    poi = pd.concat([poi, points_stay])
```
Since your logic is already defined for one `listIdUser`, all you have to do is wrap this logic with Fugue. We can separate the partitioning and the execution. I don't have data to test on, but it will look like this:

```python
raw_data = pd.read_csv('C:{path}/tracking.csv')
raw_data["Time"] = pd.to_datetime(raw_data.Time)

def logic_one_id(df: pd.DataFrame) -> pd.DataFrame:
    # notice I removed the step to subset the raw data
    source_data = df.reset_index().drop(["index"], axis=1)
    clean_data = single_outlier_detection(source_data)
    pairwise_distance = pairwise_distance_calculation(clean_data)
    data = consecutive_points(clean_data)
    dbscan_data = coordinates_data_preparation(data)
    distances = distances_estimation(dbscan_data)
    threshold = dbscan_epsilon_estimation(distances, pairwise_distance)
    cluster_df = dbscan_model_run(threshold, dbscan_data)
    points_stay = places_of_interest(cluster_df)
    # notice I removed the concat. Fugue will handle it
    return points_stay
```

and then you can do:

```python
output_schema = "PointsId:int,TimeInitial:datetime,TimeEnding:datetime,Lat:float,Lon:float,TotalTime:float,UserId:int"

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

from fugue import transform
transform(df,
          logic_one_id,
          schema=output_schema,
          partition={"by": "UserId", "presort": "Time"},
          engine=spark)
```

A couple of things to notice:

- Schema is a requirement for Spark, so I guessed your output schema
- The `transform()` function will handle the partition by UserId, so you don't need to do it yourself
- We presorted the data by time during partitioning
- We passed in `spark` as the engine. If `engine=None`, it will run on Pandas, so you can test locally pretty easily.

You can test on Pandas like this:

```python
test_df = raw_data[raw_data["UserId"] == one_id_here]
transform(test_df,
          logic_one_id,
          schema=output_schema,
          partition={"by": "UserId", "presort": "Time"})
```

and if it works, run all the data on the Spark engine.
Understanding what the syntax for {:.2} means in Python

I am working on creating a linear regression model for a specific data set, following an example I found on YouTube. At some point I calculate the kurtosis and the skewness as below:

```python
# calculate the excess kurtosis using the Fisher method.
# The alternative is Pearson, which calculates regular kurtosis.
exxon_kurtosis = kurtosis(price_data['exxon_price'], fisher=True)
oil_kurtosis = kurtosis(price_data['oil_price'], fisher=True)

# calculate the skewness
exxon_skew = skew(price_data['exxon_price'])
oil_skew = skew(price_data['oil_price'])

display("Exxon Excess Kurtosis: {:.2}".format(exxon_kurtosis))  # this looks fine
display("Oil Excess Kurtosis: {:.2}".format(oil_kurtosis))      # this looks fine
display("Exxon Skew: {:.2}".format(exxon_skew))  # moderately skewed
display("Oil Skew: {:.2}".format(oil_skew))      # moderately skewed; a little high, but we will accept it
```

I am new to Python, and the following code confuses me. Please can someone explain what the `{:.2}` part means in:

```python
display("Exxon Excess Kurtosis: {:.2}".format(exxon_kurtosis))
```
The `kurtosis` and `skew` functions are doing the calculation, while the `display` function is probably just some form of `print()` for that environment!

`".. {:.2}".format(x)` is a string formatter which rounds floating-point numbers to 2 significant digits:

```python
>>> "{:.2}".format(3.0)
'3.0'
>>> "{:.2}".format(0.1555)
'0.16'
>>> "{:.2}".format(3.1555)
'3.2'
```

String formatting is exhaustively detailed here.
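The same format spec also works in f-strings (Python 3.6+), and it is worth contrasting `.2` with the more common `.2f`:

```python
pi = 3.14159
print("{:.2}".format(pi))  # '3.1'  -> 2 significant digits
print(f"{pi:.2}")          # '3.1'  -> same spec, f-string form
print(f"{pi:.2f}")         # '3.14' -> '.2f' means 2 decimal places instead
```

So `{:.2}` is precision for the general format (significant digits), while `{:.2f}` is fixed-point with 2 digits after the decimal point.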
Non-interactive authentication fails with WsTrust server issue MSIS7068

Setup:

- Users are created on on-prem AD and synced to Azure AD via Azure AD Connect
- I have a single-tenant app set up on Azure AD
- I created a user (on-prem, synced to AAD) that can authenticate without MFA (we need to use username-password authentication due to an internal limitation).

Here is the non-interactive authentication code:

```python
import msal

# create a public client app
authority_url = f"https://login.microsoftonline.com/{TENANT_ID}"
msal_app = msal.PublicClientApplication(client_id=CLIENT_ID, authority=authority_url)

# acquire token
token = msal_app.acquire_token_by_username_password(username=USERNAME, password=PASSWORD, scopes=SCOPES)
```

I'm getting the following error:

    Traceback (most recent call last):
      File "/./scripts/aad.py", line 8, in <module>
        token = msal_app.acquire_token_by_username_password(
      File "/usr/local/lib/python3.10/site-packages/msal/application.py", line 1420, in acquire_token_by_username_password
        response = _clean_up(self._acquire_token_by_username_password_federated(
      File "/usr/local/lib/python3.10/site-packages/msal/application.py", line 1447, in _acquire_token_by_username_password_federated
        wstrust_result = wst_send_request(
      File "/usr/local/lib/python3.10/site-packages/msal/wstrust_request.py", line 60, in send_request
        return parse_response(resp.text)
      File "/usr/local/lib/python3.10/site-packages/msal/wstrust_response.py", line 49, in parse_response
        raise RuntimeError("WsTrust server returned error in RSTR: %s" % (error or body))
    RuntimeError: WsTrust server returned error in RSTR: {'reason': 'MSIS7068: Access denied.', 'code': 'a:FailedAuthentication'}

Searching through Google I found that this can be caused by MFA, but the user is excluded from MFA. I've also verified that there are no Conditional Access policies in place to block the user from accessing the app. Using interactive auth works as expected. Any ideas on how to get non-interactive auth to work, or what might be the issue here?
First, no guesswork! You will need to log in to Azure AD with elevated privilege (Security Reader at the least, if not Global Administrator).

1. Go to Enterprise Applications and locate your application by client id.
2. Once you are at the application, go to the Sign-in tab/pane.
3. Review the sign-in activities. You should see the reason authentication failed in the overview tab. Look at the Conditional Access tab and you will know if there is any policy that blocked the sign-in.
4. Take action based on what you identified in the sign-in activity.

Okay, I am going to make an educated guess! When you log in non-interactively, you have two authentication choices - ROPC and Client Credential - both of which require `client_secret` to be passed in the request, but you have not! Since you are using username and password, it implies that MSAL is using ROPC, and you must include the client secret.
Is it possible to expose the replay buffer in A2C (Stable Baselines 3) to include human judgements?

I am using the A2C (Advantage Actor Critic) framework from the stable-baselines3 package (package link here) for solving a reinforcement learning problem where the reward is +1 or 0. I have an automatic mechanism to allocate a reward to a choice in a given state. However, that automatic mechanism is not good enough at rewarding my choices. I have evaluated that human judgement (if a human sits and rewards the choices) is better. Now, I want to incorporate this human judgement into the A2C framework during training.

This is my understanding of how A2C works: let's say there are N timesteps in one episode. The trajectory is stored in an experience replay buffer, [(S1, A1, R1), (S2, A2, R2), ...], which is used to train the actor and critic neural networks at the end of the episode. Can I access this buffer that is sent to the neural networks for training? Or is there any alternative way to introduce a human in the loop in the A2C framework?
Of course! The environment is a simple Python script in which, somewhere at the end of `env.step`, the reward is calculated and returned, to then be added along with the state and the action to the replay buffer. You could then manually insert the reward value each time an action is taken, using simple I/O commands.

However, deep reinforcement learning usually requires hundreds of thousands of iterations (of experience) before learning something useful (unless the environment is simple enough).
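As a sketch of the idea, not stable-baselines3 API: the wrapper below replaces the environment's automatic reward with one supplied by a callable, which for a real human would be something like `lambda obs, action: float(input("Reward 0 or 1: "))`. The class and function names here are made up, and the step signature follows the classic Gym convention:

```python
class DummyEnv:
    """Stands in for a real environment; its automatic reward is always 0."""
    def step(self, action):
        obs, reward, done, info = action * 2, 0.0, True, {}
        return obs, reward, done, info

class HumanRewardWrapper:
    """Replace the env's automatic reward with one from ask_fn.

    ask_fn is injectable so the human can be simulated during testing;
    in training it would block on console input each step.
    """
    def __init__(self, env, ask_fn):
        self.env = env
        self.ask_fn = ask_fn

    def step(self, action):
        obs, _, done, info = self.env.step(action)  # discard the automatic reward
        reward = self.ask_fn(obs, action)           # human judgement replaces it
        return obs, reward, done, info

env = HumanRewardWrapper(DummyEnv(), ask_fn=lambda obs, action: 1.0)
obs, reward, done, info = env.step(3)
print(obs, reward)  # 6 1.0
```

Because the agent only ever sees what `step` returns, the learning code (A2C included) needs no changes; the rollout buffer simply fills with the human-supplied rewards.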
Data visualization of CSV file with dash

I am new to Python. https://realpython.com/python-dash provides code for visualizing a line graph from a CSV file using Python's dash. I ran the code below, but receive an error.

```python
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd

data = pd.read_csv("avocado.csv")
data = data.query("type == 'conventional' and region == 'Albany'")
data["Date"] = pd.to_datetime(data["Date"], format="%Y-%m-%d")
data.sort_values("Date", inplace=True)

app = dash.Dash(__name__)
app.layout = html.Div(
    children=[
        html.H1(children="Avocado Analytics",),
        html.P(
            children="Analyze the behavior of avocado prices"
            " and the number of avocados sold in the US"
            " between 2015 and 2018",
        ),
        dcc.Graph(
            figure={
                "data": [
                    {
                        "x": data["Date"],
                        "y": data["AveragePrice"],
                        "type": "lines",
                    },
                ],
                "layout": {"title": "Average Price of Avocados"},
            },
        ),
        dcc.Graph(
            figure={
                "data": [
                    {
                        "x": data["Date"],
                        "y": data["Total Volume"],
                        "type": "lines",
                    },
                ],
                "layout": {"title": "Avocados Sold"},
            },
        ),
    ]
)

if __name__ == "__main__":
    app.run_server(debug=True)
```

This is the traceback:

    Traceback (most recent call last):
      File "/Users/halcyon/Documents/Python/Dashboard - Avocado prices/app.py", line 8, in <module>
        data["Date"] == pd.to_datetime(data["Date"], format="%Y-%m-%d")
      File "/Users/halcyon/Documents/Python/Dashboard - Avocado prices/venv/lib/python3.9/site-packages/pandas/core/ops/common.py", line 64, in new_method
        return method(self, other)
      File "/Users/halcyon/Documents/Python/Dashboard - Avocado prices/venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py", line 529, in wrapper
        res_values = comparison_op(lvalues, rvalues, op)
      File "/Users/halcyon/Documents/Python/Dashboard - Avocado prices/venv/lib/python3.9/site-packages/pandas/core/ops/array_ops.py", line 247, in comparison_op
        res_values = comp_method_OBJECT_ARRAY(op, lvalues, rvalues)
      File "/Users/halcyon/Documents/Python/Dashboard - Avocado prices/venv/lib/python3.9/site-packages/pandas/core/ops/array_ops.py", line 57, in comp_method_OBJECT_ARRAY
        result = libops.scalar_compare(x.ravel(), y, op)
      File "pandas/_libs/ops.pyx", line 84, in pandas._libs.ops.scalar_compare
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

I copied and pasted the code from the tutorial as it was shown, but was unable to reproduce it. I tried to Google and understand the material from the traceback log, but was unable to comprehend it.
I didn't see that it had been fixed in the comments. A couple of small changes to make it reproducible:

- dynamically get the data from GitHub rather than hoping it's on the file system
- used JupyterDash, which works out of the box with plotly 5.x.y

```python
import dash_core_components as dcc
import dash_html_components as html
from jupyter_dash import JupyterDash
import pandas as pd
import requests
import io

# data = pd.read_csv("avocado.csv")
data = pd.read_csv(io.StringIO(requests.get("https://raw.githubusercontent.com/chainhaus/pythoncourse/master/avocado.csv").text))
data = data.query("type == 'conventional' and region == 'Albany'")
data["Date"] = pd.to_datetime(data["Date"], format="%Y-%m-%d")
data.sort_values("Date", inplace=True)

app = JupyterDash(__name__)
# app = dash.Dash(__name__)
```
Cuda:0 device type tensor to numpy problem for plotting graph

As mentioned in the title, I am facing the problem of:

    TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

I found out that there needs to be a `.cpu()` call to overcome the problem, but I have tried various ways and am still unable to solve it.

```python
def plot(val_loss, train_loss, typ):
    plt.title("{} after epoch: {}".format(typ, len(train_loss)))
    plt.xlabel("Epoch")
    plt.ylabel(typ)
    plt.plot(list(range(len(train_loss))), train_loss, color="r", label="Train " + typ)
    plt.plot(list(range(len(val_loss))), val_loss, color="b", label="Validation " + typ)
    plt.legend()
    plt.savefig(os.path.join(data_dir, typ + ".png"))
    plt.close()
```
I guess during loss calculation, when you try to save the loss, instead of

```python
train_loss.append(loss)
```

it should be

```python
train_loss.append(loss.item())
```

`item()` returns the value of the tensor as a standard Python number; therefore `train_loss` will be a list of numbers and you will be able to plot it. You can read more about `item()` here: https://pytorch.org/docs/stable/generated/torch.Tensor.item.html
ValueError: shapes (3,3,1) and (3,1) not aligned: 1 (dim 2) != 3 (dim 0)

I am trying to multiply some matrices in Python, using the `np.dot` function. I have a three-by-three array that I want to multiply by a three-by-one array, but get:

    ValueError: shapes (3,3,1) and (3,1) not aligned: 1 (dim 2) != 3 (dim 0)

What exactly does the third dimension on the array mean? Is there a way to get rid of it?
A shape of (3,3,1) means you have 3 blocks, each containing 3 one-element vectors. Take this as an example:

```python
a = np.random.rand(3, 3, 1)
print(a)
```

    [[[0.08233029]
      [0.21532053]
      [0.88495997]]

     [[0.59743708]
      [0.97966668]
      [0.44927175]]

     [[0.40792714]
      [0.85891152]
      [0.22584841]]]

In order to remove the extra axis, just use `np.reshape` to do the trick:

```python
a = np.reshape(a, [3, 3])
print(a)
```

    [[0.08233029 0.21532053 0.88495997]
     [0.59743708 0.97966668 0.44927175]
     [0.40792714 0.85891152 0.22584841]]

From here onwards, you can do your `np.dot` to obtain your result.
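An alternative to `reshape` that states the intent directly is `np.squeeze`, which drops only axes of size 1:

```python
import numpy as np

a = np.random.rand(3, 3, 1)
b = np.random.rand(3, 1)

# squeeze removes the trailing length-1 axis, giving a plain (3, 3) matrix,
# so the dot product with a (3, 1) vector is well-defined
result = np.dot(np.squeeze(a, axis=2), b)
print(result.shape)  # (3, 1)
```

Passing `axis=2` explicitly makes `squeeze` raise an error if that axis is not actually size 1, which guards against silently mangling a differently shaped array.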
Incredibly basic lxml questions: getting HTML/string content of lxml.etree._Element?

This is such a basic question that I actually can't find it in the docs :-/ In the following:

```python
img = house_tree.xpath('//img[@id="mainphoto"]')[0]
```

How do I get the HTML of the `<img/>` tag? I've tried adding `html_content()` but get `AttributeError: 'lxml.etree._Element' object has no attribute 'html_content'`. Also, if it were a tag with some content inside (e.g. `<p>text</p>`), how would I get the content (e.g. `text`)? Many thanks!
I suppose it will be as simple as:

```python
from lxml.etree import tostring

inner_html = tostring(img)
```

As for getting content from inside `<p>`, say, some selected element `el`:

```python
content = el.text_content()
```
Web scraping a hidden table using Python

I am trying to scrape the "Traits" table from this website: https://www.ebi.ac.uk/gwas/genes/SAMD12 (actually, the URL can change according to my necessity, but the structure will be the same). The problem is that my knowledge of web scraping is quite limited, and I can't get this table using the basic BeautifulSoup workflow I've seen up to here. Here's my code:

```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.ebi.ac.uk/gwas/genes/SAMD12'
page = requests.get(url)
```

I'm looking for the "efotrait-table":

```python
efotrait = soup.find('div', id='efotrait-table-loading')
print(efotrait.prettify())
```

```html
<div class="row" id="efotrait-table-loading" style="margin-top:20px">
 <div class="panel panel-default" id="efotrait_panel">
  <div class="panel-heading background-color-primary-accent">
   <h3 class="panel-title">
    <span class="efotrait_label">
     Traits
    </span>
    <span class="efotrait_count badge available-data-btn-badge">
    </span>
   </h3>
   <span class="pull-right">
    <span class="clickable" onclick="toggleSidebar('#efotrait_panel span.clickable')" style="margin-left:25px">
     <span class="glyphicon glyphicon-chevron-up">
     </span>
    </span>
   </span>
  </div>
  <div class="panel-body">
   <table class="table table-striped borderless" data-export-types="['csv']" data-filter-control="true" data-flat="true" data-icons="icons" data-search="true" data-show-columns="true" data-show-export="true" data-show-multi-sort="false" data-sort-name="numberAssociations" data-sort-order="desc" id="efotrait-table">
   </table>
  </div>
 </div>
</div>
```

Specifically, this one:

```python
soup.select('table#efotrait-table')[0]
```

```html
<table class="table table-striped borderless" data-export-types="['csv']" data-filter-control="true" data-flat="true" data-icons="icons" data-search="true" data-show-columns="true" data-show-export="true" data-show-multi-sort="false" data-sort-name="numberAssociations" data-sort-order="desc" id="efotrait-table">
</table>
```

As you can see, the table's content doesn't show up. On the website, there's an option for saving the table as CSV. It would be awesome if I could get this downloadable link somehow, but when I click the link in order to copy it, I get "javascript:void(0)" instead. I've not studied JavaScript; should I? The table is hidden, and even if it weren't, I would need to interactively select more rows per page to get the whole table (and the URL doesn't change, so I can't get the table that way either). I would like to know a way to access this table programmatically (unstructured info); the minor details about organizing the table will be fine afterwards. Any clues as to how to do that (or what I should study) will be greatly appreciated. Thanks in advance.
Desired data is available within an API call.

```python
import requests

data = {
    "q": "ensemblMappedGenes: \"SAMD12\" OR association_ensemblMappedGenes: \"SAMD12\"",
    "max": "99999",
    "group.limit": "99999",
    "group.field": "resourcename",
    "facet.field": "resourcename",
    "hl.fl": "shortForm,efoLink",
    "hl.snippets": "100",
    "fl": "accessionId,ancestralGroups,ancestryLinks,associationCount,association_rsId,authorAscii_s,author_s,authorsList,betaDirection,betaNum,betaUnit,catalogPublishDate,chromLocation,chromosomeName,chromosomePosition,context,countriesOfRecruitment,currentSnp,efoLink,ensemblMappedGenes,fullPvalueSet,genotypingTechnologies,id,initialSampleDescription,label,labelda,mappedLabel,mappedUri,merged,multiSnpHaplotype,numberOfIndividuals,orPerCopyNum,orcid_s,pValueExponent,pValueMantissa,parent,positionLinks,publication,publicationDate,publicationLink,pubmedId,qualifier,range,region,replicateSampleDescription,reportedGene,resourcename,riskFrequency,rsId,shortForm,snpInteraction,strongestAllele,studyId,synonym,title,traitName,traitName_s,traitUri,platform",
    "raw": "fq:resourcename:association or resourcename:study"
}

def main(url):
    r = requests.post(url, data=data).json()
    print(r)

main("https://www.ebi.ac.uk/gwas/api/search/advancefilter")
```

You can follow r.keys() and load your desired data by accessing the dict. But here's a quick load (lazy code):

```python
import requests
import re
import pandas as pd

# `data` is the same payload dict as in the snippet above

def main(url):
    r = requests.post(url, data=data)
    match = {item.group(2, 1) for item in re.finditer(
        r'traitName_s":\"(.*?)\".*?mappedLabel":\["(.*?)\"', r.text)}
    df = pd.DataFrame.from_dict(match)
    print(df)

main("https://www.ebi.ac.uk/gwas/api/search/advancefilter")
```

Output:

```
0   heel bone mineral density              Heel bone mineral density
1   interleukin-8 measurement              Chronic obstructive pulmonary disease-related ...
2   self reported educational attainment   Educational attainment (years of education)
3   waist-hip ratio                        Waist-hip ratio
4   eye morphology measurement             Eye morphology
5   CC16 measurement                       Chronic obstructive pulmonary disease-related ...
6   age-related hearing impairment         Age-related hearing impairment (SNP x SNP inte...
7   eosinophil percentage of leukocytes    Eosinophil percentage of white cells
8   coronary artery calcification          Coronary artery calcified atherosclerotic plaq...
9   multiple sclerosis                     Multiple sclerosis
10  mathematical ability                   Highest math class taken (MTAG)
11  risk-taking behaviour                  General risk tolerance (MTAG)
12  coronary artery calcification          Coronary artery calcified atherosclerotic plaq...
13  self reported educational attainment   Educational attainment (MTAG)
14  pancreatitis                           Pancreatitis
15  hair colour measurement                Hair color
16  breast carcinoma                       Breast cancer specific mortality in breast cancer
17  eosinophil count                       Eosinophil counts
18  self rated health                      Self-rated health
19  bone density                           Bone mineral density
```
Masking dataframe text column to a new column in pandas dataframe

I have the pandas dataframe below, and I would like to mask the ProductId column with a new column, assigning each id a new numeric value. How can I do that? Thanks

```python
import pandas as pd

df = pd.DataFrame({'ProductId': ['AXX11', 'CS22', 'AXX11', 'FV34', 'FV34', 'DF23', 'CS22'],
                   'Sales': [10, 34, 23, 45, 23, 54, 65]})
df
```

Desired outcome below:

```
ProductId  Mask_ProductId  Sales
AXX1       20              10
CS22       21              34
AXX1       20              23
FV34       8               45
FV34       8               23
DF23       12              54
CS22       21              65
```

Please help, thank you.
Use categorical:

```python
In [96]: df['Mask_ProductId'] = df.ProductId.astype('category').cat.codes

In [97]: df
Out[97]:
  ProductId  Sales  Mask_ProductId
0     AXX11     10               0
1      CS22     34               1
2     AXX11     23               0
3      FV34     45               3
4      FV34     23               3
5      DF23     54               2
6      CS22     65               1
```
Function call does not change value

```python
def divide_by_2(number):
    number /= 2

...

def main():
    n = 42
    divide_by_2(n)
    print(n)
```

The result is 42, not 21. Why is this the case? Thanks in advance.
You have to return the value from your function:

```python
def divide_by_2(number):
    return number / 2  # return the calculation

...

def main(n):
    n = divide_by_2(n)
    print(n)  # >> 21

main(42)  # call main with the variable number
```
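For context, a minimal sketch (not part of the answer above) of why the original version prints 42: the parameter is just a local name, and rebinding it inside the function never affects the caller's variable, so the new value has to be handed back with return:

```python
def divide_by_2_inplace(number):
    number /= 2          # rebinds the local name only
    return number        # the caller's variable is untouched either way

def divide_by_2(number):
    return number / 2    # hand the new value back instead

n = 42
divide_by_2_inplace(n)
print(n)                 # still 42: the rebinding stayed local

n = divide_by_2(n)       # store the returned value
print(n)                 # 21.0
```

Note that `/` in Python 3 produces a float, which is why the result prints as 21.0 rather than 21.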
Writing a hashtag to a file

I am using a python script to create a shell script that I would ideally like to annotate with comments. If I want to add strings with hashtags in them to a code section like this:

```python
with open(os.path.join("location", "filename"), "w") as f:
    file = f.read()

file += """my_function() {{
if [ $# -eq 0 ]
then
echo "Please supply an argument"
return
fi
echo "argument is $1"
}}"""

with open(os.path.join("location", "filename"), "w") as f:
    f.write(file)
```

what is the best way I can accomplish this?
You already have a # character in that string literal, in $#, so I'm not sure what the problem is.

Python considers a """ string literal one big string (newlines, comment-esque sequences and all, as you've noticed) until the ending """. To also pass escape characters (e.g. \n as \n, not a newline) through raw, you'd use r"""...""".

In other words, with

```python
with open("x", "w") as f:
    f.write("""x
hi # hello
world""")
```

you end up with a file containing

```
x
hi # hello
world
```
Extract value from first column in pandas dataframe and add it in file name while saving

I have the following dataframe:

```
year  city     population
2002  Chicago  100000
2002  Dallas   150000
2002  Denver   200000
```

I want to extract "2002" from the first column (one file will have the same value in each row of the first column) and add it to the file name I will save.

Output file name: 2002_city_population.csv

I am trying this:

```python
df = pd.read_csv('city_population.csv', index_col=0)
df.to_csv('2002_city_population.csv')
```

Currently I am hardcoding "2002" in the file name, but I want 2002 to come from the first column of the file, as each file will have a different year.
You can do it with a variable and some f-string formatting:

```python
year = df.at[0, 'year']
df.to_csv(f'{year}_city_population.csv')
```
What is wrong with this SQL statement in Python?

I am using Python and a MySQL database and am attempting to iterate through rows in a CSV file and insert them into my database. I have the following:

```python
import mysql.connector
import pandas as pd

mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="mydb"
)
cursor = mydb.cursor()
cursor.execute("SET FOREIGN_KEY_CHECKS=0")

csv_data = pd.read_csv("file path")
sql = "INSERT INTO table (ID, SecondID, StartDate, EndDate) VALUES (%s, %s, %s, %s)"
for index, row in csv_data.iterrows():
    cursor.execute(sql, row)

cursor.execute("SET FOREIGN_KEY_CHECKS=1")
mydb.commit()
cursor.close()
mydb.close()
```

I can't see what's wrong with the SQL. I'm getting the following error:

```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%s, %s, %s, %s)'
```

NOTE: The rest of the code seems to work okay, and the SQL works fine if I insert specific values, but when I try to use the %s construct it fails, yet other responses I have seen appear to recommend this as the correct syntax.

Please help. What am I doing wrong?
I think you'd better use pandas' to_sql function. I'm not sure whether mysql.connector works, so I'll use sqlalchemy. It looks like this:

```python
import sqlalchemy  # (import added for completeness)

ENGINE = sqlalchemy.create_engine('mysql+pymysql://root:root@localhost:3306/mydb')

with ENGINE.connect() as connection:
    ENGINE.execute("SET FOREIGN_KEY_CHECKS=0")
    csv_data.to_sql('table_name', connection, if_exists='append', index=False)
    ENGINE.execute("SET FOREIGN_KEY_CHECKS=1")
```
selenium count divs inside one div

I want to count divs inside one div with selenium. This is my code so far, but I don't understand why it is not working. It returns a length of 0.

```python
available = len(browser.find_elements_by_xpath("//div[@class='sc-AykKC.sc-AykKD.slug__RaffleContainer-sc-10kq7ov-2.eujCnV']/div"))
```
To count `<div>` tags with the value of the alt attribute as Closed within its parent `<div>` using Selenium, you can use either of the following xpath based Locator Strategies:

Using text():

```python
available = len(browser.find_elements_by_xpath("//h2[text()='List']//preceding::div[1]//div[@alt='Closed']"))
```

Using contains():

```python
available = len(browser.find_elements_by_xpath("//h2[contains(., 'List')]//preceding::div[1]//div[@alt='Closed']"))
```

Ideally, you have to induce WebDriverWait for visibility_of_all_elements_located(), and you can use either of the following Locator Strategies:

Using text():

```python
available = len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//h2[text()='List']//preceding::div[1]//div[@alt='Closed']"))))
```

Using contains():

```python
available = len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//h2[contains(., 'List')]//preceding::div[1]//div[@alt='Closed']"))))
```

Note: You have to add the following imports:

```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
```
Pandas throwing an OSError on PyCharm

I have been getting the following error in PyCharm:

```
Traceback (most recent call last):
  File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 1, in <module>
    import pandas as pd
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\__init__.py", line 13, in <module>
    __import__(dependency)
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\numpy\__init__.py", line 142, in <module>
    from . import core
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\numpy\core\__init__.py", line 23, in <module>
    WinDLL(os.path.abspath(filename))
  File "C:\Users\security\Anaconda3\lib\ctypes\__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
```

It's because of my pandas import:

```python
import pandas as pd
```

As per suggestions on similar S/O posts, I have uninstalled Anaconda and reinstalled it. I've tried uninstalling/reinstalling pandas as well, but nothing worked.
In ***\core\__init__.py, line 23, the initialization script loads a DLL located at your path: "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\numpy\.libs\*openblas*dll". If you're using a 32-bit DLL with 64-bit Python, or vice versa, then you'll probably get errors.

I recommend trying to load your DLL (downloaded from NumPy) with an Anaconda of the same bitness.
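As a side illustration (not from the answer above), you can check which bitness your own interpreter is running before matching it against a DLL; both lines below use only the standard library:

```python
import platform
import struct

# Two independent ways to check the interpreter's bitness; a mismatched
# 32-bit/64-bit DLL is a common cause of "WinError 193".
bits_from_struct = struct.calcsize("P") * 8   # pointer size in bits
arch_string = platform.architecture()[0]      # e.g. '64bit'

print(bits_from_struct, arch_string)
```

If this reports 64 bits, any DLL that the process loads directly must also be 64-bit, and vice versa.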
findAll() in BeautifulSoup missing nodes

The method findAll() in BeautifulSoup does not return all elements in the XML. If you look at the code below and open the URL, you can see that there are 10 PubmedArticle nodes in the XML. However, the findAll method only finds 6 of them: there are only 6 * on the output instead of 10. What am I doing wrong?

```python
import urllib2
from bs4 import BeautifulSoup

URL = 'http://www.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&rettype=abstract&id=23858559,23858558,23858557,23858521,23858508,23858506,23858494,23858473,23858461,23858404'

data = urllib2.urlopen(URL).read()
soup = BeautifulSoup(data)

for x in soup.findAll('pubmedarticle'):
    print '*'
```
I solved this by adding the xml argument. Make sure you have lxml installed.

```python
soup = BeautifulSoup(xmlData, 'xml')
```
"numpy.linspace" for a second time after excluding some points from the first "linspace"

I am building a model and I need to get the positions of some points inside a box (known volume). I am thinking of using:

a) numpy.linspace(start, stop, 30)
b) numpy.linspace(start, stop, 3000)

from the same box. I think I need a tool to exclude the points of process a).

Example in [2D]: say that we have a line of length 20, and we need to distribute two types of pieces along it: 1) 10 pieces of length 1, and 2) 4 pieces of length 2.

- The space between a piece (small line) of type 1 and any of its neighbors is equal, whether the neighbor is type 1 or type 2.
- The small pieces are equally distributed around each type 2 piece.
This solution is the only one that worked for me:

1. Get the xyz file with some other piece of software, like Jmol.
2. That gives you the orientations of the model.
3. I wrote the orientations into my program to avoid overlapping.
Rounding a dataframe consisting of both strings and floats

I have a dataframe:

```python
df = pd.DataFrame([["abcd", 1.9923], [2.567345454, 5]])
```

I want to round it to 2 decimal places for all the floats. I am using:

```python
df.round(decimals=2)
```

However, I am observing that it works only if the entire dataframe is either float or int. The presence of a single string in the dataframe prevents any changes to the entire dataframe. Any help is highly appreciated.
If there are numeric values mixed with strings, it is possible to use a custom lambda function:

```python
# first column is filled by strings, so the first solution is not working
df = df.applymap(lambda x: round(x, 2) if isinstance(x, (float, int)) else x)
print (df)
```

```
      0     1
0  abcd  1.99
1  2.57  5.00
```

If you need to convert to numeric and round:

```python
df = pd.DataFrame([["abcd", 1.9923], ["2.567345454", 5]])

def f(x):
    try:
        return round(float(x), 2)
    except:
        return x

df = df.applymap(f)
print (df)
```

```
      0     1
0  abcd  1.99
1  2.57  5.00
```
Unable to pass/exit a python function

Just starting out with python functions (fun_movies in functions.py) and I can't seem to get out (via "no" or False) once in the loop.

main_menu.py:

```python
from functions import *

def menu():
    print("Press 1 for movies.")
    print("Press 2 to exit.")

menu()
option = int(input("Input a number: "))

while option != 0:
    #try:
    if option == 1:
        fun_movies()
    elif option == 2:
        print("Goodbye! ")
        break
    else:
        print("Wrong input")
```

functions.py:

```python
global movies
movies = {}

def fun_movies():
    name = input("Insert movie name: ")
    genre = input("Input genre: ")
    movies[name] = [genre]
    a = True
    while a:
        query = input("Do you want to input another movie? (yes/no) ")
        if query == "yes":
            name = input("Insert movie name: ")
            genre = input("Input genre: ")
            movies_if = {}
            movies_if[name] = [genre]
            movies.update(movies_if)
        elif query == "no":
            break
        else:
            print("Wrong input!")
        return movies
```

The code works fine when not called via import. When called via import (in main_menu.py), it keeps asking for infinite movies even when I input a "no". I can't find a way to exit the loop. Initially I had a "pass" but that didn't work. Thanks in advance!
```python
global movies
movies = {}

def fun_movies():
    name = input("Insert movie name: ")
    genre = input("Input genre: ")
    movies[name] = [genre]
    a = True
    while a:
        query = input("Do you want to input another movie? (yes/no) ")
        if query == "yes":
            name = input("Insert movie name: ")
            genre = input("Input genre: ")
            movies_if = {}
            movies_if[name] = [genre]
            movies.update(movies_if)
        elif query == "no":
            a = False
        else:
            print("Wrong input!")
    return movies
```

A few things:

Firstly, you don't need a == True, as this expression returns True when a is True and False when a is False, so we can just use a as the condition.

Secondly, only use the input at the start of the loop, as you want to ask once per iteration.

Thirdly, place your return outside the loop, so you only return when a == False and you don't want to input another movie.

Edit: main file:

```python
from functions import *

def menu():
    print("Press 1 for movies.")
    print("Press 2 to exit.")

menu()
option = int(input("Input a number: "))

while option != 0:
    if option == 1:
        fun_movies()
    elif option == 2:
        print("Goodbye! ")
        break
    else:
        print("Wrong input")
    option = int(input("Input a number"))
```
Retrieving text content from a JavaScript URL

I am modifying the play-scraper API to scrape Play Store app details. It uses BeautifulSoup to parse HTML pages [reference]. I am particularly interested in all the additional information available for an app, as shown in the screenshot below. (The above screenshot is taken from this app.)

I am stuck at extracting the list of permissions that an app asks for (shown in the above figure), because the "View details" URL under Permissions is as follows:

```html
<a class="hrTbp" jsname="Hly47e">View details</a>
```

Clicking the "View details" URL shows a list of permissions (screenshot as follows) that I want to extract. I am not familiar with JavaScript. Any help would be appreciated.
If I understand the question correctly, you are trying to scrape the data from a modal. When the website loads for the first time, the modal's data isn't available inside the HTML; it is fetched after you click the "View details" button. That's why the parser doesn't get the data inside the modal, in your case the permission information. So this is the reason for your problem.

Now about the solution: one possible solution could be achieved by using Selenium and chromedriver, performing a click event on the "View details" text and then fetching the modal data. Have a look at this link to get an idea.

Update: To get an idea about the solution using Selenium and chromedriver, consider the following code:

```python
options = Options()
options.headless = True
driver = webdriver.Chrome('local_path_to_chrome_driver', options=options)
driver.get(url_of_the_play_store_app)
time.sleep(5)  # sleep for 5 secs, sometimes needed to fetch the data
driver.find_element_by_link_text("View details").click()  # performing the click event
time.sleep(5)  # again sleep for 5 secs to fetch the modal data
soup = BeautifulSoup(driver.page_source, "lxml")
```

The soup variable now has the updated scraped data, including the modal window data, and you can retrieve the modal window data from soup.
Renaming multiple columns using their index

How can I rename multiple columns of a dataframe using their index? For example, I want to rename the columns at positions 5, 6, 7, 8 to 'five', 'six', 'seven', 'eight' respectively. I don't want to enter the keys in the dictionary individually.
In the case of already having a dictionary, you can use rename to map to the new axis values:

```python
df = pd.DataFrame(columns=range(10))
d = {5: 'five', 6: 'six', 7: 'seven', 8: 'eight'}

df = df.rename(d, axis=1)
# Index([0, 1, 2, 3, 4, 'five', 'six', 'seven', 'eight', 9], dtype='object')
```

Or, as @ch3ster points out, rename takes both index and columns parameters, allowing you to rename both independently:

```python
df = df.rename(columns=d)
```

In the case where you know the range of columns to rename and have a list of new column names, you could build a dictionary and rename with:

```python
l = ['five', 'six', 'seven', 'eight', 'nine']
df = df.rename(columns=dict(zip(range(5, 9), l)))
```
How can I apply a function to each row in a pandas dataframe?

I am pretty new to coding so this may be simple, but none of the answers I've found so far have provided information in a way I can understand. I'd like to take a column of data and apply a function (a x e^bx) where a > 0 and b < 0. The (x) in this case would be the float value in each row of my data. See what I have so far, but I'm not sure where to go from here:

```python
def plot_data():
    # read the file
    data = pd.read_excel(FILENAME)
    # convert to pandas dataframe
    df = pd.DataFrame(data, columns=['FP Signal'])
    # add a blank column to store the normalized data
    headers = ['FP Signal', 'Normalized']
    df = df.reindex(columns=headers)
    df.plot(subplots=True, layout=(1, 2))
    df['Normalized'] = df.apply(normalize(['FP Signal']), axis=1)
    print(df['Normalized'])
    # show the plot
    plt.show()

# normalization formula (exponential) = a x e ^bx where a > 0, b < 0
def normalize(x):
    x = A * E ** (B * x)
    return x
```

I can get this image to show, but not the 'Normalized' data. Thanks for any help!
Your code is almost correct.

```python
# normalization formula (exponential) = a x e ^bx where a > 0, b < 0
def normalize(x):
    x = A * E ** (B * x)
    return x

def plot_data():
    # read the file
    data = pd.read_excel(FILENAME)
    # convert to pandas dataframe
    df = pd.DataFrame(data, columns=['FP Signal'])
    # add a blank column to store the normalized data
    headers = ['FP Signal', 'Normalized']
    df = df.reindex(columns=headers)

    df['Normalized'] = df['FP Signal'].apply(lambda x: normalize(x))
    print(df['Normalized'])

    df.plot(subplots=True, layout=(1, 2))
    # show the plot
    plt.show()
```

I changed the apply row to the following: df['FP Signal'].apply(lambda x: normalize(x)). It takes only the value of df['FP Signal'] because you don't need the entire row. lambda x states that the current value is assigned to x, which we send to normalize. You can also write df['FP Signal'].apply(normalize), which is more direct and simpler. Using lambda is just my personal preference, but many may disagree.

One small addition is to put df.plot(subplots=True, layout=(1, 2)) after you change the dataframe. If you plot before changing the dataframe, you won't see any change in the plot. df.plot actually does the plotting; plt.show just displays it. That's why df.plot must come after you are done processing your data.
Doctest fails due to unicode leading u

I am writing a doctest for a function that outputs a list of tokenized words:

```python
r'''
>>> s = "This is a tokenized sentence s\u00f3"
>>> tokenizer.tokenize(s0)
['This', 'is', 'a', 'tokenized', 'sentence', 'só']
'''
```

Using Python 3.4 my test passes with no problems. Using Python 2.7 I get:

```
Expected:
  ['This', 'is', 'a', 'tokenized', 'sentence', 'só']
Got:
  [u'This', u'is', u'a', u'tokenized', u'sentence', u's\xf3']
```

My code has to work on both Python 3.4 and Python 2.7. How can I solve this problem?
Python 3 uses different string literals for Unicode objects. There is no u prefix (in the canonical representation), and some non-ascii characters are shown literally, e.g., 'só' is a Unicode string in Python 3 (it is a bytestring on Python 2 if you see it in the output).

If all you are interested in is how the function splits an input text into tokens, you could print each token on a separate line, to make the result Python 2/3 compatible:

```python
print("\n".join(tokenizer.tokenize(s0)))
```

```
This
is
a
tokenized
sentence
só
```

As an alternative, you could customize doctest.OutputChecker, example:

```python
#!/usr/bin/env python
r"""
>>> u"This is a tokenized sentence s\u00f3".split()
[u'This', u'is', u'a', u'tokenized', u'sentence', u's\xf3']
"""
import doctest
import re
import sys

class Py23DocChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        if sys.version_info[0] > 2:
            want = re.sub("u'(.*?)'", "'\\1'", want)
            want = re.sub('u"(.*?)"', '"\\1"', want)
        return doctest.OutputChecker.check_output(self, want, got, optionflags)

if __name__ == "__main__":
    import unittest
    suite = doctest.DocTestSuite(sys.modules['__main__'], checker=Py23DocChecker())
    sys.exit(len(unittest.TextTestRunner().run(suite).failures))
```
GAE dev_appserver throws HTTP 504 Gateway Timeout

I just upgraded my GAE SDK to 1.7.6 (Linux, Python). Now, using dev_appserver.py, my apps are loaded just fine, but as soon as I go to localhost:8080 in the browser, there is an uncaught HTTP 504 Gateway Timeout exception. I've reproduced it with the helloworld sample code. Everything works like before using old_dev_appserver.py.

Is this a bug, or am I doing something wrong? Or is it my Python distribution?

```
  File "/usr/lib64/python2.7/urllib2.py", line 406, in open
    response = meth(req, response)
  File "/usr/lib64/python2.7/urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python2.7/urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 504: Gateway Time-out
```
Might be too late, but I hope this helps anyone who might have the same problem.

The same thing happened to me, and the problem for me was that my system was set to use a proxy. Because of that, the GAE dev_appserver was not able to connect to itself (it uses an IP and port combination to connect to itself and manage some API stuff), so it would throw the HTTP 504 Gateway Timeout error. So I removed the proxy settings, and it worked as usual.
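By way of illustration only (not part of the answer above): on Linux the proxy settings in question are often plain environment variables, which can be cleared for the process that launches the dev server. The variable names below are the conventional ones; treat this as a hedged sketch rather than a GAE-specific fix:

```python
import os

# Conventional proxy-related environment variables; clearing them makes
# connections from this process (and its children) bypass any configured proxy.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)

# Alternatively, exempt localhost explicitly instead of removing the proxy.
os.environ["no_proxy"] = "localhost,127.0.0.1"

print(os.environ.get("no_proxy"))
```

The no_proxy approach is gentler when other tools on the machine still need the proxy for external traffic.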
How to find the columns that contain at least one negative element?

In Python, for an array, how can I find the columns that contain at least one negative element? Additionally, how can I find the median of rows that include at least one negative value? Let's say that this is our array:

```python
import numpy as np

a = np.array([[1, 2, 0, -4], [-3, 4, -4, 1], [3, 6, 2, 9]])
```

Thanks in advance.
```python
>>> (a < 0).any(axis=0)
array([ True, False,  True,  True])

# Columns.
>>> np.median(a[:, (a < 0).any(axis=0)], axis=0)
array([1., 0., 1.])

# Rows.
>>> np.median(a[:, (a < 0).any(axis=0)], axis=1)
array([ 0., -3.,  3.])

# Median of rows where row contains at least one negative value.
>>> np.median(a[(a < 0).any(axis=1), :], axis=1)
array([ 0.5, -1. ])
```
How to do formatting changes in a table using pptx in python?

I have a dataframe that looks like this:

```
    c      sp    k1    k2    k3    k4    k5    k6
0  c1   70.73  0.3%  0.6%  0.7%  0.8%  0.7%  0.5%
1  c2  149.71  0.7%  0.6%  0.4%  0.6%  0.7%  1.0%
2  c3   -1.00  0.0%  0.0%  0.0%  0.0%  0.0%  0.0%
3  c4   24.88  0.1%  0.9%  0.5%  0.7%  0.7%  0.9%
4  c5  276.23  0.3%  2.3%  0.4%  2.0%  1.9%  1.9%
```

I am creating a slide in a ppt for this table by using this code:

```python
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[0])

title = slide.shapes.title
title.text = "Title"
title.top = Cm(1)     # set title position
title.left = Cm(1)    # set title position
title.width = Cm(24)  # set title size
title.height = Cm(2)  # set title size

top = Inches(1.5)     # set table position
left = Inches(0.25)   # set table position
width = Inches(9.25)  # set table size
height = Inches(5.0)  # set table size

tbl1 = df_to_table(slide, dt, left, top, width, height, name='tbl1')

# changing the font size and font of the whole table
for cell in iter_cells(tbl1.table):
    for paragraph in cell.text_frame.paragraphs:
        for run in paragraph.runs:
            run.font.size = Pt(12)
            run.font.name = 'Calibri'

prs.save('test.pptx')
```

I am struggling to apply the following changes to this table, though:

1- I would like to add a thick top and bottom border at the column names.
2- A right border at the column sp.
3- Make red the font color of all the c's that have a negative sp.
4- Apply green color formatting to all the cells in the k columns, so that the higher the value, the darker the green of the cell.
5- Adjust the size of the cells, so that the text of the cell autofits the cell.

I am not really sure if all these formatting changes are possible using the pptx package.

Note: The df_to_table function comes from here.
1+2 ... Do you know how to do this in PowerPoint? If so, you could compare two files, with and without a thick border, to find which part of the xml file has to be changed.

3+4 ... You have to change the font color of the corresponding paragraphs. You could do this directly via python-pptx, or you could use a font style (python-pptx-interface) and "write" it to paragraphs.

5 ... Autofitting is not working. You would need to calculate the size yourself. With a table style you could at least define a ratio for the column sizes.
Python/Pandas: TypeError: float() argument must be a string or a number, not 'function'

I am trying to generate a plot from two columns in a .csv file. The column for the x-axis is in the short date format mm/dd/yyyy, while the column for the y-axis corresponds to absorption measurement data as regular numerical values. From this, I am also trying to get a linear regression line for this plot. Here is what I have so far:

```python
mydateparser = lambda x: datetime.strptime(x, '%m/%d/%y')
df = (pd.read_csv('calibrationabs200211.csv', index_col=[],
                  parse_dates=[0], infer_datetime_format=True,
                  date_parser=mydateparser))

if mydateparser == '%m/%d/%y':
    print('Error')
else:
    mydateparser = float(mydateparser)

plt.figure(figsize=(15, 7.5))

x = df.iloc[:, 0].values.reshape(-1, 1)
y = df.iloc[:, 1].values.reshape(-1, 1)

linear_regressor = LinearRegression()
linear_regressor.fit(x, y)
y_pred = linear_regressor.predict(y)

plt.scatter(x, y, color='teal')
plt.plot(x, y_pred, color='teal')
plt.show()
```

However, I am getting an error message:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-272-d087bdc00150> in <module>
     12     print('Error')
     13 else:
---> 14     mydateparser = float(mydateparser)
     15
     16 plt.figure(figsize=(15,7.5))

TypeError: float() argument must be a string or a number, not 'function'
```

Furthermore, if I comment out the if statement, I end up getting a plot, but with a faulty linear regression. I am fairly new to python, matplotlib, and pandas, so any help or feedback is greatly appreciated. Thank you!
Functions in Python can be used as variables, which is what you are doing here. If you want to use the result of a function for something, you need to call it by adding () after the function name. mydateparser is a function; mydateparser() is the result of calling that function.

Additionally, I don't think the comparison you're making makes sense. datetime.strptime returns a datetime object, which you are later comparing to a string. I'm actually not sure what you're trying to accomplish with that block at all.

Your regression needs the dates to be converted to some sort of numeric value to regress against. I would suggest using matplotlib's date conversion functions, specifically date2num, to try this. Should be something along the lines of:

```python
from matplotlib import dates

...

x = df[0].apply(dates.date2num)
```
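To illustrate the idea of regressing against a numeric date axis, here is a hedged sketch that uses the standard library's datetime.toordinal as a stand-in for matplotlib's date2num (the sample dates below are made up, not from the question's CSV):

```python
from datetime import datetime

# Parsed dates, as a date_parser would produce them (hypothetical sample data).
dates_parsed = [datetime.strptime(d, '%m/%d/%y')
                for d in ('02/11/20', '02/12/20', '02/13/20')]

# Convert each datetime to a plain number so a regression can use it as x.
x_numeric = [d.toordinal() for d in dates_parsed]

print(x_numeric)  # consecutive integers, one day apart
```

date2num works the same way in spirit (it returns a float number of days), with the advantage that matplotlib can format the axis ticks back into readable dates.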
How to properly implement a disjoint set data structure for finding spanning forests in Python?

Recently, I was trying to implement the solutions of Google Kickstart's 2019 programming questions and tried to implement Round E's Cherries Mesh by following the analysis explanation. Here is the link to the question and the analysis: https://codingcompetitions.withgoogle.com/kickstart/round/0000000000050edb/0000000000170721

Here is the code I implemented:

```python
t = int(input())
for k in range(1, t+1):
    n, q = map(int, input().split())
    se = list()
    for _ in range(q):
        a, b = map(int, input().split())
        se.append((a, b))
    l = [{x} for x in range(1, n+1)]
    #print(se)
    for s in se:
        i = 0
        while ({s[0]}.isdisjoint(l[i])):
            i += 1
        j = 0
        while ({s[1]}.isdisjoint(l[j])):
            j += 1
        if i != j:
            l[i].update(l[j])
            l.pop(j)
    #print(l)
    count = q + 2*(len(l)-1)
    print('Case #', k, ': ', count, sep='')
```

This passes the sample case but not the test cases. To the best of my knowledge, this should be right. Am I doing something wrong?
You are getting an incorrect answer because you're calculating the count incorrectly. It takes n-1 edges to connect n nodes into a tree, and num_clusters-1 of those have to be red.

But if you fix that, your program will still be very slow, because of your disjoint set implementation. Thankfully, it's actually pretty easy to implement a very efficient disjoint set data structure in a single array/list/vector in just about any programming language. Here's a nice one in python. I have python 2 on my box, so my print and input statements are a little different from yours:

```python
# Create a disjoint set data structure, with n singletons, numbered 0 to n-1
# This is a simple array where for each item x:
# x > 0 => a set of size x, and x <= 0 => a link to -x
def ds_create(n):
    return [1]*n

# Find the current root set for original singleton index
def ds_find(ds, index):
    val = ds[index]
    if (val > 0):
        return index
    root = ds_find(ds, -val)
    if (val != -root):
        ds[index] = -root  # path compression
    return root

# Merge given sets. returns False if they were already merged
def ds_union(ds, a, b):
    aroot = ds_find(ds, a)
    broot = ds_find(ds, b)
    if aroot == broot:
        return False
    # union by size
    if ds[aroot] >= ds[broot]:
        ds[aroot] += ds[broot]
        ds[broot] = -aroot
    else:
        ds[broot] += ds[aroot]
        ds[aroot] = -broot
    return True

# Count root sets
def ds_countRoots(ds):
    return sum(1 for v in ds if v > 0)

#
# CherriesMesh solution
#
numTests = int(raw_input())
for testNum in range(1, numTests+1):
    numNodes, numEdges = map(int, raw_input().split())
    sets = ds_create(numNodes)
    for _ in range(numEdges):
        a, b = map(int, raw_input().split())
        ds_union(sets, a-1, b-1)
    count = numNodes + ds_countRoots(sets) - 2
    print 'Case #{0}: {1}'.format(testNum, count)
```
MySQL UTC Date format I am pulling data from Twitter's API and the return date is UTC in the following form: Sat Jan 24 22:14:29 +0000 2009Can MySQL handle this format specifically or do I need to transform it? I am pulling the data using Python.
Yes, if you are not willing to transform it in Python, MySQL can handle this with the STR_TO_DATE() function, as in the following example:

```sql
INSERT INTO your_table
VALUES (
    STR_TO_DATE('Sat Jan 24 22:14:29 +0000 2009', '%a %b %d %H:%i:%s +0000 %Y')
);
```

You may also want to check the full list of possible format specifiers: MySQL: DATE_FORMAT.
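Since the question mentions pulling the data with Python, here is a sketch (my addition, not part of the answer) of doing the transform on the Python side instead, converting Twitter's format to MySQL's `DATETIME` format before the insert:

```python
from datetime import datetime, timezone

raw = "Sat Jan 24 22:14:29 +0000 2009"

# %z parses the "+0000" offset (Python 3); the result is a timezone-aware datetime
dt = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")

# normalize to UTC and render in MySQL's DATETIME format
mysql_value = dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
print(mysql_value)  # 2009-01-24 22:14:29
```

The resulting string can then be passed as an ordinary parameter to the INSERT statement.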
Why can't my Apache see my media folder?

```apache
Alias /media/ /home/matt/repos/hello/media
<Directory /home/matt/repos/hello/media>
    Options -Indexes
    Order deny,allow
    Allow from all
</Directory>

WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi
```

/media is my directory. When I go to mydomain.com/media/, it says 403 Forbidden. And the rest of my site doesn't work because all static files are 404s. Why? The page loads, just not the media folder.

Edit: hello is my project folder. I have tried 777 on all permissions of that folder.
You have Indexes disabled, so Apache won't generate a listing of the files when you request the directory /media (instead, it shows the 403 Forbidden error). Try accessing a file directly within there, e.g.: http://localhost/media/some_image.jpg
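If a directory listing is actually wanted, a sketch of the relevant change (paths are the question's; note that on Apache 2.4+ the `Order`/`Allow` directives are replaced by `Require`, and that when the `Alias` URL-path ends in `/` the filesystem path should too):

```apache
Alias /media/ /home/matt/repos/hello/media/
<Directory /home/matt/repos/hello/media>
    Options +Indexes
    # Apache 2.2 style:
    Order deny,allow
    Allow from all
    # Apache 2.4 style (use instead of the two lines above):
    # Require all granted
</Directory>
```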
Python: Error freezing ctypes

I get the following error trying to freeze a Python script that imports ctypes:

```
Warning: unknown modules remain: _bisect _ctypes _hashlib _heapq _locale _random _socket _ssl _struct _tkinter _weakref array binascii cStringIO collections datetime fcntl itertools math operator pyexpat readline select strop syslog termios time
```

while ctypes is a builtin module in python2.5 and the path to ctypes is correctly recognized, as in the following:

```
P ctypes /usr/local/lib/python2.5/ctypes/__init__.py
m ctypes._endian /usr/local/lib/python2.5/ctypes/_endian.py
```

Is there any way to manually copy some files around and make this work? Has anybody ever successfully frozen ctypes in a standalone binary?
I suggest you use py2exe or something similar instead of freeze.
Pandas - Groupby with cumsum or cumcount

I have the following dataframe:

```
  Vela  FlgVela
0    R        0
1    V        1
2    V        1
3    R        1
4    R        1
5    V        0
6    R        1
7    R        1
8    R        1
```

What is the best way to get the result of the dataframe below?

```
  Vela  FlgVela  AddCol
0    R        0       1
1    V        1       2
2    V        1       2
3    R        1       3
4    R        1       3
5    V        0       4
6    R        1       5
7    R        1       5
8    R        1       5
```

I have tried the following logic but the result is not what I expected.

```python
df['AddCol'] = df.groupby(
    df['Vela'].astype(str).str.strip() != df['Vela'].shift(-1).astype(str).str.strip()
).cumcount() + 1
```
I think you're close, here is one way:

```python
df["AddCol"] = df.groupby("Vela").ngroup().diff().ne(0).cumsum()
```

where we first get the group number each distinct Vela belongs to (kind of factorize), then take the first differences and see if they are not equal to 0. This will sort of give the "turning" points from one group to another. Then we cumulatively sum them, to get

```
>>> df
  Vela  FlgVela  AddCol
0    R        0       1
1    V        1       2
2    V        1       2
3    R        1       3
4    R        1       3
5    V        0       4
6    R        1       5
7    R        1       5
8    R        1       5
```
Delete rows from python pandas dataframe

My dataframe has columns like ticket, host, drive model, Chassis, Rack, etc. I want all the rows with a value in the Chassis column equal to '1025C-M3B', '1026T-M3FB', '2026TT-DLRF' or 'SYS-2027TR-D70RF+'. I want to delete the rest.

I tried

```python
data2 = data1[data1.Chassis == '1025C-M3B' or data1.Chassis == '1026T-M3FB' or data1.Chassis == '2026TT-DLRF' or data1.Chassis == 'SYS-2027TR-D70RF+']
```

Got `ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().`

Then tried

```python
data2 = data1[data1.Chassis.all() == '1025C-M3B' or data1.Chassis.all() == '1026T-M3FB' or data1.Chassis.all() == '2026TT-DLRF' or data1.Chassis.all() == 'SYS-2027TR-D70RF+']
```

Got `KeyError: u'no item named False'`.

Can anyone please tell me how to do this?
Use bitwise or (`|`) instead of logical `or`.

```python
data2 = data1[(data1.Chassis == '1025C-M3B') |
              (data1.Chassis == '1026T-M3FB') |
              (data1.Chassis == '2026TT-DLRF') |
              (data1.Chassis == 'SYS-2027TR-D70RF+')]
```

You can find plenty of reading material about the use of bitwise vs logical operations in numpy/pandas. Here is one.
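A more compact equivalent (my sketch, not part of the answer; the sample frame here is invented for illustration) is `Series.isin`, which builds the same boolean mask from a list of allowed values:

```python
import pandas as pd

# toy stand-in for the question's data1
data1 = pd.DataFrame({
    "Chassis": ["1025C-M3B", "OTHER-1", "2026TT-DLRF", "SYS-2027TR-D70RF+"],
    "host": ["h1", "h2", "h3", "h4"],
})

keep = ["1025C-M3B", "1026T-M3FB", "2026TT-DLRF", "SYS-2027TR-D70RF+"]
data2 = data1[data1.Chassis.isin(keep)]
print(list(data2.host))  # ['h1', 'h3', 'h4']
```

This scales better than chaining `|` comparisons when the list of allowed values grows.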
matplotlib: getting coordinates in 3D plots from a mouse event

I want to get coordinates (x, y, z) in 3D plots from a mouse event such as a click. MATLAB has this function, datacursormode. A good image is at the following link: http://www.mathworks.com/help/matlab/ref/datacursormode.html

mpldatacursor (https://github.com/joferkington/mpldatacursor) is a similar function for matplotlib; however, it seems unsuitable for 3D plots. The x and y values can be obtained, but they are not accurate.

```python
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from mpldatacursor import datacursor

x = np.arange(-3, 3, 0.25)
y = np.arange(-3, 3, 0.25)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) + np.cos(Y)

fig = plt.figure()
ax = Axes3D(fig)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1)

datacursor(surf)
plt.show()
```

I also want to get the z value, if it is possible. Is there any good way?
According to the file "changelog.rst" at the link you suggested (https://github.com/joferkington/mpldatacursor) this function has been added in July 2015. Unfortunately it looks like it extracts the data points from the location where the mouse clicks rather than the original data set. This leads to some imprecision in the result.A possibility could be to modify the datacursor according to the instructions provided for the 2D version in Get data from plot with matplotlib. Hope this helps.
How to run a PowerShell script in Python

```powershell
$Session = New-Object -ComObject "Microsoft.Update.Session"
$Searcher = $Session.CreateUpdateSearcher()
$historyCount = $Searcher.GetTotalHistoryCount()
$Result = $Searcher.QueryHistory(0, $historyCount) | Select-Object Date,
    @{name="Operation"; expression={switch($_.operation){1 {"Installation"};2 {"Uninstallation"};3 {"Other"}}}},
    @{name="Status"; expression={switch($_.resultcode){1 {"In Progress"};2 {"Succeeded"};3 {"Succeeded With Errors"};4 {"Failed"};5 {"Aborted"}}}},
    @{name="Update"; expression={IF($_.Title.tostring() -match "(.*?)"){$matches[0].replace('(','').replace(')','')}}},
    Title
$Result | Where{$_.Date -gt (Get-Date).AddDays(-14)} | Sort-Object Date | Select Date,Operation,Status,Update,Title | Export-Csv -NoType "$Env:userprofile\Desktop\WindowsUpdates.csv" | Format-Table
```

This is the script I saved in Notepad; I want to get its output using Python:

```python
import subprocess
p = subprocess.run('F:\\getwindowupdate.ps1', shell=True)
print(p.stdout)
```

This only opens the file in Notepad. How do I execute this PowerShell script using Python?
You can pass a command to PowerShell and retrieve the output in your Python script.

Step 1: Write a PowerShell script

```powershell
Write-Host 'Hello, World!'
```

and save it as script.ps1. This will output `Hello, World!`

Step 2: Write a Python script, call your PowerShell script from there, and retrieve the output:

```python
import subprocess

cmd = ["PowerShell", "-ExecutionPolicy", "Unrestricted", "-File", ".\\script.ps1"]
ec = subprocess.call(cmd)
print("Powershell returned: {0:d}".format(ec))
```

This will output:

```
Hello, World!
Powershell returned: 0
```
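A sketch (my addition, not part of the answer) of actually capturing the script's stdout into a Python variable rather than letting it stream to the console. The command here is a generic stand-in so the example is runnable anywhere; substitute the PowerShell invocation from the answer on Windows:

```python
import subprocess

# stand-in for ["PowerShell", "-ExecutionPolicy", "Unrestricted", "-File", "script.ps1"]
cmd = ["echo", "Hello, World!"]

# capture_output=True collects stdout/stderr; text=True decodes bytes to str
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode)       # 0 on success
print(result.stdout.strip())   # Hello, World!
```

`result.stdout` then holds the script's output as a string, which is what the question's `print(p.stdout)` was reaching for (it was `None` because nothing was captured).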
curl -u in scrapy

How do I do this curl in scrapy?

```
curl -i -u account_id:api_key "https://xecdapi.xe.com/v1/convert_from.json/?from=USD&to=CAD,EUR&amount=110.23"
```
You can use the scrapy fetch command:

```
scrapy fetch http://stackoverflow.com --nolog > output.html
```

To use authentication you can try passing credentials via the url itself:

```
scrapy fetch "http://username:password@stackoverflow.com" --nolog > output.html
```
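Since `curl -u` just sends an HTTP Basic `Authorization` header, another option (my sketch, not part of the answer) is to build that header yourself and pass it to a request in a spider via the `headers=` argument of `scrapy.Request`. The header value is simply base64 of `user:password`:

```python
import base64

# placeholders taken from the question's curl command
account_id, api_key = "account_id", "api_key"

token = base64.b64encode(f"{account_id}:{api_key}".encode()).decode()
auth_header = {"Authorization": "Basic " + token}
print(auth_header["Authorization"])  # Basic YWNjb3VudF9pZDphcGlfa2V5

# inside a spider this would be used roughly as:
# yield scrapy.Request(url, headers=auth_header, callback=self.parse)
```

This avoids putting credentials in the URL, which can leak into logs.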
Django on Mac with mysql

I'm new to Django on Mac. I faced a problem configuring a Django environment with mysql on Mac. The error is:

```
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/david/david-env/lib/python2.7/site-packages/_mysql.so, 2): Symbol not found: _mysql_shutdown
  Referenced from: /Users/david/david-env/lib/python2.7/site-packages/_mysql.so
  Expected in: flat namespace
  in /Users/david/david-env/lib/python2.7/site-packages/_mysql.so
```

I have tried several related answers and methods from stackoverflow, such as:

```
pip uninstall MySQL-python
brew uninstall mysql
brew install mysql --universal
pip install MySQL-python
```

Unfortunately, it doesn't work.

I have built my virtualenv with python2.7.10 on Mac. I have used pip to install several packages including Django-1.10.6, MySQL-python-1.2.5 and mysqlclient. I have installed MySQL Server 5.7.17, MySQL Workbench and XCode. Everything looks good, but the error cannot be fixed.

I also tried different versions of the MySQL-python package, including 1.2.5 and 1.2.3 (I failed to install version 1.2.4). Those failed too. I hope someone can give me a hand and lead me out of the trouble that destroyed my weekend. Thank you very much.
Downgrade your MySQL to 5.5 or below.Refer to the MySQL-python 1.2.5 intro page: MySQL-3.23 through 5.5 and Python-2.4 through 2.7 are currently supported. Python-3.0 will be supported in a future release. PyPy is supported.
How to apply different functions to a groupby object?

I have a dataframe like this:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 1, 1, 2, 1, 2, 2],
                   'min_max': ['max_val', 'max_val', 'min_val', 'min_val',
                               'max_val', 'max_val', 'min_val', 'min_val'],
                   'value': [1, 20, 20, 10, 12, 3, -10, -5]})
```

```
   id  min_max  value
0   1  max_val      1
1   2  max_val     20
2   1  min_val     20
3   1  min_val     10
4   2  max_val     12
5   1  max_val      3
6   2  min_val    -10
7   2  min_val     -5
```

Each id has several maximal and minimal values associated with it. My desired output looks like this:

```
    max  min
id
1     3   10
2    20  -10
```

It contains the maximal max_val and the minimal min_val for each id. Currently I implement that as follows:

```python
gdf = df.groupby(by=['id', 'min_max'])['value']
max_max = gdf.max().loc[:, 'max_val']
min_min = gdf.min().loc[:, 'min_val']
final_df = pd.concat([max_max, min_min], axis=1)
final_df.columns = ['max', 'min']
```

What I don't like is that I have to call .max() and .min() on the grouped dataframe gdf separately, where I throw away 50% of the information (since I am not interested in the maximal min_val and the minimal max_val). Is there a way to do this in a more straightforward manner by e.g. passing the function that should be applied to a group directly to the groupby call?

EDIT: `df.groupby('id')['value'].agg(['max','min'])` is not sufficient, as there can be the case that a group has a min_val that is higher than all max_val for that group, or a max_val that is lower than all min_val. Thus, one also has to group based on the column min_max.

Result for `df.groupby('id')['value'].agg(['max','min'])`:

```
    max  min
id
1    20    1
2    20  -10
```

Result for the code from above:

```
    max  min
id
1     3   10
2    20  -10
```
Here's a slightly tongue-in-cheek solution:

```python
>>> df.groupby(['id', 'min_max'])['value'].apply(lambda g: getattr(g, g.name[1][:3])()).unstack()
min_max  max_val  min_val
id
1              3       10
2             20      -10
```

This applies a function that grabs the name of the real function to apply from the group key.

Obviously this wouldn't work so simply if there weren't such a simple relationship between the string "max_val" and the function name "max". It could be generalized by having a dict mapping column values to functions to apply, something like this:

```python
func_map = {'min_val': min, 'max_val': max}
df.groupby(['id', 'min_max'])['value'].apply(lambda g: func_map[g.name[1]](g)).unstack()
```

Note that this is slightly less efficient than the version above, since it calls the plain Python max/min rather than the optimized pandas versions. But if you want a more generalizable solution, that's what you have to do, because there aren't optimized pandas versions of everything. (This is also more or less why there's no built-in way to do this: for most data, you can't assume a priori that your values can be mapped to meaningful functions, so it doesn't make sense to try to determine the function to apply based on the values themselves.)
Selenium keyboard.send key to a specific window only

I have my code working in Selenium, but the problem is that while the code is running I can't switch to another Chrome window, because the keyboard keys will be sent to the new one. I need to send the keys only to the specific window where the code is running.

```python
driver = webdriver.Chrome('chromedriver')
driver.get("mywebsite link")
sleep(1)
keyboard.send('l')
sleep(0.5)
keyboard.send('t')
sleep(0.5)
keyboard.send('enter')
time.sleep(0.5)
```
So I had a similar issue to this a while back. The issue you are running into is that you need to make sure you are working with the correct window handle. Your question should be pretty well covered here: How to switch to new window in Selenium for Python?
What is the meaning of this asterisk? (Python pandas 100 questions, str.contains)

I am really new to Python. I have to use Python for my research class, so I was learning pandas with the Pandas Data Science 100 questions resource. I was working on this question:

"P-015: From the dataset (df_customer), retrieve the rows whose status_cd starts with A-F and ends with 1-9. Display the first 10 rows."

The answer says:

```python
df_customer.query("status_cd.str.contains(r'^[A-F].*[1-9]$')", engine='python').head(10)
```

I know the `.` connects the two parts, but I am not sure what `*` means here. The question is translated from Japanese, and I am pretty new to Python. It might be a really dumb question, but please answer it for me.
Let's look at an online regex visualizer:

https://regexper.com/#'%5E%5BA-F%5D.*%5B1-9%5D%24'

The status code must start with A to F, end with 1 to 9, and can have anything in between. In particular, `.*` just means "0 or more of any character".
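A quick sketch (my addition; the sample codes are invented for illustration) showing the same pattern with the standard `re` module:

```python
import re

# start with A-F, anything (including nothing) in between, end with 1-9
pattern = re.compile(r'^[A-F].*[1-9]$')

codes = ["A-1", "B123-9", "Z-5", "A-0", "F9"]
matches = [c for c in codes if pattern.search(c)]
print(matches)  # ['A-1', 'B123-9', 'F9']
```

Note that "F9" matches too: `.*` happily matches the empty string, so the first and last characters may be adjacent.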
Creating a new column in a data frame based on row values

I want to be able to get the following result without using a for loop or df.apply(). The result for each row should be the row values up until the group index.

```
   group  0  1  2  3  4  5  6  7
0      2  a  b  c  d  e  f  g  h
1      5  s  t  u  v  w  x  y  z
2      7  a  b  c  d  e  f  g  h
```

```
   group                    result
0      2                 [a, b, c]
1      5        [s, t, u, v, w, x]
2      7  [a, b, c, d, e, f, g, h]
```
Use DataFrame.melt, filter the group column against the variable column in DataFrame.query, and last aggregate lists:

```python
s = (df.melt('group', ignore_index=False)
       .astype({'variable': int})
       .query("group >= variable")
       .groupby(level=0)['value']
       .agg(list))

df = df[['group']].join(s.rename('result'))
print(df)
```

```
   group                    result
0      2                 [a, b, c]
1      5        [s, t, u, v, w, x]
2      7  [a, b, c, d, e, f, g, h]
```

Or use apply:

```python
df = (df.set_index('group')
        .rename(columns=int)
        .apply(lambda x: list(x[x.index <= x.name]), axis=1)
        .reset_index(name='result'))
print(df)
```

```
   group                    result
0      2                 [a, b, c]
1      5        [s, t, u, v, w, x]
2      7  [a, b, c, d, e, f, g, h]
```
locust run showing ModuleNotFound for Python module

Follow-up from an earlier question here. Running a locust (locust.io) script from the command line. locust calls main.py, which has the following imports:

```python
from locust import HttpUser, between, task
from StreamLoader.stream_generator import *  # thought this brings in everything
```

Packer.py has these imports:

```python
from multipledispatch import dispatch
from PackedItem import PackedItem
```

stream_generator.py has:

```python
import hashlib
from StreamLoader.Packer import Packer
from aes_encryption import AesEncryption
```

I now see a missing module error:

```
  File "C:\Users\guyl\PycharmProjects\engine-load-tests\engine_load_tester_locust\main.py", line 2, in <module>
    from StreamLoader.stream_generator import *
  File "C:\Users\guyl\PycharmProjects\engine-load-tests\StreamLoader\stream_generator.py", line 2, in <module>
    from Packer import Packer
ModuleNotFoundError: No module named 'Packer'
```

For clarity, I am running the code from locust, which calls the Python code as depicted here. Here's the file structure:
I placed periods (full stops) before the module names in the import statements, making them package-relative.

I was then able to run the locust script from within PyCharm. Running from the DOS shell, I was able to accomplish the same by first running:

```
<project directory>\venv\Scripts\activate
```
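As a sketch of why the relative form works (my addition; a throwaway package mirroring the question's layout is built on the fly, with hypothetical contents), the leading dot tells Python to resolve `Packer` relative to the `StreamLoader` package rather than relative to whichever directory locust was launched from:

```python
import os
import sys
import tempfile

# Build a minimal StreamLoader package on disk, mirroring the question's layout.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "StreamLoader")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

with open(os.path.join(pkg, "Packer.py"), "w") as f:
    f.write("class Packer:\n    kind = 'packer'\n")

with open(os.path.join(pkg, "stream_generator.py"), "w") as f:
    # The leading dot makes the import relative to the StreamLoader package,
    # so it resolves regardless of the current working directory.
    f.write("from .Packer import Packer\n")

sys.path.insert(0, root)
from StreamLoader.stream_generator import Packer
print(Packer().kind)  # packer
```

A bare `from Packer import Packer` would instead search `sys.path` for a top-level `Packer` module, which is exactly the `ModuleNotFoundError` from the question.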
Error: 'NoneType' object has no attribute '_inbound_nodes'

I am trying to make a parallel ANN network. I plan to:

1. Input a 120x120 image.
2. Disintegrate it into 9 40x40 images.
3. Run a convolutional net on each tile.
4. Merge the outputs in the same pattern.
5. Run another conv-net on the merged layer.

```python
def conv_net():
    input_shape = [120, 120, 1]
    inp = Input(shape=input_shape)
    print(type(inp))
    print(inp.shape)
    row_layers = []
    col_layers = []
    # fn = lambda x: self.conv(x)
    for i in range(0, 120, 40):
        row_layers = []
        for j in range(0, 120, 40):
            # out = (self.conv(inp[:,i:i+39,j:j+39]))
            inputs = inp[:, i:i + 40, j:j + 40]
            x = Dense(64, activation='relu')(inputs)
            out = Dense(64, activation='relu')(x)
            print(out.shape)
            row_layers.append(out)
        col_layers.append(keras.layers.concatenate(row_layers, axis=2))
    print(len(col_layers))
    merged = keras.layers.concatenate(col_layers, axis=1)
    print(merged.shape)
    con = Conv2D(1, kernel_size=5, strides=2, padding='same', activation='relu')(merged)
    print(con.shape)
    output = Flatten()(con)
    output = Dense(1)(output)
    print(output.shape)
    model = Model(inputs=inp, outputs=output)
    # plot_model(model, to_file='model.png')
    return model
```

I am getting the error `NoneType object has no attribute _inbound_nodes`. I debugged a little.
And the error is because of this line:

```python
inputs = inp[:, i:i + 40, j:j + 40]
```

Error:

```
Traceback (most recent call last):
  File "C:/Users/Todd Letcher/machine_learning_examples/unsupervised_class3/slicing_img.py", line 83, in <module>
    conv_net()
  File "C:/Users/Todd Letcher/machine_learning_examples/unsupervised_class3/slicing_img.py", line 80, in conv_net
    model = Model(inputs=inp, outputs = output)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 91, in __init__
    self._init_graph_network(*args, **kwargs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 235, in _init_graph_network
    self.inputs, self.outputs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1406, in _map_graph_network
    tensor_index=tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1365, in build_map
    node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
```

Help appreciated. Thank you.

P.S.: I removed the slicing line `inp[:, i:i+39, j:j+39]` and it runs ok. The image shows what I intend to do; the only difference is that I want to split the image into 9 tiles.
Here the same image is fed to all the parallel conv-nets.

[1]: https://i.stack.imgur.com/Z7nt0.png
Finally arrived at an answer. Although I am still wondering why my previous code threw an error, I just added Lambda layers to split the input:

```python
def conv_net(self):
    # Add dropout if overfitting
    input_shape = [120, 120, 1]
    inp = Input(shape=input_shape)
    col_layers = []

    def sliced(x, i, j):
        return x[:, i:i + 40, j:j + 40]

    for i in range(0, 120, 40):
        row_layers = []
        for j in range(0, 120, 40):
            # out = (self.conv(inp[:,i:i+39,j:j+39]))
            inputs = Lambda(sliced, arguments={'i': i, 'j': j})(inp)
            # inputs = Input(shape=input_shape_small)
            out = self.conv(inputs)
            print(out.shape)
            row_layers.append(out)
        col_layers.append(keras.layers.concatenate(row_layers, axis=2))
    print(len(col_layers))
    merged = keras.layers.concatenate(col_layers, axis=1)
    print(merged.shape)
    # merged = Reshape((3, 3, 1))(merged)
    print(merged.shape)
    con = Conv2D(1, kernel_size=5, strides=2, padding='same', activation='relu')(merged)
    con = BatchNormalization(momentum=0.8)(con)
    print(con.shape)
    # con = Conv2D(1, kernel_size=5, strides=2, padding='same', activation='relu')(inp)
    output = Flatten()(con)
    output = Dense(1)(output)
    print(output.shape)
    model = Model(inputs=inp, outputs=output)
    # plot_model(model, to_file='model.png')
    print(model.summary())
    plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
    return model
```

This works with no errors.
Canopy/IPython run script - no output?

I am very new to IPython, but not new to Python itself. I am going through some code examples from a book called Data-Driven Security and trying to run one of them. When I create a new file in IPython (using Canopy) and then click run, I get the following output in the console window:

```
In [9]: %run /Users/myuser/Documents/Notebooks/ch02.py
highvulns     int64
name         object
os           object
dtype: object

In [10]:
```

When I copy/paste the code into the `In[#]` console prompt, I get the output expected. What am I doing wrong?

```python
## name ch02.py
## create a new data frame
import numpy as np
import pandas as pd

# create a new data frame of hosts & high vuln counts
assets_df = pd.DataFrame({
    "name": ["danube", "gander", "ganges", "mekong", "orinoco"],
    "os": ["W2K8", "RHEL5", "W2K8", "RHEL5", "RHEL5"],
    "highvulns": [1, 0, 2, 0, 0]})

# take a look at the data frame structure & contents
print(assets_df.dtypes)
assets_df.head()

# show a "slice" of just the operating systems
assets_df.os.head()

# add a new column
assets_df['ip'] = ["192.168.1.5", "10.2.7.5", "192.168.1.7", "10.2.7.6", "10.2.7.7"]

# show only nodes with more than one high vulnerability
assets_df[assets_df.highvulns > 1].head()

# divide nodes into network 'zones' based on IP address
assets_df['zones'] = np.where(assets_df.ip.str.startswith("192"), "Zone1", "Zone2")

# get one final view
assets_df.head()
```

Expected output:

```
highvulns     int64
name         object
os           object
dtype: object

Out[7]:
   highvulns     name     os           ip  zones
0          1   danube   W2K8  192.168.1.5  Zone1
1          0   gander  RHEL5     10.2.7.5  Zone2
2          2   ganges   W2K8  192.168.1.7  Zone1
3          0   mekong  RHEL5     10.2.7.6  Zone2
4          0  orinoco  RHEL5     10.2.7.7  Zone2
```
As a convenience, if you type an expression at the prompt, the value of the expression will be printed. But if you just write the same expression in a python file, it will be evaluated, but the value will not be printed. You should print x if you want the value of x to be printed from a file that you are running.
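A minimal sketch of the difference (my addition; pure Python, no pandas needed): the interactive prompt echoes the value of a bare expression, while a script only shows what you explicitly print.

```python
x = [1, 2, 3]

x          # a bare expression: echoed at the IPython prompt, silent when %run from a file
print(x)   # prints [1, 2, 3] both at the prompt and when run as a file
```

So in ch02.py, lines like `assets_df.head()` should become `print(assets_df.head())` to show their output under `%run`.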
pywinauto: access methods from ListBoxWrapper

I'm using pywinauto to automate some tests on a GUI app. There is a list box that I need to check for some data. The ListBoxWrapper class has these methods:

- ListBoxWrapper.GetItemFocus
- ListBoxWrapper.ItemCount
- ListBoxWrapper.ItemData
- ListBoxWrapper.ItemTexts

https://pywinauto.readthedocs.io/en/latest/code/pywinauto.controls.win32_controls.html#pywinauto.controls.win32_controls.ListBoxWrapper

How do I access these methods? Here's what I have so far: I created an Application instance and used it to launch the program, and I have a WindowSpecification instance for the list box:

```python
listbox = programwindowspec.child_window(title="abcdefg", control_type="ListItem")
```

From here, how do I get to the ListBoxWrapper class methods?

PS: I'm not an expert at the OOP side of Python, so please bear with me.

EDIT: I used the .children() method to get wrappers for all the controls on the window and then filtered out the list box from the children:

```python
window = app.window(handle=w_handle)
for child in window.children():
    if 'List' in child._control_types:
        print(child)
        text = child.texts()
        print(text)
```

And this serves my purpose. But I'm thinking _control_types is a 'private' class attribute. Is it ok to access it directly from outside the class?
It looks like you use backend="uia", but the docs link you provided is for backend="win32". There are 2 different wrappers for these backends. This is the correct docs page for the UIA List*-related wrappers.

Using control_type in child_window(...) search criteria is correct. For the WindowSpecification you can create a ListItemWrapper like so:

```python
list_item = programwindowspec.child_window(title="abcdefg", control_type="ListItem")
item_wrapper = list_item.wrapper_object()

# list all available attributes for a list item wrapper
print(dir(item_wrapper))
```

To create a ListViewWrapper (for UIA) you need to use control_type="List" or control_type="DataGrid" in a WindowSpecification (we use the same wrapper for these 2 control types).
Why is this Python script failing? (xml.etree)

First: I know that anyone who wants to help will ask for code that demonstrates the error. That will require a ZIP of the project, and I don't see how to attach a file to a StackOverflow question. I'll be happy to upload the file when someone tells me how.

This is one of those things where "I didn't change anything, but it broke." The environment is Windows 10, Python 3.8, and PyCharm 2019.3.5. I left the project in a fully debugged state a couple of weeks ago. Today I added a function definition and a call to it. Now the program fails when it tries to create a parser for an XML tree... before the new function is ever called.

Early in the script I import etree from xml:

```python
from xml import etree
```

At the point of failure I try to create a parser:

```python
_parser = etree.ElementTree.XMLParser(encoding="iso-8859-1")
```

The messages I get are:

```
Connected to pydev debugger (build 193.7288.30)
Traceback (most recent call last):
  File "C:/Users/... /PartConfig/PartConfig.py", line 47, in <module>
    _parser = etree.ElementTree.XMLParser(encoding="iso-8859-1")
AttributeError: module 'xml.etree' has no attribute 'ElementTree'
```

I have an "except" block, but it never gets executed because its scope is etree.ElementTree.ParseError.

Taken at face value, the error message is simply wrong. I know the script found etree because it ran past the import statement, and when I misspelled the module name as an experiment it failed right there. ElementTree is a module of xml.etree in the standard Python library, so I can't think of a way the script could fail the way it did. The message must be trying to tell me something, but what?
You have to use this syntax:

```python
from xml.etree import ElementTree

_parser = ElementTree.XMLParser(encoding="iso-8859-1")
```

As @Fred Larson explained in his comment, you have to import the module itself; etree is a package.
Removing a particular pattern from a text file in Python

I have an input file named file1 which contains:

```
Student 0 : Performed well but can do better. [76.50%]
Student 1 : Brilliant performance. [98.50%]
```

In this particular file I just want to remove the % part, so that it produces output like:

```
Student 0 : Performed well but can do better.
Student 1 : Brilliant performance.
```

I tried it in this manner:

```python
with open('file1', 'r') as infile, open('file2', 'w') as outfile:
    temp = infile.read().replace("[[0-9]+]", "").replace("%", "")
    outfile.write(temp)
```

But this is only removing the % sign and giving output as:

```
Student 0 : Performed well but can do better. [76.50]
Student 1 : Brilliant performance. [98.50]
```
You still need regex (str.replace only substitutes literal text):

```python
import re

with open('file1', 'r') as infile, open('file2', 'w') as outfile:
    # raw string avoids invalid-escape warnings; the class matches digits, '+', '.' and '%'
    temp = re.sub(r"\[[\d+\.%]+\]", "", infile.read())
    outfile.write(temp)
```
makemigrations - Create model and insert data only once

I've a model as below:

```python
from django.db import models

class Country(models.Model):
    cid = models.SmallAutoField(primary_key=True)
    label = models.CharField(max_length=100)
    abbr = models.CharField(max_length=3)

countries = {"AFG": "Afghanistan", "ALB": "Albania", "DZA": "Algeria",
             "ASM": "American Samoa", "AND": "Andorra", "AGO": "Angola",
             "AIA": "Anguilla"}

for c in countries:
    row = Country(label=countries[c], abbr=c)
    row.save()
```

Now whenever I run the following command:

```
python manage.py makemigrations
```

The first time, it creates the table and populates it. The 2nd, 3rd and so on times, it keeps inserting the same data. (It is definitely possible that I will be using the makemigrations command many times, so I don't want it to insert the data every time the command is run.)

Any way to achieve this? Create and insert once?
You can add data migrations that create data; these get run once when the migration is applied. This is an example where your data migration is added to the migration that also adds the model:

```python
from django.db import migrations, models

countries = {
    "AFG": "Afghanistan",
    "ALB": "Albania",
    "DZA": "Algeria",
    "ASM": "American Samoa",
    "AND": "Andorra",
    "AGO": "Angola",
    "AIA": "Anguilla",
}

def create_countries(apps, schema_editor):
    Country = apps.get_model('myapp', 'Country')
    for c in countries:
        Country.objects.create(label=countries[c], abbr=c)

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0000_previous'),
    ]

    operations = [
        migrations.CreateModel(
            name='Country',
            fields=[
                ('cid', models.SmallAutoField(primary_key=True, serialize=False)),
                ('label', models.CharField(max_length=100)),
                ('abbr', models.CharField(max_length=3)),
            ],
        ),
        migrations.RunPython(create_countries),
    ]
```
Text-based adventure game: attacking causes the game to crash

I have set up the function for a player to attack an enemy, which seems to work okay. The problem is the actual action of attacking. In my main game code, it throws an AttributeError. Here is the block of code that I think is the culprit (at least, this is the block that's referenced by the error):

```python
def choose_action(room, player):
    action = None
    while not action:
        available_actions = get_available_actions(room, player)
        action_input = input("Action: ")
        action = available_actions.get(action_input)
        if action:
            action()
        else:
            print("Invalid selection!")
```

The game will run just fine, until we come across an enemy and go to attack it. Once I type the hotkey for attack, the game crashes with the following error:

```
game.py", line 53, in choose_action
    action = available_actions.get(action_input)
AttributeError: 'NoneType' object has no attribute 'get'
```

I'm new to programming in general, and I'm using a book to help me create this game. I've got the code copied exactly as it's written in the book, so I'm just trying to figure out what I need to change in order to make the attack action work properly.

EDIT: As requested, here is the get_available_actions() function:

```python
def get_available_actions(room, player):
    actions = OrderedDict()
    print("Choose an action: ")
    if player.inventory:
        action_adder(actions, 'i', player.print_inventory, "Print inventory")
    if isinstance(room, world.EnemyTile) and room.enemy.is_alive():
        action_adder(actions, 'a', player.attack, "Attack")
    else:
        if world.tile_at(room.x, room.y - 1):
            action_adder(actions, 'n', player.move_north, "Go north")
        if world.tile_at(room.x, room.y + 1):
            action_adder(actions, 's', player.move_south, "Go south")
        if world.tile_at(room.x + 1, room.y):
            action_adder(actions, 'e', player.move_east, "Go east")
        if world.tile_at(room.x - 1, room.y):
            action_adder(actions, 'w', player.move_west, "Go west")
        if player.hp < 100:
            action_adder(actions, 'h', player.heal, "Heal")
        return actions
```
It would be better if you added the get_available_actions(arg1, arg2) function. It appears that this function does not return a value, i.e. returns None (which is the same thing). If you can add more of your code, we can analyze this error further. Otherwise, you should try to change the return value to something that supports the .get(arg1, arg2) method. Hope this helps!

With new information from your edit: it looks like your return statement was intended to be indented one level less, so it is reached on every code path (right now it is skipped when the enemy branch is taken, and the function returns None). Review the following code with this change made and see if it fixes your issue:

```python
def get_available_actions(room, player):
    actions = OrderedDict()
    print("Choose an action: ")
    if player.inventory:
        action_adder(actions, 'i', player.print_inventory, "Print inventory")
    if isinstance(room, world.EnemyTile) and room.enemy.is_alive():
        action_adder(actions, 'a', player.attack, "Attack")
    else:
        if world.tile_at(room.x, room.y - 1):
            action_adder(actions, 'n', player.move_north, "Go north")
        if world.tile_at(room.x, room.y + 1):
            action_adder(actions, 's', player.move_south, "Go south")
        if world.tile_at(room.x + 1, room.y):
            action_adder(actions, 'e', player.move_east, "Go east")
        if world.tile_at(room.x - 1, room.y):
            action_adder(actions, 'w', player.move_west, "Go west")
        if player.hp < 100:
            action_adder(actions, 'h', player.heal, "Heal")
    return actions
```

Good luck!
Appending multiple button clicks to a list using Flask First time posting. I appreciate any help. I'm taking a list of items and displaying them on a page using a for loop. Each item is a button instead of a hyperlink. I'm trying to make it so the user can click multiple buttons as "choices" and have the value for each button appended to a list for further processing. So far, nothing happens on click. I try to go to the page I've created as a test to view results and I get a 404.In my template I use this For loop to get the whole list on the page. Each list item's text is clickable like a button. But I'm just not sure where to go from here.<form action="/choices" method="post"> {% for i in range(0, toplen) %} <button class="choice-button" type="submit" name="{{ top[i] }}" value="{{ request.form.choice }}">{{ top[i] }}</button> <br> {%endfor%} </form>Here is what I have in Flask:@app.route('/choices', methods=["POST"])def choices(): choice_list = [] if request.method == "POST": choice = request.form.choice choice_list.append(choice) return render_template("choices.html", choice_list=choice_list) else: return render_template("restaurant_list.html")Right now I'm just trying to get the list to show on the choices.html page but I just get the 404. What I'd really like to do is have the user click as many buttons as they like, and have the results show up in real time on the same page. Like a confirmation of each choice.Sorry if this doesn't make sense. If more info is needed I can provide it.Thanks!
Use request.form['choice']; Flask's request.form is dict-like, so values are read by key, not by attribute access. Also just a note: if you need this route to handle the GET request, you need to add methods=['GET', 'POST'], as right now the route will only handle the POST request.
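The access pattern can be shown without running Flask at all; here a plain dict stands in for request.form (which is really a werkzeug MultiDict, but dict-like for this purpose), and the key 'choice' is only found if the buttons carry name="choice":

```python
# Stand-in for Flask's request.form: a dict-like object.
form = {'choice': 'pizza'}

choice = form['choice']        # correct: dict-style lookup by the field's name
safe = form.get('missing')     # .get returns None instead of raising KeyError

try:
    form.choice                # attribute access is not supported on a dict
except AttributeError:
    attr_failed = True
```

Note that in the template shown in the question the buttons use name="{{ top[i] }}", so a 'choice' key would not exist in the submitted form; giving each button name="choice" and value="{{ top[i] }}" is a plausible adjustment, though that rearrangement is my assumption, not something the answer states.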
How to safely store users' credentials to third party websites when no authentication API exists? I am developing a web app which depends on data from one or more third party websites. The websites do not provide any kind of authentication API, and so I am using unofficial APIs to retrieve the data from the third party sites. I plan to ask users for their credentials to the third party websites. I understand this requires users to trust me and my tool, and I intend to respect that trust by storing the credentials as safely as possible as well as make clear the risks of sharing their credentials. I know there are popular tools that address this problem today. Mint.com, for example, requires users' credentials to their financial accounts so that it may periodically retrieve transaction information. LinkedIn asks for users' e-mail credentials so that it can harvest their contacts. What would be a safe design to store users' credentials? In particular, I am writing a Django application and will likely build on top of a PostgreSQL backend, but I am open to other ideas. For what it's worth, the data being accessed from these third party sites is nowhere near the level of financial accounts, e-mail accounts, or social networking profiles/accounts. That said, I intend to treat this access with the utmost respect, and that is why I am asking for assistance here first.
There’s no such thing as a safe design when it comes to storing passwords/secrets. There’s only, how much security overhead trade-off you are willing to live with. Here is what I would consider the minimum that you should do:HTTPS-only (all passwords should be encrypted in transit)If possible keep passwords encrypted in memory when working with them except when you need to access them to access the service.Encryption in the data store. All passwords should be strongly encrypted in the data store.[Optional, but strongly recommended] Customer keying; the customer should hold the key to unlock their data, not you. This will mean that your communications with the third party services can only happen when the customer is interacting with your application. The key should expire after a set amount of time. This protects you from the rogue DBA or your DB being compromised. And this is the hard one, auditing. All accesses of any of the customer's information should be logged and the customer should be able to view the log to verify / review the activity. Some go so far as to have this logging enabled at the database level as well so all row access at the DB level are logged.
Django - Reference data from another model using a foreign key I'm new to Django so please tell me if I'm not on the right track.I have a Django project that I'm building and just wanted to ask what is the correct Django way to retrieve data from one model and use it in another. I have a for loop to assign the required fields to variables but I was looking for a cleaner solution and if one exists, one that uses the foreign key.Here's an example of the code involved:#MODELSclass Class1(models.Model): tag_number = models.CharField(max_length = 300, null = True, blank = True) example1 = models.CharField(max_length = 300, null = True, blank = True) example2 = models.CharField(max_length = 300, null = True, blank = True)class Class2(models.Model): tag = models.ForeignKey(Class1, related_name = 'tag_foreignkey') example3 = models.CharField(max_length = 300, null = True, blank = True) example4 = models.CharField(max_length = 300, null = True, blank = True)#VIEWS - This view is for Class2 but referencing fields from Class1tag_number = str(instrumentform.cleaned_data['tag'])query_results = Class1.objects.filter(tag_number = tag_number)for query_result in query_results: example5 = query_result.example1 example6 = query_result.example2The above works but I assume it's not the Django way of doing things and is not taking advantage of the foreign key.If someone could give me a nudge in the right direction that would be greatly appreciated.
Still, I'm not a hundred percent sure what you want, since some info is missing. For your Class1, you should do what you have done in Class2: use a foreign key to store the tag. For the code at the bottom, there is an easy way to do it (assuming you have used the foreign key):tag_number = int(instrumentform.cleaned_data['tag'])for query_result in Class1.objects.filter(tag = tag_number).values_list('example1', 'example2'): example5, example6 = query_result
Putting objects into a string then into a list in Python The title may be a little confusing here, so let me explain.Firstly I have a model of a list of items which is a foreign key of another model. The foreign key object has access wh_item_id and wh_item_name. I am trying to put that information into this formatwh_item_id=wh_item_nameSo for example it will return as:102944=Hands of the LightNow the part where it gets tricky is that I wish each field in the model to be put into this string, and then into a list that can be accessed later. Minus any blank fields in the model. The original model:class ProtectionList(models.Model): character = models.ForeignKey(Character) main_hand = models.ForeignKey(Loot, related_name="Main Hand", blank=True, null=True) off_hand = models.ForeignKey(Loot, related_name="Off Hand", blank=True, null=True) head = models.ForeignKey(Loot, related_name="Head", blank=True, null=True) neck = models.ForeignKey(Loot, related_name="Neck", blank=True, null=True) shoulder = models.ForeignKey(Loot, related_name="Shoulder", blank=True, null=True) back = models.ForeignKey(Loot, related_name="Back", blank=True, null=True) chest = models.ForeignKey(Loot, related_name="Chest", blank=True, null=True) wrist = models.ForeignKey(Loot, related_name="Wrist", blank=True, null=True) hands = models.ForeignKey(Loot, related_name="Hands", blank=True, null=True) waist = models.ForeignKey(Loot, related_name="Waist", blank=True, null=True) legs = models.ForeignKey(Loot, related_name="Legs", blank=True, null=True) feet = models.ForeignKey(Loot, related_name="Feet", blank=True, null=True) ring1 = models.ForeignKey(Loot, related_name="Ring 1", blank=True, null=True) ring2 = models.ForeignKey(Loot, related_name="Ring 2", blank=True, null=True) trinket1 = models.ForeignKey(Loot, related_name="Trinket 1", blank=True, null=True) trinket2 = models.ForeignKey(Loot, related_name="Trinket 2", blank=True, null=True)So there could be anything up to 16 items in this list, however 
I need to remove anything in list that shows as None.For Example,main_hand returns an objectoff_hand returns Nonehead returns an object... (I'll just use 3 fields for now for simplicity)I wish the list to look like the following:item_list = [1234=main hand itemname,5678=head itemname]Missing out the off_hand. Loot model for referenceclass Loot(models.Model): wh_item_id = models.CharField(verbose_name="Wowhead Item ID", max_length=255) wh_item_name = models.CharField(verbose_name="Wowhead Item Name", max_length=255) gear_type = models.CharField(max_length=255, blank=True, null=True) lockout_tier = models.IntegerField(blank=True, null=True) EDIT: What I essentially am after is the following:item_list = [item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.off_hand.wh_item_id + '=' + item_list.off_hand.wh_item_name,item_list.head.wh_item_id + '=' + item_list.head.wh_item_name,item_list.neck.wh_item_id + '=' + item_list.neck.wh_item_name,item_list.shoulder.wh_item_id + '=' + item_list.shoulder.wh_item_name,item_list.back.wh_item_id + '=' + item_list.back.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.chest.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name,item_list.main_hand.wh_item_id + '=' + item_list.main_hand.wh_item_name]But I only want the items that do not return None to be in that list.
I believe Django has a model_to_dict feature which lets you iterate over the object as you would a dict.from django.forms.models import model_to_dictchar_dict = model_to_dict(Your_Model_Instance)You can then iterate over that dict and get what you're looking for in whatever way you prefer to ignore the None values. As an example -my_list = []for k, v in char_dict.iteritems(): if v is not None: my_list.append("{}={}".format(k, v))
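The filtering step itself needs nothing Django-specific; here is the same idea against a plain dict standing in for the model_to_dict output (the keys and values are invented for illustration), with one caveat: .iteritems() only exists on Python 2, so on Python 3 use .items():

```python
# Stand-in for model_to_dict(instance): field name -> value (or None).
char_dict = {'main_hand': 1001, 'off_hand': None, 'head': 1002}

# Skip None values and build "key=value" strings; sorted() just makes
# the order deterministic for this demo.
my_list = [
    "{}={}".format(k, v)
    for k, v in sorted(char_dict.items())
    if v is not None
]
```

Against the real model, the dict values would be the Loot primary keys, so you may still need a lookup to turn each id into the wh_item_id/wh_item_name pair the question wants.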
Django fails to create superuser in other db than 'default' Is it a bug or am I wrong ?I am at the step to create a superuser, but django want a table in wrong db despite my router seems to work :settings.pyDATABASES = { 'intern_db': { 'ENGINE': 'mysql.connector.django', 'NAME': 'django_cartons', 'USER': 'root', 'PASSWORD' : '', }, 'default': { 'ENGINE': 'mysql.connector.django', 'NAME': 'cartons', 'USER': 'root', 'PASSWORD' : '', }}DATABASE_ROUTERS = ['web.routers.AuthRouter']routers.pyclass AuthRouter(object): """ A router to control all database operations on models in the auth application. """ def db_for_read(self, model, **hints): """ Attempts to read auth models go to auth. """ print("READ ",model._meta.app_label) if model._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions']: print(True) return 'intern_db' return None def db_for_write(self, model, **hints): """ Attempts to write auth models go to auth. """ print("WRITE ",model._meta.app_label) if model._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions']: print(True) return 'intern_db' return None def allow_relation(self, obj1, obj2, **hints): """ Allow relations if a model in the auth app is involved. """ print("REL ", obj1._meta.app_label, ' ', obj2._meta.app_label) if obj1._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions'] or \ obj2._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions']: return True return None def allow_migrate(self, db, model): """ Make sure the auth app only appears in the 'auth' database. 
""" if db == 'intern_db': return (model._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions']) elif model._meta.app_label in ['auth', 'contenttypes', 'admin', 'sessions']: return False return Nonecommand :$> ./manage.py createsuperuserREAD authTrueREAD authTrueUsername (leave blank to use 'leo'): adminTraceback (most recent call last): File "/usr/lib/python3.4/site-packages/mysql/connector/django/base.py", line 115, in _execute_wrapper return method(query, args) File "/usr/lib/python3.4/site-packages/mysql/connector/cursor.py", line 507, in execute self._handle_result(self._connection.cmd_query(stmt)) File "/usr/lib/python3.4/site-packages/mysql/connector/connection.py", line 722, in cmd_query result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query)) File "/usr/lib/python3.4/site-packages/mysql/connector/connection.py", line 640, in _handle_result raise errors.get_exception(packet)mysql.connector.errors.ProgrammingError: 1146 (42S02): Table 'cartons.auth_user' doesn't existAs you can see, it looks for 'cartons.auth_user' that doesn't exist (it should be 'django_cartons' aliased by 'intern_db' instead)However, my router is called and return the right result as we see "READ auth" and "TRUE" in the command output...Any Idea ?
The thing is the behaviour is somewhat inconsistent: the router config is respected for some tasks (the two first "True" lines in the output) but not for others, which fall back to default.This is perhaps intended, even if it looks odd (nothing forbids having several admin databases, and it avoids implicit choices being made for you).To create the superuser in another db, and for any use of these management commands, you must pass the target database explicitly:./manage.py createsuperuser --database=intern_dbNOTE: the db name is the alias in the config.
How do I determine if a pixel is black or white in OpenCV? I have this code in Python:width = cv.GetSize(img_otsu)[0]height = cv.GetSize(img_otsu)[1]#print width,":",heightfor y in range(height): for x in range(width): if(img_otsu[y,x]==(255.0)): CountPixelW+=1 if(img_otsu[y,x]==(0.0)): CountPixelB+=1I want to convert this Python code to C++This is what I have so far:cv::threshold(img_gray,img_otsu,0.0,255.0,cv::THRESH_BINARY+cv::THRESH_OTSU);for(int y =0;y<=img_otsu.size().height;y++) for(int x=0;x<=img_otsu.size().width;x++) { //Check Pixel 0 or 255 This is Problem }How to I check if the pixel is black or white in C++?
You can use the at() function for Mat objects (see OpenCV docs).img_otsu.at<uchar>(y,x) will return the value of the element in the matrix at that position. Note that you may have to change uchar to whatever type of matrix img_otsu is (e.g., float or double). Once you get the value, simply compare it to 0 or 255.
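The counting logic itself, written out in Python terms with a small nested list standing in for the thresholded matrix, looks like this; in the C++ loop the inner comparison becomes img_otsu.at<uchar>(y, x) == 255 (the 2x3 values below are made up):

```python
# A tiny "binary image": after Otsu thresholding every pixel is 0 or 255.
img = [
    [0, 255, 255],
    [255, 0, 0],
]

# Walk every row and column, tallying white (255) and black (0) pixels.
count_white = sum(1 for row in img for px in row if px == 255)
count_black = sum(1 for row in img for px in row if px == 0)
```

Also note that the C++ loops in the question use <=, which runs one past the last row and column; the bounds should be y < img_otsu.size().height and x < img_otsu.size().width.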
How to encode string on Python 3.0 and decode it on Python 2.7 correctly over socket I'm writing an online multiple player console game by Python. The server uses Python 3.0 and the client uses Python 2.7(because I want to use my smartphone and I can only find Python 2.7 on it). However, I have trouble converting the encoding of string between server and client.I wrote two function, sendData and receiveData to send and receive a string from socket connection. The problem is that when I encode the string 你好 by 'utf-8' on server side and decode it on client side, I got this error on client: UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-1: unexpected end of dataI tried encode('utf-8') on both sides or decode*('utf-8') on both sides, but both not working. I also tried to use pickle, but got this error on client: ValueError: unsupported pickle protocol: 3So how should I encode and decode the string?Here is my code for server(Python 3.0, datatrans.py):def sendData(sock, data): ''' Send string through socket. ''' sock.send(struct.pack('Q', len(data))) sock.send(bytes(data.encode('utf-8'))) # This might be the cause of the errordef receiveData(sock): ''' Receive object from socket. 
''' lengthLeft = struct.unpack('Q', sock.recv(struct.calcsize('Q')))[0] data = bytes() while lengthLeft > 0: block = sock.recv(lengthLeft) data += block lengthLeft -= len(block) return str(data)The main script for server(Python 3.0):import socketimport threadingimport socketfrom datatrans import sendData, receiveDataimport timeport = int(input('Listen on port:'))def log(string): return '[%s]%s' % (str(time), string)def handleRequest(sock): sendData(sock, '你好')s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)s.bind(('0.0.0.0', port))s.listen(5)try: while True: sock, addr = s.accept() print(log('%s entered the game' % str(addr))) #print sock.recv(1000) threading.Thread(target = handleRequest, args = (sock,)).start()finally: s.close()My code for client is this, with sendData and receiveData changed a little(Python 2.7):# -*- coding: UTF-8 -*-import socketimport structdef sendData(sock, data): ''' Send string through socket. ''' sock.send(struct.pack('Q', len(data))) sock.send(data)def receiveData(sock): ''' Receive object from socket. ''' lengthLeft = struct.unpack('Q', sock.recv(struct.calcsize('Q')))[0] data = '' while lengthLeft > 0: block = sock.recv(lengthLeft) data += block lengthLeft -= len(block) return data.decode('utf-8') # Error comes from herewhile True: try: ip = raw_input('Sever IP:') port = int(raw_input('Port:')) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((ip, port)) except socket.error as error: print('Error while connecting') print(error) print('') else: breakwhile True: print(receiveData(s))Also, I'm wondering what should I do when sending a string entered by user from client to the server so that the server won't complain about encoding errors? Python 2.7 uses different encoding on different system, so I have no idea on how to deal with it now. Thanks!
You have one problem in that your Python 2 program is dealing with byte strings all the time (that is - not Unicode strings), but it is the payload you try to decode where you get the error.If this is a small application, maybe just skip the decode step, and program your client app to deal with utf-8 encoded byte-strings all the time. (But that is not feasible if you have to process text beyond getting input and sending it through the network.)Now, that is not the source of your UnicodeDecode error as you show us - since the server is correctly encoding the data, and even if it was double-encoding it, this specific error would not happen.What happens is that on the server side you are calculating the length of the text string pre-encoding, and then encoding it to UTF-8. With the class of characters you show us in your example, utf-8 takes 3 bytes per character (and up to 4 in general).So, you make a payload announcing you have a length-"2" string and then transmit 6 bytes - and the client reads only 2 of them, leaving the decoder with a truncated multi-byte sequence.Just rewrite this:def sendData(sock, data): ''' Send string through socket. ''' sock.send(struct.pack('Q', len(data))) sock.send(bytes(data.encode('utf-8'))) To this:def sendData(sock, data): ''' Send string through socket. ''' encoded_data = data.encode('utf-8') sock.send(struct.pack('Q', len(encoded_data))) sock.send(bytes(encoded_data)) And you should eliminate this main error there.Also, the last line of the server-side receiver function can't be:return str(data) - make it return data.decode('utf-8') instead.
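The character-count versus byte-count mismatch, and the fixed framing, can be checked locally without any sockets; this sketch packs the header and payload into one buffer and "receives" it back by slicing:

```python
import struct

text = '你好'
encoded = text.encode('utf-8')

char_count = len(text)       # 2 characters
byte_count = len(encoded)    # 6 bytes: each of these characters is 3 bytes in UTF-8

# Sender side: pack the length of the *encoded* bytes, then header + payload.
header = struct.pack('Q', byte_count)
payload = header + encoded

# Receiver side: unpack the header, read exactly that many bytes, then decode.
header_size = struct.calcsize('Q')
(length,) = struct.unpack('Q', payload[:header_size])
received = payload[header_size:header_size + length]
decoded = received.decode('utf-8')
```

Had the header said 2 (the character count), the receiver would stop after encoded[:2], which is half of 你's three-byte sequence, producing exactly the "unexpected end of data" error in the question.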
I'm trying to make an array in python but i cannot print all the elements in the array? Trying to make a function to print all the arrays that are dynamically stored inside. But I'm not able to make a function to print all the elements in the arrayimport ctypesclass myArray(object):def __init__(self): self.length = 0 self.capacity = 1 self.Array = self.make_array(self.capacity)def push(self, item): if self.length == self.capacity: self.resize(2*self.capacity) self.Array[self.length] = item self.length += 1 print("Hello")def getitem(self, index): if index >= self.length: return IndexError('Out Of Bounds') return self.Array[index]def resize(self, new_cap): newArray = self.make_array(new_cap) for k in range(self.length): newArray[k] = self.Array[k] self.Array = newArray self.capacity = new_capdef make_array(self, new_cap): return (new_cap * ctypes.py_object)()
Approach 1: Add a print_all() methoddef print_all(self): print(self.Array[:self.length])Approach 2: Create a string representation of the classdef __str__(self): return str(self.Array[:self.length])Simple test:arr = myArray()arr.push(5)arr.push(2)arr.push(3)arr.push(5)arr.push(4)arr.push(6)arr.print_all()print(arr)Output:HelloHelloHelloHelloHelloHello[5, 2, 3, 5, 4, 6][5, 2, 3, 5, 4, 6]Full definition of the class:import ctypesclass myArray(object): def __init__(self): self.length = 0 self.capacity = 1 self.Array = self.make_array(self.capacity) def push(self, item): if self.length == self.capacity: self.resize(2*self.capacity) self.Array[self.length] = item self.length += 1 print("Hello") def getitem(self, index): if index >= self.length: return IndexError('Out Of Bounds') return self.Array[index] def resize(self, new_cap): newArray = self.make_array(new_cap) for k in range(self.length): newArray[k] = self.Array[k] self.Array = newArray self.capacity = new_cap def make_array(self, new_cap): return (new_cap * ctypes.py_object)() def print_all(self): print(self.Array[:self.length]) def __str__(self): return str(self.Array[:self.length])
To extract content of 1st column (all rows) from an .xlsx file and replace it with the extracted information from each column I have to replace first entire column (all rows) with information extracted from each column itself. Last digit is missing for each column with my code.I have coded but had to save the output to a different file. I am unable to figure out how to replace the first column of the existing file itself. I need one file with the required output only.fname = 'output.xlsx'wb = openpyxl.load_workbook(fname)sheet = wb.activeprint('The sheet title is: ', sheet.title)row_a = sheet['A']d = []for cell in row_a: a = cell.value d.append(a)print(d)s = []for i in d: i = i[-1:-8] s.append(i)print('The list of account numbers is: ', s)wc = xlwt.Workbook()ws = wc.add_sheet('Sheet1')row=0col=0list_d = sfor item in list_d: ws.write(row, col, item) row+=1wc.save('FINAL.xls')
I suggest using python's builtin string.split method:import openpyxlfname = 'output.xlsx'wb = openpyxl.load_workbook(fname)sheet = wb.actived = [cell.value for cell in sheet['A']] # List comprehension to replace your for loop# str.split splits the 'Name' column data into an array of strings# selecting [-1] selects only the account numbers = [i.split('.')[-1] for i in d]s[0] = 'Account' # replace 'Name' with 'Account' for column headerrow = 1col = 1for item in s: sheet.cell(row, col).value = item row += 1wb.save(fname)I also added list comprehensions, which are a more Pythonic way of creating arrays from data in many cases.
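The split step works the same outside openpyxl; this sketch uses invented cell values (the question never shows the actual column contents, so the "name.account" shape is an assumption carried over from the answer's comment):

```python
# Hypothetical column A values: a name, a dot, then the account number.
cells = ['John Doe.ACC1001', 'Jane Roe.ACC1002']

# str.split('.') breaks each value at the dots; [-1] keeps the last piece,
# i.e. the account number, regardless of how many dots precede it.
accounts = [c.split('.')[-1] for c in cells]
```

If the separator in the real file is something other than a dot, only the argument to split changes.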
How do I access class variables? In this program I want a user to enter credentials and then based on the inputs validate whether it is correct. I am using tkinter to provide a GUI. I want to be able to take the auth function outside of the class so I can shut the tkinter dialog once the account has been logged in, however, the problem here is that the auth function is within the class, I've tried various ways to retrieve the variable but I've had no luck.from tkinter import *import tkinter.messagebox as tmclass LoginFrame(Frame): def __init__(self, master): super().__init__(master) self.label_Email = Label(self, text="Email") self.label_password = Label(self, text="Password") self.entry_Email = Entry(self) self.entry_password = Entry(self, show="*") self.label_Email.grid(row=0, sticky=E) self.label_password.grid(row=1, sticky=E) self.entry_Email.grid(row=0, column=1) self.entry_password.grid(row=1, column=1) self.checkbox = Checkbutton(self, text="Keep me logged in") self.checkbox.grid(columnspan=2) self.logbtn = Button(self, text="Login", command=self._login_btn_clicked) self.logbtn.grid(columnspan=2) self.pack() def _login_btn_clicked(self): # print("Clicked") Email = self.entry_Email.get() password = self.entry_password.get() # print(Email, password) self.answer = auth(Email, password)root = Tk()lf = LoginFrame(root)if 'Bearer' in lf.answer: root.quit()root.mainloop()My auth function will return a bearer token for the next stage if the login is successful, therefore I am checking whether or not the answer variable has returned it. If it has then I will shut the tkinter dialog
You're accessing the instance variable correctly, just in the wrong "order". Meaning, the answer must be checked only after the button is clicked. Basically, when your GUI loads, or directly after making the frame, the button isn't clicked, so the variable isn't defined, yet you're trying to access it immediately. One simple option is to not access the instance variable, and just use the passed-in Tk object of the master: def _login_btn_clicked(self): # print("Clicked") Email = self.entry_Email.get() password = self.entry_password.get() # print(Email, password) answer = auth(Email, password) if 'Bearer' in answer: self.master.quit()
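The ordering problem is independent of tkinter; this sketch reproduces it with a plain class whose attribute is only created inside the callback (the class and the 'Bearer abc123' value are stand-ins, not the real auth flow):

```python
class Login:
    """Stand-in for LoginFrame: `answer` only exists after the callback runs."""

    def on_click(self):
        # Simulates _login_btn_clicked: the attribute is created here,
        # not in __init__, so it does not exist before a "click".
        self.answer = 'Bearer abc123'

lf = Login()
before = hasattr(lf, 'answer')   # False: nothing has been clicked yet
lf.on_click()                    # simulate the button press
after = 'Bearer' in lf.answer    # now the attribute exists and can be checked
```

This is why the original script raised on the line `if 'Bearer' in lf.answer:` placed right after constructing the frame: mainloop() had not even started, so the click handler could not possibly have run.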
python find elements using selenium.webdriver I want to find the following element:<input type="text" value="" action-data="text=邮箱/会员帐号/手机号" action-type="text_copy" class="W_input " name="username" ...And here is the html tags section, there are multiple input with the same name and class properties. So I want to find it using the normal_form div property.This code does not work:browser.find_element_by_css_selector('input[action-type="text_copy"]')I think the field action-type is not a standard field. What can I do?.Thanks.<div class="W_login_form" node-type="normal_form"><div class="info_list" node-type="username_box"> <div class="inp username "> <input type="text" value="" action-data="text=邮箱/会员帐号/手机号" action-type="text_copy" class="W_input " name="username" node-type="username" tabindex="1" maxlength="128" autocomplete="off"> </div></div>I am trying, and this way I can find the element.browser.find_element_by_xpath("//div[@class='W_login_form']/div/div/input")It finds the div with class W_login_form first, and looks for div and div step in, and last gets the input.Do you have any good idea about it?
Try this:browser.find_element_by_xpath("//div[@class='info_list']//input")
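The shape of that XPath can be sanity-checked without a browser using the standard library's ElementTree, which supports a limited XPath subset; this runs against a simplified XML copy of the question's markup (Selenium evaluates full XPath in the browser, so this is only an offline approximation):

```python
import xml.etree.ElementTree as ET

html = """
<div class="W_login_form">
  <div class="info_list">
    <div class="inp"><input name="username" type="text"/></div>
  </div>
</div>
"""
root = ET.fromstring(html)

# Two-step find, because ElementTree's XPath subset does not allow
# '//' in the middle of an expression the way real XPath does.
box = root.find(".//div[@class='info_list']")
field = box.find(".//input")
```

In Selenium itself the single expression "//div[@class='info_list']//input" works directly, and non-standard attributes like action-type are also fine in XPath: //input[@action-type='text_copy'].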
How to use python for curl alternative in curl i do this:curl -d "text=great" http://text-processing.com/api/sentiment/How i can do this same thing in python?
Using the requests library you can do something like this (note that curl -d sends the data as a POST request, so use post rather than get):from requests import postpost("http://text-processing.com/api/sentiment/", data={"text": "great"})
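The standard library can do the same without installing anything; here the request object is only constructed, never sent, which also shows that urllib switches to POST automatically as soon as a data body is supplied:

```python
from urllib.request import Request
from urllib.parse import urlencode

# Form-encode the body exactly as curl -d "text=great" would.
body = urlencode({'text': 'great'}).encode('ascii')

# Passing `data` makes urllib issue a POST; to actually send it you
# would call urllib.request.urlopen(req).
req = Request('http://text-processing.com/api/sentiment/', data=body)
```

requests does the urlencoding step for you when you pass a dict to data=, which is the main convenience it adds here.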
Trouble Inserting DataFrame Into InfluxDB Using Python I'm trying to insert a very large CSV file into InfluxDB and am inserting it as such in Python:influx_pd = influxdb.DataFrameClient(host, port, user, password, db, verify_ssl=False)for frame in pd.read_csv(infile, chunksize=batch_count): frame.set_index(pd.DatetimeIndex(frame[date_pk]), inplace=True) frame.dropna(axis=1, how='all') influx_pd.write_points(frame, 'patients')However, on the first call to write_points, I'm receiving this error (truncated):raise InfluxDBClientError(response.content, response.status_code)influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'enroll_pd Pt Id=\"21.0\",Admit Date=\"2010-12-05\", ... MRSA Screening=\"Negative\" 1291507200000000000': invalid field format\nunable to parse ... (ellipses used to truncate)I had read about issues with InfluxDB and NaN values (which my CSV file does contain), so I tried inserting placeholder values for NaN values but receive the same result. Could someone please help me locate the issue in my code? It would be much appreciated.I'm using an InfluxDB 1.3 Docker image just FYI.
So I realized that I had to explicitly specify the protocol to be json, as such:influx_pd.write_points(frame, measurement='enroll_pd', protocol='json')in addition to filling in NaN values (JSON has no support for those) with an imputation method. From the docs I was under the impression that json was the default; apparently that was not the case.This, of course, might only be one solution. I welcome other, alternative solutions that work.
Faster estimation of logarithm operation I have a fairly simple function involving a logarithm of base 10 (f1 shown below). I need it to run as fast as possible since it is called millions of times as part of a larger code.I tried with a Taylor approximation (f2 below) but even with a large expansion the accuracy is very poor and, even worse, it ends up taking a lot more time.Have I reached the limit of performance attainable with numpy?import timeimport numpy as npdef f1(m1, m2): return m1 - 2.5 * np.log10(1. + 10 ** (-.4 * (m2 - m1)))def f2(m1, m2): """ Taylor expansion of 'f1'. """ x = -.4 * (m2 - m1) return m1 - 2.5 * ( 0.30102999 + .5 * x + 0.2878231366 * x ** 2 - 0.0635837 * x ** 4 + 0.0224742887 * x ** 6 - 0.00904311879 * x ** 8 + 0.00388579 * x ** 10)# The data I actually use has more or less this range.N = 1000m1 = np.random.uniform(5., 30., N)m2 = np.random.uniform(.7 * m1, m1)# Test both functionsM = 5000s = time.clock()for _ in range(M): mc1 = f1(m1, m2)t1 = time.clock() - ss = time.clock()for _ in range(M): mc2 = f2(m1, m2)t2 = time.clock() - sprint(t1, t2, np.allclose(mc1, mc2, 0.01))
Replace all of those exponentiations in f2 with multiplication:def f2(m1, m2): """ Taylor expansion of 'f1'. """ x = -0.4 * (m2 - m1) x2 = x * x x4 = x2 * x2 x6 = x4 * x2 return m1 - 2.5 * ( 0.30102999 + .5 * x + 0.2878231366 * x2 - 0.0635837 * x4 + 0.0224742887 * x6 - 0.00904311879 * x4 * x4 + 0.00388579 * x4 * x6)
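Taking the same idea one step further, the whole polynomial can be evaluated in Horner form, one multiply and one add per term; this sketch (with made-up coefficients, not the Taylor series from the question) just verifies the two forms agree:

```python
def poly_naive(x):
    # Each ** is a separate pow() call.
    return 1.0 + 2.0 * x ** 2 + 3.0 * x ** 4

def poly_horner(x):
    # Since only even powers appear, treat it as a polynomial in x2 = x*x:
    # 1 + x2*(2 + x2*3). No pow() calls at all.
    x2 = x * x
    return 1.0 + x2 * (2.0 + x2 * 3.0)

# The two evaluations agree to floating-point tolerance.
same = abs(poly_naive(0.7) - poly_horner(0.7)) < 1e-12
```

For NumPy arrays the same restructuring applies elementwise, and numpy.polynomial.polynomial.polyval offers a ready-made Horner evaluation if you prefer not to hand-roll it.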
How to represent the number like '1.108779411784206406864790428E-69', between 0-1 in Python I have a number that comes from Sigmoid function like '1.108779411784206406864790428E-69' but it's naturally should be between 0-1. How can I represent it in that way? Thanks
The number that you got is the scientific notation of this number: 0.0000000000000000000000000000000000000000000000000000000000000000000011087794117842064068647904281594To print the number in that fixed-point form, you need to do this:x = 1.108779411784206406864790428E-69print("%.100f" % x)"%.100f" is the string to format, where 100 is the number of decimal places you want to show.
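Both representations are easy to compare side by side; the precision below (72 places) is an arbitrary choice, just large enough to reach the leading nonzero digits of a 1e-69 value:

```python
x = 1.108779411784206406864790428E-69

sci = '%e' % x          # scientific notation, e.g. '1.108779e-69'
fixed = '%.72f' % x     # fixed-point with 72 decimal places

# Tiny values print as a long run of zeros in fixed-point form.
starts_zero = fixed.startswith('0.000')
```

Either way the stored value is identical; only the display changes, so the sigmoid output really is between 0 and 1 (just extremely close to 0).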
Django OneToOneField initialization I'm building a django-based application that receives some information from client-computers (e.g. memory) and saves it into the database.For that I created the following model:class Machine(models.Model): uuid = models.UUIDField('UUID', primary_key=True, default=uuid.uuid4, editable=False) name = models.CharField('Name', max_length=256) memory = models.OneToOneField('Memory', on_delete=models.CASCADE, null=True) def save_data(self, data): if not self.memory: memory = Memory() memory.save() self.memory = memory self.save() self.memory.total = data['memory_total'] self.memory.used = data['memory_used'] self.memory.cached = data['memory_total'] self.memory.save()Now for the Memory, I have the following model:class Memory(models.Model): total = models.IntegerField(default=0) used = models.IntegerField(default=0) cached = models.IntegerField(default=0)To save data to the machine-model when I receive it from the client, I call the save_data()-method. There I test if there is already a self.memory object, and if not I create it first before adding the data.Now, even tho it's working as intended I was wondering if there was a better, more clean way to achieve this. Would it be possible to initialize all my OneToOne-Fields to an empty instance of the referenced type so I needn't do the if not every time?
Fields accept a default keyword argument. This can be a callable that returns a value. You can make a callable that returns the appropriate value; in this case, the primary key of a newly created Memory object.def default_memory(): mem = Memory() mem.save() return mem.pkclass Machine(models.Model): ... memory = models.OneToOneField('Memory', on_delete=models.CASCADE, null=True, default=default_memory)
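The reason Django wants a callable rather than a ready-made object is the same reason dataclasses have default_factory: a callable runs once per new instance, while a single pre-built default would be shared by every row. A standard-library sketch of that distinction (the Machine/parts names are illustrative, not Django's API):

```python
from dataclasses import dataclass, field

def new_bucket():
    # Called once per instance, so instances never share this list.
    return []

@dataclass
class Machine:
    parts: list = field(default_factory=new_bucket)

a = Machine()
b = Machine()
a.parts.append('ram')   # mutating a's list must not affect b's
```

In the Django version above, the analogous point is that default=default_memory (no parentheses) passes the function itself, so a fresh Memory row is created for each new Machine.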
Fastest way to update table nulls in postgresql from dataframe

I have a pandas dataframe and a matching postgresql table, where every cell in both is either null or a timestamp. For each cell in the table where the cell value equals null and the corresponding dataframe cell value is a timestamp, I want to update the table cell value. What's the fastest way to do this?

Currently I'm pulling the whole table into a dataframe, comparing the two dataframes in python (cell by cell), entering those values into a 3rd dataframe (call it DFC), and then destroying the old table and building a new table from DFC. This seems inefficient.

Example:

    Data Frame         Postgres Table
       A    B             A    B
    1  NaN  5          1  NaN  NaN
    2  8    NaN        2  7    NaN

    Goal State Postgres Table
       A    B
    1  NaN  5
    2  7    NaN

Current Code:

    import pandas as pd
    from pandas import DataFrame

    d = {'A': ['None', 8], 'B': [5, 'None']}
    df = pd.DataFrame(data=d)
    out = {'A': ['None', 'None'], 'B': ['None', 'None']}
    outdf = pd.DataFrame(data=out)
    tbl = pd.read_sql_query('select * from "exampletable"', con=engine)

    for i, row in df.iterrows():
        for j in ['A', 'B']:
            if df.at[i, j] != 'None' and tbl.at[i, j] == 'None':
                outdf.at[i, j] = df.at[i, j]
            else:
                outdf.at[i, j] = tbl.at[i, j]

    df.to_sql('exampletable', engine, if_exists='replace')
    print(outdf.to_string())
IIUC, you can merge the two databases but maintain a record of which records come from each. Then you can check if your A column is empty and fill in the B column with the B from df2 (note the conditional expression needs an else branch to be valid Python):

    outdf = df1.join(df2, on=columns, how="outer", rsuffix='_df2', lsuffix='_df1')
    outdf['B'] = outdf.apply(lambda x: x['B_df2'] if pd.isnull(x['A']) else x['B_df1'], axis=1)

Edit: you'd want to filter back down to different rows.

    outdf = outdf.loc[:, [columns with _df1 suffix]]
    outdf.columns = [i.replace('_df1', '') for i in columns]
    outdf = outdf.sort_values(by='B')
    outdf = outdf.drop_duplicates([columns you're not filling in], keep='first')
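As a side note on the null-filling goal itself: pandas has a built-in for exactly this cell-wise "fill my nulls from another frame" operation, DataFrame.combine_first. A small sketch using the example frames from the question (the table's non-null values win, and its NaN cells are filled from the dataframe):

```python
import numpy as np
import pandas as pd

# The dataframe holding new values and the frame read back from the table.
df = pd.DataFrame({'A': [np.nan, 8], 'B': [5, np.nan]})
tbl = pd.DataFrame({'A': [np.nan, 7], 'B': [np.nan, np.nan]})

# combine_first keeps tbl's non-null values and fills its NaNs from df.
out = tbl.combine_first(df)
print(out)  # column A keeps tbl's 7; column B gains df's 5
```

This reproduces the "Goal State" table from the question without any explicit cell-by-cell loop; writing `out` back to postgres is then a separate step.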
Why is there a difference between binascii.b2a_base64() and base64.b64encode()?

I'm trying to understand some divergent behavior I'm seeing with the following two functions:

    def hex_to_64(string):
        hex_string = binascii.a2b_hex(string)
        return binascii.b2a_base64(hex_string)

    def hex_to_64_2(string):
        hex_string = binascii.a2b_hex(string)
        return base64.b64encode(hex_string)

If I pass in a hex string to the former, I get it back with a newline at the end, and the latter without. Is there a reason for that?
Nothing special; the implementers decided to do it that way. It is documented in the binascii module:

    Convert binary data to a line of ASCII characters in base64 coding. The return value is the converted line, including a newline char. The length of data should be at most 57 to adhere to the base64 standard.

If you don't feel comfortable with that, just strip the trailing newline:

    hex_to_64('aa').rstrip('\n')
    >>> 'qg=='

Hope this helps!
Python Output to TKinter Entry has Float64

I am loading a CSV into a data frame, doing some calculations and then outputting the results to a grid of tkinter Entry boxes. This all works fine and the output is correct, but it has a preceding '0' and is followed by 'dtype: float64'. The data in the Entry looks like this (xxxx being the only data I want to display):

    0 xxxxxx
    dtype: float64

To put the calculated data into the Entry box, I am using the command:

    BGO_Yin.insert(0, BGO_Y)

Can I remove the extraneous parts somehow, or reformat the output variable?
The comment answers above solved the problem with my Tkinter GUI. I have subsequently upgraded to a PyQt5 Qt Designer GUI; the final code to send the formatted text to the text box in that case is:

    self.Q_BGO_Y.setText(str(BGO_Y.iloc[0]))
    self.Q_BGO_Y.repaint()  # repaint to overcome known bug where text is not initially visible
How to share my Tkinter app with other users?

I have developed a tkinter application but I need other users (with Windows 10 OS) to use it. Some have a python interpreter installed but others don't. I tried to create an executable through py2exe and auto py-to-exe but neither worked. I also tried to run the tkinter app through pythonanywhere.com but it also didn't work. Is there a simple way to share the py files? Perhaps is there a click-to-run environment? What I don't want is to require users to install a full python distribution such as anaconda or WinPython, as this does not make sense for users without a python interpreter.
I would use pyinstaller. Run cmd.exe as Administrator and type:

    pip install pyinstaller

then run it with:

    pyinstaller --onefile --noconsole your_script.py

This creates a single-file executable which also includes Python with all the dependencies of your script/project. If your project contains external files like images/sounds, you might need to edit the spec file as described here.
In python, testing this string ("\x04\x01\x00PÀcö60\x00") with startswith or re returns False

I am working on a webserver access log analysis tool. Sometimes I get malformed requests hitting the web server, and I want to be able to identify these. However, when trying to test whether the string "\x04\x01\x00PÀcö60\x00" starts with \x0, Python reports no match. I am doing:

    >>> t = "\x04\x01\x00P\xC0c\xF660\x00"
    >>> t.startswith('\\x0')
    False

What am I missing here? I tried regex as well, but no dice. :( I even tried to strip the slashes, but I cannot. What wizardry is this?

    >>> t.replace("\\", "")
    '\x04\x01\x00PÀcö60\x00'
    >>> t
    '\x04\x01\x00PÀcö60\x00'
The first character of the input string '\x04\x01\x00P\xC0c\xF660\x00' is '\x04', since the escape sequence has the format \xhh. '\\x0' in your example is actually a string composed of 3 characters: '\', 'x' and '0'. Compare:

    >>> len('\x04')
    1
    >>> len('\\x0')
    3

So the correct check would be t.startswith('\x04'):

    >>> t = '\x04\x01\x00P\xC0c\xF660\x00'
    >>> t.startswith('\x04')
    True

See the Literals documentation for more details.
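A runnable sketch of the distinction, including one way to express the asker's real intent ("starts with a byte below 0x10") without spelling out every control character:

```python
# '\x04' is ONE character (code point 4); '\\x0' is THREE characters: backslash, 'x', '0'.
assert len('\x04') == 1
assert len('\\x0') == 3

t = '\x04\x01\x00P\xC0c\xF660\x00'

# Testing against the literal backslash string fails...
print(t.startswith('\\x0'))   # False
# ...while testing against the actual control character succeeds.
print(t.startswith('\x04'))   # True

# To flag any string beginning with a low control character, compare ordinals instead:
is_malformed = len(t) > 0 and ord(t[0]) < 0x10
print(is_malformed)           # True
```

The `ord`-based check is an assumption about what "starts with \x0" was meant to express; adjust the threshold to whatever byte range counts as malformed in your logs.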
Unexpected result while looping through pandas DataFrame

I load the content of a csv into a dataframe.

    data = pd.read_csv("census.csv")

Then I check the data size:

    print(data.size)  --> 633108

Then I loop through the DataFrame:

    counter = 0
    for index, row in data.iterrows():
        counter += 1

Then I check the counter and data size again:

    print(counter)    --> 45222
    print(data.size)  --> 633108

They should be the same; I could not understand why they are not. I would appreciate any help.
size isn't the correct attribute to use; size is the total number of elements.

    df = pd.DataFrame(np.zeros((3, 4)))
    df.size
    12

size will coincidentally be correct if there is only one column:

    df.iloc[:, [0]].size
    3

Instead, use df.shape[0] to get the number of rows:

    df.shape[0]
    3

Or:

    len(df)
    3

I prefer len(df) because it is ever so slightly quicker access than df.shape[0]:

    %timeit df.shape[0]
    %timeit len(df)
    1.58 µs ± 47.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
    916 ns ± 21 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

You can replicate this for the 2nd dimension with len(df.columns):

    %timeit df.shape[1]
    %timeit len(df.columns)
    1.65 µs ± 67.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
    679 ns ± 34.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Grabbing the shape tuple is equivalent to grabbing both the len of df.index and of df.columns. Avoid going through values for its shape attribute, as the call to form the values array is too much overhead — unless of course you need that array for something else.

    %timeit df.shape
    %timeit df.values.shape
    %timeit len(df), len(df.columns)
    1.58 µs ± 75.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
    5.78 µs ± 198 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
    1.65 µs ± 35 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
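A self-contained sketch tying this back to the asker's symptom: `iterrows()` yields one item per row, so the loop counter matches `len(df)` (the row count), not `df.size` (rows times columns).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((3, 4)))

print(df.size)      # 12 -- total elements (rows * columns)
print(df.shape)     # (3, 4)
print(df.shape[0])  # 3  -- number of rows
print(len(df))      # 3  -- also the number of rows

# iterrows() produces one (index, row) pair per row:
counter = sum(1 for _ in df.iterrows())
print(counter)      # 3, matching len(df), not df.size
```

In the question's terms: 633108 elements / 45222 rows = 14 columns, so both numbers were right — they just measure different things.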
How to give client mac for BOOTP, in DHCP scapy?

    clientMac = "00:00:01:00:11:03"
    bootp = BOOTP(op=opcode, chaddr=clientMac, ciaddr="0.0.0.0", xid=0x01020304, flags=0x8000)

Here I try to create the BOOTP part of a DHCP offer packet. But in the packet capture, the clientMac is shown as 30 30 3a 30 30 3a — I get a junk mac address. When I convert my original clientMac into ASCII, it comes out as 30 30 3a 30 30 3a, i.e. in ASCII:

    ':' -> 3a (hex)
    '0' -> 30 (hex)
    '1' -> 31 (hex)

So how should clientMac be given to BOOTP() in DHCP scapy?
On BOOTP only (I assume for historical reasons), you need to pass the raw MAC bytes to chaddr rather than the literal string. Use scapy's helper to convert it:

    clientMac = mac2str("00:00:01:00:11:03")
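To see what that conversion does without scapy, the MAC-string-to-raw-bytes step can be sketched with the standard library alone (`mac_to_bytes` here is a hypothetical stand-in for scapy's helper, not scapy's actual code):

```python
def mac_to_bytes(mac: str) -> bytes:
    # "00:00:01:00:11:03" -> b'\x00\x00\x01\x00\x11\x03'
    return bytes.fromhex(mac.replace(":", ""))

raw = mac_to_bytes("00:00:01:00:11:03")
print(raw.hex())  # 000001001103 -- the 6 raw bytes chaddr expects

# Passing the literal string instead sends its ASCII codes, which is
# exactly the 30 30 3a 30 30 3a ... junk seen in the packet capture:
ascii_bytes = "00:00:01:00:11:03".encode("ascii")
print(ascii_bytes.hex()[:12])  # 30303a30303a
```

That explains the symptom in the question: 0x30 is ASCII '0' and 0x3a is ':', so the capture was showing the text of the MAC address rather than its value.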